| column | dtype | values / lengths |
|---|---|---|
| repo | string (classes) | 1 value |
| number | int64 | 1 to 25.3k |
| state | string (classes) | 2 values |
| title | string (lengths) | 1 to 487 |
| body | string (lengths) | 0 to 234k |
| created_at | string (lengths) | 19 to 19 |
| closed_at | string (lengths) | 19 to 19 |
| comments | string (lengths) | 0 to 293k |
transformers
10,917
closed
longformer speed compared to bert model
We are trying to use a LongFormer and Bert model for multi-label classification of different documents. When we use the BERT model (BertForSequenceClassification) with max length 512 (batch size 8) each epoch takes approximately 30 minutes. When we use LongFormer (LongformerForSequenceClassification with the 'allenai/longformer-base-4096' and gradient_checkpointing=True) with max length 4096 (batch size 1, Gradient Accumulation step 8) each epoch takes approximately 12 hours. Is this reasonable or are we missing something? Is there anything that we can try to make the training faster?
03-26-2021 03:31:22
03-26-2021 03:31:22
Hi, Is it possible to ask questions related to training on the [forum](https://discuss.huggingface.co/) rather than here? For example, all questions related to training LongFormer can be found [here](https://discuss.huggingface.co/search?q=longformer). The authors of Transformers like to keep Github issues for bugs/feature requests. Thank you. <|||||>sure. thank you for the quick response
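The maintainers redirect this question to the forum; as background, here is a minimal, hedged sketch (not from the thread) of settings that commonly shorten Longformer epochs on GPU, with an assumed label count and output path:
```python
from transformers import LongformerForSequenceClassification, TrainingArguments

model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096",
    num_labels=5,                 # assumed label count for illustration
    gradient_checkpointing=True,  # saves memory but slows each step, as in the issue
)

training_args = TrainingArguments(
    output_dir="longformer-multilabel",  # illustrative path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,                           # mixed precision usually gives a sizeable GPU speedup
)
```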
transformers
10,916
closed
AttributeError: 'Trainer' object has no attribute 'log_metrics'
I am trying to fine-tune distilbert-base-uncased on my own dataset, a CSV file with one news item per line. Here is my command:
```
nohup python run_mlm.py \
    --model_name_or_path distilbert-base-uncased \
    --train_file df_finetune_train.csv \
    --validation_file df_finetune_test.csv \
    --do_train \
    --do_eval \
    --preprocessing_num_workers 72 \
    --output_dir ./finetuned_bert \
    --overwrite_cache True \
    --max_seq_length 256 \
    --line_by_line True > log_fintune_mlm &
```
Here is the error:
> {'loss': 1.7847, 'learning_rate': 3.264263411864888e-07, 'epoch': 2.98}
> {'loss': 1.7906, 'learning_rate': 1.7858832434478192e-07, 'epoch': 2.99}
> {'loss': 1.7839, 'learning_rate': 3.075030750307503e-08, 'epoch': 3.0}
> {'train_runtime': 65966.5445, 'train_samples_per_second': 2.563, 'epoch': 3.0}
> Traceback (most recent call last):
>   File "run_mlm.py", line 487, in <module>
>     main()
>   File "run_mlm.py", line 462, in main
>     trainer.log_metrics("train", metrics)
> AttributeError: 'Trainer' object has no attribute 'log_metrics'

transformers version: 4.3.3
torch version: 1.5.0+cu101
03-26-2021 01:33:30
03-26-2021 01:33:30
You should install Transformers from source. See #10446.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
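For readers stuck on an older release, a hedged workaround sketch; the supported fix remains installing from source as stated above. `trainer` and `train_result` are assumed to come from the surrounding `run_mlm.py` code:
```python
# `Trainer.log_metrics`/`save_metrics` only exist in newer versions, so guard the calls.
metrics = train_result.metrics
if hasattr(trainer, "log_metrics"):
    trainer.log_metrics("train", metrics)
    trainer.save_metrics("train", metrics)
else:
    print(metrics)  # on transformers 4.3.x, report the metrics dict yourself
```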
transformers
10,915
closed
Bump pyyaml from 5.3.1 to 5.4 in /examples/research_projects/lxmert
Bumps [pyyaml](https://github.com/yaml/pyyaml) from 5.3.1 to 5.4. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/yaml/pyyaml/blob/master/CHANGES">pyyaml's changelog</a>.</em></p> <blockquote> <p>5.4 (2021-01-19)</p> <ul> <li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/407">yaml/pyyaml#407</a> -- Build modernization, remove distutils, fix metadata, build wheels, CI to GHA</li> <li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/472">yaml/pyyaml#472</a> -- Fix for CVE-2020-14343, moves arbitrary python tags to UnsafeLoader</li> <li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/441">yaml/pyyaml#441</a> -- Fix memory leak in implicit resolver setup</li> <li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/392">yaml/pyyaml#392</a> -- Fix py2 copy support for timezone objects</li> <li><a href="https://github-redirect.dependabot.com/yaml/pyyaml/pull/378">yaml/pyyaml#378</a> -- Fix compatibility with Jython</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/yaml/pyyaml/commit/58d0cb7ee09954c67fabfbd714c5673b03e7a9e1"><code>58d0cb7</code></a> 5.4 release</li> <li><a href="https://github.com/yaml/pyyaml/commit/a60f7a19c0b418fe95fcf2ec0957005ae39e1090"><code>a60f7a1</code></a> Fix compatibility with Jython</li> <li><a href="https://github.com/yaml/pyyaml/commit/ee98abd7d7bd2ca9c7b98aa19164fd0306a3f3d2"><code>ee98abd</code></a> Run CI on PR base branch changes</li> <li><a href="https://github.com/yaml/pyyaml/commit/ddf20330be1fae8813b8ce1789c48f244746d252"><code>ddf2033</code></a> constructor.timezone: _<em>copy</em> &amp; <strong>deepcopy</strong></li> <li><a href="https://github.com/yaml/pyyaml/commit/fc914d52c43f499224f7fb4c2d4c47623adc5b33"><code>fc914d5</code></a> Avoid repeatedly appending to yaml_implicit_resolvers</li> <li><a href="https://github.com/yaml/pyyaml/commit/a001f2782501ad2d24986959f0239a354675f9dc"><code>a001f27</code></a> Fix for CVE-2020-14343</li> <li><a href="https://github.com/yaml/pyyaml/commit/fe150624146ee631bb0f95e45731e8b01281fed6"><code>fe15062</code></a> Add 3.9 to appveyor file for completeness sake</li> <li><a href="https://github.com/yaml/pyyaml/commit/1e1c7fb7c09e9149967c208a6fd07276a6140d57"><code>1e1c7fb</code></a> Add a newline character to end of pyproject.toml</li> <li><a href="https://github.com/yaml/pyyaml/commit/0b6b7d61719fbe0a11f0980489f1bf8ce746c164"><code>0b6b7d6</code></a> Start sentences and phrases for capital letters</li> <li><a href="https://github.com/yaml/pyyaml/commit/c97691596eec279ef9191a9b3bba583a17139d5a"><code>c976915</code></a> Shell code improvements</li> <li>Additional commits viewable in <a href="https://github.com/yaml/pyyaml/compare/5.3.1...5.4">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pyyaml&package-manager=pip&previous-version=5.3.1&new-version=5.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
03-26-2021 00:39:22
03-26-2021 00:39:22
Looks like pyyaml is up-to-date now, so this is no longer needed.
transformers
10,914
closed
[vulnerability] fix dependency
this PR fixes https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/PyYAML/open @LysandreJik
03-25-2021 23:36:43
03-25-2021 23:36:43
transformers
10,913
closed
pegasus xsum won't train on xsum dataset
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 5.0dev - Platform: linux - Python version: 3.6.9 - PyTorch version (GPU?): pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Both - - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Pegasus XSUM The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I am using run_summarization.py to retrain a fine-tuned model (before I try it on my own data). I first fine-tuned on gigaword for a few thousand iterations, tested it on gigaword, then switched to evaluate on the xsum dataset. 
The xsum eval dataset produces the following error on the CPU (similar error on GPU, just with a lot of extra fluff):
```
File "run_summarization.py", line 593, in <module>
    main()
File "run_summarization.py", line 550, in main
    max_length=data_args.val_max_target_length, num_beams=data_args.num_beams, metric_key_prefix="eval"
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 74, in evaluate
    return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1707, in evaluate
    metric_key_prefix=metric_key_prefix,
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1838, in prediction_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/trainer_seq2seq.py", line 167, in prediction_step
    **gen_kwargs,
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 927, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 412, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 725, in forward
    embed_pos = self.embed_positions(input_shape)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 139, in forward
    return super().forward(positions)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 147, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/disk1/work/marc/pegasus/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1913, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
03-25-2021 23:34:37
03-25-2021 23:34:37
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
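One hedged guess at the failure mode, not confirmed in the thread: the embedding lookup at the end of the traceback can fail when tokenized inputs exceed the model's positional-embedding table, so it is worth checking tokenization length against the model config. The checkpoint name is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/pegasus-xsum"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

max_len = model.config.max_position_embeddings
batch = tokenizer(
    ["a very long document ..."],
    truncation=True,
    max_length=max_len,   # keep inputs within the positional-embedding range
    return_tensors="pt",
)
print(batch["input_ids"].shape, "limit:", max_len)
```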
transformers
10,912
open
Summarization length not controlled by max_length, min_length
I am using the pretrained ctrlsum-cnndm model from transformers. I noticed that the summarization length is not exactly controlled by the max_length and min_length arguments of model.generate(), and I am not sure why. It appears that empty spaces are included, but I am not sure. Please help. Thanks.
```
text1 = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("hyunwoongko/ctrlsum-cnndm")
model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-cnndm")

inputs = tokenizer.encode(text1, return_tensors="pt", max_length=1024)  # 16
outputs = model.generate(inputs, max_length=100, min_length=50, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
Results:

max_length=100, min_length=50, actually 36 words:
`</s> The Eiffel Tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. It is the tallest structure in Paris and the second tallest free-standing structure in France after the Millau Viaduct.</s>`

max_length=200, min_length=100, actually 83 words:
`</s> The Eiffel Tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. It was the tallest man-made structure in the world for 41 years until the Chrysler Building in New York City was finished in 1930. It is the second tallest free-standing structure in France after the Millau Viaduct, which measures 125 metres (410 ft) on each side. The tower is now taller than the Chrysler building by 5.2 metres (17 ft)</s>`
03-25-2021 21:41:56
03-25-2021 21:41:56
The `max_length` and `min_length` are in terms of tokens, not words. As some words consist of multiple tokens, this results in fewer words to be generated than you might expect. <|||||>@NielsRogge Thanks for the answer. It makes sense. But when are words consist of multiple tokens, can you give me some examples? Also, would it be better for arguments (max_length, min_length) refer to number of words instead of tokens as to better control the outputs, which are natural language for human?<|||||>Running into a similar issue when using `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')` . I can get better control when using `min_length=..,max_length=..` but I have no ultimate control when e.g. querying for `Below is the code for a react app with a blue button that says 'click me'` ``` {'generated_text': "Below is the code for a react app with a blue button that says 'click me' that is to be used by react-router. \nimport React, { Component } from 'react';\n\nimport { Link } from 'react"}] ``` My result is "cut off" and I would be very happy to set a desired length of resulting words.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Stalebots are so much an anti-quality thing :-/<|||||>> Running into a similar issue when using `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')` . I can get better control when using `min_length=..,max_length=..` but I have no ultimate control when e.g. querying for `Below is the code for a react app with a blue button that says 'click me'` > > ``` > {'generated_text': "Below is the code for a react app with a blue button that says 'click me' that is to be used by react-router. \nimport React, { Component } from 'react';\n\nimport { Link } from 'react"}] > ``` > > My result is "cut off" and I would be very happy to set a desired length of resulting words. Same issue for me, anyone found a solution regarding this? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>***Stalebots are so much an anti-quality measure and have not been fixed***<|||||>cc @patil-suraj @patrickvonplaten <|||||>@chris-aeviator - do you want to have exactly `max_length` words? In this case you have to disable the eos_token_id => you should be able to just do `model.generate(...., eos_token_id=None)`
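A quick, runnable illustration of the tokens-versus-words point made above, using the same checkpoint as the report:
```python
# A single word can map to several subword tokens, so a 100-token limit
# usually yields fewer than 100 words in the decoded summary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hyunwoongko/ctrlsum-cnndm")
for word in ["free-standing", "Viaduct", "broadcasting"]:
    print(word, "->", tokenizer.tokenize(word))
```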
transformers
10,911
closed
Add nvidia megatron models
# What does this PR do? Add the megatron_gpt2 model. That model reuses the existing GPT2 model. This commit includes a script to convert a Megatron-GPT2 checkpoint downloaded from NVIDIA GPU Cloud. See examples/megatron-models/README.md for details. Add the megatron_bert model. That model is implemented as a modification of the existing BERT model in Transformers. This commit includes a script to convert a Megatron-BERT checkpoint downloaded from NVIDIA GPU Cloud. See examples/megatron-models/README.md for details. @LysandreJik
03-25-2021 20:59:42
03-25-2021 20:59:42
@LysandreJik - you'll see that the last test (marked 'slow') in tests/test_modeling_megatron_bert.py points to a checkpoint in examples/megatron-models (downloaded following the instructions described in examples/megatron-models/README.md). I was not sure how to deal with that so suggestions are welcome (for other items too ;)).<|||||>We have a few failing tests, let me break them down for you:

### build_doc
The build_doc is failing because of the following errors:
```
/home/circleci/transformers/docs/source/model_doc/megatron_bert.rst:document isn't included in any toctree
```
The `megatron_bert.rst` should be defined in the index of the docs :)

### check_code_quality
The error is:
```
2 files would be reformatted, 783 files would be left unchanged.
```
For this, you should install the quality tools: `pip install -e .[quality]` (from the root of the repo) and run the following:
```
make fixup
```
This is going to fix some files, and tell you if there are errors it cannot resolve. If there are some, it should tell you how to fix them.

### run_test_flax, run_tests_tf, and run_tests_pipelines_tf
This is due to the following error:
```
____________ ERROR collecting tests/test_modeling_megatron_bert.py _____________
tests/test_modeling_megatron_bert.py:256: in <module>
    class MegatronBertModelTest(ModelTesterMixin, unittest.TestCase):
tests/test_modeling_megatron_bert.py:259: in MegatronBertModelTest
    MegatronBertModel,
E   NameError: name 'MegatronBertModel' is not defined
```
I think this comes from a missing `@require_torch` decorator on one of your tests. This decorator tells the suite that this test requires torch, and to not run this test if torch is not found as a dependency inside the environment. If that's not it, then it may be that it's missing a dummy object. Running `make fix-copies` should fix this, but you should already have run this if you have done the fix mentioned above relative to the style/code quality.
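A sketch of the test-gating pattern described in the review comment; the class and test names here are illustrative, not the PR's actual tests:
```python
# Gate torch-only test classes with `require_torch` and guard the import,
# so collecting the tests in TF/Flax-only CI jobs does not raise NameError.
import unittest

from transformers.file_utils import is_torch_available
from transformers.testing_utils import require_torch

if is_torch_available():
    from transformers import MegatronBertModel


@require_torch
class MegatronBertModelSmokeTest(unittest.TestCase):
    def test_torch_is_available(self):
        self.assertTrue(is_torch_available())
```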
transformers
10,910
closed
Wav2Vec2 CommonVoice training - Save the processor before training starts
# What does this PR do? Currently, the Wav2Vec2 processor is saved at the end of training. However, the vocabulary is non-deterministic and varies between runs. Thus, if the training is killed before it's done, the processor is not saved, meaning that the checkpoints do not contain the processor configuration files, making them unusable for resuming training or for evaluating on the checkpoint. Hence, this PR saves the processor before the training begins. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj
03-25-2021 20:27:40
03-25-2021 20:27:40
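A minimal sketch of the ordering this PR describes for the CommonVoice script; the checkpoint name and output directory are illustrative:
```python
# Save the (non-deterministically built) processor before training so that
# interrupted runs still leave usable checkpoints.
from transformers import TrainingArguments, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
training_args = TrainingArguments(output_dir="./wav2vec2-commonvoice")

processor.save_pretrained(training_args.output_dir)  # save first ...
# trainer = Trainer(model=model, args=training_args, ...)
# trainer.train()                                     # ... then train
```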
transformers
10,908
closed
Improve the documentation for TrainingArguments.label_names, and if possible raise an error if users misinterpret this attribute like I did
### Original Issue Title: Possible typo in trainer.py: prediction_step(), forgetting to exclude loss item of outputs dict when assigning logits _**Update**: I determined the root cause of my error to stem from an incorrect assignment of `TrainingArgument.label_names`. **There is not a typo** in `Trainer.prediction_step()`, as I've suggested below. However there is still an issue: see my comment for elaboration._ I was using the `Trainer` trying to fine-tune [KB-Bert-Base-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) for multi-class SequenceClassification, when I got a `IndexError: tuple index out of range` during the evaluation stage (I set up `Trainer` to evaluate after each Epoch). I started PDB and paused at this line in the evaluation phase: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1805 With the debugger, I saw that `loss=None`, `labels=None`, and `logits` is actually `tuple` with two items. The first item is the prediction loss as, and the second element is the actual output logits from the models forward pass. I think this strange assignment of the local `logits` variable is coming from here, inside `prediction_step`: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1933 As the `outputs `dict includes the loss, and "loss" is not in ignore_keys, the loss value in outputs gets baked into `logits`. I'm pretty sure it's a typo, as when I'm comparing it to a few lines above, (which is executed when has_labels=True), the similar line is: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1922 The above links are all from Version 4.4.2, but this possible typo is still present in master: https://github.com/huggingface/transformers/blob/9856c9213dfe9f8355fe00dd6cd0fa1ceae4fa5a/src/transformers/trainer.py#L1966 I haven't been able to read and grasp the code too much, but it looks to me like either we're forgetting to ignore the "loss" key in outputs, or the return statement of `prediction_step` should be somehaw unpacking the logits tuple, so the two variables in "logits" tuple are unpacked into `loss` and` logits`: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1947 **For clarity, this is the stacktrace of how I encounter the tuple index error from the above typo:** In the evaluation phase, `prediction_loop` runs over all the batches in my dev dataset. It gets the model output/prediction of each dev batch here: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1805 Later in` prediction_loop`, we, concatenate each prediction batch with the previous predictions here, calling the function `nested_concat`: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer.py#L1810 Inside `nested_concat`, in the line below, `new_tensors` is the above mentioned "logits" tuple. https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L95 The above line does a recursive call to `nested_concat`, and we arrive in the line below. 
https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L97 Which calls this: https://github.com/huggingface/transformers/blob/6bc89ed9295443e5a3ee236ad544101752563917/src/transformers/trainer_pt_utils.py#L58 And I get a index error, as it's trying to index into what is actually the `loss` tensor.
03-25-2021 19:00:13
03-25-2021 19:00:13
### Update: **I was wrong, my error is originally coming from badly assigned TrainingArguments.label_names. However, I strongly recommend fixes.** Continued investigating and realised my error is appearing because I don't understand the attribute `TrainingArguments.label_names`. I thought that it should be a list of strings, where each string is my specified name of a class. **Some background on my code/data structure:** I'm doing Multi-Class classification on sentences and my training data is an ordered list of a string sentences, and an associated list of classes in string form (i.e. the class's name). I.e. `sentences[i]` is a sentence sample, and `classes[i]` is the class of that sentence, more specifically the _name_ of that class, a string. If I understand the HuggingFace - Transformers documentation correctly, I should be passing these classes to the Model as the `indices` to a One-Hot encoded list of classes instead. Or in another interpretation, class[i] should be a class number. So in my custom Dataset class, I send my sentences through a Transformers Tokenizer, and I use `MultiLabelBinarizer` from scikit-learn to One-Hot-Encode all my classes, convert it to a tensor, and then call argmax(dim=-1) on the `classes` tensor. Of course, I don't want the metrics report to just say "class 0 has the f1-score X", so I thought I could pass original class names to Trainer to be used when printing this metrics report. This is what I thought `TrainingArgument.label_names` was for, so I set `TrainingArguments.label_names = MultiLabelBinarizer.classes_`. **How my error appears** During the evaluation stage in `prediction_step()`, I saw with pdb that `inputs` indeed has a `labels` item, which makes sense as I'm trying to evaluate how well the model is performing here. I now understand that `prediction_step()` is used both when **predicting** _(i.e. we have no target labels in `inputs` and thus expect to not obtain a loss value)_, and for **evaluating** _(i.e. we **do** have target labels and should be able to get a loss value)_. And of course, this is what the `has_labels` variable is used for - indicating the presence of target labels in `input`, and thus that `prediction_step()` should be able to get a loss value using `Trainer.compute_loss`. However if `has_labels=False`, `prediction_step()` assumes that `outputs` variable **will not** have the item `loss`, and so we do not need to ignore this key when converting the `outputs` dict/SequenceClassificationOutput. However, since I apparently specified `TrainingArguments.label_names` incorrectly, `has_labels` becomes False _when it shouldn't be_, and everything gets messed up. `prediction_step()` thus assumes that we're predicting and doesn't filter out `loss` in outputs, which leads to the error described in my first post. I still don't understand what `TrainingArguments.label_names` is or should be. I recommend that two things should be done: - Improve the documentation regarding `TrainingArguments.label_names`, specifying the expected format, etc. - Earlier in the `Trainer` class, check that `TrainingArguments.label_names` is reasonably formatted and raise a specific error if it isn't, so the user doesn't recieve the above extremely confusing rabbit hole of an error that I did.. 
<|||||>Ping @sgugger <|||||>I'm unsure what you want to improve, the [documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) states the following: """ label_names (List[str], optional) – The list of keys in your dictionary of inputs that correspond to the labels. Will eventually default to ["labels"] except if the model used is one of the XxxForQuestionAnswering in which case it will default to ["start_positions", "end_positions"]. """ It's clearly indicated it needs to be a list of strings, that have to be the names of the keys for the labels in your input dictionary. I have no idea what your `MultiLabelBinarizer.classes_` contains since you did not share the code of this class. More generally, please look at the [warning in the Trainer documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainer) since you appear to be using the Trainer with a model that is not a model of the Transformers library. <|||||>Oh alright, I didn't see that warning. Thank you! The `MultiLabelBinarizer` from scikit-learn transforms list of class/label strings into a matrix, where each row is a one-hot-encoded version of the label. `MultiLabelBinarizer.classes_` returns the list of all class/label names detected in the original class list, with same ordering as the one-hot-encoded version. It sounds like I understood `TrainingArguments.label_names` correctly then, but that my usage of a custom model is messing up the behaviour somehow. Are there any tips/strategies to fix these strange behaviours? Should I just override `prediction_step` and try to fix how has_labels is being assigned?<|||||>The easiest way to have your custom model work with `Trainer` with no strange behavior is to subclass `PreTrainedModel` (for instance by copying the `XXXForSequenceClassification` and tweaking it to your needs). Otherwise, subclassing and overriding the `prediction_step` method is the most straightforward path.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I ran into exactly the same issue today. I was also thinking that the parameter `label_names` in `TrainingArguments` refers to `data["train"].features["label"].names`. The error message `IndexError: tuple index out of range` was not helpful at all and I only found the problem by trial and error. Actually, I was not able to find the description for `label_names` in the [documentation](https://huggingface.co/docs/transformers/v4.14.1/en/main_classes/trainer#transformers.TrainingArguments) but only in the linked source code. Besides, I don't even understand what "The list of keys in your dictionary of inputs that correspond to the labels." should mean. What "dictionary of inputs" and what "list of keys"? My dataset looks like this ``` DatasetDict({ train: Dataset({ features: ['text', 'label'], num_rows: 9245 }) test: Dataset({ features: ['text', 'label'], num_rows: 1028 }) }) ``` The only dictionaries I see is `DatasetDict` with keys "train" and "test" and each `Dataset` with keys "features" and "num_rows". It would be really helpful if the description of the parameter `label_names` and the error message could be improved.<|||||>YES I completely agree. This was very confusing in the documentation. 
I also interpreted it to mean the list of keys in my label2id dictionary, but it turns out that all it wanted for `label_names` in my case was `['labels']`. That is the name of the column in my input that holds the labels. I hope this helps anyone who was still struggling to understand what "a list of the names of the keys for the labels in your input dictionary" means :)
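A sketch reflecting the final comment: `label_names` names the key(s) of the label tensors in each input batch, while human-readable class names live elsewhere. The output directory and class names are illustrative:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    label_names=["labels"],  # the key in each batch dict that holds the targets
)

# Human-readable class names belong on the model config instead, e.g.:
# model.config.id2label = {0: "sports", 1: "politics"}
# model.config.label2id = {"sports": 0, "politics": 1}
```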
transformers
10,907
closed
Exception: cannot import name 'Regex' from 'tokenizers'
The exception is raised from site-packages/transformers/convert_slow_tokenizer.py on the first call of `from transformers import XLMRobertaTokenizer`.
Versions: tokenizers 0.10.1, transformers 4.4.2.
03-25-2021 18:35:09
03-25-2021 18:35:09
same problem. Edit: I updated the package "tokenizers" to the latest version and it works fine.<|||||>Hi! Do you have a colab so we can reproduce the issue? Or some commands we can run to obtain the same environment you have and test it out?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,906
closed
Return global attentions (see #7514)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # 7514 (see discussion of March 22, 2021 with @patrickvonplaten ) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-25-2021 16:53:22
03-25-2021 16:53:22
transformers
10,905
closed
Add ImageFeatureExtractionMixin
# What does this PR do?
This PR adds a new `ImageFeatureExtractionMixin` to implement the common functionality needed for images (conversion to PIL Image / NumPy array, normalize, resize) in a framework-agnostic way. While it only adds support for torch (not TF) tensors as input, support for TF tensors is easy to add in the design and will be done when we have a TF model with a vision modality.

Along the way, this PR adds a new `is_vision_available` check (it depends only on PIL for now, but we can add other dependencies later on if we feel we need them; it could, for instance, check for torchvision when torch is installed) and the "dummy" vision objects.

I will work on adding tests tomorrow, but the general design can already be reviewed to check if it has everything needed.

cc @NielsRogge
03-25-2021 16:29:33
03-25-2021 16:29:33
transformers
10,904
closed
ONNX export: move sample input to same device as model when inferring shapes
# What does this PR do? Training a model on a GPU and exporting it afterwards to ONNX has raised a `RuntimeError`, because the model and the [sample input](https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/convert_graph_to_onnx.py#L196) in [`transformers.convert_graph_to_onnx.infer_shapes()`](https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/convert_graph_to_onnx.py#L161-L222) were not on the same device. This PR moves the output to the same device as the model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @mfuntowicz (according to `git blame`)
03-25-2021 16:01:17
03-25-2021 16:01:17
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
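A hedged sketch of the behaviour this PR fixes during shape inference: the tokenizer's sample input is moved to the model's device before the forward pass. The checkpoint name is illustrative.
```python
import torch
from transformers import AutoModel, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModel.from_pretrained("distilbert-base-uncased").to(device)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

tokens = tokenizer("This is a sample output used to trace the model", return_tensors="pt")
# Move every input tensor onto the model's device to avoid the device-mismatch RuntimeError.
tokens = {name: tensor.to(model.device) for name, tensor in tokens.items()}
outputs = model(**tokens)
print(outputs.last_hidden_state.shape)
```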
transformers
10,903
closed
Add 3D attention mask to T5 model (#9643)
# What does this PR do? It allows for 3D attention mask in T5 model (modeling_t5.py). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #9643 This is a solution for allowing the 3D attention mask in the T5 model by making it broadcastable. It is based on what is used in BERT. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-25-2021 15:28:59
03-25-2021 15:28:59
Hey @lexhuismans, Thanks a lot for your PR! Could you also add a test to verify that T5 can be used with a 3D mask? <|||||>Hey @patrickvonplaten, Thanks for your message. I had a shot in adding a test for the 3D attention mask. The test passed on my device. I based the test on a similar test for a default attention mask in test_modeling_bert.py. (Not sure if bert already tests for 3D attention mask?) Also, I did a rebase before pushing which is why there are so many other commits in-between. Let me know if something is still missing or incorrect so I can have a look at it. <|||||>I made a new PR with just the two commits #11197.
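An illustrative sketch of what the change enables: a broadcastable 3D attention mask of shape (batch_size, seq_len, seq_len) passed to the T5 encoder instead of the usual 2D padding mask. The all-ones mask is only a placeholder pattern.
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer(["translate English to German: The house is wonderful."], return_tensors="pt")
seq_len = enc["input_ids"].shape[1]
mask_3d = torch.ones(1, seq_len, seq_len, dtype=torch.long)  # (batch, query, key)

# Run the encoder stack directly with the per-query mask.
encoder_out = model.encoder(input_ids=enc["input_ids"], attention_mask=mask_3d)
print(encoder_out.last_hidden_state.shape)
```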
transformers
10,902
closed
Add `examples/run_ner_no_trainer.py`
This PR adds an example of token classification tasks (`"ner", "pos", "chunk"`) to show the functionalities of the new `accelerate` library. <hr> **Reviewers:** @sgugger
03-25-2021 15:06:06
03-25-2021 15:06:06
Ok, I added the missing part in Accelerate. In your code, before gathering the labels and predictions, you should pad them across the running processes if `pad_to_max_length` is False:
```python
if not args.pad_to_max_length:
    predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)
    labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=-100)
```
This should solve the issue when `pad_to_max_length` is left at `False`!<|||||>Thanks for your comment, @sgugger. I fixed the bugs and added documentation to the README.<|||||>Thank you for spotting the mistake with the unintentionally deleted `--label_all_tokens` argument in the argparser. I added that argument back.<|||||>Just checked on TPU for completeness and it runs perfectly fine there, so we're all good! Thanks for your contribution!!!
transformers
10,901
closed
Error with detecting cached files when running without Internet connection (related to #10067)
## Environment info - `transformers` version: 4.5.0.dev0 - Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik (related to #10235 and #10067) ## Information I'm trying to run ``` from transformers import BertTokenizer BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking") ``` from an environment without Internet access. It crashes even though I have all files downloaded and cached. The uncaught exception: https://github.com/huggingface/transformers/blob/5f1491d3b366d19cc08832d09bcfe007a2643089/src/transformers/file_utils.py#L1347-L1350 When `file_id == 'added_tokens_file'` `file_path` equals https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/added_tokens.json which does not exist. (https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1653) This results in line https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1294 throwing `ConnectTimeout` which is caught in https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1313 and further ignored until another exception in https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1672 which is not caught enywhere. When trying to get the same file with the internet is on the code work differently: line https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/file_utils.py#L1295 throws `requests.exceptions.HTTPError`, which is caught and processed here https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1674-L1677 The rest of the code works just fine after `resolved_vocab_files[file_id] = None` Using `BertTokenizer.from_pretrained(bert_version, local_files_only=True)` works just fine because of this condition: https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1668-L1672 The current workaround is to use `BertTokenizer.from_pretrained(bert_version, local_files_only=True)` but this does not allow to use same code with and without Internet. ## To reproduce Steps to reproduce the behavior: Run ``` from transformers import BertTokenizer BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking") ``` from env without internet but all the required cache files pre-downloaded. ## Expected behavior Works exactly as ``` from transformers import BertTokenizer BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking", local_files_only=True) ```
03-25-2021 13:58:37
03-25-2021 13:58:37
Related issue: https://github.com/huggingface/transformers/issues/9147, with proposed fix in https://github.com/huggingface/transformers/pull/9807<|||||>Why do we need this condition? https://github.com/huggingface/transformers/blob/1a3e0c4fe6868b4eb1105dfe601a79d7e5d11a0f/src/transformers/tokenization_utils_base.py#L1669-L1672 Was introduced here: https://github.com/huggingface/transformers/commit/863e553f75daeaf09aea9cd521ac3a3b3f09e29f Is it needed in any other scenario? Would it be better to do `unresolved_files.append(file_id)` unconditionally?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
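A hedged workaround sketch to keep a single code path working with and without a network until the cache detection is fixed; the broad `except` is deliberate because the exception type raised offline varies by version:
```python
from transformers import BertTokenizer

name = "bert-large-uncased-whole-word-masking"
try:
    tokenizer = BertTokenizer.from_pretrained(name)
except Exception:
    # Offline or unreachable hub: fall back to the local cache only.
    tokenizer = BertTokenizer.from_pretrained(name, local_files_only=True)
```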
transformers
10,900
closed
Getting a model to work on a system with no internet access
## Environment info
We are trying to use the model from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment on a system that has no connection to the internet. Normally one can do `pipeline = Pipeline(model="model_name")` and huggingface will fetch everything it needs from the internet. Unfortunately, some datasets reside within a highly protected environment that does not allow any internet connection, so we can't fetch our models. (Hell, we can't even copy/paste errors.) Uploading files to that environment is really cumbersome and every file needs to go through a review process.

Through trial and error, I have gotten the model and tokenizer to load, but it is now missing a "vocabulary". Before I go and submit a request for extra files to be uploaded, could someone confirm which files I need so that the model can be loaded offline? As far as I understand, I have to have every single file from here: https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment/tree/main and my script should look like `pipeline = Pipeline(model="/path/to/dir")`.

### Who can help
Models:
- pipelines: @LysandreJik

## Information
https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment
The problem arises when running in a restricted environment.

## To reproduce
Use the example script from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment and try to manually input the files.

## Expected behavior
The model should work just as if it were on a system connected to the internet.
03-25-2021 11:06:33
03-25-2021 11:06:33
Yes, if you place all files which you find on the model page on the hub in a directory, then it will work.<|||||>You can even just `git clone` the model repo<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
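A sketch of the suggested offline workflow: place (or `git clone`) every file from the model page into a local directory and point the pipeline at that path. The local path is a placeholder.
```python
from transformers import pipeline

local_dir = "/path/to/twitter-roberta-base-sentiment"  # cloned model repo
classifier = pipeline("sentiment-analysis", model=local_dir, tokenizer=local_dir)
print(classifier("covid cases are increasing fast!"))
```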
transformers
10,899
closed
updates sagemaker documentation
# What does this PR do? Extends **local environment** configuration and adds import to make it more clear. Also, replaced two links.
03-25-2021 09:57:52
03-25-2021 09:57:52
transformers
10,898
closed
run_glue_no_trainer: datasets -> raw_datasets
# What does this PR do? Use the correct variable (raw_datasets) instead of the module (datasets) where appropriate. The script will otherwise fail with "module object is not subscriptable". Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
03-25-2021 07:36:54
03-25-2021 07:36:54
Creating a quick PR now to alert to some issues I'm encountering, will backfill with proper GitHub issues after I knock off from work. Other issues: - [ ] If task_name is `None`, `metrics` is undefined, and will throw `... metrics used before assignment`
transformers
10,897
closed
[doc] Custom datasets page reference dataset library as NLP library
### Who can help
@sgugger

## Information
The page [Fine-tuning with custom datasets](https://huggingface.co/transformers/custom_datasets.html) references the datasets library a lot, but under its old name (the NLP library), and I've noticed that this still holds true in the [source .rst](https://github.com/huggingface/transformers/blob/master/docs/source/custom_datasets.rst) file. If it wasn't intentionally left unmodified, I'm willing to help by submitting a PR; I'm just filing the issue to confirm.
03-25-2021 07:01:29
03-25-2021 07:01:29
Yes, the name should indeed be updated! If you want to do a PR with this, please go ahead!
transformers
10,896
closed
save only the best performing checkpoint
# 🚀 Feature request
In the Trainer, enable an option to save only the best-performing checkpoints (rather than the newest).

## Motivation
Usually when we train a model we would like to keep only the best-performing checkpoints (on the dev set, according to the specified metric) rather than the newest checkpoints.
03-25-2021 06:14:31
03-25-2021 06:14:31
The checkpoints are saved to resume training in case you are interrupted, so saving only the best checkpoints wouldn't work with this. The `load_best_model_at_end` functionality already keeps track of the best checkpoint during training and reloads it at the end, so I think it should cover what you need.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>So, when I finish training, how can I load the best-performing checkpoint? @sgugger <|||||>When I check the `trainer_state.json` file I find this:
```
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 100.0,
"global_step": 559300,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
```
As shown above, "best_model_checkpoint" is null.<|||||>If you did not specify `--load_best_model_at_end` for your script, you won't get it automatically.
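A sketch of the `load_best_model_at_end` setup referred to above; the metric name and checkpoint limit are illustrative assumptions:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match the evaluation strategy
    load_best_model_at_end=True,      # reload the best checkpoint when training finishes
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    save_total_limit=2,               # keep only a couple of checkpoints on disk
)
```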
transformers
10,895
closed
Add missing global_attentions into the return_dict of Longformer models
The `global_attentions` is missing in the return_dict of `LongformerForSequenceClassification`, `LongformerForMaskedLM`, and `LongformerForTokenClassification` classes in `modeling_longformer.py`. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-25-2021 02:59:25
03-25-2021 02:59:25
Hey @joe32140, Sorry, I think this PR already fixes the problem: https://github.com/huggingface/transformers/pull/10906/files
transformers
10,894
closed
Invalid argument: Incompatible shapes: [24,1536,12,514] vs. [24,1536,12,513]
I had a 16 class classification dataset, but I am getting an error when using longformer, what am I doing wrong here? ``` from transformers import LongformerTokenizerFast, TFLongformerForSequenceClassification import tensorflow as tf import pickle tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096') model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=16, gradient_checkpointing=True) df = pd.read_csv("dataset.csv") df1 = pd.read_csv("dataset1.csv") y_train = pickle.load(open("y_train.pkl", "rb")) y_test = pickle.load(open("y_test.pkl", "rb")) x_train = tokenizer(df.posts.tolist(), max_length=1500, return_tensors="tf", padding="max_length", truncation=True) x_test = tokenizer(df1.posts.tolist(), max_length=1500, return_tensors="tf", padding="max_length", truncation=True) print(y_train.nunique()) # return 16 model.fit(x_train, y_train, batch_size=24, steps_per_epoch=steps_per_epoch, validation_data=(x_test, y_test)) ``` Why do I get this shape mismatch error? What am I doing wrong. ``` WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd3d7fdf9a0>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fd3d7fdf9a0>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. WARNING:tensorflow:From /home/intellectfaces/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version. Instructions for updating: The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU. WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. 
--------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-17-c08295c7f1ca> in <module> ----> 1 model.fit(x_train, y_train, batch_size=24, 2 steps_per_epoch=steps_per_epoch, 3 validation_data=(x_test, y_test)) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1155 _r=1): 1156 callbacks.on_train_batch_begin(step) -> 1157 tmp_logs = self.train_function(iterator) 1158 if data_handler.should_sync: 1159 context.async_wait() ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 865 tracing_count = self.experimental_get_tracing_count() 866 with trace.Trace(self._name) as tm: --> 867 result = self._call(*args, **kwds) 868 compiler = "xla" if self._jit_compile else "nonXla" 869 new_tracing_count = self.experimental_get_tracing_count() ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 926 # Lifting succeeded, so variables are initialized and we can run the 927 # stateless function. --> 928 return self._stateless_fn(*args, **kwds) 929 else: 930 _, _, _, filtered_flat_args = \ ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 3016 (graph_function, 3017 filtered_flat_args) = self._maybe_define_function(args, kwargs) -> 3018 return graph_function._call_flat( 3019 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access 3020 ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1958 and executing_eagerly): 1959 # No tape is watching; skip to running the function. -> 1960 return self._build_call_outputs(self._inference_function.call( 1961 ctx, args, cancellation_manager=cancellation_manager)) 1962 forward_backward = self._select_forward_and_backward_functions( ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager) 589 with _InterpolateFunctionError(self): 590 if cancellation_manager is None: --> 591 outputs = execute.execute( 592 str(self.signature.name), 593 num_outputs=self._num_outputs, ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 57 try: 58 ctx.ensure_initialized() ---> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Incompatible shapes: [24,1536,12,514] vs. [24,1536,12,513] [[node gradient_tape/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/BroadcastGradientArgs_1 (defined at <ipython-input-17-c08295c7f1ca>:1) ]] [[tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/cond_1/pivot_t/_985/_1717]] (1) Invalid argument: Incompatible shapes: [24,1536,12,514] vs. 
[24,1536,12,513] [[node gradient_tape/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/BroadcastGradientArgs_1 (defined at <ipython-input-17-c08295c7f1ca>:1) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_95332] Function call stack: train_function -> train_function ​ ``` ### Environment info transformers version: 4.3.3 Platform: Ubuntu 20.04 LTS Python version: 3.8.x PyTorch version (GPU?): 1.8.0+cu111 Tensorflow version (GPU?): 2.5.0-dev20210311 CUDA: cuda_11.1 Using GPU in script?: Yes Using distributed or parallel set-up in script?: No
03-25-2021 01:14:06
03-25-2021 01:14:06
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same error.<|||||>Me too
transformers
10,893
closed
[trainer] large scale models support
As I am integrating DeepSpeed ZeRO-3 which can run on hundreds of gpus and train models with Trillion of params https://github.com/huggingface/transformers/pull/10753 I see an emerging need to adjust how the trainer is used. Currently the usage is: ``` model = T5ForConditionalGeneration.from_pretrained("t5-small") trainer = Trainer(model=model, ....) trainer.train() ``` The problem is that this implies that the model can fit in the first node's general RAM and it's not always the case. So for example in my PR I propose the following change: ``` from transformers.integrations import deepspeed_is_zero3_enabled deepspeed_is_zero3_enabled(True) model = T5ForConditionalGeneration.from_pretrained("t5-small") ``` and I change `from_pretrained` to not init the model right away on cpu and to deal with pre-trained weights loading directly on all participating gpus - which allows loading models that are bigger than one gpu. Since the PR hasn't been reviewed yet - (I'm still working on it), the API may change, but the what I'm trying t communicate here is that we need DeepSpeed configuration before we create the model. This change is only needed for ZeRO3 and at the moment I have no knowledge of that until the trainer is created. (but I'm changing this). While we can automagically can discover if we are running under zero3 if a user is using cl args and passes `--deepspeed ds_config.json`, but I can't do this if a user isn't using the command line to launch the script. In addition in the Trainer we already have a ton of logic where we purposefully don't `model.to(device)` - so it's another indication where the model placement needs a special treatment. So the paradigm shift that may have to happen is where we init the `Trainer` first, gather all the info we need about how the model will be used. Then we init the model and pass it to the existing Trainer object, then we train. So something like: ``` trainer = Trainer(...) new_model_init_specific_args = trainer.model_init_specific_args() model = T5ForConditionalGeneration.from_pretrained("t5-small", **new_model_init_specific_args) trainer.model(model) trainer.train() ``` Please let me know if the need makes sense. I think I can manage the current PR with some hacks to avoid this, but eventually I think we will need to switch to something that I proposed here to move into the future where we support very large models. Nothing that needs to be done right away, just sharing the emerging need. Here is a bit of a preview of how I had to change `from_pretrained()`: https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/modeling_utils.py#L1062-L1068 https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/modeling_utils.py#L1124-L1135 This allows loading the exact partition of the params for each gpu w/o ever loading it in CPU or a single gpu (well state_dict loading is a problem at the moment as it still gets fully copied in cpu, but we will have to sort this out down the road). In the following addition, we invade `generation_utils` because now we have to make all gpus work in sync and can't stop running `forward` until all gpus finished generating their sequence. https://github.com/huggingface/transformers/blob/538a4026a1c6c477c1932b435dcce7cbacfc5898/src/transformers/generation_utils.py#L1273-L1287 so that's another new concept, but this one is less of an issue with how the Trainer is run - just wanted to give a complete picture of the major needs. 
(And this particular code will change a bit thanks to @patrickvonplaten's commentary - just didn't get to do it yet) Please also feel free to comment in the PR directly as that part of the code is pretty complete. I just made this issue separate to discuss the bigger need. Thank you! @sgugger, @LysandreJik, @patrickvonplaten
03-24-2021 23:54:02
03-24-2021 23:54:02
I'm not sure you are aware but the `Trainer` can take a `model_init` parameter that... well... creates the model ;-) Have you explored how it could help with this particular problem? The changes in the other parts of the lib look reasonable to me at first glance.<|||||>Thanks for the very detailed summary @stas00! All of the changes you propose make sense. The changes to `from_pretrained` look inevitable, and the approach you propose looks like it does the job without being invasive in other parts of the library that we want to keep readable like the model files. I know the API isn't final and prone to changes, but could we imagine a flag like `deepspeed_aware_instantiation` or `deepspeed_partitioning` in the `from_pretrained` method, rather than a `deepspeed_is_zero3_enabled(True)`? I think this would be more in line with how we manage things in the library from the user's perspective (which is principally through kwargs). I know none of this is final, but thinking of the API beforehand doesn't sound like a bad idea before everything is implemented :)<|||||>> I'm not sure you are aware but the `Trainer` can take a `model_init` parameter that... well... creates the model I need trainer init to complete before `model_init` is called then - i.e. I need the fully initialized Trainer object inside `model_init`. the model init will depends on how the training is expected to run, so I guess we need a sort of trainer - pre-init :)<|||||>> I know the API isn't final and prone to changes, but could we imagine a flag like `deepspeed_aware_instantiation` or `deepspeed_partitioning` in the `from_pretrained` method, rather than a `deepspeed_is_zero3_enabled(True)`? > I think this would be more in line with how we manage things in the library from the user's perspective (which is principally through kwargs). I know none of this is final, but thinking of the API beforehand doesn't sound like a bad idea before everything is implemented :) Absolutely, this would be ideal. the problem with this approach is that for example we will have to change all examples to support new arguments to `.from_pretrained`, that's why I am trying to make it work transparently. But the examples will still have to call `deepspeed_is_zero3_enabled()` and just pass the result to `.from_pretrained`... But we could support both - if the new argument is there we use it, if it's not there as a fallback we check the global state helper function. I'm trying to solve this on the global level and not just on the use-case where the user calls each function explicitly (which is not the case in examples, but of course we could change them too.) I'm totally not attached to the proposed way, we can choose what resonates the most. Thank you for your feedback, @LysandreJik and @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
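For reference, a minimal sketch of the `model_init` pattern discussed above (the checkpoint name and output path are placeholders; whether this pattern covers the ZeRO-3 case is exactly what the thread leaves open):

```python
from transformers import T5ForConditionalGeneration, Trainer, TrainingArguments

def model_init():
    # Called by the Trainer itself, so the model is only built once the
    # TrainingArguments (and any trial-specific settings) already exist.
    return T5ForConditionalGeneration.from_pretrained("t5-small")

args = TrainingArguments(output_dir="out")            # placeholder path
trainer = Trainer(args=args, model_init=model_init)   # datasets omitted for brevity
# trainer.train()
```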
transformers
10,892
closed
ImportError: cannot import name 'BertLayerNorm' when upgrading to latest transformers
# 📚 Migration ## Getting error when upgrading from pytorch-transformers to transformers <!-- Important information --> Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: Yes * [ ] my own modified scripts: Yes The tasks I am working on is: * [ ] an official GLUE/SQUaD task: No * [ ] my own task or dataset: No ## Details I am using Oscar repo (https://github.com/microsoft/Oscar), which uses an older version of Huggingface pytorch-transformers (https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e). I am trying to upgrade the repo to use latest version of transformers (https://github.com/huggingface/transformers). However, I am getting following error when I try to use the latest version of transformers: ``` Traceback (most recent call last): File "oscar/run_captioning.py", line 22, in <module> from oscar.modeling.modeling_bert import BertForImageCaptioning File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/oscar/modeling/modeling_bert.py", line 15, in <module> from transformers.models.bert.modeling_bert import BertLayerNorm ImportError: cannot import name 'BertLayerNorm' ``` I have tried running an example script given with the latest transformers repo- https://github.com/huggingface/transformers/blob/master/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py, which uses `BertLayerNorm`, but that gives following error: ``` $ python emmental/modeling_bert_masked.py Traceback (most recent call last): File "emmental/modeling_bert_masked.py", line 29, in <module> from emmental import MaskedBertConfig File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/examples/research_projects/movement-pruning/emmental/__init__.py", line 3, in <module> from .modeling_bert_masked import ( File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py", line 33, in <module> from transformers.models.bert.modeling_bert import ACT2FN, BertLayerNorm, load_tf_weights_in_bert ImportError: cannot import name 'BertLayerNorm' ``` I tried looking for definition of `BertLayerNorm` in the current version of transformers, but it is not present there. The definition is present in the older version of transformer, here - https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py#L223-L240 How can I import `BertLayerNorm` in my project using the latest transformer? ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: https://github.com/huggingface/transformers - Platform: x86_64 GNU/Linux - Python version: 3.6.8 - PyTorch version (GPU?): 1.7.0+cu101 (GPU) - Tensorflow version (GPU?): 2.3.0 (GPU) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e ## Checklist - [ Yes] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ yes] I checked if a related official extension example runs on my machine.
03-24-2021 22:10:16
03-24-2021 22:10:16
There is no `BertLayerNorm` anymore since all it was adding has been ported to main PyTorch. The BERT model is now just using `torch.nn.LayerNorm`. So to make your code work, instead of trying to import it from transformers, just define it as: ``` BertLayerNorm = torch.nn.LayerNorm ```<|||||>@sgugger your suggestion resolved the issue. Thanks!<|||||>Closing then :-)
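A self-contained sketch of that fix — the shim simply aliases `torch.nn.LayerNorm`, which is a drop-in replacement for the removed class:

```python
import torch

# In recent releases BertLayerNorm no longer exists; nn.LayerNorm is a drop-in replacement.
BertLayerNorm = torch.nn.LayerNorm

layer_norm = BertLayerNorm(768, eps=1e-12)
hidden_states = torch.randn(2, 4, 768)
print(layer_norm(hidden_states).shape)  # torch.Size([2, 4, 768])
```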
transformers
10,891
closed
Update Training Arguments Documentation: ignore_skip_data -> ignore_data_skip
Currently, the docs/docstring for TrainingArguments refers to `ignore_skip_data` as the argument for skipping dataloader replay on resume. However, the actual argument is called `ignore_data_skip`, which leads to errors if you just go off the docs. (Separate note: doing a full replay for long runs is pretty annoying --> thinking about a way to eliminate this / speed it up considerably, but would love to hear what the Transformers team is up to in this regard!). ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Tagging @sgugger as this has to do with documentation.
03-24-2021 20:08:59
03-24-2021 20:08:59
transformers
10,890
closed
Remove version warning in pretrained BART models
# What does this PR do? This PR fixes the warnings when loading any pretrained BART model: ``` Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartModelForSequenceClassification: ['model.encoder.version', 'model.decoder.version'] - This IS expected if you are initializing BartModelForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartModelForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ```
03-24-2021 19:00:42
03-24-2021 19:00:42
transformers
10,889
closed
Fix overflowing bad word ids
As of now, bad word IDs are not checked when added to the configuration/passed as inputs to the generate method. This is an issue when an invalid bad word ID is defined: if the vocab size is 30k, then defining a bad word ID for `30001` crashes the generation function with the following error: ``` torch.sparse.LongTensor(banned_mask.t(), indices, scores.size()).to(scores.device).to_dense().bool() RuntimeError: size is inconsistent with indices: for dim 1, size is 30000 but found index 30001 ``` Please let me know if you think this should raise a better error instead, rather than a warning.
03-24-2021 18:56:29
03-24-2021 18:56:29
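A hedged sketch of the kind of client-side validation this PR guards against — filtering out-of-vocabulary ids from `bad_words_ids` before calling `generate` (the gpt2 checkpoint and the ids below are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

bad_words_ids = [[50256], [99999]]  # the second entry is deliberately out of range
vocab_size = model.config.vocab_size
# Drop any bad-word sequence containing an id outside the vocabulary before generating.
bad_words_ids = [seq for seq in bad_words_ids if all(0 <= i < vocab_size for i in seq)]

inputs = tokenizer("Hello", return_tensors="pt")
output = model.generate(**inputs, bad_words_ids=bad_words_ids, max_length=20)
print(tokenizer.decode(output[0]))
```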
transformers
10,888
closed
Instantiate model only once in pipeline
# What does this PR do? The current implementation of `pipeline` is inefficient in the sense it instantiates the model twice just to guess the proper framework. This PR does not add any breaking change but reworks the function that infers the framework from the model to: 1. instantiate the proper class of the model (this avoids getting weird warnings about missing weights) 2. return the model instantiated so it's not re-instantiated later on. cc @mfuntowicz and @Narsil
03-24-2021 18:47:17
03-24-2021 18:47:17
Checked that all the slow tests run. I have no idea how to implement a test that checks the model is only loaded once, so I'm going to merge this, and if anyone wants to tackle that, it can be done in a separate PR.
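For reference, a short sketch of the usage this PR optimizes: passing an already-instantiated model to `pipeline` so the weights are only loaded once (the checkpoint name is a placeholder):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The instantiated model is reused directly instead of being rebuilt from the name.
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("This only loads the weights once."))
```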
transformers
10,887
closed
Error Loading a Hub Model (Multilingual-MiniLM)
## Code Snippet ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384") model = AutoModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384") ``` - `transformers` version: 4.1.1, 3.1.0 (error in both) ## Error ``` TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ``` ## Expected behavior The [model and tokenizer](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) loads correctly. The error could be reproduced in a [colab notebook](https://colab.research.google.com/drive/1uFnBN-WdpK4PiamvdyizMCzJsrBtewKx?usp=sharing) .
03-24-2021 17:55:59
03-24-2021 17:55:59
Please note: This checkpoint uses BertModel with XLMRobertaTokenizer so AutoTokenizer won't work with this checkpoint!
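Following that note, a sketch of loading the checkpoint with the explicit classes instead of the Auto classes:

```python
from transformers import BertModel, XLMRobertaTokenizer

name = "microsoft/Multilingual-MiniLM-L12-H384"
tokenizer = XLMRobertaTokenizer.from_pretrained(name)  # tokenizer class cannot be inferred from the config
model = BertModel.from_pretrained(name)

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```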
transformers
10,886
closed
Fix comment in modeling_t5.py
# What does this PR do? This PR completes an incomplete comment in the modeling_t5.py file. `# ourselves in which case we just need to make it broadcastable to all heads.` to ``` # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. ```
03-24-2021 16:06:23
03-24-2021 16:06:23
transformers
10,885
closed
Memory accumulates when training in a loop
The problem is that GPU memory allocated accumulates for each run. This eventually results in a `RuntimeError: CUDA out of memory` error. You can see the wandb GPU memory allocated, produced by the code below, here: [wandb](https://wandb.ai/jwa018/Bug/reports/Shared-panel-21-03-24-15-03-56--Vmlldzo1NTYxODI?accessToken=6euxv33b2zmga0uwegtws13724totvgs13hr6l1ni4bsek376cutfte3l3gtx5dz) I had the same problem when using Trainer's built in hyperparameter_search, which also runs training in a loop I assume. Similar issues from the past are: https://github.com/huggingface/transformers/issues/1742 https://github.com/huggingface/transformers/issues/1134 https://gitmemory.com/issue/huggingface/transformers/9929/770965726 ## Environment info - `transformers` version: 4.4.2 - Platform: Linux-4.15.0-128-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: I don't explicitly use GPU but I assume the Trainer object does. See code below - Using distributed or parallel set-up in script?: No ### Who can help Library: - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): `BertForSequenceClassification.from_pretrained('bert-base-cased')` The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I have my own dataset, but I've reproduced the issue wtih the Amazon polarity dataset from huggingface's datasets ## To reproduce Steps to reproduce the behavior: 1. Create Trainer object in a loop 2. Run training in the loop This code reproduces the error. 
```python from transformers import ( BertForSequenceClassification, BertTokenizer, Trainer, TrainingArguments, BertConfig, ) from datasets import load_dataset from torch.utils.data import Dataset import torch as th import wandb import os class AmazonDataset(Dataset): def __init__(self, data, tokenizer, max_len): self.tokenizer = tokenizer self.text = data['content'] self.labels = data['label'] self.max_len = max_len self.n_datapoints = len(self.labels) def __len__(self): return self.n_datapoints def __getitem__(self, idx): text = self.text[idx] assert type(text) is str inputs = self.tokenizer( text=text, text_pair=None, add_special_tokens=True, padding='max_length', truncation=True, max_length=self.max_len, return_tensors='pt' ) return { 'input_ids': th.flatten(inputs['input_ids']).type(th.long), 'token_type_ids': th.flatten( inputs['token_type_ids']).type(th.long), 'attention_mask': th.flatten( inputs['attention_mask']).type(th.long), 'labels': th.tensor(self.labels[idx], dtype=th.long) } def model_init(): return BertForSequenceClassification.from_pretrained( MODEL_NAME, return_dict=True ) if __name__ == '__main__': os.environ['WANDB_WATCH'] = 'all' tokenizer = BertTokenizer.from_pretrained('bert-base-cased') dataset = load_dataset('amazon_polarity') train = AmazonDataset( data=dataset['train'][:5000], tokenizer=tokenizer, max_len=300 ) test = AmazonDataset( data=dataset['test'][:500], tokenizer=tokenizer, max_len=300 ) MODEL_NAME = 'bert-base-cased' N_EPOCHS = 1 warmup_steps = int(len(train)*N_EPOCHS) for i in range(10): training_args = TrainingArguments( output_dir='output', do_train=True, do_eval=True, evaluation_strategy='steps', learning_rate=2e-5, weight_decay=0.1, logging_steps=50, per_device_eval_batch_size=30, per_device_train_batch_size=15, seed=1, num_train_epochs=N_EPOCHS, disable_tqdm=True, report_to=['wandb'], load_best_model_at_end=False, lr_scheduler_type='linear', warmup_steps=warmup_steps ) model_config = BertConfig( vocab_size=tokenizer.vocab_size, pretrained_model_name_or_path=MODEL_NAME, num_labels=2, return_dict=True ) trainer = Trainer( args=training_args, train_dataset=train, eval_dataset=test, tokenizer=tokenizer, model_init=model_init ) run = wandb.init( project='Bug', name=f'Bug{i}' ) trainer.train() run.finish() ``` ## Expected behavior The loops runs without memory accumulating for each run.
03-24-2021 14:41:48
03-24-2021 14:41:48
I tried adding
```python
del training_args
del trainer
del model_config
del run
gc.collect()
th.cuda.empty_cache()
```
to the end of each loop, but it does not seem to change anything. <|||||>I think the memory problem comes from the wandb integration. I do not see the problem without it: memory resets at 0 at each new step of the loop and goes back to the same max value.<|||||>Use torch.no_grad() inside the for loop<|||||>Seems like the same problem occurs with wandb's sweeps, so it looks like a wandb problem more than a huggingface one. I can't use wandb then, sucks :/<|||||>cc @borisdayma so you are aware.
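One way to test the wandb hypothesis above is to rerun the same loop with reporting disabled and watch allocated memory. A diagnostic sketch, not a confirmed fix, reusing `Trainer`, `TrainingArguments`, `train`, `test`, `tokenizer`, and `model_init` from the script in the issue:

```python
import torch

for i in range(3):
    training_args = TrainingArguments(
        output_dir="output",
        per_device_train_batch_size=15,
        num_train_epochs=1,
        report_to=[],  # no wandb (or other) reporting for this run
    )
    trainer = Trainer(
        args=training_args,
        train_dataset=train,
        eval_dataset=test,
        tokenizer=tokenizer,
        model_init=model_init,
    )
    trainer.train()
    del trainer
    torch.cuda.empty_cache()
    # If memory returns to roughly the same value on every iteration here,
    # the leak is in the reporting integration rather than the Trainer itself.
    print(i, torch.cuda.memory_allocated())
```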
transformers
10,884
closed
Wav2vec2 Training Loss not decreasing
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0
- Platform: Google Colab
- Python version: 3.6

@patrickvonplaten

Models: wav2vec2

I am following the recent implementation of wav2vec2 for fine-tuning: https://huggingface.co/blog/fine-tune-wav2vec2-english

Settings:
Pretrained model: "facebook/wav2vec2-base-960h", gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1, group_by_length=True, per_device_train_batch_size=32, evaluation_strategy="steps", num_train_epochs=1500, fp16=True, save_steps=400 (this means the model gets saved every 400 steps, which also fills up Google Drive), eval_steps=400, logging_steps=400, learning_rate=0.0005, warmup_steps=500, save_total_limit=2

Issue:

| Step | Training Loss | Validation Loss | Wer | Runtime | Samples Per Second |
| -- | -- | -- | -- | -- | -- |
| 400 | 5.063200 | 4.566135 | 1.000000 | 0.715900 | 6.984000 |
| 800 | 5.115200 | 4.514411 | 1.000000 | 0.732400 | 6.827000 |
| 1200 | 5.119200 | 4.485986 | 1.000000 | 0.724300 | 6.903000 |

The training loss is decreasing only marginally and the WER is still 1. What can be done to make training faster and more accurate? I also tried a higher learning rate, but the training loss was still very poor; it seems the model is not converging.
03-24-2021 12:51:13
03-24-2021 12:51:13
It seems that your number of training epochs is set to 1500. Set it to 5 for a quick trial.<|||||>It is giving a training loss of more than 100 in that case!<|||||>Is your training finished, or is it still running? If the latter, just try again with a smaller number of epochs, for example.<|||||>This is the output for 5 epochs: TrainOutput(global_step=10, training_loss=93.45169677734376, metrics={'train_runtime': 48.9011, 'train_samples_per_second': 0.204, 'total_flos': 2.6027104384512e+16, 'epoch': 5.0, 'init_mem_cpu_alloc_delta': 348007, 'init_mem_gpu_alloc_delta': 377847808, 'init_mem_cpu_peaked_delta': 18306, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 706705, 'train_mem_gpu_alloc_delta': 1120621568, 'train_mem_cpu_peaked_delta': 161498645, 'train_mem_gpu_peaked_delta': 7221921792})<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,883
closed
[Community notebooks] Add notebook for fine-tuning Bart with Trainer in two langs
Add a community notebook on fine-tuning Bart for summarization on wiki_lingua with Trainer. Includes: - a non-English example (English, French) - DataCollatorForSeq2Seq - label padding with -100 (ignore in loss) - Wandb integration
03-24-2021 09:58:01
03-24-2021 09:58:01
transformers
10,882
closed
AttributeError: 'RobertaConfig' object has no attribute 'attn_type'
**Environment** Google Colab. Installed the '4.5.0.dev0' version of transformers by `!pip install git+https://github.com/huggingface/transformers ` **Issues** Hi guys, I tried to fine-tune RoBERTa on WikiText-2 by following the commands shared in the examples/language-modeling section of the [github page](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling) as follows: `python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /tmp/test-mlm` but I ran into and error `AttributeError: 'RobertaConfig' object has no attribute 'attn_type'`. Looks like it cannot find the config needed. Please advise What Did I do wrong. Thanks! **To reproduce** `python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /tmp/test-mlm` **Error message I got:** `2021-03-24 08:51:51.464928: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 03/24/2021 08:51:52 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False 03/24/2021 08:51:53 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/tmp/test-mlm, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Mar24_08-51-52_f7b8b5062dd4, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/tmp/test-mlm, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=0) 03/24/2021 08:51:53 - WARNING - datasets.builder - Reusing dataset wikitext (/root/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/47c57a6745aa5ce8e16a5355aaa4039e3aa90d1adad87cef1ad4e0f29e74ac91) [INFO|configuration_utils.py:472] 2021-03-24 08:51:53,301 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b [INFO|configuration_utils.py:508] 2021-03-24 08:51:53,301 >> Model config RobertaConfig { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, 
"gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.5.0.dev0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|configuration_utils.py:472] 2021-03-24 08:51:53,358 >> loading configuration file https://huggingface.co/roberta-base/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b [INFO|configuration_utils.py:508] 2021-03-24 08:51:53,359 >> Model config RobertaConfig { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.5.0.dev0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,706 >> loading file https://huggingface.co/roberta-base/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730 [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/special_tokens_map.json from cache at None [INFO|tokenization_utils_base.py:1702] 2021-03-24 08:51:53,707 >> loading file https://huggingface.co/roberta-base/resolve/main/tokenizer_config.json from cache at None [INFO|modeling_utils.py:1051] 2021-03-24 08:51:53,860 >> loading weights file https://huggingface.co/roberta-base/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/51ba668f7ff34e7cdfa9561e8361747738113878850a7d717dbc69de8683aaad.c7efaa30a0d80b2958b876969faa180e485944a849deee4ad482332de65365a7 Traceback (most recent call last): File "/content/drive/MyDrive/Colab Notebooks/run_mlm.py", line 461, in <module> main() File "/content/drive/MyDrive/Colab Notebooks/run_mlm.py", line 306, in main use_auth_token=True if model_args.use_auth_token else None, File 
"/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1058, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/xlnet/modeling_xlnet.py", line 1309, in __init__ self.attn_type = config.attn_type AttributeError: 'RobertaConfig' object has no attribute 'attn_type'`
03-24-2021 09:26:42
03-24-2021 09:26:42
Found solution from [#10446](https://github.com/huggingface/transformers/issues/10446). Should follow this step instead: `git clone https://github.com/huggingface/transformers` `cd transformers` `pip install .`
transformers
10,881
closed
MlFlow log artefacts
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Darwin-20.3.0-x86_64-i386-64bit - Python version: 3.7.4 - PyTorch version (GPU?): 1.3.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: NER * [ ] my own task or dataset: (give details below) ## To reproduce The bug is for the PR #8016. Steps to reproduce the behavior: 1. MlFlow installed and the following env variables exported ``` export HF_MLFLOW_LOG_ARTIFACTS=TRUE export MLFLOW_S3_ENDPOINT_URL=<custom endpont> export MLFLOW_TRACKING_URI=<custom uri> export MLFLOW_TRACKING_TOKEN=<custom token> ``` 2. Run the token classification example with the following command ``` python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner \ --do_train \ --do_eval ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> When the training finishes, before the evaluation is performed, the `integrations.MLflowCallback` executes the method `on_train_end`, where if the env variable `HF_MLFLOW_LOG_ARTIFACTS` is set to `TRUE`, it logs the model artifacts to mlflow. The problem is, however, when the method `on_train_end` is called and the following line is executed: `self._ml_flow.log_artifacts(args.output_dir)`, the model is not stored on the `args.output_dir`. 
The model artefacts are stored once the `trainer.save_model()` is called, which is after the training ending. There is no callback in the `trainer.save_model()` that can be called from a `TrainerCallback` to save the model. There is a method `TrainierCallback.on_save()` method, that is called `trainer._maybe_log_save_evaluate()`, but even then the model is not available on the `output_dir`. Possible solutions would be to extend the `TrainierCallback` with `on_model_save()` callback method, insert the callback in the `trainer.save_model()`. Or, a workaround I have now is to change `on_train_end ` with `on_evaluate` in `integrations.MLflowCallback`, that is called after the model is saved in the example script. However, this is not the right solution since it depends on having set the `do_eval` parameter, and it is not semantically correct.
03-24-2021 08:33:58
03-24-2021 08:33:58
I have no written the `MLFlowCallback` (external integrations are maintained by contributors or the authors of the external libraries themselves) but I can confirm the command will indeed not log the model weights. The callback does get the model in the kwargs, so it's completely possible to get it from there and upload it, like it's done in the `WandbCallback`.<|||||>Thanks @sgugger for your fast reply! I tested your suggestion, but it doesn't quite work. I don't know about the `WandbCallback`, but in `MLFlowCallback` in the `on_train_end`, when the model is saved and logged to `mlflow`, the `mlflow` run is ended, and the later loggin of metrics, like evaluation and testing are logged in separate run, which is not what we want. I don't know if this happens when you create `fake_trainer` with `fake_trainer = Trainer(args=args, model=model, tokenizer=tokenizer)` or when the artifacts are logged with `self._ml_flow.log_artifacts(temp_dir)`. Also the current `MLFlowCallback` doesn't log testing metrics. I know this is not a fault of the callback, it is in the `trainer.predict()` method that doesn't have call to `log()` internally. The workaround is to call `trainer.log(metrics)` after ``` trainer.log_metrics("test", metrics) trainer.save_metrics("test", metrics) ``` in the example. <|||||>As I said, I haven't written that callback: integrations with reporting platforms are entirely maintained by the developers of those integrations or the community. You can open a PR with your fixes!<|||||>I understand. However, not having a callback hook on the `save_model` would be difficult. If somebody is interested, a dirty workaround I did is, 1. Register own `MLflowCallback` ``` trainer = Trainer( ... callbacks=[MLflowCallback] ) trainer.remove_callback(transformers.integrations.MLflowCallback) ``` 2. Add method in the class: ``` def log_artifact(self, output_dir): if self._initialized: logger.info("Logging artifacts. This may take time.") self._ml_flow.log_artifacts(output_dir) ``` 3. In the `run_ner.py` file, at the very end (or after ` trainer.save_model()`) added ``` ml_flow_callback = trainer.pop_callback(MLflowCallback) ml_flow_callback.log_artifact(training_args.output_dir) ``` Which removes the `MLflowCallback` and tells to log the model. I know it is dirty, but if I come up with better solution I will open PR. Thanks!<|||||>> However, not having a callback hook on the save_model would be difficult. Not that this hook would be called when each checkpoint is saved, not just at the end of training. So you would not only save the last model.<|||||>You are right, even with my hack of logging the saved model from the `output_dir`, transfers the checkpoint models as well, which is not what we need. I think modifying `MLflowCallback.on_train_end` with the code from `Trainer._save` should save only the model in temp directory and log it to mlflow. This way, we don't lose the current mlflow run and we dont save everything from the `output_dir`. ``` def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs): if self._initialized and state.is_world_process_zero and self._log_artifacts: logger.info("Logging artifacts. 
This may take time.") with tempfile.TemporaryDirectory() as temp_dir: if not isinstance(model, PreTrainedModel): if isinstance(unwrap_model(model), PreTrainedModel): state_dict = model.state_dict() unwrap_model(model).save_pretrained(temp_dir, state_dict=state_dict) else: logger.info("Trainer.model is not a `PreTrainedModel`, only saving its state dict.") state_dict = model.state_dict() torch.save(state_dict, os.path.join(temp_dir, WEIGHTS_NAME)) else: state_dict = model.state_dict() model.save_pretrained(temp_dir, state_dict=state_dict) if tokenizer is not None: tokenizer.save_pretrained(temp_dir) # Good practice: save your training arguments together with the trained model torch.save(args, os.path.join(temp_dir, "training_args.bin")) self._ml_flow.log_artifacts(temp_dir) ``` If you think this is a good idea, maybe it can be added in the `MLflowCallback` integration. Thanks!<|||||>That sounds like the best compromise yes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, any update on this problem? I am trying to log a model with mlflow but the artifacts aren't registered. Could you please help me with this ? Best Regards,<|||||>> Hello, any update on this problem? I am trying to log a model with mlflow but the artifacts aren't registered. > > Could you please help me with this ? > > Best Regards, Did you export `HF_MLFLOW_LOG_ARTIFACTS` environment variable and set it to `True`?<|||||>I was just trying with `HF_MLFLOW_LOG_ARTIFACTS` set and nothing was appearing in the mlflow artifacts
transformers
10,880
closed
Scheduler Not Pickleable
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 (Anaconda) - PyTorch version (GPU): 1.8.0+cu111 (True) - Tensorflow version (GPU): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help This seems to be a bug concerning the `Optimization` class. ## Information Model I am using (Bert, XLNet ...): BertMultipleChoice The problem arises when using: * [ ] my own modified scripts: I'm using transformers with Pytorch Lightning, and the distributed training function is provided by PyTorch Lightening. The tasks I am working on is: * Reading Comprehensive on RACE Dataset ## To reproduce Steps to reproduce the behavior: 1. load RACE into a datamodule 2. finetune BertMultipleChoice on this datamodule 3. start training with `gpus=-1` Output: ```text Traceback (most recent call last): File "train.local.py", line 35, in <module> trainer.fit(model, dm) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 498, in fit self.dispatch() File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 545, in dispatch self.accelerator.start_training(self) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training self.training_type_plugin.start_training(trainer) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 106, in start_training mp.spawn(self.new_process, **self.mp_spawn_kwargs) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes process.start() File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should start training on all gpus.
03-24-2021 07:11:24
03-24-2021 07:11:24
Succeed to reproduce with another model and dataset. https://gist.github.com/iamNCJ/a30afcbac392f6036bed65198ce5295e [gist](https://gist.github.com/iamNCJ/a30afcbac392f6036bed65198ce5295e) This gist is derived from [an example provided by the pytorch lightening team](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb), but it also causes this problem with multiple gpus. Output: ```text Traceback (most recent call last): File "glue.py", line 272, in <module> trainer.fit(model, dm) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 498, in fit self.dispatch() File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 545, in dispatch self.accelerator.start_training(self) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training self.training_type_plugin.start_training(trainer) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 106, in start_training mp.spawn(self.new_process, **self.mp_spawn_kwargs) File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/<user>/.conda/envs/RACE/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes process.start() File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/home/<user>/.conda/envs/RACE/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda' ```<|||||>I succeed to start training after using DDP instead of DDP Spawn, since DDP Spawn forces the model to be pickleable but DDP doesn't, but I still wonder if it's possible to make the scheduler pickleable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
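One hedged workaround for the pickling error above is to replace the `lr_lambda` closure built by `get_linear_schedule_with_warmup` with a module-level callable class, which `LambdaLR` can pickle for spawn-based DDP. The names in this sketch (`LinearWarmupDecay`, `get_picklable_linear_schedule`) are made up for illustration:

```python
from torch.optim.lr_scheduler import LambdaLR

class LinearWarmupDecay:
    """Picklable stand-in for the lr_lambda closure used by get_linear_schedule_with_warmup."""

    def __init__(self, num_warmup_steps, num_training_steps):
        self.num_warmup_steps = num_warmup_steps
        self.num_training_steps = num_training_steps

    def __call__(self, current_step):
        if current_step < self.num_warmup_steps:
            return float(current_step) / float(max(1, self.num_warmup_steps))
        return max(
            0.0,
            float(self.num_training_steps - current_step)
            / float(max(1, self.num_training_steps - self.num_warmup_steps)),
        )

def get_picklable_linear_schedule(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):
    # Instances of a module-level class pickle fine, unlike a locally defined lambda.
    return LambdaLR(optimizer, LinearWarmupDecay(num_warmup_steps, num_training_steps), last_epoch)
```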
transformers
10,879
closed
error type of tokenizer in __init__ definition
the orignal code in line 246 is ``` tokenizer: Optional["PreTrainedTokenizerBase"] = None, ``` it should be ``` tokenizer: Optional[PreTrainedTokenizerBase] = None, ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-24-2021 06:15:44
03-24-2021 06:15:44
transformers
10,878
closed
RuntimeError: while running run_common_voice.py (XLSR wav2vec finetuning week)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0.dev0 (I tried running it on 4.4.0 as well, gave the same error) - Platform: Ubuntu (running on a virtual machine) - Python version: 3.8 - PyTorch version (GPU?): 1.6.0 - Using GPU in script?: yes, running [this script](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py) - Using distributed or parallel set-up in script?: Distributed ### Who can help @patrickvonplaten (as per the message on slack group) <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: - [ ] the official example scripts: (give details below) - [ ] my own modified scripts: (give details below) Tried running both official command and modified script (running command changed based on the language) The tasks I am working on is - [ ] common voice dataset (ta) ## To reproduce Steps to reproduce the behavior: 1. run common voice script [from here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py) 2. For multi-gpu setup I used this command `python -m torch.distributed.launch \ --nproc_per_node 4 run_common_voice.py \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \ --dataset_config_name="tr" \ # use this argument to specify the language code --output_dir=./wav2vec2-large-xlsr-turkish-demo \ --overwrite_output_dir \ --num_train_epochs="5" \ --per_device_train_batch_size="16" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --save_steps="400" \ --eval_steps="400" \ --logging_steps="400" \ --save_total_limit="3" \ --freeze_feature_extractor \ --feat_proj_dropout="0.0" \ --layerdrop="0.1" \ --gradient_checkpointing \ --fp16 \ --group_by_length \ --do_train --do_eval ` ## Error: `RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. 
You can enable unused parameter detection by (1) passing the keyword argument 'find_unused_parameters=True' to 'torch.nn.parallel.DistributedDataParallel'; (2) making sure all 'forward' function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's 'forward' function. Please include the loss function and the structure of the return value of 'forward' of your module when reporting this issue (e.g. list, dict, iterable).` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Model would train without any error <!-- A clear and concise description of what you would expect to happen. -->
03-24-2021 04:08:48
03-24-2021 04:08:48
I am experiencing the same error. I have been working on it all day without solving it. I have tried, through Docker, CUDA versions 9.2, 10.0 and 10.1 and different versions of PyTorch including 1.3, 1.5 and 1.6. I have tried different combinations of GTX1080 and RTX2080. Adding "--ddp_find_unused_parameters=true" to the python command does not fix the error. Any help is really appreciated as I am working on the fine-tuning week @patrickvonplaten <|||||>I am experiencing this error too. CUDA 11.2 4xT4 - 16Gb `--dataset_config_name="ru"`<|||||>@raja1196 I think I have found the bug. Could you try setting gradient_checkpointing to False in run_common_voice.py, as written below: ``` gradient_checkpointing: Optional[bool] = field( default=False, metadata={ "help": "If True, use gradient checkpointing to save memory at the expense of slower backward pass." }, ) ``` And then running the script without gradient_checkpointing as follows: `python -m torch.distributed.launch \ --nproc_per_node 4 run_common_voice.py \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \ --dataset_config_name="tr" \ # use this argument to specify the language code --output_dir=./wav2vec2-large-xlsr-turkish-demo \ --overwrite_output_dir \ --num_train_epochs="5" \ --per_device_train_batch_size="16" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --save_steps="400" \ --eval_steps="400" \ --logging_steps="400" \ --save_total_limit="3" \ --freeze_feature_extractor \ --feat_proj_dropout="0.0" \ --layerdrop="0.1" \ --fp16 \ --group_by_length \ --do_train --do_eval` This solves the problem in my case and now I am able to run it with two GPUs. If it works for you, I will open a PR<|||||>@ivangtorre's solution works. Unfortunately, I have to reduce the batch size quite a lot. Update: I stopped using distributed training for now, as I did not get any performance gains somehow. Does anyone know whether the CTC loss of this model is computed in a distributed way, or are the outputs gathered on a single GPU before computing the loss?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
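Editor's note: the error message in this issue itself points at unused-parameter detection in DDP. Below is a hedged sketch of what that option looks like when wrapping a model by hand, outside of the `Trainer` (which exposes the same knob via the `ddp_find_unused_parameters` training argument mentioned above); the `local_rank` handling is a placeholder.

```python
# Sketch only: enabling unused-parameter detection when wrapping a model
# in DistributedDataParallel manually.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP


def wrap_model(model: torch.nn.Module, local_rank: int) -> DDP:
    model = model.to(local_rank)
    # find_unused_parameters=True lets DDP tolerate parameters that did not
    # take part in producing the loss (e.g. when layerdrop or gradient
    # checkpointing changes the computation graph between steps).
    return DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```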
transformers
10,877
closed
`XLMRobertaTokenizer` `encode_plus` api producing `<unk>` for a valid token
## Environment info - `transformers` version: 4.5.0.dev0 (latest master) - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information `XLMRobertaTokenizer` `encode_plus` api producing `<unk>` for a valid token ## To reproduce ```Python from transformers import XLMRobertaTokenizer tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base") text = "请在黄鹂餐厅预订今晚7点半的位置。" toks = tokenizer.tokenize(text) assert toks == ['▁', '请', '在', '黄', '鹂', '餐厅', '预订', '今晚', '7', '点', '半', '的位置', '。'] output = tokenizer.encode_plus(text, add_special_tokens=False) toks_converted = tokenizer.convert_ids_to_tokens(output['input_ids']) assert toks_converted == ['▁', '请', '在', '黄', '<unk>', '餐厅', '预订', '今晚', '7', '点', '半', '的位置', '。'] ``` ## Expected behavior ```Python assert toks_converted[4] == '鹂' # not <unk> ```
03-24-2021 02:54:00
03-24-2021 02:54:00
Hi, thanks for opening an issue! Seen with @n1to; this comes from the Unigram-based tokenizers: Unigram's tokenize cuts the string into tokens and then converts them to IDs. Unknown tokens are detected during the token-to-ID conversion, rather than when the string is cut into tokens. This is different from BPE, where the string is cut into a lot of independent characters, converted to IDs, then merged together. This is also different from WordPiece, where we start from the word and cut it until we find a token representation for each word piece; if we don't, then that's unknown.<|||||>Hi @LysandreJik, thanks for looking into this and sharing the info. Based on your response it seems that for the `XLMRobertaTokenizer` tokenizer, we **cannot** guarantee that the following holds: ```Python assert tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))) == text ``` Am I right?<|||||>I believe that's true for any tokenizer. If the tokenizer cannot tokenize one part of your text as it is not part of your vocabulary, then some information is lost.<|||||>Hey guys, for the sake of completeness, here's the double check with the reference implementation/tokenizer: ```python import torch xlmr = torch.hub.load('pytorch/fairseq', 'xlmr.base') xlmr.eval() tokens = xlmr.encode('请在黄鹂餐厅预订今晚7点半的位置。') ``` It outputs: ```bash tensor([ 0, 6, 9736, 213, 19390, 3, 113638, 209093, 155755, 966, 2391, 6193, 57486, 30, 2]) ``` 3 is the ID for the unknown token, but you can "reverse" tokenization with: ```python xlmr.decode(tokens) ``` This outputs: ```bash '请在黄<unk>餐厅预订今晚7点半的位置。' ``` So the `<unk>` token also appears :)<|||||>@LysandreJik, agreed that for any tokenizer some information loss might happen if the token is not part of the vocab. I guess the `SentencePiece` tokenizer is unique in the sense that - `SentencePieceProcessor provides a lossless data conversion that allows the original raw sentence to be perfectly reconstructed from the encoded data, i.e., Decode(Encode(input)) == input.` - where Encode and Decode correspond to tokenization and de-tokenization respectively. - https://github.com/google/sentencepiece/blob/bc53923a9147dc8ffa54034c8ed774de78cc4d39/src/sentencepiece_processor.h#L118 Because of this, in the `tokenize` API for `XLMRobertaTokenizer` there is no `<unk>` when the string is being cut into tokens, but in the `encode` API, when the tokens are converted to IDs, `<unk>` is permitted, as @stefan-it confirmed. https://github.com/google/sentencepiece/blob/9cf136582d9cce492ba5a0cfb775f9e777fe07ea/src/unigram_model.cc#L433 <|||||>Thanks folks for the discussion and insight into the behaviour of tokenizers in HF. Closing this issue, since it's not a bug per se.<|||||>Hi guys, I tried to reproduce the code at the beginning of the topic and I get the following: ![token roberta](https://user-images.githubusercontent.com/70067770/115905249-73e70100-a42b-11eb-8aa8-accbbc7a1e5b.png)
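Editor's note: since Unigram only maps pieces to the unknown ID at the conversion step, one small check (a sketch, using the same slow tokenizer as in the issue) is to compare each piece's ID against `unk_token_id` to see which pieces will be lost:

```python
# Sketch: list the pieces that will become <unk> at the token-to-ID step.
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
text = "请在黄鹂餐厅预订今晚7点半的位置。"

for tok in tokenizer.tokenize(text):
    tok_id = tokenizer.convert_tokens_to_ids(tok)
    if tok_id == tokenizer.unk_token_id:
        # The piece exists after splitting, but has no ID in the vocabulary.
        print(f"{tok!r} -> <unk>")
```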
transformers
10,876
closed
Add new notebook links in the docs
# What does this PR do? This PR adds links to the three missing tasks in the notebooks page: multiple choice, translation and summarization.
03-23-2021 23:46:22
03-23-2021 23:46:22
Oh actually, an additional comment: the title of the summarization notebook is currently "Text classification on GLUE" (and same for the translation)<|||||>Fixed the titles and the sentence, so merging. Thanks for the review!
transformers
10,875
closed
Fix test_trainer_distributed
# What does this PR do? #10861 introduced a change in the way metrics are prefixed by default in `Trainer.predict`, which in turn made `tests/test_trainer_distributed.py` fail. This PR fixes that.
03-23-2021 22:56:46
03-23-2021 22:56:46
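Editor's note on the prefix change referenced above: a hedged sketch of the `metric_key_prefix` argument on `Trainer.predict` (the dataset is a placeholder); metric names come back prefixed, e.g. `test_loss`.

```python
# Sketch: metrics returned by Trainer.predict are keyed with a prefix
# (default "test") that can be overridden via metric_key_prefix.
from transformers import Trainer


def show_predict_metrics(trainer: Trainer, test_dataset) -> None:
    output = trainer.predict(test_dataset, metric_key_prefix="test")
    for name, value in output.metrics.items():
        print(name, value)  # keys look like "test_loss", "test_runtime", ...
```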
transformers
10,874
closed
transformers.models.auto.tokenization_auto
I installed transformers 3.5.1 to match the version on GitHub, using !pip3 install transformers==3.5.1 and !pip3 install transformers, but then when I try to import SentenceTransformer using: from sentence_transformers import SentenceTransformer I get ModuleNotFoundError: No module named 'transformers.models.auto.tokenization_auto'. I am not sure how to resolve this issue.
03-23-2021 20:12:11
03-23-2021 20:12:11
Could you reproduce this in a colab so that we can take a look? Thanks!<|||||>I reworked my code, so I don't get the error anymore; I'm honestly not sure how it got fixed<|||||>I think it may be because I didn't restart the runtime <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,873
closed
Wav2Vec2/XLSR-Wav2Vec2 Pre-Training
Dear 🤗-team, I'd like to do pre-training with your implementation of Wav2Vec2 and/or XLSR-Wav2Vec2. I was wondering if there are any plans to add such scripts (or even a demo) to the repository? PS: I already did pre-training in NVIDIA NeMo, but I'm having problems with porting my checkpoints. Being able to do everything within the Hugging Face framework would be great.
03-23-2021 19:32:07
03-23-2021 19:32:07
Wav2Vec2 Pre-Training is more important.<|||||>I would also like to be able to pre-train a Wav2Vec2 model using my own raw audio files in a self-supervised way. It would be even better if I could use a pre-trained model as a starting point. Is there any way to do this currently?<|||||>@czonios Yes, you can fine-tune a Wav2Vec2 model! Please check [this blogpost](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) by @patrickvonplaten. Pre-training is not available as of now.<|||||>Hey, We should have Wav2Vec2 Pretraining added in ~2 weeks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>#11306 is under way I think<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,872
closed
Training GPT2 does not use GPU
I'm using [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) to train GPT2. When using the [example from the documentation](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) it works fine and uses the GPU. But when I start it on my custom dataset it does not use any GPUs. Can you give me a tip on how to get it to use the GPUs, or what might be wrong? This works and uses GPUs: ``` python run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /netscratch/nehring/projects/opensubtitles/datadir/tmp \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 ``` This starts training but it does not use GPUs: ``` python run_clm.py \ --model_type gpt2 \ --tokenizer_name gpt2 \ --train_file $DATA_PATH/train.txt \ --validation_file $DATA_PATH/valid.txt \ --do_train \ --do_eval \ --output_dir /netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --num_train_epochs 10 ``` This is my environment as created by `transformers-cli env`. It says that I did not install TensorFlow, but when I run `python -c 'import tensorflow as tf; print(tf.__version__)'` the command line prints "1.15.0". ``` - `transformers` version: 4.5.0.dev0 - Platform: Linux-5.4.0-65-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: this is the problem - Using distributed or parallel set-up in script?: no ``` This is part of the output of run_clm.py. It says `_n_gpu=6`, so the GPUs are detected but for some reason they are not used. ``` 03/23/2021 18:34:14 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Mar23_18-34-14_graz, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/netscratch/nehring/projects/opensubtitles/datadir/models/gpt2-small, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=6) ```
03-23-2021 17:47:17
03-23-2021 17:47:17
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
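Editor's note on the report above: a quick, hedged way to check whether PyTorch actually sees the GPUs that `_n_gpu=6` suggests is a few lines of diagnostics. This does not pinpoint the cause, but it narrows the problem down to the environment versus the script.

```python
# Sketch: quick diagnostics to confirm PyTorch can see and use the GPUs.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    x = torch.ones(2, 2).to("cuda")  # fails loudly if the CUDA runtime is broken
    print("Tensor device:", x.device)
```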
transformers
10,871
closed
config.json not created for Wav2Vec2ForCTC in ASR fine-tuning
After saving the trained model I cannot load it. The error says to make sure that '/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi' is the correct path to a directory containing a config.json file. Here is how I load the model: model = Wav2Vec2ForCTC.from_pretrained("/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi").to("cuda") from transformers import TrainingArguments training_args = TrainingArguments( output_dir="/content/gdrive/MyDrive/wav2vec2-large-xlsr-hindi", group_by_length=True, per_device_train_batch_size=16, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=30, fp16=True, save_steps=400, # this would mean the model gets saved every 400 steps, which also means Google Drive gets full eval_steps=400, logging_steps=400, #learning_rate=3e-4, learning_rate=0.1, # this is just for demo warmup_steps=500, save_total_limit=2, ) No config.json is created in my saved model directory.
03-23-2021 17:45:12
03-23-2021 17:45:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
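Editor's note, a hedged sketch (not a diagnosis of the issue above): explicitly calling `trainer.save_model(...)` or `model.save_pretrained(...)` after training writes a config.json next to the weights, so `from_pretrained` can find it; the function below is made up for illustration.

```python
# Sketch: explicitly saving the fine-tuned model so that config.json exists
# in the output directory before calling from_pretrained on it.
from transformers import Trainer, Wav2Vec2ForCTC


def save_and_reload(trainer: Trainer, output_dir: str) -> Wav2Vec2ForCTC:
    trainer.save_model(output_dir)  # writes the model weights and config.json
    return Wav2Vec2ForCTC.from_pretrained(output_dir)
```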
transformers
10,870
closed
Sm trainer smp init fix
# What does this PR do? Fixes `SageMakerTrainer` `smp.init` for `smp 1.3`. It also replaces `is_smdistributed_available` with the more robust `is_sagemaker_model_parallel_available`.
03-23-2021 17:14:58
03-23-2021 17:14:58
LGTM! I ran one training job successfully with these changes. <|||||>I tested it with `pytorch1.7.1` `564829616587.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.7.1-transformers4.4.0-py36-gpu-cu110-ubuntu18.04` and `pytorch1.6` `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.6.0-transformers4.4.2-gpu-py36-cu110-ubuntu18.04`
transformers
10,869
closed
Camembert-base MaskedLM has different config settings than the actual camembert-base
## Environment info - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-45-generic-x86_64-with-debian-10.2 - Python version: 3.7.3 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik @sgugger ## Information Model I am using: [CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html#camembert): The problem arises when using: [CamembertForMaskedLM](https://huggingface.co/transformers/model_doc/camembert.html#camembertformaskedlm) The tasks I am working on is: I am training Camambert model with MaskedLM head. (using a private dataset) ## To reproduce Steps to reproduce the behaviour: 1. load camambert config file: ```python from transformers import CamembertConfig config = CamembertConfig() config ``` output: ``` CamembertConfig { "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "camembert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "type_vocab_size": 2, "vocab_size": 30522 } ``` 2. load camambert tokenizer ```python from transformers import CamembertTokenizer tokenizer = CamembertTokenizer.from_pretrained(TOKENIZER_DIR) ``` 3. load camembert for MLM ``` from transformers import CamembertForMaskedLM model = CamembertForMaskedLM.from_pretrained( model_name_or_path, config=config) ``` output: ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-94-3a1a4ae80b3a> in <module> 1 from transformers import CamembertForMaskedLM ----> 2 model = CamembertForMaskedLM.from_pretrained( model_name_or_path, config=config) /usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1154 raise RuntimeError( 1155 "Error(s) in loading state_dict for {}:\n\t{}".format( -> 1156 model.__class__.__name__, "\n\t".join(error_msgs) 1157 ) 1158 ) RuntimeError: Error(s) in loading state_dict for CamembertForMaskedLM: size mismatch for roberta.embeddings.word_embeddings.weight: copying a param with shape torch.Size([32005, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]). size mismatch for roberta.embeddings.position_embeddings.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([512, 768]). size mismatch for roberta.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]). size mismatch for lm_head.bias: copying a param with shape torch.Size([32005]) from checkpoint, the shape in current model is torch.Size([30522]). size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([32005, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]). 
``` ## Expected behavior If I replace step 3 with: ```python from transformers import CamembertForMaskedLM model = CamembertForMaskedLM.from_pretrained( model_name_or_path) ```` I won't receive any error, but it's not the correct config details (`model.config`) when I print out the config details: output: ``` CamembertConfig { "_name_or_path": "./models_weight/camembert-base", "architectures": [ "CamembertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 5, "eos_token_id": 6, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "camembert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 1, "position_embedding_type": "absolute", "type_vocab_size": 1, "vocab_size": 32005 } ``` The correct camembert config is provided [here](https://huggingface.co/camembert-base/resolve/main/config.json).
03-23-2021 17:07:04
03-23-2021 17:07:04
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This is still a problem and if someone could address this would be great. <|||||>Hello! You're instantiating a `CamembertConfig` without specifying any parameters, so it is initialized with the defaults (which are based on the BERT architecture as the configuration inherits from it). It is not expected to be the same as `camembert-base`, nor is it specified in the documentation. If you would like to obtain a configuration object that is the exact same as `camembert-base`, I would recommend instantiating your configuration object from that checkpoint: ```py from transformers import CamembertConfig config = CamembertConfig.from_pretrained("camembert-base") ``` You won't have a problem to load the model then: ```py from transformers import CamembertForMaskedLM model = CamembertForMaskedLM.from_pretrained("camembert-base", config=config) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,868
closed
[Examples] Added predict stage and Updated Example Template
# What does this PR do? * Adds a Predict stage in the `run_xnli.py` text-classification example * Updates the Example Template for the Predict stage Fixes #10482 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #10482 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
03-23-2021 15:58:09
03-23-2021 15:58:09
transformers
10,867
closed
Amazon SageMaker Documentation
# What does this PR do? Adds the Documentation page for "Run training on Amazon SageMaker".
03-23-2021 14:56:00
03-23-2021 14:56:00
transformers
10,866
closed
add processing "cache" and augmentation
# What does this PR do? This PR stores the resampled Common Voice dataset on disk to speed up multiple runs. Furthermore, it adds data augmentation to double the dataset size. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
03-23-2021 14:11:03
03-23-2021 14:11:03
Hey @flozi00, Could you make the code quality test pass -> then I think we can merge this one :-)
transformers
10,865
closed
Update the example template for a no Trainer option
# What does this PR do? Expand the template of new examples with a new option to build an example like the new [run_glue_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue_no_trainer.py).
03-23-2021 13:31:05
03-23-2021 13:31:05
transformers
10,864
closed
transformers import error
## Environment info - `transformers` version: 4.4.2 - Platform: MACOS - Python version: Python 3.6.13 - PyTorch version (GPU?): 1.8.0 (cpu) - Tensorflow version (GPU?): tensorflow-cpu 2.4.1 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik I set up my conda env as below <img width="470" alt="Screenshot 2021-03-23 8:57:52 PM" src="https://user-images.githubusercontent.com/55866896/112142854-70890b80-8c1a-11eb-8752-14857d4529e7.png"> and all the required libraries are listed in !pip3 list <img width="310" alt="Screenshot 2021-03-23 9:05:00 PM" src="https://user-images.githubusercontent.com/55866896/112143640-84813d00-8c1b-11eb-8e6c-735d0a57e1c2.png"> but whenever I try to import BertTokenizer (from transformers import BertTokenizer), an ImportError occurs. <img width="662" alt="Screenshot 2021-03-23 8:59:13 PM" src="https://user-images.githubusercontent.com/55866896/112142993-a0381380-8c1a-11eb-8ec1-722d71a947ed.png"> I tried the whole process in Google Colab and it works well. I have no idea why it does not work in my MacBook local environment. Please help me.
03-23-2021 12:08:00
03-23-2021 12:08:00
Hello! I would guess something is wrongly set up in your local environment, especially if it works in Colab! Are you sure you're using the `python` from the same environment as your `pip3`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
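Editor's note, a hedged aside on the environment question above: a few lines make it easy to confirm which interpreter and which transformers installation are actually being used.

```python
# Sketch: check that the running interpreter matches the environment
# where transformers was installed with pip3.
import sys

import transformers

print("Interpreter:", sys.executable)
print("transformers version:", transformers.__version__)
print("Installed at:", transformers.__file__)
```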
transformers
10,863
closed
Fix p_mask cls token masking in question-answering pipeline
# What does this PR do? It fixes a really small bug described in detail in the issue - it only adds a condition to the if statement responsible for unmasking the `cls_token_id` in the `p_mask` used in the question answering pipeline. Fixes #10810 ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
03-23-2021 11:01:00
03-23-2021 11:01:00
transformers
10,862
closed
Fixed confusing order of args in generate() docstring
# What does this PR do? This PR addresses a (IMO) confusing parameter description in the docstring of `generate()`. Specifically, it is about the parameter `prefix_allowed_tokens_fn` which has to be of type `Callable[[int, torch.Tensor], List[int]]`. Since the description says _"This function takes 2 arguments `inputs_ids` and the batch ID `batch_id`"_ I created a function ```Python def restrict_vocab(input_ids, batch_id): # incorrect! # logic ``` But then I realised that the order of the parameters is wrong (the type hint would indicate that `batch_id` comes first though): ```Python def restrict_vocab(batch_id, input_ids): # correct :) # logic ``` Therefore, I fixed the order of the parameters in the description of `prefix_allowed_tokens_fn`, i.e. exchanged `inputs_ids` with `batch_id`. Now it should be easier to read and understand. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Documentation -> @sgugger
03-23-2021 08:31:42
03-23-2021 08:31:42
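Editor's note: to complement the docstring fix above, here is a hedged end-to-end sketch of `prefix_allowed_tokens_fn` with the corrected argument order (`batch_id` first, then `input_ids`); the restriction to a single fixed token is arbitrary and only for illustration.

```python
# Sketch: constrained generation where every step may only produce one fixed token.
from typing import List

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

allowed_id = tokenizer.convert_tokens_to_ids("Ġthe")  # arbitrary choice


def restrict_vocab(batch_id: int, input_ids: torch.Tensor) -> List[int]:
    # batch_id comes first, input_ids (the prefix generated so far) second.
    return [allowed_id]


inputs = tokenizer("The quick brown", return_tensors="pt")
out = model.generate(**inputs, max_length=10, prefix_allowed_tokens_fn=restrict_vocab)
print(tokenizer.decode(out[0]))
```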
transformers
10,861
closed
[trainer] Fixes Typo in Predict Method of Trainer
# What does this PR do? Fixes typo in Predict Method of Trainer. This will enable saving files with the correct prefix `test` in Predict stage. Earlier it was saving it with the `eval` prefix for both predict and evaluate stage. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? This is discussed in #10482 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?: @stas00 @sgugger
03-23-2021 04:57:22
03-23-2021 04:57:22
transformers
10,860
closed
The exact English and Chinese pretraining data that are exactly the same as the BERT paper's pretraining data
(Sorry, I cannot visit the forum.) Does anyone know where to get them? Thank you.
03-23-2021 03:43:44
03-23-2021 03:43:44
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,859
closed
[file_utils] import refactor
This is just a small import code refactor; the current code looks a bit odd due to 8 levels of nesting - no functional change. @LysandreJik, @sgugger
03-23-2021 02:30:05
03-23-2021 02:30:05
transformers
10,858
closed
If trainer._maybe_log_save_evaluate() is run twice in a row, a "ZeroDivisionError: float division by zero" is raised
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-139-generic-x86_64-with-debian-buster-sid - Python version: 3.7.0 - PyTorch version (GPU?): 1.8.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Irrelevant - Using distributed or parallel set-up in script?: no ### Who can help Library: - trainer: @sgugger ## Information In `transformers/trainer.py`, if the function `trainer._maybe_log_save_evaluate()` is called twice in a row, `self.state.global_step - self._globalstep_last_logged` will be zero, which raises a `ZeroDivisionError` exception in line 1044: `logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)` This situation occurs when an epoch is finished but `_maybe_log_save_evaluate()` is called twice, in line 983 and line 989, before the next epoch starts.
03-23-2021 01:57:52
03-23-2021 01:57:52
@niuzaisheng The function starting with '_' is a hint that this is an internal function and is not for external use. Is there any particular reason you are using this and not any alternatives?<|||||>Looking into the function `_maybe_log_save_evaluate` should ideally not contain line 1225 `self._globalstep_last_logged = self.state.global_step`. I can open a PR and rectify this but not sure if this is a necessary change. @sgugger Please comment.<|||||>Because my training ended after the first epoch and raised `ZeroDivisionError` exception, I looked inside the source code. I think this is the reason why my training cannot go on.<|||||>@niuzaisheng Can you provide sample code and the full stacktrace?<|||||>Sorry, I can't give out all my training script. But this problem appeared coincidentally. If `should_log ` just right at the end of an epoch, the func `_maybe_log_save_evaluate` will be called twice continuously. <img width="1280" alt="截屏2021-03-23 下午7 30 29" src="https://user-images.githubusercontent.com/29062892/112139872-429dca00-8c0e-11eb-82f5-d6c20b65bd0e.png"> here is my stacktrace: ``` 100%|█████████▉| 18028/18030 [5:21:43<00:01, 1.49it/s] 100%|█████████▉| 18029/18030 [5:21:43<00:00, 1.52it/s] 100%|██████████| 18030/18030 [5:21:44<00:00, 1.62it/s]{'loss': 1.3667, 'learning_rate': 0.0, 'epoch': 10.0, 'step': 18030} Traceback (most recent call last): File "/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/XXXX/XXXX/XXXX/run_train.py", line 297, in <module> main() File "/home/XXXX/XXXX/XXXX/run_train.py", line 257, in main model_path=model_args.name_or_path if os.path.isdir(model_args.name_or_path) else None File "/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/trainer.py", line 989, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/home/XXXX/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/trainer.py", line 1044, in _maybe_log_save_evaluate logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4) ZeroDivisionError: float division by zero 100%|██████████| 18030/18030 [5:21:54<00:00, 1.07s/it] ``` My `logging_steps` is set to 10 steps. And 18030 is just at the end of an epoch, Coincidentally.<|||||>So, at the first time in line 983 call `_maybe_log_save_evaluate()`, ` self._globalstep_last_logged ` will be set equal to `self.state.global_step` by line 1052. At second time in line 989 call `_maybe_log_save_evaluate()` , `logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)` will raise ZeroDivisionError in line 1044. I can avoid this problem by modifying `logging_steps` to other numbers.<|||||>Got it. This should be rectified. A simple : `if (step + 1) != steps_in_epoch: self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` in line 1138 should work? <|||||>If you want to avoid two consecutive calls `_maybe_log_save_evaluate`, we can do this, but will it affect `should_evaluate` and `should_save` in `_maybe_log_save_evaluate` ? If `evaluation_strategy` is set to be `epoch`, will it affect ?<|||||>As its name indicates `_maybe_log_save_evaluate` does not log at each epoch, it depends on the value of the `self.control.should_log` variable which won't always be `True`. 
Since your log strategy is either `"steps"` or `"epoch"`, it won't run the line ``` logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4) ``` twice in a row. To debug further why you have a problem, we would need to know what training arguments you are using and how you launch your training, which you are not willing to provide.<|||||>I know why now. I have overridden the `trainer.log()` function and didn't add `self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)` at the end of it. So `self.control.should_log` wasn't set to False after the log action. Because my code was upgraded from the previous transformers 3.X version, this piece of code was not updated. Thanks for your help!
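Editor's note, tying off the thread above with a hedged sketch: when `Trainer.log` is overridden, the resolution mentioned in the last comment is to keep the callback-handler call at the end so that `should_log` is reset. Roughly (the custom logging body is a placeholder, and the surrounding lines only approximate the stock implementation of that era):

```python
# Sketch: a custom log() override that preserves the control-flow reset
# performed by the stock Trainer.log via the callback handler.
from typing import Dict

from transformers import Trainer


class MyTrainer(Trainer):
    def log(self, logs: Dict[str, float]) -> None:
        if self.state.epoch is not None:
            logs["epoch"] = round(self.state.epoch, 2)
        # ... custom logging goes here (e.g. send `logs` to an external tracker) ...
        self.state.log_history.append({**logs, "step": self.state.global_step})
        # Without this line, self.control.should_log is never reset, and
        # _maybe_log_save_evaluate can run its logging branch twice in a row.
        self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
```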
transformers
10,857
closed
Make convert_to_onnx runnable as a script again
# What does this PR do? When reworking the inits, `convert_graph_to_onnx.py` got its imports replaced by relative imports, which broke its ability to be run as a script. This PR fixes that.
03-22-2021 21:30:53
03-22-2021 21:30:53
transformers
10,856
closed
Use DataCollatorForSeq2Seq in run_summarization in all cases
# What does this PR do? Fixes #10791 This PR uses an instance of DataCollatorForSeq2Seq as a data collator regardless of the value of pad_to_max_length. It fixes the problem of the script breaking with the two parameters set: - label_smoothing_factor - pad_to_max_length Removes unnecessary `default_data_collator` import. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussion: #10791 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Now ran make quality ;), thanks!
03-22-2021 18:39:04
03-22-2021 18:39:04
Thanks a lot!
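Editor's note, a hedged illustration of the change merged above: instantiating `DataCollatorForSeq2Seq` directly (the checkpoint and padding settings below are placeholders following the summarization script's conventions).

```python
# Sketch: building the seq2seq collator used for both padding modes;
# label_pad_token_id=-100 keeps padded label positions out of the loss.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,            # lets the collator prepare decoder_input_ids when supported
    label_pad_token_id=-100,
    pad_to_multiple_of=8,   # optional; useful with fp16
)
```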
transformers
10,855
closed
m2m_100 finetuning not working (KeyError: none)
- `transformers` version: 4.5.0.dev0 - Python version: 3.8 - PyTorch version (GPU?): 1.7.1+cu110 - Using GPU in script?: Yes, RTX 3090 - Using distributed or parallel set-up in script?: No I am trying to finetune m2m: python3 run_translation.py \ --model_name_or_path=facebook/m2m100_418M \ --do_train \ --do_eval \ --source_lang de \ --target_lang en \ --fp16=True \ --num_train_epochs 1 \ --evaluation_strategy epoch \ --dataset_name wmt15 \ --dataset_config_name de-en \ --output_dir /home/s/m2m_output/DE-EN \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate And I'm getting this error: All model checkpoint weights were used when initializing M2M100ForConditionalGeneration. All the weights of M2M100ForConditionalGeneration were initialized from the model checkpoint at facebook/m2m100_418M. If your task is similar to the task the model of the checkpoint was trained on, you can already use M2M100ForConditionalGeneration for predictions without further training. Traceback (most recent call last): File "run_translation.py", line 562, in <module> main() File "run_translation.py", line 401, in main train_dataset = train_dataset.map( File "/home/s/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1120, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/home/s/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_translation.py", line 382, in preprocess_function with tokenizer.as_target_tokenizer(): File "/opt/conda/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 299, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 312, in set_tgt_lang_special_tokens lang_token = self.get_lang_token(tgt_lang) File "/home/s/.local/lib/python3.8/site-packages/transformers/models/m2m_100/tokenization_m2m_100.py", line 318, in get_lang_token return self.lang_code_to_token[lang] KeyError: None @patrickvonplaten @patil-suraj Any ideas on how to fix this? Thanks
03-22-2021 16:54:23
03-22-2021 16:54:23
Hi @sergej-d, the `run_translation.py` script now supports fine-tuning `M2M100` (see #11170). For this model you should now also pass the `--forced_bos_token` argument, which is usually similar to the `--target_lang`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
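Editor's note, a hedged sketch of what the `--forced_bos_token` advice above corresponds to in code, using the issue's de→en pair; the example sentence is a placeholder.

```python
# Sketch: setting source/target languages and forcing the target-language
# BOS token when generating with M2M100.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="de", tgt_lang="en")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

inputs = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```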
transformers
10,854
closed
Run summarization always use data collator for seq2 seq
# What does this PR do? Fixes #10791 This PR uses an instance of DataCollatorForSeq2Seq as a data collator regardless of the value of `pad_to_max_length`. It fixes the problem of the script breaking with the two parameters set: - label_smoothing_factor - pad_to_max_length ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussion: https://github.com/huggingface/transformers/issues/10791 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
03-22-2021 14:49:50
03-22-2021 14:49:50
transformers
10,853
closed
Error building extension 'fused_adam'
Hi, I recently updated `transformers` to `4.4.2` for `DebertaV2`, and while training DebertaV2 with DeepSpeed I got an error about the `deepspeed` version. So I upgraded to the latest deepspeed (0.3.13), started training, and got this error - **RuntimeError: Error building extension 'fused_adam'** Here is the env info - ![image](https://user-images.githubusercontent.com/41769919/111992621-a0151680-8b3b-11eb-8f6f-eea89cd0e9a8.png) `transformers - 4.4.2` I also tried with torch==1.8.0+cu101 and got the same error. I was able to train with deepspeed using `transformers-4.3.2` and `deepspeed-0.3.10`. Please suggest how to proceed further.
03-22-2021 12:59:45
03-22-2021 12:59:45
Could you paste the whole stacktrace you're having? Also it seems you're running on a colab, would it be possible to share that colab so we can take a look? Thank you! Pinging @stas00 <|||||>1. As @LysandreJik suggested always report the full backtrace 2. deepspeed requires a pytorch matching cuda version installed and configured - please refer to: https://huggingface.co/transformers/main_classes/trainer.html#installation-notes see the notebook I created on how to make it work on colab: https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb 3. In general deepspeed building errors belong to https://github.com/microsoft/DeepSpeed/issues as HF only integrates it <|||||>Thanks @LysandreJik and @stas00 . I upgraded to `torch-1.8.0+cu101` and still getting the error. I think issue is with DeepSpeed itself. So raised an issue in their repo and [here](https://github.com/microsoft/DeepSpeed/issues/885) it is. Below is the stacktrace - ``` [2021-03-23 07:03:49,374] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.13, git-hash=unknown, git-branch=unknown [2021-03-23 07:03:49,407] [INFO] [engine.py:77:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 Using /home/jovyan/.cache/torch_extensions as PyTorch extensions root... Creating extension directory /home/jovyan/.cache/torch_extensions/fused_adam... Detected CUDA files, patching ldflags Emitting ninja build file /home/jovyan/.cache/torch_extensions/fused_adam/build.ninja... Building extension module fused_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) --------------------------------------------------------------------------- CalledProcessError Traceback (most recent call last) ~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _run_ninja_build(build_directory, verbose, error_prefix) 1672 check=True, -> 1673 env=env) 1674 except subprocess.CalledProcessError as e: /usr/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs) 437 raise CalledProcessError(retcode, process.args, --> 438 output=stdout, stderr=stderr) 439 return CompletedProcess(process.args, retcode, stdout, stderr) CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) <ipython-input-24-3435b262f1ae> in <module> ----> 1 trainer.train() ~/.local/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 901 delay_optimizer_creation = self.sharded_ddp is not None and self.sharded_ddp != ShardedDDPOption.SIMPLE 902 if self.args.deepspeed: --> 903 model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps) 904 self.model = model.module 905 self.model_wrapped = model # will get further wrapped in DDP ~/.local/lib/python3.6/site-packages/transformers/integrations.py in init_deepspeed(trainer, num_training_steps) 416 model=model, 417 model_parameters=model_parameters, --> 418 config_params=config, 419 ) 420 ~/.local/lib/python3.6/site-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config_params) 123 dist_init_required=dist_init_required, 124 collate_fn=collate_fn, --> 125 config_params=config_params) 126 else: 127 assert mpu is None, "mpu must be None with pipeline parallelism" ~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config_params, dont_change_device) 181 self.lr_scheduler = None 182 if model_parameters or optimizer: --> 183 self._configure_optimizer(optimizer, model_parameters) 184 self._configure_lr_scheduler(lr_scheduler) 185 self._report_progress(0) ~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in _configure_optimizer(self, client_optimizer, model_parameters) 596 logger.info('Using client Optimizer as basic optimizer') 597 else: --> 598 basic_optimizer = self._configure_basic_optimizer(model_parameters) 599 if self.global_rank == 0: 600 logger.info( ~/.local/lib/python3.6/site-packages/deepspeed/runtime/engine.py in _configure_basic_optimizer(self, model_parameters) 670 optimizer = FusedAdam(model_parameters, 671 **optimizer_parameters, --> 672 adam_w_mode=effective_adam_w_mode) 673 674 elif self.optimizer_name() == LAMB_OPTIMIZER: ~/.local/lib/python3.6/site-packages/deepspeed/ops/adam/fused_adam.py in __init__(self, params, lr, bias_correction, betas, eps, adam_w_mode, weight_decay, amsgrad, set_grad_none) 70 self.set_grad_none = set_grad_none 71 ---> 72 fused_adam_cuda = FusedAdamBuilder().load() 73 # Skip buffer 74 self._dummy_overflow_buf = torch.cuda.IntTensor([0]) ~/.local/lib/python3.6/site-packages/deepspeed/ops/op_builder/builder.py in load(self, verbose) 213 return importlib.import_module(self.absolute_name()) 214 else: --> 215 return self.jit_load(verbose) 216 217 def jit_load(self, verbose=True): ~/.local/lib/python3.6/site-packages/deepspeed/ops/op_builder/builder.py in jit_load(self, verbose) 250 extra_cuda_cflags=self.nvcc_args(), 251 extra_ldflags=self.extra_ldflags(), --> 252 verbose=verbose) 253 build_duration = time.time() - start_build 254 if verbose: ~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in load(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_python_module, is_standalone, keep_intermediates) 1089 is_python_module, 1090 is_standalone, -> 1091 keep_intermediates=keep_intermediates) 1092 1093 ~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _jit_compile(name, 
sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_python_module, is_standalone, keep_intermediates) 1300 verbose=verbose, 1301 with_cuda=with_cuda, -> 1302 is_standalone=is_standalone) 1303 finally: 1304 baton.release() ~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _write_ninja_file_and_build_library(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda, is_standalone) 1405 build_directory, 1406 verbose, -> 1407 error_prefix=f"Error building extension '{name}'") 1408 1409 ~/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _run_ninja_build(build_directory, verbose, error_prefix) 1681 if hasattr(error, 'output') and error.output: # type: ignore 1682 message += f": {error.output.decode()}" # type: ignore -> 1683 raise RuntimeError(message) from e 1684 1685 RuntimeError: Error building extension 'fused_adam' ```<|||||>Resolved here: https://github.com/microsoft/DeepSpeed/issues/885 - simply very low resources colab instance - I updated https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb to give more instructions/guidelines.
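As a quick way to verify the point raised in the comments above — that DeepSpeed's JIT-compiled ops such as `fused_adam` need the installed PyTorch build to match the system CUDA toolkit — something along these lines can be run first. This is an illustrative sanity check, not part of the original thread:

```python
# Compare the CUDA version PyTorch was built against with what nvcc reports;
# a mismatch here is a common cause of "Error building extension" failures.
import subprocess
import torch

print("torch:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```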
transformers
10,852
closed
Longformer training: CUDA error: device-side assert triggered
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: 3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: sharedddp (Fairscale) ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - longformer, reformer, transfoxl, xlnet: @patrickvonplaten Library: - trainer: @sgugger Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Longformer The problem arises when using: * [ ] the official example scripts: (give details below) * [ x ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ x ] my own task or dataset: (give details below) ## To reproduce When i use the same configuration to train model type bert it works but this does not work for longformer. Steps to reproduce the behavior: /opt/conda/bin/python -m torch.distributed.launch \ --nnodes=$WORLD_SIZE \ --node_rank=$RANK \ --master_addr=$MASTER_ADDR \ --master_port=$MASTER_PORT \ --nproc_per_node=1 $SCRIPT \ --output_dir=$OUT_DIR \ --logging_dir=$OUT_DIR \ --tokenizer_name=$TOKENIZER \ --model_type=longformer --do_train --do_eval \ --cache_dir=$CACHE_DIR \ --overwrite_cache \ --validation_file=$EVAL_DATA \ --overwrite_output_dir \ --train_file=$TRAIN_DATA_FOLDER \ --dataset_name=$DATASET_NAME \ --line_by_line \ --learning_rate=${INIT_LR} \ --save_steps=${SAVE_STEPS} \ --max_seq_length=${BLOCK_SIZE} \ --gradient_accumulation_steps=${GRAD_ACCUM_STEPS} \ --fp16 \ --num_train_epochs=$EPOCHS \ --per_device_train_batch_size=$BATCH_SIZE_PER_GPU \ --local_rank=$LOCAL_RANK \ --train_dataset_info_path=$TRAIN_DATASET_INFO \ --test_dataset_info_path=$TEST_DATASET_INFO \ --sharded_ddp \ Traceback (most recent call last): File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module> main() File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main train_result = trainer.train(resume_from_checkpoint=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward return self.module(*inputs, **kwargs) File 
"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward Traceback (most recent call last): File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module> Traceback (most recent call last): File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module> is_global_attn = is_index_global_attn.flatten().any().item() RuntimeError: CUDA error: device-side assert triggered main() File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main main() File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main train_result = trainer.train(resume_from_checkpoint=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train train_result = trainer.train(resume_from_checkpoint=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward return self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl return self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward Traceback (most recent call last): File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module> return_dict=return_dict, File 
"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward Traceback (most recent call last): File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module> return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl main() File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward is_global_attn = is_index_global_attn.flatten().any().item() RuntimeError: CUDA error: device-side assert triggered is_global_attn = is_index_global_attn.flatten().any().item() RuntimeError: CUDA error: device-side assert triggered train_result = trainer.train(resume_from_checkpoint=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train main() File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss train_result = trainer.train(resume_from_checkpoint=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in train outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward return self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1477, in compute_loss return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward result = self.forward(*input, **kwargs) File 
"/opt/conda/lib/python3.6/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 218, in forward return self.module(*inputs, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1765, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl is_global_attn = is_index_global_attn.flatten().any().item() RuntimeError: CUDA error: device-side assert triggered result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1669, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/longformer/modeling_longformer.py", line 1245, in forward is_global_attn = is_index_global_attn.flatten().any().item() RuntimeError: CUDA error: device-side assert triggered terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fc78c43d99b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fc78c680280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fc78c425dfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7fc7c549d4e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x5603f8975aae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x5603f88cd70d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x5603f8975a90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x5603f88cd828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x5603f8975a90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x5603f88cd868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x5603f89cbd91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x5603f89438cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x5603f89cb79a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x5603f897ffa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x5603f89ea961 in 
/opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x5603f89f4cae in /opt/conda/bin/python) frame #20: main + 0xee (0x5603f88bef2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7fc7f2cf3b97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x5603f899e27f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fa371cb999b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fa371efc280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fa371ca1dfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7fa3aad194e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x5559699ffaae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x55596995770d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x5559699ffa90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x555969957828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x5559699ffa90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x555969957868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x555969a55d91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x5559699cd8cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x555969a5579a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x555969a09fa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x555969a74961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x555969a7ecae in /opt/conda/bin/python) frame #20: main + 0xee (0x555969948f2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7fa3d856fb97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x555969a2827f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f121fb5299b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f121fd95280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f121fb3adfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7f1258bb24e2 in 
/opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x5601c5024aae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x5601c4f7c70d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x5601c5024a90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x5601c4f7c828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x5601c5024a90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x5601c4f7c868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x5601c507ad91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x5601c4ff28cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x5601c507a79a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x5601c502efa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x5601c5099961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x5601c50a3cae in /opt/conda/bin/python) frame #20: main + 0xee (0x5601c4f6df2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7f1286408b97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x5601c504d27f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fe94f54799b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fe94f78a280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fe94f52fdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7fe9885a74e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x55ab4542baae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x55ab4538370d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x55ab4542ba90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x55ab45383828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x55ab4542ba90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x55ab45383868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x55ab45481d91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x55ab453f98cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x55ab4548179a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 
(0x55ab45435fa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x55ab454a0961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x55ab454aacae in /opt/conda/bin/python) frame #20: main + 0xee (0x55ab45374f2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7fe9b5dfdb97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x55ab4545427f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7fce50e8399b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7fce510c6280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fce50e6bdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7fce89ee34e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x55919a5ffaae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x55919a55770d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x55919a5ffa90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x55919a557828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x55919a5ffa90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x55919a557868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x55919a655d91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x55919a5cd8cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x55919a65579a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x55919a609fa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x55919a674961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x55919a67ecae in /opt/conda/bin/python) frame #20: main + 0xee (0x55919a548f2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7fceb7739b97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x55919a62827f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f01ad8c799b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f01adb0a280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f01ad8afdfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 
0x5414e2 (0x7f01e69274e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x55c9bc565aae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x55c9bc4bd70d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x55c9bc565a90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x55c9bc4bd828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x55c9bc565a90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x55c9bc4bd868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x55c9bc5bbd91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x55c9bc5338cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x55c9bc5bb79a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x55c9bc56ffa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x55c9bc5da961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x55c9bc5e4cae in /opt/conda/bin/python) frame #20: main + 0xee (0x55c9bc4aef2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7f021417db97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x55c9bc58e27f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7ff569f1599b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7ff56a158280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7ff569efddfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7ff5a2f754e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x562bbdb46aae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x562bbda9e70d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x562bbdb46a90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x562bbda9e828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x562bbdb46a90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x562bbda9e868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x562bbdb9cd91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x562bbdb148cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x562bbdb9c79a in /opt/conda/bin/python) frame #17: 
PyImport_Cleanup + 0x278 (0x562bbdb50fa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x562bbdbbb961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x562bbdbc5cae in /opt/conda/bin/python) frame #20: main + 0xee (0x562bbda8ff2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7ff5d07cbb97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x562bbdb6f27f in /opt/conda/bin/python) terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f9808d0299b in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f9808f45280 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f9808ceadfd in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5414e2 (0x7f9841d624e2 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x19aaae (0x55ba33d58aae in /opt/conda/bin/python) frame #5: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python) frame #6: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python) frame #7: <unknown function> + 0xf270d (0x55ba33cb070d in /opt/conda/bin/python) frame #8: <unknown function> + 0x19aa90 (0x55ba33d58a90 in /opt/conda/bin/python) frame #9: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python) frame #10: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python) frame #11: <unknown function> + 0xf2828 (0x55ba33cb0828 in /opt/conda/bin/python) frame #12: <unknown function> + 0x19aa90 (0x55ba33d58a90 in /opt/conda/bin/python) frame #13: <unknown function> + 0xf2868 (0x55ba33cb0868 in /opt/conda/bin/python) frame #14: <unknown function> + 0x1f0d91 (0x55ba33daed91 in /opt/conda/bin/python) frame #15: <unknown function> + 0x1688cb (0x55ba33d268cb in /opt/conda/bin/python) frame #16: _PyGC_CollectNoFail + 0x2a (0x55ba33dae79a in /opt/conda/bin/python) frame #17: PyImport_Cleanup + 0x278 (0x55ba33d62fa8 in /opt/conda/bin/python) frame #18: Py_FinalizeEx + 0x61 (0x55ba33dcd961 in /opt/conda/bin/python) frame #19: Py_Main + 0x35e (0x55ba33dd7cae in /opt/conda/bin/python) frame #20: main + 0xee (0x55ba33ca1f2e in /opt/conda/bin/python) frame #21: __libc_start_main + 0xe7 (0x7f986f5b8b97 in /lib/x86_64-linux-gnu/libc.so.6) frame #22: <unknown function> + 0x1c327f (0x55ba33d8127f in /opt/conda/bin/python) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
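A general PyTorch debugging step for a device-side assert like the one above (not a fix confirmed in this thread) is to make kernel launches synchronous so the failing operation is reported at its real call site, either by prefixing the launch command with `CUDA_LAUNCH_BLOCKING=1` or by setting it at the very top of the script:

```python
# Must be set before CUDA is initialized; makes kernel launches synchronous so
# the device-side assert points at the line that actually triggered it.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# ...then run the same training command as above.
```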
03-22-2021 12:11:28
03-22-2021 12:11:28
Seems like my issue. Maybe this can help: https://github.com/huggingface/transformers/issues/10832<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>How to fix it? I ran into this issue too.<|||||>Also
transformers
10,851
closed
Small inconsistency in tokenization_utils for special tokens retrieval
Hi there, This is just a minor issue I have spotted. When a special token (cls_token, sep_token, etc.) is accessed on an instantiated tokenizer, this is the piece of code executed to retrieve the property:

```python
@property
def eos_token(self) -> str:
    """
    :obj:`str`: End of sentence token. Log an error if used while not having been set.
    """
    if self._eos_token is None and self.verbose:
        logger.error("Using eos_token, but it is not set yet.")
        return None
    return str(self._eos_token)
```

The None check is tied to the verbose flag, so when the verbose flag is set to False the condition is not triggered, and even if the special token is None, a literal 'None' string is returned (the last line). The same happens for all the special tokens, leading to unexpected behavior if you are expecting an actual None outside the tokenizer. I think the "verbose" flag and the "is None" check should be tested in separate conditionals. The mentioned code can be located at: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L949 Thank you very much.
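A possible rework along the lines suggested above, separating the logging condition from the None check — an illustrative sketch, not the patch that was actually applied in the library:

```python
@property
def eos_token(self) -> str:
    """
    :obj:`str`: End of sentence token. Log an error if used while not having been set.
    """
    if self._eos_token is None:
        # Log only when verbose, but return a real None either way instead of "None".
        if self.verbose:
            logger.error("Using eos_token, but it is not set yet.")
        return None
    return str(self._eos_token)
```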
03-22-2021 11:41:08
03-22-2021 11:41:08
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,850
closed
How to train encoder decoder for explicit negation generation
Hi, I am trying to generate negations from non-negated sentences. I used a simple “I have tea” => “I don’t have tea” formatted dataset to train an XLM-R encoder-decoder model using the example provided in the Colab notebook.

```
# set special tokens
roberta_shared.config.decoder_start_token_id = tokenizer.bos_token_id
roberta_shared.config.eos_token_id = tokenizer.eos_token_id

# sensible parameters for beam search
# set decoding params
roberta_shared.config.max_length = 64
roberta_shared.config.early_stopping = True
roberta_shared.config.no_repeat_ngram_size = 3
roberta_shared.config.length_penalty = 2.0
roberta_shared.config.num_beams = 4
roberta_shared.config.vocab_size = roberta_shared.config.encoder.vocab_size
```

But the test set produces different tokens than the source. How can I preserve the source tokens when generating the output? [“I have it.”, “I love tea”, “I can have coffee.”] => [‘I have no it.’, “I’ll not love.” “I can’t have food.”] The model modifies the words of the source sentence.
03-22-2021 10:02:22
03-22-2021 10:02:22
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discusss.huggingface.co) instead? Thanks!
transformers
10,849
closed
Fix: typo in FINE_TUNE_XLSR_WAV2VEC2.md
Fix typo. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. This is a simple typo fix. Could you review it @sgugger ? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-22-2021 09:26:31
03-22-2021 09:26:31
transformers
10,848
closed
GPT Neo
# What does this PR do?

This PR adds the [GPT Neo model](https://github.com/EleutherAI/gpt-neo). The model architecture is very similar to GPT2 except that it uses local attention in alternate layers - the `LocalAttention` module implements the local attention.

The implementation is not as clean as it should be and will be cleaned up in a follow-up PR.
- To enable caching (`use_cache`), the local attention layer caches the `hidden_states` instead of `past_key_value_states`. Also, right now when `use_cache` is enabled the current length cannot be greater than 1.
- The model uses the same tokenizer as GPT2, so it does not need a new tokenizer class.

Example usage:

```python
import torch
from transformers import GPTNeoForCausalLM, AutoTokenizer

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

unicorns = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
           "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
           "researchers was the fact that the unicorns spoke perfect English."

input_ids = tokenizer(unicorns, return_tensors="pt").input_ids

# add the length of the prompt tokens to match with the mesh-tf generation
max_length = 400 + input_ids.shape[1]
temperature = .9
do_sample = True

# set seed to reproduce samples
torch.manual_seed(42)

gen_tokens = model.generate(
    input_ids,
    do_sample=do_sample,
    min_length=max_length,
    max_length=max_length,
    temperature=temperature,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```

Future TODOs:
- Clean up the implementation of `LocalAttention`, especially the creation of `attention_mask`.
- Test fine-tuning.
- Enable current length > 1 when `use_cache` is enabled.
- Add more robust and aggressive tests for the `LocalAttention` module.
- Add `TF` model.
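The alternating local/global pattern mentioned above is described in the released configs by a compact `attention_types` field, a list of `[pattern, num_repeats]` pairs (for example `[[["global", "local"], 16]]` for a 32-layer model, as shown in the config discussion later in this thread). A hypothetical helper to expand it into a flat per-layer list could look like this — the function name is an assumption for illustration, not the final API:

```python
def expand_attention_types(attention_types):
    """Expand [[pattern, num_repeats], ...] into a flat per-layer list."""
    attention_layers = []
    for pattern, num_repeats in attention_types:
        # e.g. ["global", "local"] repeated 16 times -> 32 alternating entries
        attention_layers.extend(pattern * num_repeats)
    return attention_layers

print(expand_attention_types([[["global", "local"], 16]]))
```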
03-22-2021 08:38:41
03-22-2021 08:38:41
@sdtblck @leogao2 this is the Neo PR, reviews/comments appreciated !<|||||>I tried running this with the 2.7B checkpoint and got ``` (base) stellabiderman@Stellas-MacBook-Pro research % python transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py --tf_checkpoint_path GPT3_2.7B/checkpoint --config_file GPT3_2-7B/config.json --pytorch_dump_path GPT3_2-7B Building PyTorch model from configuration: GPTNeoConfig { "activation_function": "gelu", "ada_epsilon1": "1e-30", "ada_epsilon2": 0.001, "attention_types": [ [ [ "global", "local" ], 16 ] ], "attn_dropout": 0, "attn_layers": [ "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local", "global", "local" ], "attn_pdrop": 0.1, "beta1": 0.9, "beta2": 0.95, "bos_token_id": 50256, "datasets": [ [ "pile", null, null, null ] ], "embd_pdrop": 0.1, "embed_dropout": 0, "eos_id": 50256, "eos_token_id": 50256, "epsilon": 1e-08, "eval_batch_size": 128, "eval_steps": 10, "gradient_checkpointing": false, "gradient_clipping": 1.0, "initializer_range": 0.02, "iterations": 500, "layer_norm_epsilon": 1e-05, "layout": "batch:x,embd:y", "lr": 0.00016, "lr_decay": "cosine", "lr_decay_end": 300000, "mesh_shape": "x:64,y:4", "model_path": "gs://neo-d/models/GPT3_2-7B", "model_type": "gpt_neo", "n_ctx": 2048, "n_embd": 2560, "n_head": 20, "n_inner": null, "n_layer": 32, "n_positions": 2048, "n_vocab": 50257, "opt_name": "adam", "padding_id": 50257, "predict_batch_size": 1, "predict_steps": 0, "recompute_grad": true, "res_dropout": 0, "resid_pdrop": 0.1, "scale_by_depth": true, "scale_by_in": false, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "tokens_per_mb_per_replica": 4096, "train_batch_size": 512, "train_steps": 400000, "transformers_version": "4.5.0.dev0", "use_cache": false, "vocab_size": 50257, "warmup_steps": 3000, "weight_decay": 0, "window_size": 256 } Traceback (most recent call last): File "transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py", line 59, in <module> convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path) File "transformers/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py", line 31, in convert_tf_checkpoint_to_pytorch model = GPTNeoForCausalLM(config) File "/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", line 778, in __init__ self.transformer = GPTNeoModel(config) File "/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", line 597, in __init__ self.h = nn.ModuleList([Block(config, layer_id=i) for i in range(config.n_layer)]) File "/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", line 597, in <listcomp> self.h = nn.ModuleList([Block(config, layer_id=i) for i in range(config.n_layer)]) File "/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", line 434, in __init__ self.attn = GPTNeoAttention(config, layer_id) File "/Users/stellabiderman/Documents/Research/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.py", line 381, in __init__ self.attention_type = self.attn_layers[layer_id] IndexError: list index out of range ```<|||||>Hi 
@StellaAthena , 2.7B models has 32 layers, so `attn_layers` should be ```python ['global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local', 'global', 'local'] ``` I've converted these checkpoints and will push them to the hub in a couple of hours. I'll ping you once that's done, so you can directly download them.<|||||>I see! Is this a problem with my local config file, or is something up with the code on the repo? I downloaded my file directly from the-eye before running the conversion script, so if the local config file is wrong that’s a bit of a problem for us.<|||||>Hey @patil-suraj haven't had a chance to look over the whole PR yet, so i'm not sure how you load up the configuration, but I wonder why you even have separate fields for "attention_types" and "attention_layers" since they configure the same thing, and attention layers can be derived from attention types<|||||>Hi @sdtblck `attention_types` is not used by the config, it only uses `attention_layers`, but yeah `attention_layers` can be derived from `attention_types`. For an example config file, see https://huggingface.co/valhalla/gpt_neo_xl_test/blob/main/config.json I've uploaded the 1.3B checkpoint under my namespace temporarily, here's a [colab](https://colab.research.google.com/drive/1EE2oMOXj2lAxPDS5KB3t7R5lWKTln0pk?usp=sharing) if you wanna give it a try.<|||||>> Hi @sdtblck > > `attention_types` is not used by the config, it only uses `attention_layers`, but yeah `attention_layers` can be derived from > `attention_types`. Our config file doesn't define `attention _layers`. It appears that you [hard-coded](https://github.com/patil-suraj/transformers/blob/b35d805b81516a1c44f32f98205709c3b95a6be8/src/transformers/models/gpt_neo/configuration_gpt_neo.py#L102) this specific attention pattern. I agree with @sdtblck that it would make much more sense to derive `attention_layers` from `attention_types`. I believe the correct place to do that would be [here](https://github.com/patil-suraj/transformers/blob/b35d805b81516a1c44f32f98205709c3b95a6be8/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py#L29).<|||||>Yes, you are right! I hardcoded it since we usually prefer to keep everything explicit but yeah I agree this would be a problem for your side. I will change it so that `attention_layers` will be derived from `attention_types`. Are there any other issues?<|||||>@StellaAthena @sdtblck The 2.7B model is up! https://huggingface.co/valhalla/gpt_neo_2.7B/tree/main<|||||>I tried out the 2.7B model you posted @patil-suraj but it wouldn't run. I get the error ``` Some weights of the model checkpoint at valhalla/gpt_neo_2.7B were not used when initializing GPT2LMHeadModel: ... You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
Traceback (most recent call last): File "main.py", line 9, in <module> from lm_eval import models, tasks, evaluator, base File "/home/mchorse/lm-evaluation-harness/lm_eval/models/__init__.py", line 7, in <module> "gpt-neo": gpt2.GPT2LM(device="cuda",pretrained="valhalla/gpt_neo_2.7B"), File "/home/mchorse/lm-evaluation-harness/lm_eval/models/gpt2.py", line 14, in __init__ self.gpt2 = transformers.GPT2LMHeadModel.from_pretrained(pretrained).to(self.device) File "/home/mchorse/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1181, in from_pretrained raise RuntimeError( RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel: ... ``` Looking through the readout, I see ``` size mismatch for transformer.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]). size mismatch for transformer.h.0.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]). size mismatch for transformer.h.1.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]). size mismatch for transformer.h.1.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]). size mismatch for transformer.h.2.mlp.c_fc.weight: copying a param with shape torch.Size([10240, 2560]) from checkpoint, the shape in current model is torch.Size([2560, 10240]). size mismatch for transformer.h.2.mlp.c_proj.weight: copying a param with shape torch.Size([2560, 10240]) from checkpoint, the shape in current model is torch.Size([10240, 2560]). ``` I think that there's an unneeded transpose hanging out in the code.<|||||>It looks like you are using the `GPT2LMHeadModel` class. We've added a new class `GPTNeoForCasualLM` for `gpt-neo` , which should be used instead of `GPT2LMHeadModel`. Could you checkout this PR and try loading it using the `GPTNeoForCasualLM` class ? And yes, `GPT2` uses this [`Conv1D`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L255) layer which has transposed weights, hence the error.<|||||>> Was there no way to add some "# Copied from" statements to ensure that the two models do not diverge? I have made some changes to the code mostly related to naming and passing `config` to `Block` and `Attention` instead of individual arguments, so can't really use `# Copied from`<|||||>An update from our end: We got the 2.7B model up and running in our evaluation harness! Unfortunately the run revealed that the harness is bugged... Running it by hand gives reasonable-looking results, but I don't know how much I should trust myself to judge that.<|||||>(to clarify: the bugs in eval harness were introduced by a series of pretty aggressive optimizations i implemented just a few hours earlier today)<|||||>I tried finetuning the model with deepspeed and gradient checkpointing, but unlike with GPT2, the loss explodes. I used the default run_clm.py from the examples folder, but added one line to activate gradient checkpointing. 
Here is then the command i ran: ``` deepspeed --num_gpus=1 run_clm.py \ --deepspeed ds_config_gptneo.json \ --model_name_or_path valhalla/gpt_neo_2.7B \ --train_file train.csv \ --validation_file validation.csv \ --do_train \ --do_eval \ --fp16 \ --overwrite_cache \ --evaluation_strategy="steps" \ --output_dir finetuned \ --num_train_epochs 2 \ --eval_steps 15 \ --gradient_accumulation_steps 2 \ --per_device_train_batch_size 4 \ --use_fast_tokenizer False \ --learning_rate 1e-05 \ --adam_beta1 0.9 \ --adam_beta2 0.95 \ --weight_decay 0.1 \ --warmup_steps 50 ``` Here is my ds_config_gptneo.json (is almost the default, except for a lower min_loss_scaling, otherwise i got overflows) (optimizer and warmup hps are overwritten by the flags above): ``` { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": -3, "hysteresis": 2, "min_loss_scale": -1000 }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 5e7, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 5e7, "contiguous_gradients": true, "cpu_offload": true }, "optimizer": { "type": "AdamW", "params": { "lr": 0.00001, "betas": [ 0.9, 0.95 ], "eps": 1e-8, "weight_decay": 0.1 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 0.00001, "warmup_num_steps": 50 } }, "steps_per_print": 1000, "wall_clock_breakdown": false } ``` I tried the exact hyperparameters as well that EleutherAi used, with long warmup phases, but it is still the same. If the learning rate is low enough the loss doesn't change and once its big enough, it immediately explodes. I also did an hyperparameter sweep with the same result. Could this be an issue with the model implementation, as finetuning with EleutherAi's implementation in Mesh Tensorflow on Colab seems to work? Here are the exact steps that i did (on the bottom half part): https://github.com/Xirider/finetune-gpt2xl<|||||>hi @Xirider let me take a look, but meanwhile could you try without `fp16` ?<|||||>Hi, yes, i will try it<|||||>Hm, setting no fp16 doesn't work with Zero: AssertionError: DeepSpeedConfig: ZeRO is only supported if fp16 is enabled. And without deepspeed's zero i don't think i have enough gpu memory.<|||||>> It looks like you are using the `GPT2LMHeadModel` class. We've added a new class `GPTNeoForCasualLM` for `gpt-neo` , which should be used instead of `GPT2LMHeadModel`. > > Could you checkout this PR and try loading it using the `GPTNeoForCasualLM` class ? > > And yes, `GPT2` uses this [`Conv1D`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L255) layer which has transposed weights, hence the error. I tried using GPTNeoForCausalLM to load the 2.7B model and encountered similar errors in loading state_dict: ``` Some weights of the model checkpoint at valhalla/gpt_neo_2.7B were not used when initializing GPTNeoForCausalLM: ['transformer.h.24.ln_1.weight', 'transformer.h.24.ln_1.bias', 'transformer.h.24.attn.attention.bias', 'transformer.h.24.attn.attention.masked_bias', 'transformer.h.24.attn.attention.k_proj.weight', ...] - This IS expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing GPTNeoForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/transformers-4.5.0.dev0-py3.8.egg/transformers/modeling_utils.py", line 1181, in from_pretrained raise RuntimeError( RuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM: size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50257, 2560]) from checkpoint, the shape in current model is torch.Size([50257, 2048]). size mismatch for transformer.wpe.weight: copying a param with shape torch.Size([2048, 2560]) from checkpoint, the shape in current model is torch.Size([2048, 2048]). size mismatch for transformer.h.0.ln_1.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([2048]). size mismatch for transformer.h.0.ln_1.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([2048]). ... ``` <|||||>Hi @esperie This is a WIP PR, so things are supposed to break, please wait till it's merged to report issues. Thanks! Also it's because I renamed a few config params and need to update the model on the hub<|||||>One thing I've caught testing the neo model is that if i try to add a padding token to the tokenizer after loading it from pretrained (i.e to predict batches instead of a single sequence at a time), then i get: `RuntimeError: CUDA error: device-side assert triggered` I guess because the tokenizer vocabulary is different to the way it was initialized. I'm not sure if this is a HF-wide problem (although I don't recall this being a problem with GPT2Tokenizer.from_pretrained('gpt2')) or specific to neo, but here is the code to reproduce the error: ```python import torch from transformers import GPTNeoForCausalLM, GPT2Tokenizer ckpt_2b = "EleutherAI/gpt_neo_2-7B" tokenizer = GPT2Tokenizer.from_pretrained(ckpt_2b) tokenizer.add_special_tokens({'pad_token': '<|padding|>'}) ids = tokenizer("hello world", return_tensors="pt").input_ids.to("cuda") ```<|||||>maybe I'm just going insane, or doing something stupid, because swapping out ckpt_2b for 'gpt2' is giving the same error. We never had this problem training with gpt-neox. Can anyone reproduce, and if so, should I open up a new issue?<|||||>Hey @sdtblck! I think the issue here is because you're adding a new token to your tokenizer (so you're extending your vocab), but you're not resizing the token embedding matrix. When you're creating the GPT-2 tokenizer from your checkpoint, you should have a tokenizer size of 50257: ```py from transformers import GPTNeoForCausalLM, GPT2Tokenizer ckpt_2b = "EleutherAI/gpt_neo_2-7B" tokenizer = GPT2Tokenizer.from_pretrained(ckpt_2b) print(len(tokenizer)) # 50257 ``` That's the same size as the model token embedding matrix: ```py print(model.get_input_embeddings()) # Embedding(50257, 2560) ``` When adding a new token, you should also resize the token embedding matrix alongside it. Otherwise you'll get some index out of range issues, as you'll be trying to obtain the 50258th row of a matrix with 50257 rows. 
Please add the following line to your code, once you have added a token to your tokenizer and instantiated your model: ```py model.resize_token_embeddings(len(tokenizer)) ``` Everything should be working smoothly now :)<|||||>Hm, @LysandreJik so doing that does make the error to go away, but sampling with the model when I've added padding tokens seems to cause almost everything in the prediction to become padding. Let me know if i should take this somewhere else btw, don't want to clog up this PR if this issue doesn't relate to it at all. `predict` below is pretty much just a wrapper around model.generate() ```python prompt = "Q: What is the meaning of life? A:" gen_text = predict(prompt) print('-'*100) print(gen_text) tokenizer.add_special_tokens({'pad_token': '<|padding|>'}) model.resize_token_embeddings(len(tokenizer)) model.half() gen_text = predict(prompt) print('-'*100) print(gen_text) ``` Outputs: ``` Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. ---------------------------------------------------------------------------------------------------- Q: What is the meaning of life? A: It is the sum total of the events and happenings which lead to the end of this human life. A person dies because of the event or occurrence which gives birth to his life. In other words, every time a person dies he brings a new life beginning from his own death. In short, if something happens in a human life, it will lead to a life, but if there is no event or occurrence, it will lead to death. Every life matters greatly - everyone has their own life. Life is a measure of happiness, a measure of fulfillment, and a measure of the value and the quality of a person. It is a reflection of everything that has led to a person's development; therefore, Column 1 of the book contains the questions, "What is the meaning of life?" and "What is happiness?" Column 2 contains the answers. The third column contains the answers taken from the column of questions raised by the readers. Q: What is the meaning of life? A: It is the sum total of the events and happenings which lead to the end of this human life. A person dies because of the event or occurrence which gives birth to his life. In other words, every time a person dies he brings a new life beginning from his own death. In short, if something happens in a human life, it will lead to a life, but if there is no event or occurrence, it will lead to death. Every life matters greatly - everyone has their ---------------------------------------------------------------------------------------------------- Q: What is the meaning of life? A: It<|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|> ... ```<|||||>Hi @sdtblck For batch generation with GPT like models, the text should be padded to the left. this is how batch generation works ```python model.config.pad_token_id = tokenizer.pad_token_id tokenizer.padding_side = "left" inputs = tokenizer(sentences, return_tensors="pt", padding=True) outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"] ) ```<|||||>Also, the actual vocab size of the model is 50257 so token ids range from 0 to 50256. 
This `<|padding|>` padding token is not in the embedding matrix, so I doubt if generation will work as expected when using `<|padding|>` as pad token. Instead, this is what we can do, set the `eos_token` as pad token and set the padding side to `left`. ```python tokenizer.pad_token_id = tokenizer.eos_token model.config.pad_token_id = tokenizer.eos_token_id tokenizer.padding_side = "left" inputs = tokenizer(sentences, return_tensors="pt", padding=True) gen_tokens = model.generate( inputs["input_ids"], attention_mask=inputs["attention_mask"] ) ``` This should work. Or feel free to open an issue if this is not working.<|||||>@StellaAthena The `convert_gpt2_original_tf_checkpoint_to_pytorch.py` now works with the GPT-Neo config, it reads the neo config and initializess HF config from that. Should be now easy to convert the mesh-tf models to PT. <|||||>> @StellaAthena > > The `convert_gpt2_original_tf_checkpoint_to_pytorch.py` now works with the GPT-Neo config, it reads the neo config and initializess HF config from that. Should be now easy to convert the mesh-tf models to PT. Do you by any chance have an example input/output with the conversion script? I was having trouble getting the new code to work with the default configs in the gpt-neo repo.<|||||>There are models listed on the eleutherai HuggingFace account that AFAIK we did not post. Are these the pretrained models @patil-suraj had been hosting?<|||||>I was referring to the pre-trained models posted here: https://the-eye.eu/public/AI/gptneo-release/<|||||>Hi @StellaAthena, which models are you talking about? The only two models available are the 1.3B and the 2.7B versions.<|||||>Hi. I'm getting this issue on colab when trying to import it: `cannot import name 'GPTNeoForCausalLM' from 'transformers' (unknown location)`<|||||>Hi @zanderbush, please make sure you: - are using the master branch, as it is only available from source as of now - ~have torch installed in your environment, as otherwise the model cannot be imported.~ Actually that's untrue. You can import it, but can't correctly instantiate it and the error should be more explicit.<|||||>@LysandreJik Thank you! That worked. I face a new issue, however, as I look to return the most probable next token. This works with the typical GPT-2, but not this for some reason: ``` import torch prompt = """In the""" prompt = prompt.strip() text = tokenizer.encode(prompt) myinput, past = torch.tensor([text]), None logits, past = model(myinput, past_key_values = past) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(10) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() words = [] for i in range(10): m = (best_words[i]) print(m) ``` `TypeError: string indices must be integers`<|||||>Why is `n_ctx` not present on `GPTNeoConfig`? Afaict, `max_position_embeddings` is the closest replacement but I just wanted to double check that it's reasonable to use it as a guarantee that the model can handle sequences of that length. <|||||>@leogao2 yes,` max_position_embeddings` is the replacement of `n_ctx`, and the positional embedding are initialized using that value so it does accept the specified sequence length (2048), see https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L651<|||||>@zanderbush I believe this is unrelated to GPT Neo and related to your code instead. 
Please open a new issue with a reproducible code example (tokenizer and model defined). Thank you!
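As a follow-up to the exchange above about returning the most probable next tokens, here is a minimal sketch of how that lookup could be written against `GPTNeoForCausalLM`. It assumes the checkpoint name used earlier in this thread (the final hub name may differ) and a recent `transformers` version in which the forward pass returns an output object with a `.logits` attribute rather than a plain `(logits, past)` tuple, which is the likely cause of the `TypeError` above:

```python
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

ckpt = "EleutherAI/gpt_neo_2-7B"  # checkpoint name as used in this thread; may differ on the hub
tokenizer = GPT2Tokenizer.from_pretrained(ckpt)
model = GPTNeoForCausalLM.from_pretrained(ckpt)
model.eval()

input_ids = tokenizer("In the", return_tensors="pt").input_ids
with torch.no_grad():
    outputs = model(input_ids)            # an output object, not a tuple
next_token_logits = outputs.logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
top_probs, top_ids = probs.topk(10)
for p, idx in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([idx])!r}: {p:.4f}")
```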
transformers
10,847
closed
fix code quality issues
### Description Hi :wave: I sent PR #8950 which has changes in a lot of files so sending this PR with fixes in few files so that it can be easily reviewed. You can have a look at the various issues that were caught in the codebase [here](https://deepsource.io/gh/withshubh/transformers/issues/?category=recommended). ### Summary of changes - Removed length check in favour of truthiness of the object > Boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C1801/description/). - Removed unnecessary comprehension > boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PTC-W0016/description/). - Removed unnecessary use of comprehension > boosts minor performance, see the description [here](https://deepsource.io/gh/withshubh/transformers/issue/PTC-W0019/description/). - Refactored the comparison involving `not` > fixed [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C0113/description/) - Removed unnecessary return statement > removes [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-R1711/description/) - Iterated dictionary directly > removes [antipattern](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C0201/description/) - Used literal syntax instead of function calls to create data structure > boosts minor [performance](https://deepsource.io/gh/withshubh/transformers/issue/PYL-C1801/description/) - Added .deepsource.toml > config file to continuously analyze the repo for code quality issues
03-22-2021 08:03:56
03-22-2021 08:03:56
Hi @LysandreJik :wave: Please review this PR<|||||>Hi @withshubh, as said before in https://github.com/huggingface/transformers/pull/8950#issuecomment-781222459, we would like to stay with our current tooling for now. Thank you.
transformers
10,846
closed
[Wav2Vec2] Small tab fix
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-22-2021 07:32:09
03-22-2021 07:32:09
transformers
10,845
closed
Option to change loss function for fine tuning
# 🚀 Feature request ## Motivation I was working on a multi-class text classification problem for which I was using `DistilBertForSequenceClassification`, and I found that there is no way for me to change the loss function from CrossEntropyLoss. ## Your contribution I can submit a PR, if this feature request is approved.
03-22-2021 07:09:30
03-22-2021 07:09:30
You can change the loss function to anything you want. Here's an example: ``` from transformers import BertModel from transformers.modeling_outputs import SequenceClassifierOutput import torch.nn as nn class FancyBertModelWithCustomLossFunction(nn.Module): def __init__(self, num_labels=2): super(FancyBertModelWithCustomLossFunction, self).__init__() self.num_labels = num_labels self.bert = BertModel.from_pretrained("bert-base-uncased") self.dropout = nn.Dropout(0.3) self.classifier = nn.Linear(768, num_labels) def forward(self, ids, mask, token_type_ids, labels=None): outputs = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids) output = self.dropout(outputs.pooler_output) logits = self.classifier(output) loss = None if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = nn.MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: # you can define any loss function here yourself, e.g. a class-weighted cross-entropy or nn.MultiMarginLoss() # see https://pytorch.org/docs/stable/nn.html#loss-functions for an overview loss_fct = nn.CrossEntropyLoss() # next, compute the loss based on logits + ground-truth labels loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ```<|||||>@NeilsRogge I am aware of this, what I was referring to is an option that can be passed directly to let's say `DistilBertForSequenceClassification` or any other model class without having to write a pytorch model like this. <|||||>This has already been asked before, but we are not planning to do this. See also [this comment](https://github.com/huggingface/transformers/issues/9625#issuecomment-762167788) in #9625<|||||>Oh ok, got it. Thanks @NeilsRogge. Closing this.
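For readers who want a custom loss without writing a whole model class, a common alternative is to subclass `Trainer` and override its `compute_loss` method. Below is a minimal sketch, assuming a sequence-classification model and a `transformers` version in which `compute_loss` accepts a `return_outputs` argument (older versions take only `model` and `inputs`); the class name and the `class_weights` argument are illustrative, not part of the library:

```python
import torch.nn as nn
from transformers import Trainer


class CustomLossTrainer(Trainer):
    """Illustrative subclass that swaps the default loss for a class-weighted cross-entropy."""

    def __init__(self, *args, class_weights=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.class_weights = class_weights  # e.g. a 1-D tensor with one weight per class

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        weight = self.class_weights.to(logits.device) if self.class_weights is not None else None
        loss_fct = nn.CrossEntropyLoss(weight=weight)  # any loss with the same call signature works
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```

The subclass is then used exactly like `Trainer`, just with the extra `class_weights=...` keyword at construction time.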
transformers
10,844
closed
Add GPT-Neo
# 🌟 New model addition Please add GPT-Neo ## Model description > GPT-Neo is the code name for a series of transformer-based language models loosely styled around the GPT architecture that Eleuther AI plans to train and open source. Eleuther AI's primary goal is to replicate a GPT-3 sized model and open source it to the public, for free. <!-- Important information --> ## Open source status * [x] the model implementation is available: [Repo](https://github.com/EleutherAI/gpt-neo) * [x] the model weights are available: [Download](https://the-eye.eu/eleuther_staging/gptneo-release/) (1.3B & 2.7B) * [x] who are the authors: @sdtblck, @leogao2, @lucidrains, @ConnorJL, @StellaAthena & [others](https://github.com/EleutherAI) Somewhat related to #4658, #4679, especially [this](https://github.com/huggingface/transformers/issues/4658#issuecomment-754247106)
03-22-2021 06:22:48
03-22-2021 06:22:48
transformers
10,843
closed
Is there a `DataCollator` cat mask n-gram words for LM?
# 📚 Migration ## Information I want to mask n-gram words when pre-training a BERT model, but I can't find a suitable DataCollator in the lib https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py I want to build it myself, but I don't know how to write my own DataCollator; could someone give me a demo? ## Checklist - [ ] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
03-22-2021 05:17:59
03-22-2021 05:17:59
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
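Since the question above asks for a demo, here is a rough sketch of a custom collator that masks contiguous spans of up to `max_ngram` tokens rather than single tokens. It is only an illustration, not an official `DataCollator`: it assumes each example is a dict with an `input_ids` list, always replaces masked positions with the mask token (no 80/10/10 split), and uses an approximate span-start probability to hit the target masking rate:

```python
import random
from dataclasses import dataclass
from typing import Dict, List

import torch
from transformers import PreTrainedTokenizerBase


@dataclass
class DataCollatorForNgramMasking:
    """Masks contiguous spans of 1..max_ngram tokens for masked language modeling."""

    tokenizer: PreTrainedTokenizerBase
    mlm_probability: float = 0.15
    max_ngram: int = 3

    def __call__(self, examples: List[Dict[str, List[int]]]) -> Dict[str, torch.Tensor]:
        batch = self.tokenizer.pad(examples, return_tensors="pt")
        input_ids = batch["input_ids"]
        labels = input_ids.clone()
        mask = torch.zeros_like(input_ids, dtype=torch.bool)

        # Expected span length is (1 + max_ngram) / 2, so scale the start probability
        # so that roughly mlm_probability of the tokens end up masked.
        start_prob = self.mlm_probability * 2 / (self.max_ngram + 1)

        for i in range(input_ids.size(0)):
            special = self.tokenizer.get_special_tokens_mask(
                input_ids[i].tolist(), already_has_special_tokens=True
            )
            j = 0
            while j < input_ids.size(1):
                if special[j] == 0 and random.random() < start_prob:
                    n = random.randint(1, self.max_ngram)
                    for k in range(j, min(j + n, input_ids.size(1))):
                        if special[k] == 0:
                            mask[i, k] = True
                    j += n
                else:
                    j += 1

        labels[~mask] = -100                      # loss is only computed on masked positions
        input_ids[mask] = self.tokenizer.mask_token_id
        batch["labels"] = labels
        return batch
```

It can then be passed to `Trainer(..., data_collator=DataCollatorForNgramMasking(tokenizer))` in place of `DataCollatorForLanguageModeling`.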
transformers
10,842
closed
How to fine-tune RAG on MS-MARCO dataset?
03-22-2021 03:04:11
03-22-2021 03:04:11
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discusss.huggingface.co) instead? Thanks!
transformers
10,841
closed
issue of run_mlm.py
hello guys, I try to finetune bert in my own dataset (line by line txt, language:Chinese), follow the guid code of run_mlm.py example. The tokenizer is bert pretrained tokenizer (tokenizer=AutoTokenizer.from_pretrained('bert-base-chinese')), and model is bert-base-chinese ,as folllows: config=BertConfig.from_pretrained('bert-base-chinese') print(config) model=BertForMaskedLM(config=config) when I started the trainer, I got the following errors: Using custom data configuration default-cd6deed448eea358 Downloading and preparing dataset text/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/hcl/.cache/huggingface/datasets/text/default-cd6deed448eea358/0.0.0/293ecb642f9fca45b44ad1f90c8445c54b9d80b95ab3fca3cfa5e1e3d85d4a57... Dataset text downloaded and prepared to /home/hcl/.cache/huggingface/datasets/text/default-cd6deed448eea358/0.0.0/293ecb642f9fca45b44ad1f90c8445c54b9d80b95ab3fca3cfa5e1e3d85d4a57. Subsequent calls will reuse this data. 100%|██████████| 3264/3264 [01:06<00:00, 48.83ba/s] 0%| | 0/3264 [00:00<?, ?ba/s] Traceback (most recent call last): File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1582, in _map_single writer.write_batch(batch) File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_writer.py", line 276, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1559, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 331, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/hcl/miniconda3/envs/pytorch/lib/python3.7/site-packages/datasets/arrow_writer.py", line 98, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 1067, in pyarrow.lib.Array.__getitem__ File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index IndexError: index out of bounds python-BaseException for now, I don't know why, the first epoch runs well. Any helps? ps: my os is deepin15; I choose nvidia rtx2080ti as my gpu, and finetune my dataset on pytorch1.6
03-22-2021 02:48:12
03-22-2021 02:48:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,840
closed
why My Albert pretrain loss can't decrease?
# 📚 Migration ## Information <!-- Important information --> Model i am using Albert Language I am using the model on just digits(Desensitized Chinese) The problem arises when using: Albert Trainer ## Details The loss can decrease normal when I take this config to RoBerta, I use Albert replace RoBerta, The loss can't decrease, I don't know what's the problem, please help ``` %%time bert_file = './albert' from transformers import Trainer, TrainingArguments from transformers import LineByLineTextDataset, DataCollatorForLanguageModeling from transformers import AlbertConfig, AlbertForMaskedLM config = AlbertConfig( hidden_size = 768, num_attention_heads = 12, intermediate_size = 3072, vocab_size = vocab_size + 10 ) model = AlbertForMaskedLM(config=config) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./my_wiki_file", block_size=128, ) training_args = TrainingArguments( output_dir=bert_file, overwrite_output_dir=True, num_train_epochs=40, per_device_train_batch_size=64, save_steps=100000, save_total_limit=2, prediction_loss_only=False, ) %%time trainer = Trainer( model = model, args = training_args, data_collator = data_collator, train_dataset = train_dataset ) trainer.train() ``` This is result ``` Step Training Loss 500 6.687300 1000 4.034700 1500 3.826200 2000 3.777200 2500 3.788800 3000 3.751100 3500 3.780000 4000 3.772900 4500 3.795800 5000 3.737000 5500 3.782300 6000 3.775600 6500 3.821400 7000 3.730200 7500 3.751700 8000 3.787000 8500 3.824500 9000 3.746300 9500 3.782600 10000 3.770600 ``` ## Environment info - `transformers` version:4.5.0.dev0 - Platform:Kaggle notebook - Python version:3.7 - PyTorch version (GPU?): - torch version: 1.8.0 - Using GPU in script?:YES - Using distributed or parallel set-up in script?:NO ## Checklist - [ ] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
03-22-2021 01:55:44
03-22-2021 01:55:44
Hi! ALBERT is known to have issues converging in some cases as all layers are shared. See the following issues for similar issues and potential resolutions: https://github.com/huggingface/transformers/issues/5984 https://github.com/huggingface/transformers/issues/4727 https://github.com/huggingface/transformers/issues/2553<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,839
closed
Fix on_step_begin and on_step_end Callback Sequencing
# What does this PR do? Currently, the Trainer exhibits the following behavior (simplified): ``` for step, input in epoch_iterator: if (step + 1) % self.args.gradient_accumulation_steps == 0: callback_handler.on_step_begin() ... if (step + 1) % self.args.gradient_accumulation_steps == 0: # Apply Gradient Update (Finished accumulating) optimizer.step() callback_handler.on_step_end() ``` Unfortunately, this means that `on_step_begin()` gets called during the same iteration, *before* `on_step_end()` which is incorrect, and confuses folks implementing custom callbacks for timing individual iterations (like my team!). Instead, this fix starts by calling `on_step_begin()` at steps = 0 (iteration 0) and will only be called on the next step after `on_step_end()`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Code updates part of the Trainer, so tagging @sgugger.
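To make the intended ordering concrete, here is a tiny self-contained simulation of the corrected control flow described above; the names are illustrative stand-ins, not the real `Trainer` internals:

```python
GRAD_ACCUM = 2
NUM_MICRO_BATCHES = 6
events = []

def on_step_begin():
    events.append("begin")

def on_step_end():
    events.append("end")

for step in range(NUM_MICRO_BATCHES):
    if step % GRAD_ACCUM == 0:
        on_step_begin()          # fires at step 0 and only after the previous on_step_end
    # ... forward/backward on the micro-batch would happen here ...
    if (step + 1) % GRAD_ACCUM == 0:
        # optimizer.step() would run here, once gradients are accumulated
        on_step_end()

print(events)  # ['begin', 'end', 'begin', 'end', 'begin', 'end']
```

With the old `(step + 1) % gradient_accumulation_steps` condition on both calls, each optimizer iteration would instead emit `begin` immediately before `end` in the same loop pass, which is what confused per-step timing callbacks.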
03-21-2021 22:47:22
03-21-2021 22:47:22
transformers
10,838
closed
Can’t download the pre-trained pegasus-large model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: - Python version: 3.7.10 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ## Who can help - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj ## Information It appears that the huggingface.co model url has some problem. Code: ``` from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM tokenizer2 = AutoTokenizer.from_pretrained("google/pegasus-large") model2 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large") inputs1_2 = tokenizer2.encode("summarize: " + text1, return_tensors="tf", max_length=1024) outputs1_2 = model2.generate(inputs1_2, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) outputs1_2, tokenizer2.decode(outputs1_2[0]) ``` error message: ``` 404 Client Error: Not Found for url: https://huggingface.co/google/pegasus-large/resolve/main/tf_model.h5 … OSError: Can't load weights for 'google/pegasus-large'. Make sure that: - 'google/pegasus-large' is a correct model identifier listed on 'https://huggingface.co/models' - or 'google/pegasus-large' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. ```
03-21-2021 21:33:50
03-21-2021 21:33:50
Hey @xiaohy9, Thanks for the issue! It should be resolved now. See: https://huggingface.co/google/pegasus-large/commit/4510ba69cc183d23e892e7728a40fdcf42e83079 . Could you try again? <|||||>Yes, it works now. Thanks for the quick response! However, I saw a similar issue using pegasus-large as with pegasus-xsum, with details mentioned here: https://github.com/huggingface/transformers/issues/10837<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
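As a side note for anyone who hits a missing `tf_model.h5` before a repo is fixed, the TF classes can usually convert the PyTorch weights on the fly. A small sketch, assuming only `pytorch_model.bin` is available in the repo:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
# from_pt=True loads and converts the PyTorch checkpoint when no TF weights are published
model = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-large", from_pt=True)
```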
transformers
10,837
closed
pegasus-xsum summarized a story of Eiffel Tower into one on the World Trade Center
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: - Python version: 3.7.10 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ## Who can help - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj ## Information The wired thing happened when I tried model pegasus-xsum for text summarization using the example code and data. I noticed that the output describes a similar but obviously different story than the one in the input. I expected to see some description of the Eiffel Tower, but the output is all about New York's World Trade Center!! I noticed that the online demo version works fine, and the summary output is still on the Eiffel Tower. https://huggingface.co/google/pegasus-xsum It appears that pegasus-xsum model in my code generated the summary from some training data, but not the input I gave (retained memory?). How can I git the model behave normally like the online version? The code I used (adopted from the online demo page): ``` text1="The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct." from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM tokenizer1 = AutoTokenizer.from_pretrained("google/pegasus-xsum") model1 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum") inputs1_1 = tokenizer1.encode("summarize: " + text1, return_tensors="tf", max_length=1024) outputs1_1 = model1.generate(inputs1_1, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) tokenizer1.decode(outputs1_1[0]) ``` the output I got: `"<pad> New York's World Trade Center is the tallest building in the United States and one of the world's tallest structures, with a total height of 1,776ft (541m), according to Guinness World Records."`
03-21-2021 21:27:39
03-21-2021 21:27:39
Hey @xiaoda99, You should **not** append a `summarize: ` prefix for Pegasus. Running this code: ```python text1="The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct." from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM tokenizer1 = AutoTokenizer.from_pretrained("google/pegasus-xsum") model1 = TFAutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum") inputs1_1 = tokenizer1.encode(text1, return_tensors="tf", max_length=1024) outputs1_1 = model1.generate(inputs1_1, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) tokenizer1.decode(outputs1_1[0]) ``` gives me better results: ``` "<pad> The Eiffel Tower is a free-standing structure in Paris, France, built in 1889 by Gustave Eiffel as a monument to his country's national symbol, the Eiffel Tower, which was later renamed the Louvre." ```<|||||>@patrickvonplaten , thanks for the response. The summary you posted is about Eiffel Tower, but the information is not really from the input text. The same problem still exists, it spit out some different story than the one in the input, which is likely from the original training data. Can you check on why this happens? thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,836
closed
Generating text with MBart Large 50 on GPU with Tensorflow is significantly slower than with Pytorch
This applies to the [MBart Large MMT 50-language model](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt). Takes about 1m20s to process 10 batches of size 16 x 171 on pytorch, but 8min for tensorflow. Both are running on P100 through Kaggle. - [Tensorflow notebook](https://www.kaggle.com/xhlulu/tf-mbartforconditionalgeneration-speed-test) - [Pytorch Notebook](https://www.kaggle.com/xhlulu/mbartforconditionalgeneration-speed-test) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Linux-5.4.89+-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Code for pytorch ```python from tqdm.auto import tqdm import torch from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" for i in tqdm(range(10)): encoded_hi = tokenizer([article_hi*10]*16, return_tensors="pt") generated_tokens = model.generate( **encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"] ) out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie." ``` ## Code for Tensorflow ```python from tqdm.auto import tqdm import tensorflow as tf from transformers import TFMBartForConditionalGeneration, MBart50TokenizerFast strategy = tf.strategy.get_strategy() article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" with strategy.scope(): model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", from_pt=True) tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" with strategy.scope(): for i in tqdm(range(10)): encoded_hi = tokenizer([article_hi*10]*16, return_tensors="tf") generated_tokens = model.generate( **encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"] ) out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie." ```
03-21-2021 20:04:48
03-21-2021 20:04:48
It seems that the generation function is handled by the [`TFGenerationMixin`](https://github.com/huggingface/transformers/blob/696e8a43655a63b7312e036616f4abd2106e179e/src/transformers/generation_tf_utils.py#L48-L72) whereas in torch it is handled by [`GenerationMixin`](https://github.com/huggingface/transformers/blob/d4d4447d536e5cf8c78518b8b3359168346a4134/src/transformers/generation_utils.py#L665-L699); quickly glancing over the code I notice that the implementation is different. Could there be a discrepancy in implementation that would affect the generation speed?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,835
closed
Issues finetuning MBART 50 many to many
- `transformers` version: Latest - Platform: - Python version: 1.8.0 - Using GPU in script?: Yes A100 - Using distributed or parallel set-up in script?: No I am trying to finetune MBART50-many-to-many ``` python ./transformers/examples/seq2seq/run_translation.py \ --model_name_or_path facebook/mbart-large-50-many-to-many-mmt \ --do_train \ --do_eval \ --source_lang ru_RU \ --target_lang en_XX \ --train_file ./corpus_v2/train.json \ --validation_file ./corpus_v2/valid.json \ --output_dir /local/nlpswordfish/tuhin/mbart50/tst-translation \ --per_device_train_batch_size=32 \ --per_device_eval_batch_size=8 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 51373 \ --max_val_samples 6424 \ --gradient_accumulation_steps 1\ --num_train_epochs 8 \ --save_strategy epoch \ --evaluation_strategy epoch ``` Even though I explicitly pass Src lang as ru_RU and Target as en_XX I get an error and see my log. I tried printing Src and Tgt language ``` Assigning ['ar_AR', 'cs_CZ', 'de_DE', 'en_XX', 'es_XX', 'et_EE', 'fi_FI', 'fr_XX', 'gu_IN', 'hi_IN', 'it_IT', 'ja_XX', 'kk_KZ', 'ko_KR', 'lt_LT', 'lv_LV', 'my_MM', 'ne_NP', 'nl_XX', 'ro_RO', 'ru_RU', 'si_LK', 'tr_TR', 'vi_VN', 'zh_CN', 'af_ZA', 'az_AZ', 'bn_IN', 'fa_IR', 'he_IL', 'hr_HR', 'id_ID', 'ka_GE', 'km_KH', 'mk_MK', 'ml_IN', 'mn_MN', 'mr_IN', 'pl_PL', 'ps_AF', 'pt_XX', 'sv_SE', 'sw_KE', 'ta_IN', 'te_IN', 'th_TH', 'tl_XX', 'uk_UA', 'ur_PK', 'xh_ZA', 'gl_ES', 'sl_SI'] to the additional_special_tokens key of the tokenizer Src lang is en_XX ids [250004] ids [2] loading weights file https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt/resolve/main/pytorch_model.bin from cache at /home/tuhin.chakr/.cache/huggingface/transformers/e33fcda1a71396b8475e16e2fe1458cfa62c6013f8cb3787d6aa4364ec5251c6.d802a5ca7720894045dd2c9dcee6069d27aa92fbbe33f52b44d479538dc3ccc3 All model checkpoint weights were used when initializing MBartForConditionalGeneration. All the weights of MBartForConditionalGeneration were initialized from the model checkpoint at facebook/mbart-large-50-many-to-many-mmt. If your task is similar to the task the model of the checkpoint was trained on, you can already use MBartForConditionalGeneration for predictions without further training. 
Tgt lang is None self.prefix_tokens is [None] ids [None] Traceback (most recent call last): File "./transformers/examples/seq2seq/run_translation.py", line 564, in <module main() File "./transformers/examples/seq2seq/run_translation.py", line 403, in main train_dataset = train_dataset.map( File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1289, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1260, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "./transformers/examples/seq2seq/run_translation.py", line 384, in preprocess_function with tokenizer.as_target_tokenizer(): File "/home/tuhin.chakr/yes/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 242, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 269, in set_tgt_lang_special_tokens prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens) File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 287, in convert_ids_to_tokens index = int(index) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' ``` Also as far I understand in many to many for finetuning it requires some separate processing based on the paper which is missing ? ![image](https://user-images.githubusercontent.com/3104771/111915770-500c5600-8a4e-11eb-8454-77c959bed2b4.png) What should be the data format. Additionally will u guys release a many to one model as well ? although many to one is a subset of many to many @patrickvonplaten, @patil-suraj
03-21-2021 18:05:17
03-21-2021 18:05:17
@patil-suraj any help is appreciated<|||||>@patrickvonplaten<|||||>Can anyone look into this @patil-suraj @patrickvonplaten <|||||>Hi @tuhinjubcse , sorry to reply only now, I've been a bit busy with the sprint and other projects so couldn't really allocate any time for this. I will get back to you by tomorrow. Also please don't tag people who are not related to this model, it might disturb them unnecessarily. Thank you for your patience.<|||||>Thank you, it would be good to know how to finetune a many to many models with more than one lang pairs in train and validation like fairseq multilingual https://github.com/pytorch/fairseq/tree/master/examples/multilingual<|||||>Okay, one issue at a time I'm taking a look at the error that you posted above. Also, the many-to-one model was not released when we ported this model to `Transformers`, it seems to have been released recently. I will convert and push it by tomorrow. And regarding multi-lingual fine-tuning, I will try to write a notebook about it. What we need to do here is, say we are fine-tuning on two language pairs, in that case, we need to concatenate the two datasets or in case the two language pairs don't have the same number of examples then add some sort of sampler which will sample the example from the datasets depending on the number of examples in which one. And when processing each language pair, set the appropriate `src_lang` and `tgt_lang` tokens. The processing part is explained in the [docs](https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart-50).<|||||>That would be really helpful if you can have a notebook which documents how to do that , or even a read me , just so that its clear<|||||>Thanks so much for your response and looking forward to use it <|||||>The many to one checkpoint is now available on the hub https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt<|||||>Thanks for releasing this. Looking forward to the instructions to do many to one finetuning as that is what this model will be superuseful for<|||||>Any updates on how to run many to one, can we pass --source_lang ru_RU,es_XX as a ',' separated string. Sorry I am not sure if that support is available yet. Would be really helpful if you could help here. The EMNLP arxiv deadline is super close on 17th April :) I know you are busy but this would be a huge favor <|||||>Multilingual fine-tuning won't be included in the example script, the goal of examples is to keep them simple and let the user extend them for custom training. I'm working on the notebook, but can probably share that on Monday. As I said in the above comment, for multilingual fine-tuning, in the simplest case you would just need to process the two datasets by setting correct `src_lang`, `tgt_lang` tokens, the rest of the training will be similar to traditional fine-tuning. Feel free to post the question on the [forum](https://discuss.huggingface.co/) as well, someone there might have better ideas for this.<|||||>Thank you so much, if you post the notebook here by Monday that would solve my problem. I am trying on my own to do it as well<|||||>Hi @tuhinjubcse We just merged #11170 which now allows to fine-tune mBART-50 on **single language pair** using the `run_translation.py` script. This should resolve the issue that you posted in the first comment.<|||||>Thanks so much<|||||>Suraj I got multilingual to work, however, while decoding I get this error. 
My added token dictionary is `{"uk_UA": 250049, "mk_MK": 250036, "mn_MN": 250038, "id_ID": 250033, "he_IL": 250031, "sl_SI": 250053, "pt_XX": 250042, "hr_HR": 250032, "th_TH": 250047, "tl_XX": 250048, "pl_PL": 250040, "ka_GE": 250034, "ta_IN": 250045, "km_KH": 250035, "te_IN": 250046, "xh_ZA": 250051, "sv_SE": 250043, "sw_KE": 250044, "ps_AF": 250041, "bn_IN": 250029, "ml_IN": 250037, "az_AZ": 250027, "af_ZA": 250028, "gl_ES": 250052, "ur_PK": 250050, "mr_IN": 250039, "fa_IR": 250030} ` File "translate.py", line 26, in <module> tokenizer = MBart50Tokenizer.from_pretrained(path) File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1704, in from_pretrained return cls._from_pretrained( File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1810, in _from_pretrained assert index == len(tokenizer), ( AssertionError: Non-consecutive added token 'bn_IN' found. Should have index 250054 but has index 250029 in saved vocabulary. The error comes from MBart50Tokenizer model = MBartForConditionalGeneration.from_pretrained(path) model.eval() model.to('cuda') tokenizer = MBart50Tokenizer.from_pretrained(path) It works fine with MBartTokenizer I can use MBartTokenizer for common languages in mbart25 and mbart50 for my manytoone model but for languages like pt_XX i can't .<|||||>HI @tuhinjubcse Glad you got it working. And this seems like a bug, I will take a look. How many new tokens did you add?<|||||>I tried adding tokens using the `add_tokens` and `add_special_tokens` method, saved and loaded it again, I didn't observe this issue. Here's what I did ```python tok = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50") tok.add_special_tokens({"MY_XX": "MY_XX"}) tok.add_special_tokens({"additional_special_tokens": ["MY2_XX"]}) tok.save_pretrained("./tmp") tok = MBart50Tokenizer.from_pretrained("./tmp") tok.convert_tokens_to_ids("MY_XX") # 250054 tok.convert_tokens_to_ids("MY2_XX") # 250055 ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
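To make the multilingual preprocessing sketched in the comments above concrete, here is a rough illustration of how two language pairs could be tokenized with the correct language codes before concatenating (or sampling from) the processed datasets; the `source`/`target` field names are made up for the example:

```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

def preprocess_pair(examples, src_lang, tgt_lang, max_len=128):
    """Tokenize one language pair, setting the language codes before encoding."""
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    model_inputs = tokenizer(examples["source"], max_length=max_len, truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["target"], max_length=max_len, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

ru_en = preprocess_pair({"source": ["привет, мир"], "target": ["hello, world"]}, "ru_RU", "en_XX")
es_en = preprocess_pair({"source": ["hola, mundo"], "target": ["hello, world"]}, "es_XX", "en_XX")
# ru_en and es_en (or full datasets processed the same way) can then be concatenated for training
```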
transformers
10,834
closed
Local Attention for GPT2
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Our model uses local attention in some layers (i.e each position can only see the last k=256 tokens in every other layer). We would like to be able to specify this in the config on the model hub. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Right now we can't integrate the 1.3B and 2.7B EleutherAI GPT models because local attention is not supported in transformers.
03-21-2021 16:35:54
03-21-2021 16:35:54
@patil-suraj is working on implementing GPT Neo over at https://github.com/huggingface/transformers/pull/10848! The 1.3B and 2.7B should be loadable in that architecture once finalized.
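For readers unfamiliar with the local attention pattern described in this request, here is a minimal sketch of the banded causal mask it implies; the window size and layout are illustrative, not the exact GPT-Neo implementation:

```python
import torch

def local_causal_mask(seq_len: int, window: int = 256) -> torch.Tensor:
    """Boolean mask where position i may attend to position j iff j <= i and i - j < window."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (i - j < window)

print(local_causal_mask(6, window=3).int())
# each row attends only to itself and the 2 previous positions
```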
transformers
10,833
closed
weird large memory usage of mbert model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - Using GPU in script?: - - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> albert, bert, xlm: @LysandreJik ## Information I am using mbert model, this is 110 M params, I am testing the pretrianing codes with mt5-small and compare this with mbert, mbert weirdly use a lot of memory, I need to reduce 1/2 of batch_size on the same machine I train my mt5-small model which is 3x larger than mbert model. This is weird that mbert while being 1/3 of mt5-small require large memory. * I am using run_mlm command * I run the codes on V100 GPU ## To reproduce Steps to reproduce the behavior: python run_mlm.py --model_name_or_path bert-base-multilingual-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir output --per_device_train_batch_size 88 --fp16 --max_seq_length 128 ## Expected behavior larger batch size should be possible with mbert
03-21-2021 16:10:15
03-21-2021 16:10:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>One possible reason for this is mBERT's much larger vocabulary, combined with the fact that the HF code computes predictions for all tokens, not only the masked ones, so this projection can dominate overall runtime and memory while producing no useful output. That was the case for me, and I made a small fix here https://github.com/yurymalkov/transformers/commit/0fe0725c0f7fcc13df698bba1bd01847c1494e43 which gave a 6X larger batch at the same memory usage (1060 GTX) and 3-4X faster training of a small mBERT model, but it causes some tests to fail (mainly, I think, due to a PyTorch JIT failure that sees a named tensor in a slice, and secondly due to the missing output for non-masked tokens, though who cares about those?).
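Here is a minimal sketch of the optimization described in the last comment, i.e. projecting to the vocabulary only at masked positions; the shapes are toy stand-ins and this is not the actual patch linked above:

```python
import torch
import torch.nn.functional as F

batch, seq, hidden, vocab = 8, 128, 768, 105_879  # roughly mBERT-sized vocabulary
sequence_output = torch.randn(batch, seq, hidden)  # stand-in for the encoder output
labels = torch.full((batch, seq), -100, dtype=torch.long)
labels[:, 5] = 42                                  # pretend one masked position per row
lm_head = torch.nn.Linear(hidden, vocab)

masked = labels != -100                            # [batch, seq] boolean mask
logits = lm_head(sequence_output[masked])          # [n_masked, vocab] instead of [batch, seq, vocab]
loss = F.cross_entropy(logits, labels[masked])
print(logits.shape, loss.item())
```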
transformers
10,832
closed
run_mlm.py: CUDA error: device-side assert triggered, THCTensorIndex
## Environment info - `transformers` version: 4.4.2 - Platform: Linux - Python version: Python 3.4.9 - PyTorch version (GPU?): 1.6.0+cu101 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes - GPU details: 4 GPUs V100 16GB ## Information I am using Bert and Roberta. I'm try to train from scratch on Wikipedia dataset using your examples run_mlm and your dataset wikipedia (20200501.en) Before using distributed set up, I was stacked on the first optimization step. Without distributed setup I was stack on first optimization steps or received the reported error. With distributed setup I always receive the reported error. The problem arises when using: * [x] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: MLM train from scratch Bert and Roberta * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` export CUDA_LAUNCH_BLOCKING=1 export TOKENIZERS_PARALLELISM=true export OMP_NUM_THREADS=32 source /data/medioli/env/bin/activate python3 -m torch.distributed.launch \ --nproc_per_node 4 run_mlm.py \ --dataset_name wikipedia \ --tokenizer_name roberta-base \ --model_type roberta \ --dataset_config_name 20200501.en \ --do_train \ --do_eval \ --learning_rate 1e-5 \ --num_train_epochs 5 \ --save_steps 5000 \ --output_dir /data/medioli/models/mlm/wikipedia_roberta_5ep_1e5_lbl \ --line_by_line \ --use_fast_tokenizer \ --logging_dir /data/medioli/models/mlm/wikipedia_roberta_5ep_1e5_lbl/runs \ --cache_dir /data/medioli/datasets/wikipedia/ \ --overwrite_output_dir \ ``` ## Errors and Output Many errors like this: ``` /pytorch/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [372,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
``` Then: ``` Traceback (most recent call last): File "/data/medioli/transformers/examples/language-modeling/run_mlm.py", line 491, in <module> main() File "/data/medioli/transformers/examples/language-modeling/run_mlm.py", line 457, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1053, in train tr_loss += self.training_step(model, inputs) File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step loss = self.compute_loss(model, inputs) File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1475, in compute_loss outputs = model(**inputs) File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/parallel/distributed.py", line 511, in forward output = self.module(*inputs[0], **kwargs[0]) File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 1057, in forward return_dict=return_dict, File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 810, in forward past_key_values_length=past_key_values_length, File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 123, in forward embeddings += position_embeddings RuntimeError: CUDA error: device-side assert triggered terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at /pytorch/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fa4517ed1e2 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7fa451a3bf92 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fa4517db9cd in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10.so) frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x25a (0x7fa427f8489a in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: c10d::Reducer::~Reducer() + 0x28a (0x7fa427f79b1a in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7fa427f593c2 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7fa4277577a6 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #7: <unknown function> + 0xa6b08b (0x7fa427f5a08b in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #8: 
<unknown function> + 0x273c00 (0x7fa427762c00 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) frame #9: <unknown function> + 0x274e4e (0x7fa427763e4e in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so) <omitting python frames> frame #22: main + 0x16e (0x400a3e in /data/medioli/env/bin/python3) frame #23: __libc_start_main + 0xf5 (0x7fa48f4903d5 in /lib64/libc.so.6) frame #24: /data/medioli/env/bin/python3() [0x400b02] ``` Discussion in pytorch: https://discuss.pytorch.org/t/solved-assertion-srcindex-srcselectdimsize-failed-on-gpu-for-torch-cat/1804/22 Who can help me Models: @LysandreJik Library: - tokenizers: @LysandreJik - trainer: @sgugger Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj
03-21-2021 15:44:53
03-21-2021 15:44:53
Hi there! So the problem is a bit complex and linked to the way RoBERTa is implemented in Transformers with a small hack: its pretrained checkpoints have 512 + 2 position embeddings, not 512. When you run your command, the model is randomly initialized with 512 position embeddings (the default in the config) but you still use it with the `roberta-base` tokenizer, which returns position indices up to 514. This results in an index error that throws the "device-side assert triggered". To fix this, you need to either use another tokenizer, or prepare your random model like this: ``` from transformers import RobertaForMaskedLM, RobertaConfig model = RobertaForMaskedLM(RobertaConfig(max_position_embeddings=514)) model.save_pretrained("model_dir") ``` then use `model_dir` for `--model_name_or_path` when launching your script. You can also tweak the script directly to add `max_position_embeddings=514` in [this line](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L282).<|||||>Thank you! Now it works! :)
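As a follow-up illustration (not part of the original exchange), here is a minimal sketch of how a from-scratch RoBERTa config could be sanity-checked against its tokenizer before launching a long MLM run; the output folder name and printed messages are placeholders, not anything from the thread.

```python
# Hypothetical pre-flight check for a from-scratch RoBERTa MLM run (values are illustrative).
from transformers import RobertaConfig, RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
config = RobertaConfig()  # default max_position_embeddings is 512

# RoBERTa position ids start after the padding index, so the embedding table needs
# model_max_length + 2 rows to cover every position id the tokenizer can produce.
needed = tokenizer.model_max_length + 2
if config.max_position_embeddings < needed:
    print(f"bumping max_position_embeddings from {config.max_position_embeddings} to {needed}")
    config.max_position_embeddings = needed

model = RobertaForMaskedLM(config)
model.save_pretrained("roberta_scratch")  # pass this directory to --model_name_or_path
```

Passing the saved directory to `--model_name_or_path` keeps the randomly initialized model consistent with the tokenizer the script will use.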
transformers
10,831
closed
Encoder Decoder Model didn't return a reasonable result
Hello, I tried the example code in the official website as below. # code `from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) outputs = model(input_ids=input_ids, decoder_input_ids=input_ids) outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids) loss, logits = outputs.loss, outputs.logits model.save_pretrained("bert2bert") model = EncoderDecoderModel.from_pretrained("bert2bert") generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id) for i, sample_output in enumerate(generated): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))` # output However, it returned to such a result. `Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertLMHeadModel: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias', 'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.0.crossattention.self.key.bias', 'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.3.crossattention.self.query.weight', 
'bert.encoder.layer.3.crossattention.self.query.bias', 'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.6.crossattention.self.value.bias', 'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.7.crossattention.self.query.bias', 'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.9.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.key.weight', 
'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.10.crossattention.self.value.bias', 'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 2021-03-21 16:47:27.243389: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2021-03-21 16:47:27.243603: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 0: . as as as as as as as as as as as as as as as as as as Process finished with exit code 0 ` # issue Would you be too kindly to help me find out the reason why it returned word 'as' ? Much thanks! Besides, as a newbie, would it be possible if I could use BERT as encoder and Transformer as Decoder in this EncoderDecoderModel? I would be too grateful if you could help me out!
03-21-2021 09:01:21
03-21-2021 09:01:21
Hi! You're using two `bert-base-uncased` as encoder/decoders. This is possible, but you'll need to train your resulting encoder-decoder model on a downstream task in order to obtain coherent results. The `bert-base-uncased` checkpoint is originally from an encoder-only setup. If I may recommend some notebooks/documentation: - [Documentation of encoder/decoder framework](https://huggingface.co/transformers/model_doc/encoderdecoder.html) - [Training a Bert2Bert model for summarization](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) - [Training a shared Roberta2Roberta for summarization](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
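For readers landing here, below is a minimal warm-start sketch in the spirit of the linked notebooks; reusing CLS/SEP as the decoder start and end tokens is an assumption carried over from those notebooks, and the model will keep producing incoherent text until it is fine-tuned on a downstream task.

```python
# Warm-starting a BERT2BERT encoder-decoder (illustrative sketch; fine-tuning is still required).
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Generation needs to know which special tokens to use; BERT has no dedicated BOS/EOS,
# so the CLS/SEP ids are commonly reused here.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))  # gibberish until fine-tuned
```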
transformers
10,830
closed
getting nans with t5-large + fix
## Environment info - `transformers` version: 4.5.0.dev0 - Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): t5-large The problem arises when using: * [ ] my own modified scripts: run_seq2seq with minor modifications (attached) The tasks I am working on is: * [ ] my own task or dataset: Closed-Book Open Domain QA ## To reproduce Steps to reproduce the behavior (the fix I'm suggesting is very simple, so perhaps there is no reason to reproduce): 1. unzip the attached zip (below). 2. run ```bash python run_seq2seq.py --model_name_or_path=t5-large --do_train --do_eval --task=qa --train_file=data/PAQ.filtered.regular.16000.json --validation_file=data/PAQ.filtered.regular.16000.json --output_dir=results/5e-5-t5-large-4096000-128-140-1792000-0.1-regular-true-4 --overwrite_output_dir --per_device_train_batch_size=1 --per_device_eval_batch_size=128 --predict_with_generate --fp16 --max_steps=1000 --evaluation_strategy=steps --text_column=question --summary_column=answer --save_total_limit=5 --cache_dir=../.cache --save_steps=500000 --learning_rate=5e-5 --eval_steps=96000 --warmup_steps=100 --run_name=5e-5-t5-large-4096000-128-140-1792000-0.1-regular-true-4 --dropout_rate=0.1 --gradient_accumulation_steps=1 --logging_steps=1 ``` ## Expected behavior Training without nans. ## Possible fix I debugged and saw that we get nans at the `modeling_t5.py` script in line 241 ```python hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) ``` By modifing this line to: ```python clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) * torch.rsqrt(variance + self.variance_epsilon) ``` It seems to be solved. BTW it happens in the last layers (this might explain why it wasn't caught in [this fix](https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139)) [seq2seq.zip](https://github.com/huggingface/transformers/files/6177063/seq2seq.zip)
03-21-2021 08:52:41
03-21-2021 08:52:41
Hi I also observe the similar issue with mt5 models, https://github.com/huggingface/transformers/issues/10819 , deepspeed is still not working for me due to this issue with mt5 models. I greatly appreciate having a look @patil-suraj @patrickvonplaten <|||||>We didn't really manage to resolve the problems with t5/mt5 + mixed precision fp16 (cc @patil-suraj). I'm not sure whether anybody has tried internally to fine-tune t5/mt5 with deepspeed (@stas00 maybe?)<|||||>the issue arises without deepspeed, just vanilla mt5-small model. Also, I see similar nans with deepspeed with a model based on mt5-small slightly modified, please see the issue here https://github.com/huggingface/transformers/issues/10821#issuecomment-803453998, I think if the issue with fp16 option could get resolved, hopefully this will be also more stable with model changes in deepspeed as well. Thanks a lot.<|||||>Indeed, this has nothing to do with deepspeed, other than that deepspeed trains in mixed precision and evals in full fp16 at the moment. I've started studying the bfloat16 vs. float16 numerical properties and their correlation to each other. And once I understand it well I will try to see if there some sort of magical remapping that perhaps could be done - this is my fantasy of course. I just need to finish a few other more urgent things with deepspeed stage3 integration first. But please don't let my comment prevent you from merging the proposed fix if it already solves the problem. <|||||>I got similar issue with mt5 model, @patrickvonplaten thanks a lot in advance for your help<|||||>@dorost1234 + @yuvalkirstain, please kindly try this branch: https://github.com/huggingface/transformers/tree/t5-fp16-no-nans and let me know if it solves the problem - It seems that the problem is due to `autocast` in `T5LayerFF` so this branch tries to turn off `autocast` just for that layer. It also disables the previously added clamping. There is also a lot of debug statements in the branch but they will be silent unless nan/inf is detected. I tested it work on a small sample with t5-small/t5-base/t5-large/google/mt5-small. The main part of the fix is just: ``` class T5LayerFF(nn.Module): def forward(self, hidden_states): with torch.cuda.amp.autocast(enabled=False): forwarded_states = self.layer_norm(hidden_states) forwarded_states = self.DenseReluDense(forwarded_states) hidden_states = hidden_states + self.dropout(forwarded_states) return hidden_states ``` and removing some code. So use the branch first. If it works I guess we could just monkey patch this version for AMP or come up with some cleaner solution. 
Probably with `torch.is_autocast_enabled()` check<|||||>Dear @stas00 Thank you very much for taking time looking into this issue, this would be really awesome if this could fix the issue, I tried to test it, for this I got the branch, and then I install it locally with "python setup.py develop", then I run this command: `python run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /temp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10 --fp16` I got this error: ``` Traceback (most recent call last): File "run_translation.py", line 562, in <module> main() File "run_translation.py", line 448, in main pad_to_multiple_of=8 if training_args.fp16 else None, TypeError: __init__() got an unexpected keyword argument 'model' ``` I think there is some version mismatch. I removed the model from input to the collator, as below ``` data_collator = DataCollatorForSeq2Seq( tokenizer, #model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8 if training_args.fp16 else None, ) ``` and then here is what I got with fp16 option: ``` {'loss': 23.3523, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} {'loss': 22.5557, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} {'loss': 25.9471, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} {'loss': 23.0994, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} {'loss': 24.9974, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} {'loss': 23.3743, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} {'loss': 24.2147, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} {'loss': 26.7845, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} {'loss': 25.2277, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} {'loss': 23.3156, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} {'loss': 21.275, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} {'loss': 23.7031, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} {'loss': 23.8086, 'learning_rate': 4.9985799799012544e-05, 'epoch': 0.0} {'loss': 25.8143, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} {'loss': 24.4319, 'learning_rate': 4.998361515270678e-05, 'epoch': 0.0} {'loss': 26.8277, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0} ``` here is loss without fp16: ``` {'loss': 27.0258, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} {'loss': 23.141, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} {'loss': 21.2312, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} {'loss': 19.3567, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} {'loss': 18.7998, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} {'loss': 17.9632, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} {'loss': 17.2105, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} {'loss': 17.5506, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} {'loss': 15.2566, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} {'loss': 14.8667, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} {'loss': 13.7132, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} {'loss': 13.4058, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0 ``` So I think this is not optimizing the loss well. I greatly appreciate having a look. Thanks a lot. <|||||>re errors - this is all on master - the source code and `run_translation.py`. 
When you install `pip install -e .` sometimes conda/pip don't clean up an old install, so it helps to do `pip uninstall transformers -y` at least 2 times! I solve such problems by running locally and not relying on the installed `transformers`, i.e.: ``` git clone https://github.com/huggingface/transformers cd transformers PYTHONPATH=src python examples/seq2seq/run_translation.py ... ``` now you never need to worry about what `transformers` version is installed in the environment. wrt not getting the loss going down - this is odd, I just run your code: ``` PYTHONPATH=src python examples/seq2seq/run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10 --fp16 {'loss': 29.7519, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} {'loss': 26.3593, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} {'loss': 23.4431, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} {'loss': 21.431, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} {'loss': 19.2445, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} {'loss': 17.8293, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} {'loss': 16.9441, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} {'loss': 15.7572, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0} {'loss': 15.2937, 'learning_rate': 4.9980338183248135e-05, 'epoch': 0.0} {'loss': 14.4368, 'learning_rate': 4.997815353694237e-05, 'epoch': 0.0} {'loss': 14.6709, 'learning_rate': 4.997596889063661e-05, 'epoch': 0.0} {'loss': 13.2806, 'learning_rate': 4.9973784244330843e-05, 'epoch': 0.0} {'loss': 12.9245, 'learning_rate': 4.997159959802508e-05, 'epoch': 0.0} {'loss': 12.4647, 'learning_rate': 4.9969414951719316e-05, 'epoch': 0.0} {'loss': 11.4738, 'learning_rate': 4.996723030541355e-05, 'epoch': 0.0} ``` Must be your hardware? Try to lower the learning rate? I tried with 1 or 2 gpus and it worked in both cases. 
<|||||>Hi @stas00 thank you very much for the pointers, I did it as you mentioned and now I see this is going down nicely ``` {'loss': 28.1802, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} {'loss': 27.4353, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} {'loss': 21.3904, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} {'loss': 22.8854, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} {'loss': 19.6943, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} {'loss': 21.253, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} {'loss': 20.1937, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} {'loss': 18.6606, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} {'loss': 18.0337, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} {'loss': 16.1259, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} {'loss': 15.4007, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} {'loss': 15.6753, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} {'loss': 15.0481, 'learning_rate': 4.9985799799012544e-05, 'epoch': 0.0} {'loss': 14.5833, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} {'loss': 14.0758, 'learning_rate': 4.998361515270678e-05, 'epoch': 0.0} {'loss': 13.7096, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0} {'loss': 13.3216, 'learning_rate': 4.998143050640102e-05, 'epoch': 0.0} {'loss': 13.2331, 'learning_rate': 4.9980338183248135e-05, 'epoch': 0.0} {'loss': 12.1556, 'learning_rate': 4.997924586009525e-05, 'epoch': 0.0} ``` This is such a great, wonderful, amazing fix. Looking forward to using it when this is pushed to the repository. For all the hard problems, you are our only hope @stas00 Thank you very much for this great fix. <|||||>Thank you for your kind words, I'm so happy to hear that it worked, @dorost1234. I will make a proper PR after I clean this branch up.<|||||>@yuvalkirstain, please kindly test if this PR fixes the problem: https://github.com/huggingface/transformers/pull/10956<|||||>Thank you @stas00 ! It seems to work were my proposed fix failed with T5-Small. I will now run some additional experiments with T5-Large and update.<|||||>Thank you for validating that, @yuvalkirstain! Indeed, I tried first local fixes but the problem would just pop-up elsewhere. I'm just thinking that perhaps we could find if it's all calls to FF that lead to the problem or only some of them, and then we could optimize the solution I proposed by only disabling `autocast` in some cases and not all. I haven't tested that yet. If you experiment I recommend for you to try my branch, since I left the "detector" on and it'll immediately tell you when the first `inf` is encountered. What I'm most interested in is some longer runs to ensure it doesn't start overflowing at a later point. Thank you for your contribution.<|||||>Finetuned T5-Base using this branch with the standard T5 finetuning HPs on NQ (except from batch_size - used only ~26k tokens) and didn't get nans (it has been running for over 3 hours and training converged). Thanks again, I guess the issue can be closed for time being.<|||||>Thank you for this validation, @yuvalkirstain. I still would like to see if we can find a more efficient solution before merging it, but this is great that we have one that works. This unfortunately doesn't help with deepspeed since it doesn't use pytorch AMP and has its own version, but which doesn't use context manager so can't be turned off locally like `autocast`. So we hope to find a different solution. 
I linked this issue to the PR so it'll get closed automatically when it's merged.<|||||>Well, the nans are back. `T5LayerFF: 1 has inf T5LayerNorm has inf T5LayerNorm variance has inf T5LayerNorm hidden_states has nans T5LayerNorm hidden_states before return has nans T5LayerFF: 2 has nans T5LayerFF: 3 has nans T5LayerFF: 5 has nans T5Block after T5LayerFF has nans T5Stack loop end has nans T5LayerNorm has nans T5LayerNorm variance has nans T5LayerNorm hidden_states has nans T5LayerNorm hidden_states before return has nans` The model I used here was T5-large-ssm-nqo. @stas00 If you'd like to replicate I can send the relevant training file + command.<|||||>Yes, please, I'm working in parallel on gpt-neo that has the same issues, so the more reproducible cases we have the higher are the chances we can find a solid fix. Also those would be good candidates for tests (hoping that we can find a quick way to get to overflow).<|||||>Let's continue the discussion in the PR that is trying to solve this issue: https://github.com/huggingface/transformers/pull/10956<|||||>@dorost1234 hI, Could you please tell me how you solved this loss optimization problem. I am facing same issue<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>So is this fix now in the main version of transformers?<|||||>I found that results are different when you load like this: (first is better) model1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16).to("cuda") than when you load via: model1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True) model1a_CPU.half() model1a_CPU.eval() model1a_CPU.to("cuda") So this could be a solution, I will compare result on /CPU versus /This versus /Half <|||||>@seems like the solution is already implemented in this call: (model1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16).to("cuda")) Probably it is trigered by torch_dtype=torch.float16. So a part of model is (likely) moved to fp32 from fp16, so it works properly, exactly the same as with FP32, and exactly the same as on CPU. Of course it does use a little bit more of memory. When you call it second way, the memory usage is around 2.5 GB for T5-large, while with first it is around 2.9GB. It is slower around 10-15 percent.
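Since much of this thread is about locating where the first overflow appears, here is a generic detector sketch — it is not the detector from the `t5-fp16-no-nans` branch — that registers forward hooks and prints the first module whose output contains inf or nan. The checkpoint name and example sentences are arbitrary, and on CPU the model is left in fp32, so no overflow is expected there.

```python
# Generic activation-overflow detector (illustrative, not the branch's actual debug code).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer


def attach_overflow_hooks(model):
    """Register forward hooks that report the first module producing inf/nan activations."""
    def make_hook(name):
        def hook(module, inputs, output):
            tensors = output if isinstance(output, (tuple, list)) else (output,)
            for t in tensors:
                if torch.is_tensor(t) and t.is_floating_point() and (
                    torch.isinf(t).any() or torch.isnan(t).any()
                ):
                    print(f"overflow detected in {name} ({module.__class__.__name__})")
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))


device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval().to(device)
if device == "cuda":
    model = model.half()  # overflow is only expected under fp16

attach_overflow_hooks(model)
batch = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").to(device)
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids.to(device)
with torch.no_grad():
    model(**batch, labels=labels)  # any hook that fires names the first overflowing module
```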
transformers
10,829
closed
[Wav2Vec2] Small improvements for wav2vec2 info script
03-21-2021 08:41:16
03-21-2021 08:41:16
transformers
10,828
closed
[wav2vec sprint doc] add doc for Local machine
# What does this PR do? Add instructions for how to do Wav2Vec2 fine-tuning on a local machine.
03-21-2021 07:40:39
03-21-2021 07:40:39
transformers
10,827
closed
Log continuously models with wandb
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> wandb integration currently logs last model (which can be the best by using `TrainingArguments.load_best_model_at_end`). It would be great to allow continuous upload of model with appropriate aliases to versions. Options would be: * `WANDB_LOG_MODEL = True` which just logs at the end as currently (not sure if we want to add scheduler and optimizer) * `WANDB_LOG_MODEL = 'all'` which logs continuously the model * `WANDB_LOG_MODEL = False` which does not log the model ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Training can be very long and it would be so sad to lose a model :sob: ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I can probably propose a PR but would love brainstorming on the ideal logic: 1. should we leverage `Trainer.save_model` (as currently) or `Trainer._save_checkpoint` 2. should we consider an artifact version as containing only the model & config or also containing optimizer and scheduler? Or should it actually be 2 separate artifacts? 3. if we leverage `on_save`, can we avoid the same current logic (fake trainer saving to a temporary directory that is then uploaded async) and just use an actual copy of what has been saved. We would just need the path or list of files that have been saved (should be straightforward) 4. If we log continuously the model, should we upload it only if it's improved (when `metric_for_best_model` is defined)? If that's the case, we'll need to be able to detect when that is the case. If that's not the case we'll still need to be able to know which one is the best.
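As a small illustration of the three proposed `WANDB_LOG_MODEL` modes (a checkpoint-uploading callback built on top of them is sketched after the comments below), here is one possible way the environment variable could be parsed; the accepted spellings are an assumption, not an agreed spec.

```python
# Illustrative parsing of the proposed WANDB_LOG_MODEL modes (assumption, not the actual integration).
import os


def model_logging_mode():
    raw = os.getenv("WANDB_LOG_MODEL", "false").lower()
    if raw in ("true", "1", "yes", "end"):
        return "end"   # upload the final (or best) model once, at the end of training
    if raw == "all":
        return "all"   # upload every saved checkpoint as a new artifact version
    return "none"      # do not upload model files


print(model_logging_mode())
```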
03-21-2021 00:37:32
03-21-2021 00:37:32
I'm realizing now it would be so helpful with xlsr trainings (lost models due to crash after very long training). Would you have any input or suggestions @sgugger on how I could try to implement it?<|||||>Hi @borisdayma, I hadn't seen this issue. On our side we're more focused on how to continuously push the checkpoints to our own hub ;-). That being said, we can definitely leverage the `on_save` event and just look for the last `checkpoint-xxx` folder, then push its content as artifact. If you have a tracking with `metric_for_best_model`, then you won't even have to look for the checkpoint, it will be in the state with `state.best_model_checkpoint`. As for having one or separate checkpoints, I guess it really depends on what you think is best for WandB, you have more expertise than me here.<|||||>Thanks, it makes sense! Actually I imagined you may probably have some interest in a similar logic for the model hub so I wanted to work on something that would be useful for everyone. As a side note, models could also be stored on the model hub AND tracked by W&B (some people do it with ASW S3 for example). In this way, only checksums are actually stored and the files point back to the storage space so it could be complementary.<|||||>To do the same on the hub, my idea was to leverage the versioning system and just push the saved checkpoint every save with a commit message like "checkpoint step xxx". Ideally inside a Callback to avoid adding more stuff to the main training loop. I'll try to focus on this next week and see what we can easily do!<|||||>Nice, if possible it would be cool to allow a logger method to be called right after you push your checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Still interested in working on it! Let me know when you have a hook for the model hub!<|||||>We have started working on it (the Trainer gained a push_to_hub API!) the last step missing is the continuous checkpointing, will try to work on that last part soon!<|||||>Awesome, if you can somehow have a hook after pushing to the hub (with access to the url maybe) then we could link link them to the runs.<|||||>The call to `Trainer.push_to_hub` returns the url of the commit to the model hub (it's not part of the train for now).<|||||>I really like where this is going! Is the goal for `TrainingArguments.push_to_hub` to be eventually directly used by the `Trainer` or will it always be handled by the scripts? Also would it be possible to save the return of `_push_to_hub` somewhere in the `Trainer` (that way that url could be used by wandb).<|||||>Yes, we can definitely save the URL somewhere! Would you like to make a PR with that? I'm on another project that we will release soon right now but also plan to go back to the continuous integration after (should be in two weeks!)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
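Pulling the ideas from this thread together, here is a rough sketch of an `on_save` callback that uploads the newest `checkpoint-*` folder as a W&B artifact; the artifact name, the aliases, and the decision to skip optimizer/scheduler state are assumptions rather than the shipped integration.

```python
# Rough sketch of continuous checkpoint logging (assumption, not the official WandbCallback).
import glob
import os

import wandb
from transformers import TrainerCallback


class CheckpointArtifactCallback(TrainerCallback):
    """Upload the newest checkpoint-* folder as a W&B artifact every time the Trainer saves."""

    def on_save(self, args, state, control, **kwargs):
        if not state.is_world_process_zero or wandb.run is None:
            return
        checkpoints = glob.glob(os.path.join(args.output_dir, "checkpoint-*"))
        if not checkpoints:
            return
        latest = max(checkpoints, key=lambda p: int(p.rsplit("-", 1)[-1]))
        artifact = wandb.Artifact(f"model-{wandb.run.id}", type="model")
        artifact.add_dir(latest)
        aliases = [f"step-{state.global_step}"]
        best = state.best_model_checkpoint
        if best and os.path.normpath(best) == os.path.normpath(latest):
            aliases.append("best")
        wandb.run.log_artifact(artifact, aliases=aliases)
```

It would be registered with `trainer.add_callback(CheckpointArtifactCallback())`; whether optimizer and scheduler state belong in the same artifact is exactly point 2 of the proposal above.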
transformers
10,826
closed
feat(wandb): logging and configuration improvements
# What does this PR do? Following improvements to `wandb` integration: * ensure unique artifact id → previously it was based on run name which could create duplicates and mismatches (as runs can be updated manually in the UI) * allow manual calls to `wandb.init()` → previously it would have closed the run and started a new one * when a wandb run already exists (manually created), adds automatically model config parameters * simplify reinit logic → now explicitly closes a run for hp search and avoid use of reinit which can have complex side effects (different behavior in notebooks vs scripts) * ensure we have no dropped values (when step is below a previous logged value) by logging the step as an independent `train/global_step` metric and set it as default x-axis in the UI (can be edited manually). Note: this auto-setting of x-axis will be activated in upcoming release of wandb * get values committed immediately so they appear in the UI with no delay * fixes compatibility with sagemaker Fixes https://github.com/wandb/client/issues/1499, #8754, #10486 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Documentation: @sgugger
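To make the x-axis item in the list above concrete, this is roughly what logging the trainer step as its own metric looks like on the wandb client side; the `define_metric` calls assume a recent wandb release and are illustrative, not the PR's exact code.

```python
# Illustrative wandb logging with an explicit step metric (assumption, not the PR's code).
import wandb

run = wandb.init(project="hf-integration-demo", mode="offline")  # offline keeps the demo self-contained
wandb.define_metric("train/global_step")
wandb.define_metric("*", step_metric="train/global_step")  # use the trainer step as the default x-axis

for step in range(1, 4):
    wandb.log({"train/loss": 1.0 / step, "train/global_step": step}, commit=True)

run.finish()
```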
03-20-2021 21:12:09
03-20-2021 21:12:09
transformers
10,825
closed
ReformerEmbedding unclear behavior
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-136-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Reformer The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: MNIST ## To reproduce Steps to reproduce the behavior: 1. Leave max_position_embeddings undefined or different from the axial_pos_embedding shape 2. Observe the assert getting triggered for large sequence lengths (within the limit of axial_pos_embedding) https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/modeling_reformer.py#L255 ## Expected behavior * No assert ## Additional Details: `max_position_embeddings` is only used by `PositionEmbeddings`, so if we provide `axial_pos_embds` in the configuration, `max_position_embeddings` will not be used for the positional embedding. Moreover, it is considered for factorizing `num_buckets` in the `LSHSelfAttention` layer. Thus, in a scenario of using axial positional embeddings, both the assert check and the [bucket factorization](https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/modeling_reformer.py#L711) will be useless.
03-20-2021 18:51:58
03-20-2021 18:51:58
I think I don't fully agree here...`max_position_embeddings` is important even if `axial_pos_embedding` is used. If a user makes use of `axial_pos_embeddings` then `max_position_embeddings` is clearly defined by the tuple of `axial_pos_embeddings`. IMO, it's important that the user fully understands how `axial_pos_embeddings` work and should therefore also set `max_position_embeddings`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
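A small illustrative configuration that keeps the two settings discussed above consistent (the shapes are arbitrary, and the comment about the output width reflects Reformer's reversible layers):

```python
# Consistent Reformer config sketch: the axial shape factors the sequence length,
# and max_position_embeddings matches their product.
import torch
from transformers import ReformerConfig, ReformerModel

axial_pos_shape = (64, 64)                      # 64 * 64 = 4096 positions
max_positions = axial_pos_shape[0] * axial_pos_shape[1]

config = ReformerConfig(
    axial_pos_embds=True,
    axial_pos_shape=axial_pos_shape,
    axial_pos_embds_dim=(64, 192),              # must sum to hidden_size
    hidden_size=256,
    max_position_embeddings=max_positions,      # kept in sync with the axial shape
)
model = ReformerModel(config).eval()

input_ids = torch.randint(0, config.vocab_size, (1, max_positions))
with torch.no_grad():
    out = model(input_ids)
print(out.last_hidden_state.shape)              # (1, 4096, 2 * hidden_size) due to the reversible stack
```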
transformers
10,824
closed
Running "convert_graph_to_onnx.py" doesn't work.
## Environment info - `transformers` version: 4.5.0.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @patil-suraj Models: Bart (https://huggingface.co/transformers/model_doc/marian.html) --> ## Information A problem arises when I run "python3 convert_graph_to_onnx.py" where I receive the following error message ``` Traceback (most recent call last): File "convert_graph_to_onnx.py", line 22, in <module> from .file_utils import ModelOutput, is_tf_available, is_torch_available ModuleNotFoundError: No module named '__main__.file_utils'; '__main__' is not a package ``` ## To reproduce Run "python3 convert_graph_to_onnx.py" inside the following directory transformers/src/transformers. Steps to reproduce the behavior: 1. cd "transformers/src/transformers/" 2. python3 convert_graph_to_onnx.py ## Expected behavior I expect convert_graph_to_onnx.py to begin running.
03-20-2021 17:47:09
03-20-2021 17:47:09
This appears to be the proper way to run it. `python3 -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased bert-base-cased.onnx`
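The converter can also be driven from Python instead of the command line. Below is a sketch using the `convert` helper exposed by `transformers.convert_graph_to_onnx` — check the signature in your installed version; the opset value and output folder here are just assumptions.

```python
# Programmatic use of the graph-to-ONNX converter (sketch; verify convert()'s signature in your version).
from pathlib import Path

from transformers.convert_graph_to_onnx import convert

convert(
    framework="pt",                           # "pt" for PyTorch, "tf" for TensorFlow
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"), # the converter expects a fresh/empty output folder
    opset=11,
)
```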
transformers
10,823
closed
Modify the Trainer class to handle simultaneous execution of Ray Tune and Weights & Biases
# What does this PR do? The proper way to integrate Ray Tune and Weights & Biases is to pass a `wandb` parameter to `tune.run`. However, this parameter is handled as a dictionary inside the `config` argument, and there is no distinction between `wandb` parameters and standard model optimization parameters. The following code comes from [their docs](https://docs.wandb.ai/integrations/ray-tune): ```python from ray.tune.logger import DEFAULT_LOGGERS from ray.tune.integration.wandb import WandbLogger tune.run( train_fn, config={ # define search space here "parameter_1": tune.choice([1, 2, 3]), "parameter_2": tune.choice([4, 5, 6]), # wandb configuration "wandb": { "project": "Optimization_Project", "api_key_file": "/path/to/file", "log_config": True } }, loggers=DEFAULT_LOGGERS + (WandbLogger, )) ``` This is not a problem for Ray Tune. However, it is a problem for the `transformers` integration because it treats wandb as a model parameter, and therefore configuring wandb in this way will raise an error message claiming that `wandb is not a training argument`. The following code will raise such an error: ```python # Initialize our Trainer trainer = Trainer( model_init=model_init, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset if training_args.do_eval else None, compute_metrics=compute_metrics, tokenizer=tokenizer, data_collator=data_collator, ) # Hyperparameter Search def hp_space_fn(empty_arg): config = { "warmup_steps": tune.choice([50, 100, 500, 1000]), "learning_rate": tune.choice([1.5e-5, 2e-5, 3e-5, 4e-5]), "num_train_epochs": tune.quniform(0.0, 10.0, 0.5), } wandb_config = { "wandb": { "project": os.environ.get( 'WANDB_PROJECT', 'wandb_project'), "api_key": os.environ.get('API_KEY'), "log_config": True } } config.update(wandb_config) return config best_run = trainer.hyperparameter_search( direction="maximize", backend="ray", scheduler=PopulationBasedTraining( time_attr='time_total_s', metric='eval_f1_thr_0', mode='max', perturbation_interval=600.0 ), hp_space=hp_space_fn, loggers=DEFAULT_LOGGERS + (WandbLogger, ), ) ``` One way to work around this is to instantiate a subclass based on the Trainer: ```python class CustomTrainer(Trainer): def __init__(self, *args, **kwargs): super(CustomTrainer, self).__init__(*args, **kwargs) def _hp_search_setup(self, trial: Any): try: trial.pop('wandb', None) except AttributeError: pass super(CustomTrainer, self)._hp_search_setup(trial) ``` However, this looks like a hack because throwing away `wandb` arguments in model config on `_hp_search_setup` should be standard Trainer behavior. That's why I'm submitting a PR that directly modifies the `_hp_search_setup` of the Trainer class to ignore `wandb` arguments if Ray is chosen as a backend. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? 
## Who can review? I'm tagging @richardliaw and @amogkam as they're directly involved in Ray Tune.
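For reference, the heart of the proposed change can be pictured as a small filtering step applied to a Ray Tune trial before its values are copied into `TrainingArguments`; this is an illustrative sketch, not the exact diff in this PR.

```python
# Illustrative sketch of stripping backend-only keys from a Ray Tune trial (not the exact PR diff).
NON_MODEL_KEYS = ("wandb",)


def strip_backend_keys(trial):
    """Return a copy of the trial dict without keys that only configure the logging backend."""
    if not isinstance(trial, dict):
        return trial  # optuna trials are objects, not dicts; leave them untouched
    return {k: v for k, v in trial.items() if k not in NON_MODEL_KEYS}


trial = {"learning_rate": 3e-5, "warmup_steps": 500, "wandb": {"project": "demo"}}
print(strip_backend_keys(trial))  # {'learning_rate': 3e-05, 'warmup_steps': 500}
```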
03-20-2021 17:27:09
03-20-2021 17:27:09
transformers
10,822
closed
Correct AutoConfig call docstrings
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The `AutoConfig` class has no method named `from_json_file`, so the [examples in the documentation](https://huggingface.co/transformers/model_doc/auto.html#transformers.TFAutoModelForSequenceClassification.from_pretrained) are incorrect. Most likely the intention is to call `from_pretrained`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
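For completeness, the call pattern the corrected examples should show is simply the following (checkpoint name chosen arbitrarily):

```python
# Loading a configuration with from_pretrained, the method the docstrings should reference.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-cased")
print(config.model_type, config.hidden_size)
```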
03-20-2021 16:52:48
03-20-2021 16:52:48
transformers
10,821
closed
checkpoint breaks with deepspeed
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> deepspeed: @stas00 ## Information Dear @stas00 Having your permission, I opened up this bug, you are really my only hope with this issue and I truly appreciate your help. Thank you very much. I am using mt5 model, I modified it with adding adapters layers. 
The problem arises when: * loading checkpoints from the model trained with deepspeed The tasks I am working on is: * paraphrase detection using paws-x dataset on mt5 model ## To reproduce Steps to reproduce the behavior: ``` git clone [email protected]:dorost1234/codes.git conda create --name deepspeed python=3.7 conda activate deepspeed conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge python setup.py develop pip install deepspeed ``` running the codes: ``` deepspeed run_seq2seq.py configs/test.json ``` I save a checkpoint every 10 steps, the output would look like the below: ``` After first checkpoint I kill the codes, here is the output: onfiguration saved in outputs/checkpoint-10/config.json Model weights saved in outputs/checkpoint-10/pytorch_model.bin [2021-03-20 15:18:45,897] [INFO] [logging.py:60:log_dist] [Rank 0] Saving model checkpoint: outputs/checkpoint-10/global_step10/mp_rank_00_model_states.pt [2021-03-20 15:18:51,783] [INFO] [engine.py:1680:_save_zero_checkpoint] zero checkpoint saved outputs/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00optim_states.pt Configuration saved in outputs/config.json Model weights saved in outputs/pytorch_model.bin ``` Then, I contunue training with running the command again: ``` deepspeed run_seq2seq.py configs/test.json ``` once loading the checkpoint, it cannot load it with deepspeed: ``` successfully loaded 1 ZeRO state_dicts for rank 0 Traceback (most recent call last): File "run_seq2seq.py", line 512, in <module> main() File "run_seq2seq.py", line 476, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/users/dara/dev/debug_codes/seq2seq/third_party/trainers/trainer.py", line 780, in train self._load_optimizer_and_scheduler(resume_from_checkpoint) File "/users/dara/dev/debug_codes/seq2seq/third_party/trainers/trainer.py", line 1169, in _load_optimizer_and_scheduler self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=True, load_lr_scheduler_states=True) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1416, in load_checkpoint load_optimizer_states=load_optimizer_states) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1488, in _load_zero_checkpoint load_from_fp32_weights=self.zero_load_from_fp32_weights()) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1844, in load_state_dict self._restore_base_optimizer_state(state_dict_list) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1805, in _restore_base_optimizer_state self.optimizer.state[p][key].data.copy_(saved.data) RuntimeError: The size of tensor a (302612288) must match the size of tensor b (129296512) at non-singleton dimension 0 Killing subprocess 23829 Traceback (most recent call last): File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 171, in <module> main() File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 161, in main sigkill_handler(signal.SIGTERM, None) # not coming back File 
"/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) subprocess.CalledProcessError: Command '['/users/dara/anaconda3/envs/deepspeed/bin/python', '-u', 'run_seq2seq.py', '--local_rank=0', 'configs/test.json']' returned non-zero exit status 1. ``` ## Expected behavior being able to continue training from the saved checkpoints
03-20-2021 14:57:47
03-20-2021 14:57:47
I'm able to reproduce your issue, @dorost1234 So this particular error happens because you change something in the model structure after the checkpoint is saved and before it's resumed. For example at the resume point your model is different than what it was just before you saved the checkpoint. The reason you encountered this problem with deepspeed and did not without it, is because deepspeed by default saves the optimizer state and resumes from it. So here is a quick hack to overcome this in short-term: ``` --- a/seq2seq/third_party/trainers/trainer.py +++ b/seq2seq/third_party/trainers/trainer.py @@ -1166,7 +1166,7 @@ class Trainer: if self.deepspeed: # Not sure how to check if there is a saved deepspeed checkpoint, but since it just return None if it fails to find a deepspeed checkpoint this is sort of a check-n-load function - self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=True, load_lr_scheduler_states=True) + self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=False, load_lr_scheduler_states=False) def hyperparameter_search( self, diff --git a/seq2seq/ds_config.json b/seq2seq/ds_config.json index 18ce5a3..44b5cc0 100644 --- a/seq2seq/ds_config.json +++ b/seq2seq/ds_config.json @@ -15,7 +15,8 @@ "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true, - "cpu_offload": false + "cpu_offload": false, + "load_from_fp32_weights": false }, "zero_allow_untested_optimizer": true, ``` Basically, we are telling deepspeed not to resume the optimizer/scheduler states and ignore the fp32 weights as well. The last one is not great and may not do what you want, as it's likely to impact the precision. I added some debug code to deepspeed and the key it fails on in your traceback is "exp_avg" . If it helps here is what I did: ``` diff --git a/deepspeed/runtime/zero/stage2.py b/deepspeed/runtime/zero/stage2.py index e0ca4f0..02d904b 100755 --- a/deepspeed/runtime/zero/stage2.py +++ b/deepspeed/runtime/zero/stage2.py @@ -1802,7 +1802,11 @@ class FP16_DeepSpeedZeroOptimizer(object): p = group['params'][0] for key, saved in base_optimizer_group_states[i].items(): if torch.is_tensor(self.optimizer.state[p][key]): - self.optimizer.state[p][key].data.copy_(saved.data) + try: + self.optimizer.state[p][key].data.copy_(saved.data) + except: + print(f"failed with key={key}") + raise else: self.optimizer.state[p][key] = saved ``` ------------ Also: unrelated to this particular issue, the code base your forked from is quite outdated and many deepspeed-related bug fixes were applied since then, so I highly recommend that you sync your fork with the current trainer. If possible try to subclass the Trainer rather than hacking it directly, so you always get the most up-to-date code base you inherit from. Deepspeed is very fresh and expect many more changes happening in the next few weeks/months. ----------- Now to actually solving the problem. Study your code and see where your model's structure gets modified between its creation and the checkpoint saving. Trainer tries to resume from the checkpoint as soon as `train()` starts and it appears that at that point the model is different structurally (dimensions are different) then it is later when it gets saved. So you need your model to be in an identical shape during resume and saving points. Please let me know if this helps. 
I'd attack it as simply as dumping the model's param dimensions just before the checkpoint is loaded and just before it's saved, comparing the two - most likely finding the mismatch - and then going forward from loading or backward from saving in the code to find the spot where you modify the model. Then move the modification to before you resume from the checkpoint, i.e. before you call `train()`. ---------- A minor comment - your repro instructions are 95% working, I had to do a few fixes to make it work (e.g. your `setup.py` is slightly broken), so it's always a good idea to re-validate that it's actually reproducible ;) But it's all good, I figured it out. It was very helpful to have what you provided to reproduce the problem.<|||||>Dear @stas00 Thank you very much for taking your precious time to look into this issue and assist me; I am indebted to you for all the incredible work you do. About reproducibility, I honestly created a new conda environment and tested it before sending it out, and it was working on my side; please accept my sincere apologies for any shortcomings I missed without realizing it. I will investigate the issue with the great pointer you shared and will keep this issue updated. Thank you very much again for the great help. <|||||>Dear @stas00 May I also ask about the nans I get with deepspeed: when I run the same code, the loss is nan. I very much appreciate any suggestion you might have for resolving the issue I face with deepspeed; I am using mt5-small. It would be a great help to me if I could use the great work you have done on deepspeed in the huggingface repo and overcome the nan issue. Thank you.<|||||>can we close this one now? we are dealing with nans at https://github.com/huggingface/transformers/pull/10956/files<|||||>Dear Stas I unfortunately still could not figure this out. I am a bit confused about where fp16 casting is applied in MT5, especially with the new PR, where it is disabled in the forward path. To me it looks like some fp16 casting happens in the middle, which causes this, but I could not figure it out so far; I was wondering if you could give me more time. I appreciate any hints, thanks.<|||||>I had a closer look: you have a very dated trainer code with multiple bugs that have been fixed since then - you will need to update your code base to `transformers` master. I'm pretty sure that once you have it synced this problem will no longer be there. Please do update me if this is still not the case and we will fix it then. Thank you! <|||||>Dear @stas00 Thank you very much for the help. Much appreciated. I upgraded my code to the latest version of the huggingface repository and I am still having the same issue. I will share an updated repository asap and keep you updated on this. Thank you very much.<|||||>Yes, let me know when you have a repo I can reproduce the issue with.
Thank you.<|||||>Hi @stas00 I finally found this bug; it is the issue also reported in https://github.com/huggingface/transformers/issues/11294 I was freezing some parameters, and since the huggingface code currently has a bug and does not handle frozen parameters properly during checkpointing, those parameters were no longer frozen when the model was loaded from the checkpoint, which caused the difference in the number of parameters seen by deepspeed. Thanks a lot for all the hints and help on this.
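A minimal sketch of the two fixes discussed above, i.e. diffing the model's parameter shapes/trainability between save time and resume time, and re-applying the freeze before `train()` resumes from the checkpoint. The helper names and the frozen-module prefixes below are hypothetical, not taken from the original code base:

```python
import torch

def param_snapshot(model: torch.nn.Module):
    # Record the shape and trainability of every parameter so two snapshots can be diffed.
    return {name: (tuple(p.shape), p.requires_grad) for name, p in model.named_parameters()}

def diff_snapshots(before, after):
    # Report any parameter whose shape or requires_grad flag differs between snapshots.
    for name in sorted(before.keys() | after.keys()):
        if before.get(name) != after.get(name):
            print(f"mismatch: {name}: {before.get(name)} -> {after.get(name)}")

def freeze_params(model: torch.nn.Module, prefixes=("encoder.block",)):
    # Hypothetical freezing policy: re-apply the same freeze that was used in the first run,
    # so the set of trainable parameters matches the one DeepSpeed stored in the checkpoint.
    for name, p in model.named_parameters():
        if name.startswith(prefixes):
            p.requires_grad = False

# Usage sketch:
#   saved_view = param_snapshot(model)    # just before the checkpoint is saved
#   ...                                   # rebuild the model in the resuming run
#   freeze_params(model)                  # before trainer.train(resume_from_checkpoint=...)
#   resumed_view = param_snapshot(model)
#   diff_snapshots(saved_view, resumed_view)
```

The point is simply that anything that changes the model (freezing included) has to run before the checkpoint is resumed, so the resumed model matches the one that was saved.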
transformers
10,820
closed
JSONLINES support on examples/seq2seq/run_translation.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: windows 10 - Python version: 3.8 - PyTorch version (GPU?): 1.8.0 - Using GPU in script?: yes, tesla v100 - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @stas00 @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce It is said in the seq2seq READ.me that > The task of translation supports only custom JSONLINES files However, in the line 202, the extension of file should be `.json` ```py if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension == "json", "`train_file` should be a json file." 
``` Even if I changed it to ```py assert extension in ("json", "jsonl") ``` it throws another error, which says that there's no jsonl process file in library `datasets` ```py Traceback (most recent call last): File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 323, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 335, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "examples/seq2seq/run_translation.py", line 562, in <module> main() File "examples/seq2seq/run_translation.py", line 295, in main datasets = load_dataset(extension, data_files=data_files) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 707, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 343, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at jsonl/jsonl.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py. The file is also not present on the master branch on github. ``` Is this part in the stage of dev now? I think it is related to [#1943](https://github.com/huggingface/datasets/pull/1943) Could you add the original csv support before this implementation is over? Thanks in advance!
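As a stopgap while csv is not supported by this script, one can convert the csv pairs into the JSON Lines layout the script expects (one `{"translation": {...}}` object per line, as far as I understand it) and give the output a `.json` extension. The sketch below is illustrative only; the file and column names are made up:

```python
import csv
import json

# Hypothetical file and column names: each row of a two-column csv ("en", "ro") is rewritten
# as one {"translation": {...}} object per line, and the output keeps a ".json" extension.
with open("train.csv", newline="", encoding="utf-8") as src, \
        open("train.json", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        record = {"translation": {"en": row["en"], "ro": row["ro"]}}
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```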
03-20-2021 09:05:07
03-20-2021 09:05:07
I think you just have to name your file with a ".json" extension for the script to work. No support for other formats will be added to this script, as it's not easy to add the csv format while continuing to support all the translation datasets in Datasets. If you need to use a csv file, you should just tweak the data processing of the example (for instance by doing the same as in `run_summarization`) to your needs.<|||||>Thank you very much! It works. Since the usual extension for the JSONLINES format is `jsonl`, it would be better to explain this in the readme.<|||||>It actually might be a good idea to change the expected extension to `.jsonl`, which would make it less ambiguous and would require no special documentation.
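To make the suggestion above concrete, a minimal sketch of loading such a file; the file names are illustrative, and the generic `json` builder in `datasets` reads JSON Lines directly:

```python
from datasets import load_dataset

# "train.json"/"valid.json" are JSON Lines files (one {"translation": {...}} object per line)
# that simply carry a ".json" extension so the script's extension check passes.
raw_datasets = load_dataset("json", data_files={"train": "train.json", "validation": "valid.json"})
print(raw_datasets["train"][0])
```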
transformers
10,819
closed
mt5 getting nans with fp16
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> t5: @patrickvonplaten, @patil-suraj ## Information I am using mt5-small model: * the problem arises when using fp16 with mt5 The tasks I am working on is: * translation ## To reproduce Steps to reproduce the behavior: `python run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir test/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 100 --fp16` outputs: ``` ***** eval metrics ***** epoch = 3.0 eval_bleu = 0.0039 eval_gen_len = 2.95 eval_loss = nan eval_mem_cpu_alloc_delta = 4MB eval_mem_cpu_peaked_delta = 5MB eval_mem_gpu_alloc_delta = 0MB eval_mem_gpu_peaked_delta = 1080MB eval_runtime = 72.1865 eval_samples = 1999 eval_samples_per_second = 27.692 ``` ## Expected behavior being able to use fp16 with mt5 models. Thank you very much for your help, this is really crucial for me to be able to run these models with fp16 to be able to fit more data into old GPUs I have access to and I appreciate a lot your help.
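A quick probe along these lines can confirm whether the overflow already shows up in a single mixed-precision forward pass; the snippet is an illustrative sketch (it assumes a CUDA GPU and uses `autocast` to mimic `--fp16`), not part of the original report:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small").cuda().eval()

batch = tokenizer(["translate English to Romanian: The house is wonderful."],
                  return_tensors="pt").to("cuda")
labels = tokenizer(["Casa este minunata."], return_tensors="pt").input_ids.to("cuda")

with torch.cuda.amp.autocast():  # mixed-precision forward pass, similar to --fp16
    out = model(**batch, labels=labels)

print("loss:", out.loss.item())
print("logits has nan:", torch.isnan(out.logits).any().item())
print("logits has inf:", torch.isinf(out.logits).any().item())
```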
03-20-2021 06:44:36
03-20-2021 06:44:36
Duplicate of https://github.com/huggingface/transformers/issues/10830<|||||>Hi @patrickvonplaten this is not exact duplicate, I am using mt5-small and the other user in #10830 is using t5-large, I appreciate considering both thank you <|||||>@dorost1234, please kindly test if this PR fixes the problem: https://github.com/huggingface/transformers/pull/10956<|||||>@stas00 thank you very much for the contributions, it now works for me for the mt5-small, I am running some more experiments with it and update.<|||||>Dear @stas00 I tested more codes, without deepspeed, it works fine with setting the feedforward layer to float32, as suggested in the PR, but the moment I switch to deepspeed I still get nan issue in my codes. I greatly appreciate if you can spare some moments from your precious time and provide me with a suggestion for the case of deepspeed for the same problem. Thank you very much I also used your debug codes: ``` ^M 0%| | 0/38600 [00:00<?, ?it/s]WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 5 has inf WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has inf WARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop end has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop start has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerSelfAttention has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block before T5LayerFF has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 1 has nans WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 2 has nans WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 3 has nans ``` <|||||>I was just thinking about it, so thank you for confirming that. Deepspeed is not using `autocast` so in essence the proposed fixed makes no difference under Deepspeed as we aren't running under `autocast` in the first place. Let's ask the DeepSpeed developers https://github.com/microsoft/DeepSpeed/issues/908 Though let's continue the discussion on the deepspeed in the other issue you opened, since these are related but different problems. That's we may fix one but not the other, or the fixes may come at different times, so it's easier to track separate issues. Or if there is not one specific issue to t5/mt5+deepspeed please open one. Thank you. <|||||>Dear @stas00 Sure, thank you very much for coming back to me. Having your permission I will open up an issue on this. Thank you very much.<|||||>I already did - please see the link in my last comment. 
Please do not worry, we will surely find one way or another to resolve this.<|||||>oh, great, thank you very much <|||||>Dear @stas00 I tested the code more (without deepspeed) on larger scale and when I train on opus100 (I train on 20 languages of it), after 2000 iterations with mt5-small, after applying the fix, this gets nan still. I will share with you a reproducible code soon. thanks a lot for all the great work. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,818
closed
Bump jinja2 from 2.11.2 to 2.11.3 in /examples/research_projects/lxmert
Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.2 to 2.11.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p> <blockquote> <h2>2.11.3</h2> <p>This contains a fix for a speed issue with the <code>urlize</code> filter. <code>urlize</code> is likely to be called on untrusted user input. For certain inputs some of the regular expressions used to parse the text could take a very long time due to backtracking. As part of the fix, the email matching became slightly stricter. The various speedups apply to <code>urlize</code> in general, not just the specific input cases.</p> <ul> <li>PyPI: <a href="https://pypi.org/project/Jinja2/2.11.3/">https://pypi.org/project/Jinja2/2.11.3/</a></li> <li>Changes: <a href="https://jinja.palletsprojects.com/en/2.11.x/changelog/#version-2-11-3">https://jinja.palletsprojects.com/en/2.11.x/changelog/#version-2-11-3</a></li> </ul> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/master/CHANGES.rst">jinja2's changelog</a>.</em></p> <blockquote> <h2>Version 2.11.3</h2> <p>Released 2021-01-31</p> <ul> <li>Improve the speed of the <code>urlize</code> filter by reducing regex backtracking. Email matching requires a word character at the start of the domain part, and only word characters in the TLD. :pr:<code>1343</code></li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pallets/jinja/commit/cf215390d4a4d6f0a4de27e2687eed176878f13d"><code>cf21539</code></a> release version 2.11.3</li> <li><a href="https://github.com/pallets/jinja/commit/15ef8f09b659f9100610583938005a7a10472d4d"><code>15ef8f0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1343">#1343</a> from pallets/urlize-speedup</li> <li><a href="https://github.com/pallets/jinja/commit/ef658dc3b6389b091d608e710a810ce8b87995b3"><code>ef658dc</code></a> speed up urlize matching</li> <li><a href="https://github.com/pallets/jinja/commit/eeca0fecc3318d43f61bc340ad61db641b861ade"><code>eeca0fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1207">#1207</a> from mhansen/patch-1</li> <li><a href="https://github.com/pallets/jinja/commit/2dd769111cbb1a2637f805b3b4c652ec8096d371"><code>2dd7691</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1209">#1209</a> from mhansen/patch-3</li> <li><a href="https://github.com/pallets/jinja/commit/48929401db7228db04dfd8e88115dd5c30dc2d86"><code>4892940</code></a> do_dictsort: update example ready to copy/paste</li> <li><a href="https://github.com/pallets/jinja/commit/7db7d336ba12574e6205fdd929386fd529e3fad4"><code>7db7d33</code></a> api.rst: bugfix in docs, import PackageLoader</li> <li><a href="https://github.com/pallets/jinja/commit/9ec465baefe32e305bd4e61da49e6c39360c194e"><code>9ec465b</code></a> fix changelog header</li> <li>See full diff in <a href="https://github.com/pallets/jinja/compare/2.11.2...2.11.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=jinja2&package-manager=pip&previous-version=2.11.2&new-version=2.11.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any 
conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
03-20-2021 05:36:07
03-20-2021 05:36:07
transformers
10,817
closed
[vulnerability] in example deps fix
Takes care of: https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/jinja2/open @LysandreJik
03-20-2021 04:44:21
03-20-2021 04:44:21
Ah actually the dependabot PR was earlier in my notifications so I merged it without seeing you had opened a PR here. Sorry about that, closing as already taken care of in https://github.com/huggingface/transformers/commit/dbfe3795147e1360b3afac53a9ee0e14374d2ea6<|||||>Actually, the proposed `>=` is probably better, so fixing conflicts and merging this. Thanks @stas00!