repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 8,906 | closed | Corrected a typo in the ReadMe | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-02-2020 17:04:07 | 12-02-2020 17:04:07 | Thank you very much for this correction, @devangi2000 |
transformers | 8,905 | closed | Fix typo in docstring in src/transformers/models/bert_japanese/tokenization_bert_japanese.py | # What does this PR do?
Only fix typo (thi -> this).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger, @LysandreJik | 12-02-2020 17:02:03 | 12-02-2020 17:02:03 | |
transformers | 8,904 | closed | Using doc chunks without answer token during training ( BertForQuestionAnswering ) | Hi!
When creating features from the squad examples, you use a sliding window approach to generate doc chunks when the input data is too long. There you state, **_if the document chunk does not contain an annotation, you throw it out, since there is nothing to predict._** Hence you set start_position=0 and end_position=0. Then you re-set them to cls_index in training mode. When you put the **CLS token at the beginning, your cls_index = 0**. Finishing up, you add this item to the InputFeatures.
https://github.com/huggingface/transformers/blob/e768f2322abd2a2f60a3a6d64a6a94c2d957fe89/examples/utils_squad.py#L332-L351
During training - forward step, you calculate loss on these items:
Let's say the max seq length is 512, then the ignored_index = 512, meaning you only ignore those start/end positions which are >= 512. With cls_index = 0 we have start_position=0 and end_position=0. So we end up getting a loss calculated using the start_position with start_logits and end_position with end_logits.
https://github.com/huggingface/transformers/blob/a8c3f9aa760ed7b516ee00f602e8efc0e5d80285/src/transformers/models/bert/modeling_bert.py#L1651-L1665
Then **you return with the calculated loss and in run_squad.py you add this new loss to the training loss**:
https://github.com/huggingface/transformers/blob/a8c3f9aa760ed7b516ee00f602e8efc0e5d80285/examples/question-answering/run_squad.py#L219
**So actually you do not throw these chunks out, but use them for training.**
**Solution maybe:**
Why not set start_position = end_position = **max_seq_length** instead of cls_index in utils_squad.py, making sure they will be ignored during training and the loss calculated from them will be 0?
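To make the difference concrete, here is a small illustrative PyTorch sketch (not taken from the repository; shapes and values are made up):
```python
import torch
from torch.nn import CrossEntropyLoss

max_seq_length = 512
start_logits = torch.randn(1, max_seq_length)  # (batch_size, seq_len), random for illustration
loss_fct = CrossEntropyLoss(ignore_index=max_seq_length, reduction="sum")

# Current behaviour: an answer-less chunk gets start_position clamped to cls_index (0),
# which is a valid class index, so it DOES contribute to the loss and the gradients.
print(loss_fct(start_logits, torch.tensor([0])))               # some non-zero value

# Proposed alternative: set the position to max_seq_length (= ignore_index),
# so this chunk is skipped and contributes 0 to the summed loss.
print(loss_fct(start_logits, torch.tensor([max_seq_length])))  # tensor(0.)
```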
Hope you will understand my point, let me know what you think!
**Update:**
I got this idea from another repo using bits and pieces from your implementation (they only used labels with answers, so a squad-v1-like dataset):
For too-long sequences the sliding window approach creates multiple doc chunks, some of which do not contain an annotated answer. The model learns to predict the start and end positions as 0 (if CLS is at position 0) when the answer is NOT present in the document chunk. This helps reduce false predictions on test data where documents are too long and split into multiple chunks.
**So either you actually want to use those doc chunks with no annotation, but your comments are misleading,
OR you don't want to use those chunks, hence the comment, but you failed to implement it.**
Hope it helps!
| 12-02-2020 16:55:58 | 12-02-2020 16:55:58 | @sgugger you might be interested in that issue given that you're refactoring the squad example!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,903 | closed | [trainer] improve code readability | This PR:
* removes redundant code, as:
```
self.model = model if model is not None else None
```
and
```
self.model = model
```
are the same.
* decouples attribute assignment from code logic - which simplifies things further.
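For illustration only (hypothetical code, not the actual Trainer internals), the decoupled pattern described in the second bullet looks roughly like this:
```python
class TinyTrainer:
    """Hypothetical illustration of the pattern, not the real Trainer class."""

    def __init__(self, model=None, device="cpu"):
        # Before (mixed): self.model = model.to(device) if model is not None else None
        # After (decoupled): plain attribute assignment first, logic applied separately.
        self.model = model
        if self.model is not None:
            self.model = self.model.to(device)
```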
@sgugger, @LysandreJik | 12-02-2020 16:30:43 | 12-02-2020 16:30:43 | |
transformers | 8,902 | closed | fix(pipeline): error when model not in AutoModel |
# What does this PR do?
During pipeline initialization, get_framework is called to check whether the model is a TF or PT model. get_framework calls AutoModel to load the model and returns which framework it depends on.
However, not all models are in AutoModel. For example, `Helsinki-NLP/opus-mt-en-fr` is under AutoModelForSeq2SeqLM, which causes an error when calling pipeline.
get_framework should depend on the task instead. This fix passes targeted_task to get_framework; if the model is not in AutoModel, it uses the targeted_task model class instead.
error example
```
from transformers import pipeline
model = pipeline(task="translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
```
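For illustration, a rough sketch of the fallback idea described above (the function name and error handling are assumptions made for the example, not the actual `get_framework` implementation):
```python
from transformers import AutoModel, AutoModelForSeq2SeqLM

def get_framework_with_fallback(model_name, task_model_class=AutoModelForSeq2SeqLM):
    """Check that the model can be loaded, falling back to the task-specific
    auto class when the architecture is not registered in AutoModel."""
    try:
        AutoModel.from_pretrained(model_name)
    except ValueError:
        # e.g. Helsinki-NLP/opus-mt-en-fr is only reachable through
        # AutoModelForSeq2SeqLM, so use the targeted task's model class instead.
        task_model_class.from_pretrained(model_name)
    return "pt"
```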
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-02-2020 14:42:01 | 12-02-2020 14:42:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,901 | closed | Removing Head Layer/Model Conversion | I am currently working on some research in which I am to delve into the analysis of decision boundaries in text classification tasks and I am aiming to use recent work from the `ExBERT` paper, allowing me to visualise the importance of particular features across sentences.
Since the library is built on top of models from the Transformers library and requires that *The model architecture must be supported by the `AutoModelWithLMHead`*, I was wondering if it was possible to modify a fine-tuned model to work with that architecture. I am currently using `DistilBERTForSequenceClassification` in my pipeline and was wondering if it were possible to essentially fine-tune for a classification task and use the underlying `DistilBERT` model, as I assume all of the attention weights etc. will still be included in the model? ie. Could I change the loaded model after training to work with the library and to work with the `AutoModelWithLMHead` architecture so that I could inspect the attention heads?
I wasn't sure if I was only able to use models trained for Masked LM or if I could use models trained for downstream tasks? Apologies if this is a question best for the ExBERT github but since it was built into the library, thought I'd ask. | 12-02-2020 13:59:50 | 12-02-2020 13:59:50 | A model trained for sequence classification can definitely be loaded with a different head. Here's an example:
```py
from transformers import DistilBertForSequenceClassification, DistilBertForMaskedLM
sequence_classifier = DistilBertForSequenceClassification.from_pretrained("...")
# Do stuff with your model, train it, do what you like
# Save the weights in a local directory
sequence_classifier.save_pretrained("model-trained-on-xxx")
# Load the weights in the *ForMaskedLM model.
language_model = DistilBertForMaskedLM.from_pretrained("model-trained-on-xxx")
```
This `language_model` has kept all the weights of the base transformer model, has discarded the sequence classification layers, and has randomly initialized the new layers. This model can be loaded in an `AutoModelWithLMHead`.<|||||>Oh wow, did not expect it to be this easy. Thanks very much! |
transformers | 8,900 | closed | [Bart] Refactor - fix issues, consistency with the library, naming | # What does this PR do?
This PR refactors the Bart model. The goal is to fix a couple of bugs related to Bart, make Bart more consistent with other models in the library and make Bart the "default" Seq2Seq template model for other models. The PR may be a bit difficult to review, so the following sections lists the main changes and the reasons why they are taken.
## In-detail explanation of main changes
1. Fix a bug related to `past_key_values`, `use_cache` and `decoder_input_ids`. Previously it was assumed that if `use_cache=True`, then `decoder_input_ids` have to be of length 1. This is not always the case! E.g., if the first decoder_input_ids prompt is longer than 1 and `use_cache=True`, this would previously have led to errors - see #7814, #6353. This is fixed now so that any length of `past_key_values` can be combined with any length of `decoder_input_ids` (see the short sketch after this list), just as it can be done for GPT2, T5, CTRL, ... In order to make the pt_tf_equivalence tests pass, some hotfixes are applied for TFBart. TFBart will be refactored in a later PR. A test `create_and_check_decoder_model_past_large_inputs` is added to ensure that this functionality works.
2. Allow to use `BartEncoder` and `BartDecoder` separately from the `BartModel`. Because Bart is the default seq2seq model it's a great opportunity to combine just the `BartDecoder` with other "encoder-only" models. E.g., if someone wants to run experiments on long-range summarization, `Longformer-Bart` could be an interesting combination (@ibeltagy). This PR lays the groundwork to easily combine these models by making `BartEncoder` and `BartDecoder` fully functional models on their own. One should probably also add a `BartForCausalLM` class analogous to https://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/models/prophetnet/modeling_prophetnet.py#L1882 (could be a good first issue). This further improves how to handle an issue like #5282.
3. Simplify query, key, value projections in attention layer. A rather difficult if-else cascade with a complex follow-up function to concat past_key_values is simplified to a single if-elif-elif-else clause. IMO, the code in `BartAttention.forward()` is much clearer now.
4. Change the cache from dict to tuple and make it stateless. The general design in the library is to have a stateless design for the cache. Bart previously used a dict -> this PR changes the cache to a tuple for consistency. It should also be a bit more efficient, more consistent and easier to use with torchscript and onnx.
5. Bart did a lot of dimension transposing from time -> batch and batch -> time. This is not at all necessary IMO. We can just keep the batch dimension in the first spot the whole time, just like the other models do. Therefore, I deleted a bunch of `transpose(0, 1)` operations.
6. Add inputs_embeds. Just like other models Bart can make use of inputs_embeds.
7. Rename all classes from `...Model` to `Bart...Model`. Public class names that needed to be renamed were deprecated for backwards compatibility. This is better for look-up and consistency with other models.
8. Simpler handling of attention_masks. Previously Bart moved many different masks with many different names throughout the model. This PR aligns the functionality with other models by creating the full attention mask at the beginning of `BartEncoder` and `BartDecoder` instead of doing it in the attention function. This simplifies the code and is more consistent with other models.
9. Re-structure order in `modeling_bart.py`. Usually, modeling files have helper functions in the beginning, followed by submodules, followed by docstring, followed by the pre-trained models. This PR re-orders `modeling_bart.py` accordingly.
10. Replace functionality to make lm head embeddings on-the-fly by the usual `_init_weights` tying mechanism that we have in PyTorch. This is a) much more consistent with other models and b) cleaner because we don't have to instantiate a new class each time `get_output_embeddings()` is called. Solves #5282.
11. (subjectively) better naming. Replace x -> hidden_states, etc...
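To make point 1 concrete, here is a short usage sketch (illustrative only; it assumes a checkpoint/tokenizer are available and that the model includes this PR's behavior):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

input_ids = tokenizer("the encoder input", return_tensors="pt").input_ids
prompt_ids = tokenizer("a decoder prompt longer than one token", return_tensors="pt").input_ids

# First pass: a multi-token decoder prompt together with use_cache=True.
outputs = model(input_ids, decoder_input_ids=prompt_ids, use_cache=True)
past_key_values = outputs.past_key_values

# Second pass: the cache can now be combined with decoder_input_ids of any length.
next_token = outputs.logits[:, -1:].argmax(-1)
outputs = model(input_ids, decoder_input_ids=next_token, past_key_values=past_key_values)
```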
## Breaking changes
- There are no breaking changes to the "public" API IMO (except where it corrects a bug). `BartModel`, `BartForConditionalGeneration` and all other `BartPretrainedModel`s have exactly the same API as before, except for the following case, which was a bug:
Previously, the following code:
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
input_ids = tokenizer("the encoded sequence in all its beauty", return_tensors="pt").input_ids
decoder_input_ids = tokenizer("the decoder sequence", return_tensors="pt").input_ids
print(model(input_ids, decoder_input_ids=decoder_input_ids).logits.shape)
```
would have printed out only a single output because `use_cache` is enabled which was wrong because no causal-mask was used. This PR corrects the behavior so that the output seq length matches the decoder_input_ids seq lengths.
- BartEncoder and BartDecoder now have a rather different API. This is OK for me since it was not possible to import the models directly and they were only model components.
- Submodules of Bart are named differently, *e.g.* LayerNorm is now called BartLayerNorm. Since these modules are also not public, I don't think we have to deprecate the names.
- The API of `BartModel`, ... is extended by `inputs_embeds` and `decoder_inputs_embeds`.
## Review:
Because Bart is the most important Seq2Seq model in the library (5 other models classes depend on it), I would be very happy for a couple of thorough reviews. Also all kinds of comments, improvements, discussions, questions are welcome! I ran all slow tests and tried to be careful with the changes. In case @sshleifer is interested I'd also be more than happy about some feedback from you ;-)
## TODO-List
- [x] Keep dims consistent within the model -> no switching around between time x batch_size and batch_size x time. We can just stick to batch_size x time throughout the whole forward pass just like other models do too.
- [x] Add same `lm_head` logic, other models have as well. Bart should make use of the `tie_weight_embeddings` function instead of doing weird `"on-the-fly"` output embeddings, #5282
- [x] Clean the Attention layer: Replace dict cache by past_key_values tuple (consistency with other models and stateless which is better IMO). Break up complicated if-else cascade and remove unnecessary parameters.
- [x] Make Encoder/Decoder stand-alone models to be used on their own: #7127, this way pretrained weights can be used in the Encoder-Decoder framework as well. If I remember correctly @ibeltagy was interested in this as well
- [x] Correct error with past_key_values/decoder_input_ids/use_cache: #7814, #6353,
- [x] Make Bart torchscriptable: #6348
- [x] Add input_embeds to Bart
- [x] (very subjectively) better naming
- [x] Check that all slow tests are passing - ran the following slow tests:
```
[
# assumes USE_CUDA is exported, rather than passed
RUN_SLOW=1 pytest tests/test_modeling_pegasus.py
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_marian.py
RUN_SLOW=1 pytest tests/test_modeling_mbart.py
RUN_SLOW=1 pytest tests/test_modeling_fsmt.py
RUN_SLOW=1 pytest tests/test_modeling_blenderbot.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_conversational.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text2text_generation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_summarization.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_translation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_dialog.py
]
```
=> MBartEnroIntegrationTest.test_enro_generate_batch fails on PR, but also on master with the same message, so that's ok for me!
- [x] Update docstring and final design change check
- [x] Refactor Bart tests
- [x] Check no speed regression
- [x] Check no training performance regression (Is there a good fine-tuning script I can run for this @patil-suraj, @sshleifer)? | 12-02-2020 13:17:47 | 12-02-2020 13:17:47 | Try the various Marian fine-tuning scripts. You should easily be able to get 22+ BLEU on wmt-en-ro with both `finetune_trainer.py` and `finetune.py` in < 30 minutes on brutasse.<|||||>Speed / Memory benchmark of master vs. this PR is ok for me:

<|||||>@patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:
```bash
cd examples/seq2seq
python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \
--reference_path cnn_dm/test.target \
--score_path cnn_rouge.json --task summarization \
--n_obs 500 --fp16
```
This should take ~5 mins per branch.
Otherwise, LGTM! Thanks for cleaning up after me :)<|||||>> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:
>
> ```shell
> cd examples/seq2seq
> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \
> --reference_path cnn_dm/test.target \
> --score_path cnn_rouge.json --task summarization \
> --n_obs 500 --fp16
> ```
>
> This should take ~5 mins per branch.
>
> Otherwise, LGTM! Thanks for cleaning up after me :)
Thanks for the command! What do you mean by "per branch"? <|||||>Also @sshleifer I didn't really manage to find a good marian command for fine-tuning. Can you by chance copy-paste a command that fine-tunes a marian model in ~30min to verify that fine-tuning works as expected?<|||||>> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:
>
> ```shell
> cd examples/seq2seq
> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \
> --reference_path cnn_dm/test.target \
> --score_path cnn_rouge.json --task summarization \
> --n_obs 500 --fp16
> ```
>
> This should take ~5 mins per branch.
>
> Otherwise, LGTM! Thanks for cleaning up after me :)
I got this result:

on brutasse - does this look reasonable to you? took 2min30 <|||||>What I meant by "per branch" was to also run that command on master to facilitate comparison. Your `refactor-bart` output looks completely reasonable.
#### Train Command
replace num_train_epochs=1 in this [./examples/seq2seq/builtin_trainer/train_distil_marian_enro.sh](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/train_distil_marian_enro.sh).
+ It should take 12-25 minutes on 1 GPU.
+ I don't know the BLEU/timing to expect, you should again run on `master` and `bart-refactor` to compare.
<|||||>> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:
>
> ```shell
> cd examples/seq2seq
> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \
> --reference_path cnn_dm/test.target \
> --score_path cnn_rouge.json --task summarization \
> --n_obs 500 --fp16
> ```
>
> This should take ~5 mins per branch.
>
> Otherwise, LGTM! Thanks for cleaning up after me :)
Applied as much perf improvement as possible -> Time from master to this PR for the above command is reduced from ~2min10s to ~2min05s (ran three times) = 2.5% speed-up. Removed as many `contiguous()` operations as possible<|||||>Training gives good/equal results to master. However, I see a 5% slow-down in training. `generation()` is as fast or faster than master, but training yields a slow-down of around 5% => so still investigating. Could be the masks<|||||>Reran the fine-tuning script a couple of times on a gcp instance so that no other tasks can interfere, and float masks are actually faster than boolean masks and give more or less the same results as the previous Bart model on master. Here are the results:
Refactor (no boolean masks)

master

<|||||>good to merge for me |
transformers | 8,899 | closed | Wrong Length of Dataset in examples/seq2seq/finetune_trainer.py | `train/validation/test examples. -1 means use all` may not be correct
https://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/examples/seq2seq/finetune_trainer.py#L97-L99
`n_train/val/test` is used to compute the length of the dataset; if it is set to -1, the dataset will be one example short, since slicing with -1 drops the last element. It should be None to use all examples.
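A tiny illustration of the slicing difference (toy data, not the dataset code linked below):
```python
lines = ["ex1", "ex2", "ex3"]

print(lines[:-1])    # ['ex1', 'ex2']        -> n_obs = -1 silently drops the last example
print(lines[:None])  # ['ex1', 'ex2', 'ex3'] -> n_obs = None keeps all examples
```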
https://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/examples/seq2seq/utils.py#L136-L137 | 12-02-2020 12:34:14 | 12-02-2020 12:34:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,898 | closed | Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-Q_fyRn/sacrebleu/ | Hi
I am trying with the master branch and getting this error when installing the requirements inside examples.
thanks.
Collecting seqeval (from -r ../requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz (43kB)
100% |████████████████████████████████| 51kB 13.4MB/s
Collecting psutil (from -r ../requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/33/e0/82d459af36bda999f82c7ea86c67610591cf5556168f48fd6509e5fa154d/psutil-5.7.3.tar.gz (465kB)
100% |████████████████████████████████| 471kB 2.7MB/s
Collecting sacrebleu (from -r ../requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/b9/d6/258a1e63463b4731a387f0872dca759c330bf4845cc0464f2c65028674b6/sacrebleu-1.3.7.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-Q_fyRn/sacrebleu/setup.py", line 65, in <module>
version = get_version(),
File "/tmp/pip-install-Q_fyRn/sacrebleu/setup.py", line 56, in get_version
with open(os.path.join(os.path.dirname(__file__), 'sacrebleu.py'), encoding='utf-8') as fin:
TypeError: 'encoding' is an invalid keyword argument for this function
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-Q_fyRn/sacrebleu/
| 12-02-2020 12:22:32 | 12-02-2020 12:22:32 | Installing transformers is also broken
(test) rabeeh@gpu4:~/transformers/examples/seq2seq$ pip install git+https://github.com/huggingface/transformers.git
Collecting git+https://github.com/huggingface/transformers.git
Cloning https://github.com/huggingface/transformers.git to /tmp/pip-req-build-V7nNeF
Installing build dependencies ... done
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-V7nNeF/setup.py", line 156
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-V7nNeF/
<|||||>issue solved with python = 3.7 |
transformers | 8,897 | closed | finetune_trainer with python -m torch.distributed.launch | Hi
I need to run finetune_trainer with multiple GPUs, and I am getting the error
"Default process group is not initialized"
AssertionError: Default process group is not initialized
I am using a custom dataloader; it might be hard to share all parts of the code, but I defined the sampler as DistributedSampler.
This is transformers 3.5.1, Python 3.7, on GPU.
thanks
Best
Rabeeh
| 12-02-2020 11:55:25 | 12-02-2020 11:55:25 | Also, could you add the command to run distributed training with GPUs with finetune_trainer to the README? thanks <|||||>I tried with the latest version of transformers on 4 GPUs with distributed training
rabeeh@gpu4:~/transformers/examples/seq2seq$ python -m torch.distributed.launch finetune.py --learning_rate=3e-5 --fp16 --gpus 4 --do_train --do_predict --n_val 1000 --val_check_interval 0.1 --data_dir wmt_en_ro --train_batch_size=1 --eval_batch_size=1 --output_dir=xsum_results --num_train_epochs 1 --model_name_or_path t5-smal
getting the following error, thanks
finetune.py: error: unrecognized arguments: --local_rank=0
Traceback (most recent call last):
File "/opt/conda/envs/transformers/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/transformers/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/transformers/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/transformers/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/transformers/bin/python', '-u', 'finetune.py', '--local_rank=0', '--learning_rate=3e-5', '--fp16', '--gpus', '4', '--do_train', '--do_predict', '--n_val', '1000', '--val_check_interval', '0.1', '--data_dir', 'wmt_en_ro', '--train_batch_size=1', '--eval_batch_size=1', '--output_dir=xsum_results', '--num_train_epochs', '1', '--model_name_or_path', 't5-smal']'
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@rabeehk I also encountered the same problem as you. Have you solved it? |
transformers | 8,896 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-02-2020 11:48:17 | 12-02-2020 11:48:17 | Looks like this PR was unfortunately broken, so I'm going to close it. Also noting that the way to update a model card now is to update it directly in your model repo! see https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755 |
transformers | 8,895 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-02-2020 11:45:30 | 12-02-2020 11:45:30 | Looks like this PR was unfortunately broken, so I'm going to close it. Also noting that the way to update a model card now is to update it directly in your model repo! see https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755 |
transformers | 8,894 | closed | custom prepare_inputs_for_generation for generation | Hi
I need to change the model_inputs used for generation. I am using T5ForConditionalGeneration, which has an extra input parameter that needs to be passed each time I call model.generate(). I cannot see how to rewrite the generate function to also pass this argument; could you provide me with some explanation:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/generation_utils.py#L676
thanks
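For illustration, one way to do this is to subclass the model and override `prepare_inputs_for_generation` so that extra keyword arguments given to `generate()` are forwarded to `forward()`; the argument name `my_extra_input` below is made up for the example:
```python
from transformers import T5ForConditionalGeneration

class MyT5(T5ForConditionalGeneration):
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        model_inputs = super().prepare_inputs_for_generation(input_ids, **kwargs)
        # forward the extra argument that was passed to generate() as a model kwarg
        if "my_extra_input" in kwargs:
            model_inputs["my_extra_input"] = kwargs["my_extra_input"]
        return model_inputs

# usage sketch: model.generate(input_ids, my_extra_input=some_tensor)
```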
| 12-02-2020 09:53:21 | 12-02-2020 09:53:21 | Solved by implementing it inside the T5ForConditionalGeneration model, thanks. |
transformers | 8,893 | closed | [🚀 Feature request] Performer support, tensorflow code, not jax. | https://arxiv.org/abs/2009.14794
Thank you thank you very much. | 12-02-2020 08:14:28 | 12-02-2020 08:14:28 | https://github.com/huggingface/transformers/issues/7675 |
transformers | 8,892 | closed | TFRag draft #1 (page BROKEN) - Should close and use #9002 instead | # What does this PR do?
Hi guys, this is the draft WIP of TFRag. It is runnable in eager mode with mostly proper outputs.
I really need help/consult at this stage especially from @Patrick and @jplu .
In this draft, only `modeling_tf_rag.py` is new here.
## What is done and tested (working on HF 4.0.0, but not on master due to changes in TF input)
- TFRagModel
- TFRagTokenForGeneration
- generate function with no_beam_search
- work on eager mode
- Colab notebook to test/play around whether the code works properly : https://colab.research.google.com/drive/1CfCulkKGrneiQ0gV0Bgdo71gZ_kgMRIB?usp=sharing
## Main things not done yet
- TFRagSequenceForGeneration
- beam_search generation (may wait for TF generation refactor ?)
- Working in graph mode (due to the need of `.numpy()` for retriever calling, and this doesn't work on graph mode)
- Change input format for HF 4.1.0 (need help from @jplu)
## Need your suggestion on NEED_ADVICE, NEED_HELP
As stated, the code is mostly OK except for the points I marked TOFIX, which will be cleaned up later while finishing the draft.
However, there are 2 categories where I really need help, especially from @Patrick:
1) There is some code that works, but I am not sure if it meets the Hugging Face coding standard (marked by NEED_ADVICE)
2) There are 2 points where I need real help (marked by NEED_HELP)
2.1) the aforementioned `.numpy()` on graph mode.
2.2) about `.from_pretrained`: Rag has two loading methods, `.from_pretrained_question_encoder_generator` and `.from_pretrained`. While `.from_pretrained_question_encoder_generator` works, in `.from_pretrained` there are two weights whose names do not match, which I could not find any way to fix:
```
'rag.generator.model.shared.weight',Β 'rag.generator.final_logits_bias' --> Pytorch name
'model.shared.weight',Β 'final_logits_bias' --> TF name
```
So at the moment I made an UGLY fix by overwriting .from_pretrained and manually loading these two weights.
## Who can review?
TFRag : @patrickvonplaten ,
new TF input for master / 4.1.0 : @jplu
about graph mode & retriever module : will need help from @lhoestq later once all other issues are fixed :) | 12-02-2020 07:54:12 | 12-02-2020 07:54:12 | Hi guys, most commits were from the previous PR (TFDPR). I do not know how to remove them, sorry!
Only `modeling_tf_rag.py` is new here.<|||||>Awesome work @ratthachat!!!
For the input, I think the best way is to check how it is done in TF BERT for example and if you have difficulties to understand you can ask your questions here :)
About the `.numpy()`, as suggested you can use `tf.make_ndarray()`, like this: `tf.make_ndarray(tf.make_tensor_proto(my_tensor))`.
About the weights, I suggest you to check how it is done in TF BART there are similar weights naming.<|||||>> Awesome work @ratthachat!!!
>
> For the input, I think the best way is to check how it is done in TF BERT for example and if you have difficulties to understand you can ask your questions here :)
>
> About the `.numpy()`, as suggested you can use `tf.make_ndarray()`, like this: `tf.make_ndarray(tf.make_tensor_proto(my_tensor))`.
>
> About the weights, I suggest you to check how it is done in TF BART there are similar weights naming.
Hi Julien @jplu , thanks for the reply!
On these points, I think I may miss something simple, but I could not solve the puzzle by myself at this moment.
1) on `tf.make_ndarray(tf.make_tensor_proto(my_tensor))` , I could make it work **only** on eager mode too, so still have the problem in graph mode (I could not make it work inside @tf.function)
2) about the input, yes I tried to replicate TF Bert & TF DPR (which was previously my implementation), e.g. replacing `inputs` with `input_ids` with/without a default value (`None`) and using `input_processing`, but no matter what I tried, the simple call got an error
```
outputs = model(inputs)
ValueError: The first argument to `Layer.call` must always be passed.
```
I really missed something simple here, I will try to play around again.
3) about the weights, yes, I tried to replicate other TF models, so all weights loading works in `.from_pretrained_question_encoder_generator` and mostly properly loaded in `.from_pretrained`.
Only the two aforementioned weights could not get the correct name. I tried various fixes for these minor cases but could not, except my very ugly manual load.<|||||>1. Arf, I thought that this line would automatically deactivate the graph execution, but it is not the case. So to make it short, converting a graph tensor to a numpy array is not possible because the graph does not execute in Python - so there is no numpy at graph execution. The only workaround would be to play with [tf.py_function](https://www.tensorflow.org/api_docs/python/tf/py_function), but you will literally kill the perf with this, even though it is your only way to go.
2. This error means that you are not passing the first argument to a call method (basically don't pass any `input_ids`), you always have to pass it. Where in the code the error is raised?
3. What did you try more precisely?<|||||>Hi again Julien,
1. I see! Let us work only on eager mode for now and come back later.
2. Here's 4.1.0 colab where I try to modify 4.1.0 input on `TFRagModel` (Cell 6)
https://colab.research.google.com/drive/1RvtOxUIravWEkwMnj48pedv2mFnlWYkY?usp=sharing
Please see Cell 9, to see various ways I try to pass the first argument. Really sorry I think I overlooked some simple things here.
3. I tried to adjust `base_prefix_name`, and also setting module `name` as discussed with Sam [here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764/2):
<|||||>1. The problem is that you cannot do anything with the model if it cannot be run in graph mode (no serving, no training, no optimization, very slow)
2. Please, see how it is done in all the other TF implementations, the way you handle the inputs in `modeling_tf_rag.py` is wrong.
3. I don't really have time this week to go deeper in this, but I will take some time on Monday to do it!
A test file is missing as well, having it might help you to detect what has to be updated :)<|||||>Hi Julien!
> 1. The problem is that you cannot do anything with the model if it cannot be run in graph mode (no serving, no training, no optimization, very slow)
I got it. I think we can work around this for TFRag model training in graph mode. Instead of fitting the model with `input_ids`, which needs `.numpy` and the `retriever`, we can do all the retriever work offline (or with tf.Dataset) first to get the `context_input_ids` and then feed them directly to the training loop. I will test this idea.
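A rough sketch of that offline idea (purely illustrative; `model`, `retriever` and `input_ids` are assumed to already exist, and the exact retriever call signature may differ):
```python
import tensorflow as tf

# 1) Run the question encoder + retriever once, in eager mode / preprocessing.
question_hidden_states = model.question_encoder(input_ids)[0].numpy()
docs = retriever(input_ids.numpy(), question_hidden_states, return_tensors="tf")

# 2) Feed the precomputed context tensors to the (graph-mode) training loop,
#    so the retriever -- and hence .numpy() -- is never needed inside tf.function.
dataset = tf.data.Dataset.from_tensor_slices(
    {
        "context_input_ids": docs["context_input_ids"],
        "context_attention_mask": docs["context_attention_mask"],
    }
)
```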
> 2. Please, see how it is done in all the other TF implementations, the way you handle the inputs in `modeling_tf_rag.py` is wrong.
Finally, I was able to find a single line that's wrong :) I will update the file in my local.
> 3. I don't really have time this week to go deeper in this, but I will take some time on Monday to do it!
> A test file is missing as well, having it might help you to detect what has to be updated :)
Thanks Julien. I will write the full test file soon. My reason is that I need some suggestions (as commented in the modeling file) for my current draft at this stage, so I have not written the full-fledged test file yet. The colab I posted does have some 10+ basic tests, and I think I still need to (cleanly) pass all of these tests first.
<|||||>> I got it. I think we can do some work around on TFRag model training in graph mode. Instead of fitting the model with input_ids which will need .numpy and retriever , we can do all retriever stuffs offline (or with tf.Dataset) first to get the context_input_ids and then feed them directly to the trainnig loop. I will test this idea.
I was thinking the same :) doing the search offline might be a solution. I think the very long term solution would be to use something more adapted to TensorFlow than FAISS, such as [SCANN](https://github.com/google-research/google-research/tree/master/scann). (Pinging @lhoestq to know what he thinks about this :) )
> Finally, I was able to find a single line that's wrong :) I will update the file in my local.
Nice!! Just to let you know that on Friday we also updated the way we handle the booleans, so be careful to integrate this as well.
> Thanks Julien. I will write the full test file soon. My reason is that I need some suggestions (as commented in the modeling file) for my current draft at this stage , so I have not written the full-fledge test file yet. The colab I posted did have some 10+ basic tests, which I still think I need to (cleanly) pass all these tests first.
It is ok, we are not in a hurry, take your time ^^<|||||>@ratthachat - I think you're on the correct track here :-)
TFRag will actually be the first TF composite model, so I'm quite certain you'll run into problems here that we haven't seen before.
In general I think:
1) We should try to make the `from_pretrained` method work in a nice way (this will actually also show us how `TFEncoderDecoder` could be implemented). I'll take a look at this :-)
2) Make `TFRagTokenForGeneration` work with integration tests that the model behaves the same way as PT using the faiss index. It's fine for me if it works only in eager mode for now. Maybe we can think about a different solution at a later stage if it's impossible to have RAG + Faiss in graph mode. I think it's a bit out-of-the-scope to integrate SCANN here with RAG.
3) Add the other functionalities.
I'll try to help you with 1) here - will add some commits to your PR.<|||||>> We should try to make the from_pretrained method work in a nice way (this will actually also show us how TFEncoderDecoder could be implemented). I'll take a look at this :-)
I would like to remove all the `from_pretrained` calls from the model implementation; it will raise issues for some usages, such as training.
> Make TFRagTokenForGeneration work with integration tests that the model behaves the same way as PT using the faiss index. It's fine for me if it works only in eager mode for now. Maybe we can think about a different solution at a later stage if it's impossible to have RAG + Faiss in graph mode. I think it's a bit out-of-the-scope to integrate SCANN here with RAG.
The problem here is that if the model runs only in eager mode, the model won't be able to be served properly and then becomes useless. I don't see the point to have a model that runs only in your console locally :( The best solution IMHO would be to run the FAISS search offline.<|||||>Hey @ratthachat,
I think the `from_pretrained()` functionality now works as expected. I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.
I've added two tests. One already passes (great job! - I didn't really change anything here...); the other one (a very difficult one based on `generate()`) does not pass yet. It'll be quite difficult to make the other one pass, but if you manage you'll certainly have an in-depth knowledge of how `generate()` works.
I would recommend the following next steps for the PR:
1) Implement @jplu's simplified handling of the inputs as proposed in this PR: https://github.com/huggingface/transformers/pull/8602 . This should remove a lot of boiler plate code. If some params are not supported yet, I'm sure @jplu can help.
2) Make the generate test pass
After this I'm happy to take another look :-)
Lemme know if you have any problems with the weight loading. It all worked nicely for me<|||||>> Implement @jplu's simplified handling of the inputs as proposed in this PR: #8602 . This should remove a lot of boiler plate code. If some params are not supported yet, I'm sure @jplu can help.
I will be happy to help. Which ones are the "not supported yet"?<|||||>> > Implement @jplu's simplified handling of the inputs as proposed in this PR: #8602 . This should remove a lot of boiler plate code. If some params are not supported yet, I'm sure @jplu can help.
>
> I will be happy to help. Which ones are the "not supported yet"?
I think it should all work perfectly fine! Sorry, this came across a bit bad - meant to say "just in case" something doesn't work don't hesitate to ping you ;-) Wasn't sure if int inputs like `n_docs` are supported, but I think this was added as well - so it should all work fine :-) <|||||>Thanks so much for your great help, Patrick! I will carefully look in each point you made. Full addressing will take a while, but I will be back. For now I have some initial responses:
- (Need help the most) Unfortunately, there's still a bug in weight loading when removing the hack (please see details in the thread below)
- About graph mode & training, I think we can consistently combine all three of our thoughts here by (a) finishing the code in eager mode first -- (b) making minimal changes to support an offline mode for the retriever (or maybe not changing anything at all) -- (c) making a community notebook to guide this offline-retrieved training in graph mode. -- (d) SCANN will be an interesting long-term solution we can discuss after all of this.
- I think there is a similar graph-retrieval problem in TFDPR training (which we haven't tested), so I will also try to make an example notebook to train TFDPR in graph mode using this offline principle.
- May I ask what is the meaning of these original Pytorch's 3 lines? (in `def generate()` )
```
# retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states)
# context_input_ids = context_input_ids.to(input_ids)
# context_attention_mask = context_attention_mask.to(input_ids)
```
- About the test on `generate`, I will give it a shot. BTW, I previously tested `Bart` vs. `TFBart` and found that they produce **"different"** `generate` results as well.
Do you have the same experience, and will this affect the RAG `generate` test?
<|||||>> I think the `from_pretrained()` functionality now works as expected. I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.
>
> Lemme know if you have any problems with the weight loading. It all worked nicely for me
Hi Patrick, @patrickvonplaten
Unfortunately, I found the same bug prior to my hack.
- `from_pretrained_question_encoder_generator` <-- Works great
- `from_pretrained` <-- **BUG** (only on **_2 weights_**) : `['model.shared.weight', 'final_logits_bias']`
```
Some weights or buffers of the TF 2.0 model TFRagTokenForGeneration were not initialized from the PyTorch model and are newly initialized: ['model.shared.weight', 'final_logits_bias']
```
- (Newly found) local loading `from_pretrained("./rag")` <-- **BUG** on **all weights**
Please (please :) take a look at this new colab which provides "minimal" code to show the bugs (just 8 cells).
https://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing
Bugs only in the last 2 cells.<|||||>> > I think the `from_pretrained()` functionality now works as expected. I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.
> > Lemme know if you have any problems with the weight loading. It all worked nicely for me
>
> Hi Patrick, @patrickvonplaten
> Unfortunately, I found the same bug prior to my hack.
>
> * `from_pretrained_question_encoder_generator` <-- Work great
> * `from_pretrained` <-- **BUG** (only on **_2 weights_**) : `['model.shared.weight', 'final_logits_bias']`
This is not a bug. It's fine actually. Those weights are handled differently in TF and PT, so this message is expected.
>
> ```
> Some weights or buffers of the TF 2.0 model TFRagTokenForGeneration were not initialized from the PyTorch model and are newly initialized: ['model.shared.weight', 'final_logits_bias']
> ```
>
> * (New found) local loading `from_pretrained("./rag")` <-- **BUG** on **all weights**
Let me look into this!
>
> Please (please :) take a look at this new colab which provides "minimal" code to show the bugs (just 8 cells).
> https://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing
> Bugs only in the last 2 cells.<|||||>@ratthachat,
I think you can ignore those warnings for now. A good next step to make sure that the `from_pretrained()` methods work correctly is to add tests that verify that after saving/loading the model yields the same output as before:
- https://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/tests/test_modeling_rag.py#L900
I checked and the following code works fully as expected:
```
from transformers import RagRetriever, TFRagTokenForGeneration
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", from_pt=True, retriever=retriever)
model.save_pretrained("./rag")
model = TFRagTokenForGeneration.from_pretrained("./rag", retriever=retriever)
```
All those commands work as they should, so I think we're good for now with the `from_pretrained()`. I think the next step should be to concentrate on removing the TF input boilerplate code and then making the generation work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,891 | closed | providing an example with a dummy iterative dataloaders | Hi
I have tested trainer.py with iterative datasets and this does not work in the distributed case, where I shard the data across the cores. Could you please assist me by providing a dummy iterative dataloader for the finetune_seq2seq.py model that runs fine with xla_spawn.py on TPU, so I get some understanding of which functions need to be implemented. I really need to make this work, and trainer.py does not seem to work with iterative datasets, or I am missing how to do it properly. Thanks
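A minimal sketch of the kind of dummy sharded iterable dataset being asked about (purely illustrative, not code from the linked repo; it assumes `torch_xla` is available and that each spawned process keeps only its own shard):
```python
import torch
from torch.utils.data import DataLoader, IterableDataset
import torch_xla.core.xla_model as xm


class DummyShardedIterableDataset(IterableDataset):
    def __init__(self, num_examples):
        self.num_examples = num_examples

    def __iter__(self):
        rank = xm.get_ordinal()           # index of this TPU core's process
        world_size = xm.xrt_world_size()  # total number of cores
        for idx in range(self.num_examples):
            if idx % world_size == rank:  # keep only this core's examples
                yield {"input_ids": torch.tensor([idx]), "labels": torch.tensor([idx])}


loader = DataLoader(DummyShardedIterableDataset(1000), batch_size=8)
```
Because the shards are disjoint, no example is seen twice across cores within an epoch.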
@sgugger | 12-02-2020 07:34:44 | 12-02-2020 07:34:44 | Trainer does indeed not work in distributed fashion with iterative datasets. You need to convert your iterative dataset to a regular dataset for the time being.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,890 | closed | Update generation_beam_search.py | BeamHypotheses.add() now behaves differently depending on whether it finished with or without an EOS token.
# What does this PR do?
see the discussion here #8722
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-02-2020 04:16:56 | 12-02-2020 04:16:56 | Hey @ZhaoQianfeng,
Thanks a lot for making the PR. I thought about this a bit and I think we don't have to change anything actually.
The reason is the following. Let's say you want to generate up to a length of 5.
BOS is the start token which should be counted as part of the input length. However EOS should not be counted towards the sequence length for the length penalty IMO since it's the trigger to finish generation.
So the input:
[BOS, hey, there, EOS] -> is ok for me to be counted as a sequence length of 3 ([BOS, Hey, there]) for the length penalty. I don't think the EOS token itself should penalize.
However, an unfinished generation, such as [BOS, hey, there, how, are], should be counted as having a sequence length of 5 since it isn't finished.
If we would merge this PR, this would mean that [BOS, hey, there, peter, EOS] would receive the same length penalty as [BOS, hey, there, how, are], but IMO they should not. The first sequence is finished (*i.e.* shorter) than the second one.
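Concretely, beam search scores a finished hypothesis roughly as `sum_logprobs / length ** length_penalty`; a small numeric sketch of the two conventions, assuming (purely for illustration) that both hypotheses end up with the same summed log-probability:
```python
sum_logprobs = -4.0     # assumed identical for both hypotheses, just for the illustration
length_penalty = 1.0

finished = ["BOS", "hey", "there", "peter", "EOS"]   # EOS excluded -> effective length 4
unfinished = ["BOS", "hey", "there", "how", "are"]   # not finished -> effective length 5

score_finished = sum_logprobs / (len(finished) - 1) ** length_penalty   # -4.0 / 4 = -1.0
score_unfinished = sum_logprobs / len(unfinished) ** length_penalty     # -4.0 / 5 = -0.8

# Current convention: the two hypotheses are normalized by different lengths (4 vs. 5).
# The proposed change would normalize both by the same length.
print(score_finished, score_unfinished)
```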
So I'd prefer to leave it as it is. I think it's the right approach. Thanks a lot for looking into this however :-) <|||||>Hey @patrickvonplaten ,
I think you are right!
For your example, my reason why `[BOS, hey, there, peter, EOS]` and `[BOS, hey, there, how, are]` should have the same length penalty is that the former probability is calculated by `log(P(hey))+log(P(there))+log(P(peter))+log(P(EOS))`, and the latter probability is calculated by `log(P(hey))+log(P(there))+log(P(how))+log(P(are))`, both 4 elements. So I used to think that they should be divided by the same length.
But I think your explanation is more reasonable and convincing: the former sentence is actually shorter than the latter sentence! That is what **length penalty** really means. Thank you for taking the time to discuss this issue! :-)
transformers | 8,889 | closed | trainer.py does not handle distributed training for iterative datasets and is very slow | ## Environment info
- `transformers` version: 3.5.1
- Platform: TPU
- Python version: 3.7
- using xla_spawn.py
### Who can help
@sgugger
@patrickvonplaten
@patrickvonplaten
@patil-suraj
## Information
I am running seq2seq_finetune.py with iterative datasets and I do not get any speedup for 8 TPU cores versus 1 TPU core; the code is also even slower than on 1 GPU.
## To reproduce
```
git clone [email protected]:google-research/ruse.git
go to iter branch
pip install -r requirements.txt
python setup.py develop
cd seq2seq
python xla_spawn.py finetune_t5_trainer.py configs/mrpc_adapter_tpu.json
```
## Expected behavior
I expected this to be faster on TPU; to me, trainer.py does not handle iterative datasets properly. Could you have a look please? Thank you for your help. | 12-02-2020 00:56:36 | 12-02-2020 00:56:36 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,888 | closed | clip_grad_norm on Multiple GPUs: (CUDA error: device-side assert triggered) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-53-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...):
RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
trainer.train() runs for a bit, then fails with the following output:
```
RuntimeError Traceback (most recent call last)
<ipython-input-11-3435b262f1ae> in <module>
----> 1 trainer.train()
~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
759 torch.nn.utils.clip_grad_norm_(amp.master_params(self.optimizer), self.args.max_grad_norm)
760 else:
--> 761 torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm)
762
763 if is_torch_tpu_available():
~/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/utils/clip_grad.py in clip_grad_norm_(parameters, max_norm, norm_type)
33 total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
34 clip_coef = max_norm / (total_norm + 1e-6)
---> 35 if clip_coef < 1:
36 for p in parameters:
37 p.grad.detach().mul_(clip_coef.to(p.grad.device))
RuntimeError: CUDA error: device-side assert triggered
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Training RoBERTa for sequence classification from text to binary.
## To reproduce
Steps to reproduce the behavior:
1. Load pre-processed dataset from disk using datasets.Dataset.load_from_disk()
2. Instantiate RoBERTa from pretrained (roberta-base) with config mods (num_labels = 2)
3. Create and run trainer. See full code below (most imports omitted).
```
import os
import datasets
import torch
from transformers import RobertaTokenizerFast, RobertaConfig
BLOCK_SIZE = 512
tok = RobertaTokenizerFast.from_pretrained("./art_tok_onefile_roberta_tuned/")
ds_root = '/media/b/My Passport/datasets/'
tokenized = datasets.Dataset.load_from_disk(os.path.join(ds_root, 'art_unit_tokenized_balanced'))
columns_to_return = ['input_ids', 'attention_mask', 'labels']
tokenized.set_format(type='torch', columns=columns_to_return)
from transformers import RobertaForSequenceClassification
config = RobertaConfig(
vocab_size=tok.vocab_size,
max_position_embeddings=514,
num_labels = 2
)
config = RobertaConfig.from_pretrained("roberta-base",
vocab_size=tok.vocab_size,
max_position_embeddings=514,
num_labels = 2)
model = RobertaForSequenceClassification.from_pretrained('roberta-base', config=config)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for param in model.base_model.parameters():
param.requires_grad = False
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./roberta_train_test",
overwrite_output_dir=True,
num_train_epochs=5,
per_device_train_batch_size=128,
save_steps=50,
save_total_limit=2,
logging_steps=10,
#fp16 = True #Enable low-precision via AMP - omitted for now.
)
train_test_bal = tokenized.train_test_split(test_size=0.1)
trainer = Trainer(
model=model,
args=training_args,
#data_collator=collate_fn,
train_dataset=train_test_bal['train']
)
trainer.train()
```
## Expected behavior
The model trains for the duration of the training cycle.
| 12-01-2020 23:57:43 | 12-01-2020 23:57:43 | I would guess this is a memory error. Have you tried monitoring the memory available on your GPUs while the training is running?<|||||>A CUDA device-side assert triggered means a bad index error somewhere, and persists until you restart your kernel. The code you provide does not allow us to reproduce the bug because it uses a tokenizer and a datasets we don't have access to. There are thus multiple reasons for a bad index error. If you want us to help, you'll need to give a reproducer using a pretrained tokenizer of the hub and a dataset on the hub (for instance GLUE MRPC is great since it's tiny).
To debug your problem locally:
```
for batch in trainer.get_train_dataloader():
break
model.cpu()(**batch)
```
as on the CPU you will get a clear indication of where the index error is.<|||||>@sgugger I'll work on making the datasets public and will post here. In the meantime, I'll run your snippet.
@LysandreJik , it's not a memory issue - all four GPUs are at ~87% volatile util for the duration.<|||||>@sgugger @LysandreJik I've made our bucket public, and the relevant material is in gs://bao-ai/transfer; you should be able to pull stuff down in jupyter via:
`!gsutil -m cp -r gs://bao-ai/transfer .`
... though I haven't tried that command on an unauthenticated computer.
Also, note I'm having the same issue in Colab ([notebook here](https://colab.research.google.com/drive/1y-Tgl_zPJzjrzsq3WeYeUFL9A6hGhLhD?usp=sharing)), so I suspect it's an issue with the dataset as suggested above. @sgugger could you elaborate on what sort of issues you mean by 'bad index'? Would rebuilding the dataset from our source files help? If so, are there steps I can take to make sure indexing issues don't arise?<|||||>Note I also periodically get the following error messages at train time:
```
/home/b/anaconda3/envs/transformers_3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
```
That's copied directly - it cuts off like that in Jupyter.<|||||>I rebuilt the dataset using the following script, reloaded, and trained - still breaking with the same error message. Here's the rebuild code:
```
balanced = pickle.load(open('./balanced_ds.pickle', 'rb'))
bal_df = datasets.Dataset.from_pandas(pd.DataFrame.from_records(balanced, columns = ['txt', 'labels']))
BLOCK_SIZE = 512
tok = RobertaTokenizerFast.from_pretrained("./art_tok_onefile_roberta_tuned/")
ds_tokenized_no_special = bal_df.map(lambda example: tok(example['txt'],
padding='max_length',
max_length=BLOCK_SIZE,
truncation=True,
add_special_tokens = False), batched=True)
ds_tokenized_no_special.save_to_disk('./art_unit_tokenized_balanced_rebuild')
```
This uses the same imports (probably redundantly) as the main script. You can access all the data using `!gsutil -m cp -r gs://bao-ai/transfer .`, same as above.
I'm going to loop through all the data in the dataloader and see if it's returning anything janky. We're expecting tensors of size BATCHxBLOCK_SIZE for attention_mask and input_ids, and a tensor of size BATCHx1 for labels, right? Our labels are currently just 0 or 1, depending on whether a tokenized json document falls within a certain document class (art unit 3600, to be precise - this work is for patent law analysis).<|||||>Note, I found experimentally that special tokens have to be removed from the tokenizer in order to be properly passed through the RobertaForSequenceClassification model; otherwise, we get an index error in the torch.nn.embedding step, due to the vocabulary exceeding the Roberta vocab size by 2.<|||||>On Colab, before the trainer crashes, I get lots of these messages in the runtime logs:
```/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [374,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.```<|||||>Ran the following checks on the entire dataset - it passed. Training still fails. This would suggest to me that the dataset is not actually the issue:
```
def no_nans(t):
return bool(t.flatten().isnan().sum() == 0)
def check_batch(ex):
masks = ex['attention_mask']
masks_expected_shape = torch.Size([BATCH_SIZE, BLOCK_SIZE])
masks_are_expected_shape = (masks.shape == masks_expected_shape)
allowed_masks = [0,1]
only_allowed_masks = all(i in allowed_masks for i in masks.flatten())
masks_not_nan = no_nans(masks)
masks_valid = (masks_are_expected_shape and only_allowed_masks and masks_not_nan)
ids = ex['input_ids']
ids_expected_shape = torch.Size([BATCH_SIZE, BLOCK_SIZE])
ids_are_expected_shape = (ids.shape == ids_expected_shape)
ids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))
ids_not_nan = no_nans(ids)
ids_valid = (ids_are_expected_shape and ids_within_vocab_range)
labels = ex['labels']
allowed_labels = [0,1]
only_allowed_labels = all(i in allowed_labels for i in labels.flatten())
labels_are_expected_shape = labels.shape == torch.Size([BATCH_SIZE])
labels_not_nan = no_nans(labels)
labels_valid = (only_allowed_labels and labels_are_expected_shape and labels_not_nan)
failures = {
'masks_are_expected_shape': masks_are_expected_shape,
'only_allowed_masks': only_allowed_masks,
'masks_not_nan': masks_not_nan,
#'masks_valid': masks_valid,
'ids_are_expected_shape': ids_are_expected_shape,
'ids_within_vocab_range': ids_within_vocab_range,
'ids_not_nan': ids_not_nan,
#'ids_valid': ids_valid,
'only_allowed_labels': only_allowed_labels,
'labels_are_expected_shape': labels_are_expected_shape,
'labels_not_nan': labels_not_nan,
#'labels_valid': labels_valid
}
return ((masks_valid and ids_valid and labels_valid), ex, failures)
failed = []
for idx, ex in tqdm(enumerate(iter(loader)), total=len(loader)):
passed, ex, fail = check_batch(ex)
if not passed:
print(f'{idx} Failed!')
failed.append((idx, ex, fail))
```
(This, again, uses all the above code as a base).<|||||>I also deactivated the train test split and deleted the cache files in the dataset - also fails.<|||||>Your check for ids in the proper change seems incorrect:
```
ids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))
```
will allow for the indices `tok.vocab_size` to `tok.vocab_size+3`, which are all going to generate an index error given the fact that your model has `vocab_size = tok.vocab_size`.
<|||||>@sgugger vocab indices +1 thru +4 are the special tokens, though, right? And the model should be able to accept them, right?<|||||>*or +0 thru +3 if we're being pythonic with our indexing<|||||>The highest token index in the entire dataset is the pad token. That shouldn't throw an indexing error. Can you reproduce on your end?<|||||>> Your check for ids in the proper change seems incorrect:
>
> ```
> ids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))
> ```
>
> will allow for the indices `tok.vocab_size` to `tok.vocab_size+3` which are all going to generate an index error given the fact your model as `vocab_size = tok.vocab_size`.
What should the proper check for this step be?<|||||>And regardless of the checks - what would token indices have to do with the ultimate error, which is in clip_grad.py?
Truncated from above:
```
RuntimeError Traceback (most recent call last)
<ipython-input-11-3435b262f1ae> in <module>()
----> 1 trainer.train()
1 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/utils/clip_grad.py in clip_grad_norm_(parameters, max_norm, norm_type)
36 total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
37 clip_coef = max_norm / (total_norm + 1e-6)
---> 38 if clip_coef < 1:
39 for p in parameters:
40 p.grad.detach().mul_(clip_coef.to(p.grad.device))
RuntimeError: CUDA error: device-side assert triggered
```<|||||>I faced similar issue while running on colab with Linux OS . Ttried restarting and resetting the kernal error disappeared . <|||||>@shivaraj1994 can you define 'kernel'? Do you mean the Jupyter Kernel, the python Kernel, or the Linux/Mac/Windows kernel?
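As a general aside (a sketch, not specific to this dataset): device-side asserts are raised asynchronously, which is why the Python traceback often lands on a later, unrelated op such as `clip_grad_norm_`. Forcing synchronous kernel launches, or replaying the batch on CPU, usually reveals the real failing op:
```python
import os

# Must be set before CUDA is initialized (i.e. before the first model.to("cuda") call).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Alternatively, replay one captured batch on CPU to get a readable index error:
# batch = next(iter(trainer.get_train_dataloader()))
# model.cpu()(**{k: v.cpu() for k, v in batch.items()})
```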
This problem was replicated both on Linux and on a Colab account; I don't think it's an issue with the operating system of a given computer.
**NOTE:** I was able to fix the problem by ditching the datasets library and using the older pytorch dataset paradigm. Full code below (split between three files - note, I'm not importing the HF datasets library; rather, I'm importing a custom module called datasets.py, from the same directory in which I'm running train.py):
# train.py
```
from transformers import (RobertaTokenizerFast,
RobertaForSequenceClassification,
RobertaConfig)
from transformers import Trainer, TrainingArguments
import pickle
from datasets import BalancedDataset
from collators import DataCollatorForDocumentClassificationBATCH
'''
TODO:
Use custom tok (w/ special tokens removed)
Train more layers
Use longformer as drop-in for Roberta
Make nicer dataset (with train/test split, etc...)
'''
BLOCK_SIZE = 512
BATCH_SIZE = 32
balanced = pickle.load(open('./balanced_ds.pickle', 'rb'))
tok = RobertaTokenizerFast.from_pretrained('roberta-base') #"./art_tok_onefile_roberta_tuned/")
bal_ds = BalancedDataset(tok, balanced, BLOCK_SIZE)
collator = DataCollatorForDocumentClassificationBATCH()
config = RobertaConfig.from_pretrained("roberta-base",
vocab_size=tok.vocab_size,
max_position_embeddings=514,
num_labels = 2)
model = RobertaForSequenceClassification.from_pretrained('roberta-base',
config=config)
#Disable training on all but the Classification Head!
for param in model.base_model.parameters():
param.requires_grad = False
training_args = TrainingArguments(
output_dir="./roberta_train_test",
overwrite_output_dir=True,
num_train_epochs=5,
per_device_train_batch_size=BATCH_SIZE,
save_steps=150,
save_total_limit=2,
logging_steps=20,
max_grad_norm = 5,
dataloader_num_workers = 15,
#fp16 = True #Enable low-precision via AMP
)
trainer = Trainer(
model = model,
args = training_args,
data_collator = collator,
train_dataset = bal_ds
)
trainer.train()
```
# datasets.py
```
import torch
from torch.utils.data.dataset import Dataset
from tqdm import tqdm
class BalancedDataset(Dataset):
def __init__(self, tokenizer, data, block_size: int, limit=None):
self.block_size = block_size
self.tok = tokenizer
print('Ingesting data!')
# Load Data
self.txt = [i[0] for i in tqdm(data[:limit])]
self.labels = torch.tensor([i[1] for i in tqdm(data[:limit])])
def __len__(self):
return len(self.txt)
def __getitem__(self, item):
d = self.tok(self.txt[item], padding='max_length',
truncation=True, max_length=self.block_size,
return_tensors='pt')
d['labels'] = self.labels[item]
return d
```
# collators.py
```
from dataclasses import dataclass
from typing import Dict, List, Union
import torch
@dataclass
class DataCollatorForDocumentClassificationBATCH:
def __call__(
self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]
) -> Dict[str, torch.Tensor]:
return {
'input_ids': torch.stack([e['input_ids'] for e in examples]).squeeze(),
'attention_mask': torch.stack([e['attention_mask'] for e in examples]).squeeze(),
'labels': torch.stack([e['labels'] for e in examples]),
}
```<|||||>I will keep the bao-ai bucket open to the public for a bit longer so y'all can attempt to replicate the original issue. We still need to figure out what was causing the clip_grad_norm issue in the first place.
<|||||>> Also, note I'm having the same issue in Colab ([notebook here](https://colab.research.google.com/drive/1y-Tgl_zPJzjrzsq3WeYeUFL9A6hGhLhD?usp=sharing)), so I suspect it's an issue with the dataset as suggested above. @sgugger could you elaborate on what sort of issues you mean by 'bad index'? Would rebuilding the dataset from our source files help? If so, are there steps I can take to make sure indexing issues don't arise?
**Replication note:** I continued to debug in that Colab notebook; if you want to replicate the original issue, you'll need to use the old code, not the code that currently exists at the end of that link.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,887 | closed | 'Some weights of BertModel were not initialized from the model checkpoint at ./model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']' | Hi everyone,
I ran [ run_mlm.py ](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) to continue pretraining uncased BERT directly from the examples on this repo, but once I load the newly saved pretrained Bert Model, I receive a warning - "'Some weights of BertModel were not initialized from the model checkpoint at ./model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']'"
I'm trying to fine-tune the model on a sentiment analysis task, but I'm getting horrible results and I wonder if it has something to do with this? Thanks for your help. | 12-01-2020 22:26:50 | 12-01-2020 22:26:50 | maybe related to #8793 , hope could help.<|||||>> maybe related to #8793 , hope could help.
seems to be related. I get high variance in accuracy, I guessed it was probably because of the random initialization of those two weights.<|||||>If you're using the `run_mlm.py`, then you're doing masked language modeling with the `BertForMaskedLM` model. This model does not make use of the pooler, hence why those two layers are randomly initialized. They're not used for predictions or training.<|||||>@LysandreJik would it make more sense to load the saved model using BertModel.load_pretrained(saved_mlm_model) or would it be better to use BertModel.load("bert-base-uncased") and copy the weights over from the saved model? <|||||>I also met this problem. Have you solved it?<|||||>@wenHK It's actually not really relevant. The BertForMaskedLM model doesn't use the pooler layer, so thus why there are no weights assigned. You don't really need to worry about the warning.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,886 | closed | UnicodeEncodeError: surrogates not allowed with GPT2Tokenizer | ## Environment info
- `transformers` version: 3.1.0
- Platform: EC2
- Python version: 3.6
### Who can help
@mfuntowicz @LysandreJik
## Information
Model I am using: GPT-2
The problem arises when using the `GPT2Tokenizer` on a piece of text from a file that was written `utf-8` strings and is being opened in `utf-8`.
## To reproduce
```
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
text = "some utf-8 string" # this string is loaded from a file containing a dictionary {"text": "<some text>"} in each row - the file itself was written by converting TFRecords to text and "<some text>" was decoded explicitly to "utf-8" prior to being dumped into this dictionary and written
text_ids = tokenizer.encode(text)
```
Stack trace I get:
```
Traceback (most recent call last):
text_ids = tokenizer.encode(text)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1730, in encode
**kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2045, in encode_plus
**kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 448, in _encode_plus
first_ids = get_input_ids(text)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 419, in get_input_ids
tokens = self.tokenize(text, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 350, in tokenize
tokenized_text = split_on_tokens(no_split_token, text)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 344, in split_on_tokens
for token in tokenized_text
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 344, in <genexpr>
for token in tokenized_text
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_gpt2.py", line 237, in _tokenize
self.byte_encoder[b] for b in token.encode("utf-8")
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-1: surrogates not allowed
```
## Expected behavior
It should tokenize and then convert the tokens to ids just fine, since `text` is a `utf-8` string.
I'm trying to specifically identify the `text` itself from my file that leads to this error, but I am unable to print it either. I used a try-except block to catch the above `UnicodeEncodeError` and tried to print the `text`, but print itself expectedly failed because print is using the `ascii` codec. Is there a good way for me to identify the exact piece of text that led to this failure? Perhaps it'll help assist with debugging this issue. | 12-01-2020 21:53:38 | 12-01-2020 21:53:38 | Unfortunately, if simply printing the string is impossible, this is out of our expertise. You have probably already seen those threads, but they may help you debug what's going on:
https://stackoverflow.com/questions/27366479/python-3-os-walk-file-paths-unicodeencodeerror-utf-8-codec-cant-encode-s
https://stackoverflow.com/questions/38147259/how-to-work-with-surrogate-pairs-in-python
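One way to locate (and, if acceptable, strip) the offending characters before calling the tokenizer — a small illustrative helper, not part of `transformers`:
```python
def find_surrogates(text):
    """Return (position, codepoint) pairs for lone surrogate code points in a string."""
    return [(i, hex(ord(ch))) for i, ch in enumerate(text) if 0xD800 <= ord(ch) <= 0xDFFF]


def strip_surrogates(text):
    """Drop characters that cannot be encoded as UTF-8 (e.g. unpaired surrogates)."""
    return text.encode("utf-8", errors="ignore").decode("utf-8")


bad = "hello \ud83d world"       # an unpaired high surrogate
print(find_surrogates(bad))       # [(6, '0xd83d')]
print(strip_surrogates(bad))      # 'hello  world'
```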
Let us know if you find an answer!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,885 | closed | [ci] skip doc jobs take #3 | @LysandreJik found another edge case when a developer force-pushes a change and `pipeline.git.base_revision` is defined but bogus, resulting in a range that returns no files. https://github.com/huggingface/transformers/pull/8853#issuecomment-736781950
So the proposed logic for take 3 is:
1. if pipeline.git.base_revision and pipeline.git.revision are defined
2. if git diff --name-only range returns anything
3. if what it returned in 2 is just docs
4. then skip
Bottom line, we skip the test altogether if:
```
unless test -n "<< pipeline.git.base_revision >>" && test -n "<< pipeline.git.revision >>" \
&& test -n "$(git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >>)"
```
@LysandreJik, @sgugger | 12-01-2020 20:25:20 | 12-01-2020 20:25:20 | There has been no traction so far on the circleci forums, I filed a support ticket with cirlceci.<|||||>So `pipeline.git.base_revision` is consistently undefined when making a PR via direct file edit on github.
<|||||>So far so good.
And while monitoring I discovered an interesting thing. In this particular PR my check doesn't actually do what I thought it did. It doesn't check the range of commits from the beginning of PR. The range it checks is actually just for the last commit. That `pipeline.git.base_revision` is very unruly.
You can see a good example of it here: https://github.com/huggingface/transformers/pull/8918
If you look at the checks for the last few commits which are doc-only commits - the jobs are skipped, whereas any commit that had code in it is not skipped.
So actually this is better than what I intended. If we checked the full range and there were code files, then a subsequent commit that changed only docs would, in my original vision, still run the jobs normally. But this is better! Since this checks each commit and decides whether to run the jobs or not based on just that commit, it is much more efficient than my original intention.
I hope I explained it clearly.
**edit** Hmm, but what happens if several commits are pushed at once - which file range will it check - since normally it checks just the last commit - this I'm not sure about. `pipeline.git.base_revision` is a wild card it seems.<|||||>Mmmm, that does mean that if a PR changes code then has a last commit that only changes the doc, it will appear green to us, correct?
If so, we should fine a way to correct this behavior as it will lull us (and the user) in a false sense that everything is alright.<|||||>I will run tests once github works again and adjust accordingly.
I'm also in touch with an engineer at circleCI via their support - so hopefully we will get some solid answers rather than needing to validate all the different circumstances.<|||||>I wasn't able to reproduce it, but it's very clear that it happened, and this is not what we want.
And while what I wrote here https://github.com/huggingface/transformers/pull/8885#issuecomment-738583812 is super-cool, it can't work since github relies on the last check for the overall status. So, we can only skip a job if *all* files in PR were docs.
So I merged a change which disabled that struggling new feature, but added a log instead to continue monitoring it while waiting for circleCI support to get back to me. |
transformers | 8,884 | closed | [s2s finetune_trainer] add instructions for distributed training | This PR adds instructions for running finetune_trainer.py under DDP
@patrickvonplaten | 12-01-2020 19:53:37 | 12-01-2020 19:53:37 | |
transformers | 8,883 | closed | Extracting important information | I'm trying to extract important information from a lecture transcript. What's the best way to go about doing this ? This would be without a particular query parameter, just generally important information in the global context of the lecture. | 12-01-2020 19:35:21 | 12-01-2020 19:35:21 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
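(For what it's worth, a common starting point for this kind of task is the summarization pipeline — a minimal, illustrative sketch; the chunking and generation lengths are arbitrary:)
```python
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default pretrained summarization model

# Split the transcript into chunks the model can handle, then keep the highlights.
transcript_chunks = ["...first part of the lecture...", "...second part of the lecture..."]
highlights = [
    summarizer(chunk, max_length=80, min_length=20)[0]["summary_text"]
    for chunk in transcript_chunks
]
print(highlights)
```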
Thanks! |
transformers | 8,882 | closed | [trainer] start using training_args.parallel_mode | Following up on https://github.com/huggingface/transformers/pull/8877 which adds `training_args.parallel_mode` to make it easy to comprehend which mode the trainer is running under - this PR deploys the new property in a few places.
@sgugger, have I deployed it as you envisioned it?
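Typical call sites then look roughly like this (a sketch, assuming the `ParallelMode` enum added alongside the property):
```python
from transformers import TrainingArguments
from transformers.training_args import ParallelMode

args = TrainingArguments(output_dir="out")

if args.parallel_mode == ParallelMode.DISTRIBUTED:
    print("one process per device (DistributedDataParallel)")
elif args.parallel_mode == ParallelMode.NOT_DISTRIBUTED:
    print("a single process driving several GPUs (DataParallel)")
elif args.parallel_mode == ParallelMode.TPU:
    print("running under XLA/TPU")
else:
    print("single CPU/GPU process")
```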
| 12-01-2020 19:05:43 | 12-01-2020 19:05:43 | Thank you for adding this new property, @sgugger - it has indeed improved the readability! |
transformers | 8,881 | closed | Better warning when loading a tokenizer with AutoTokenizer w/o Senten… | …cePiece
Currently, initializing a `sentencepiece` `AutoTokenizer` without having `sentencepiece` installed results in the following error:
```
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
```
This improves the error message to:
```
This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer.
```
Fix #8864 | 12-01-2020 17:54:56 | 12-01-2020 17:54:56 | Thanks! |
transformers | 8,880 | closed | [PyTorch] Refactor Resize Token Embeddings | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR extends the `resize_embeddings` function in PyTorch to models that have input/output embeddings that are **not** tied.
In PyTorch all models that have tied input/output embeddings by default can also untie those embeddings by setting `config.tie_word_embeddings=False`. This however requires the `_resize_token_embeddings` to be extended to also resize the `lm_head`. This PR does this extension by adding a `_get_resized_lm_head` method. Also, all models that have a `get_output_embedding()` function, now need a `set_output_embedding()` function. A test is added to make sure the new functionality works as expected. The Bart-like models currently skip this test because there is a rather weird `lm_head` behavior that I want to refactor in another PR.
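In user-facing terms, the behavior being enabled is roughly the following (an illustrative sketch; the extra token count is arbitrary and assumes the resizing added by this PR):
```python
from transformers import T5Config, T5ForConditionalGeneration

# Untie input and output embeddings, then resize both together.
config = T5Config.from_pretrained("t5-small", tie_word_embeddings=False)
model = T5ForConditionalGeneration.from_pretrained("t5-small", config=config)

new_size = model.config.vocab_size + 2   # e.g. after adding two special tokens
model.resize_token_embeddings(new_size)

assert model.get_input_embeddings().weight.shape[0] == new_size
assert model.get_output_embeddings().weight.shape[0] == new_size  # lm_head resized as well
```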
In addition this PR:
- Fixes #8706: With MT5 and T5v1_1, T5 now has a configuration where input and output embeddings are not tied anymore. This PR fixes this.
- Refactors MobileBert
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-01-2020 16:59:23 | 12-01-2020 16:59:23 | MobileBERT does this in the `tie_weights` function. Should we do the same here?<|||||>ALBERT also does it in the `_resize_token_embeddings`:
https://github.com/huggingface/transformers/blob/a7d46a060930242cd1de7ead8821f6eeebb0cd06/src/transformers/models/albert/modeling_albert.py#L635-L639
It probably should have been done in that method for MobileBERT as well<|||||>Fine by me :-) Should we do the mobileBERT change in this PR?<|||||>> Fine by me :-) Should we do the mobileBERT change in this PR?
will do!<|||||>@patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
```
Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
```
If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.<|||||>> @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
>
> ```
> Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
> ```
>
> If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.
Can you attach a simple code snippet showing what code produces your error? It's for T5 no? <|||||>>
>
> > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
> > ```
> > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
> > ```
> >
> >
> > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.
>
> Can you attach a simple code snippet showing what code produces your error? It's for T5 no?
Sure:
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
dev = "cuda"
MODEL_NAME = 'google/t5-v1_1-base'
tokenizer = T5TokenizerFast.from_pretrained('t5-base')
special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}
num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)
print(f'ADDED TOKENS: {num_added_tokens}')
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
model.resize_token_embeddings(len(tokenizer))
model.to(dev)
BATCH_SIZE = 8
```
```python
#Sets the module in training mode
from IPython.display import HTML, display
def progress(loss,value, max=100):
return HTML(""" Batch loss :{loss} <progress
value='{value}'max='{max}',style='width: 100%'>{value}
</progress> """.format(loss=loss,value=value, max=max))
model.train()
num_of_batches= int(len(train_df) / BATCH_SIZE)
print(num_of_batches)
NUM_EPOCHS = 1
loss_per_10_steps=[]
loss_values = []
for epoch in range(1,NUM_EPOCHS+1):
print('Running epoch: {}'.format(epoch))
running_loss=0
out = display(progress(1, num_of_batches+1), display_id=True)
for i in range(num_of_batches):
inputbatch=[]
labelbatch=[]
new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]
for indx,row in new_df.iterrows():
input = 'Product: '+row['product_name']
labels = row['product_description']
inputbatch.append(input)
labelbatch.append(labels)
inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')["input_ids"]
labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors="pt") ["input_ids"]
inputbatch=inputbatch.to(dev)
labelbatch=labelbatch.to(dev)
# clear out the gradients of all Variables
optimizer.zero_grad()
# Forward propogation
outputs = model(input_ids=inputbatch, labels=labelbatch)
loss = outputs.loss
loss_num=loss.item()
logits = outputs.logits
running_loss+=loss_num
if i%10 ==0:
loss_per_10_steps.append(loss_num)
out.update(progress(loss_num,i, num_of_batches+1))
# calculating the gradients
loss.backward()
#updating the params
optimizer.step()
loss_values.append(loss_num)
running_loss=running_loss/int(num_of_batches)
```
<|||||>> > > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
> > > ```
> > > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
> > > ```
> > >
> > >
> > > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.
> >
> >
> > Can you attach a simple code snippet showing what code produces your error? It's for T5 no?
>
> Sure:
>
> ```
> MODEL_NAME = 'google/t5-v1_1-base'
> tokenizer = T5TokenizerFast.from_pretrained('t5-base')
> special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}
> num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)
> print(f'ADDED TOKENS: {num_added_tokens}')
> model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
> model.resize_token_embeddings(len(tokenizer))
> model.to(dev)
> BATCH_SIZE = 8
> ```
>
> ```
> #Sets the module in training mode
> from IPython.display import HTML, display
> def progress(loss,value, max=100):
> return HTML(""" Batch loss :{loss} <progress
> value='{value}'max='{max}',style='width: 100%'>{value}
> </progress> """.format(loss=loss,value=value, max=max))
>
> model.train()
> num_of_batches= int(len(train_df) / BATCH_SIZE)
> print(num_of_batches)
> NUM_EPOCHS = 1
> loss_per_10_steps=[]
> loss_values = []
> for epoch in range(1,NUM_EPOCHS+1):
> print('Running epoch: {}'.format(epoch))
>
> running_loss=0
>
> out = display(progress(1, num_of_batches+1), display_id=True)
> for i in range(num_of_batches):
> inputbatch=[]
> labelbatch=[]
> new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]
> for indx,row in new_df.iterrows():
> input = 'Product: '+row['product_name']
> labels = row['product_description']
> inputbatch.append(input)
> labelbatch.append(labels)
> inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')["input_ids"]
> labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors="pt") ["input_ids"]
> inputbatch=inputbatch.to(dev)
> labelbatch=labelbatch.to(dev)
>
> # clear out the gradients of all Variables
> optimizer.zero_grad()
>
> # Forward propogation
> outputs = model(input_ids=inputbatch, labels=labelbatch)
> loss = outputs.loss
> loss_num=loss.item()
> logits = outputs.logits
> running_loss+=loss_num
> if i%10 ==0:
> loss_per_10_steps.append(loss_num)
> out.update(progress(loss_num,i, num_of_batches+1))
>
> # calculating the gradients
> loss.backward()
>
> #updating the params
> optimizer.step()
>
> loss_values.append(loss_num)
> running_loss=running_loss/int(num_of_batches)
> ```
thanks! `dev` would be equal to `"cuda"` I suppose? <|||||>>
>
> > > > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
> > > > ```
> > > > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
> > > > ```
> > > >
> > > >
> > > > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.
> > >
> > >
> > > Can you attach a simple code snippet showing what code produces your error? It's for T5 no?
> >
> >
> > Sure:
> > ```
> > MODEL_NAME = 'google/t5-v1_1-base'
> > tokenizer = T5TokenizerFast.from_pretrained('t5-base')
> > special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}
> > num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)
> > print(f'ADDED TOKENS: {num_added_tokens}')
> > model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
> > model.resize_token_embeddings(len(tokenizer))
> > model.to(dev)
> > BATCH_SIZE = 8
> > ```
> >
> >
> > ```
> > #Sets the module in training mode
> > from IPython.display import HTML, display
> > def progress(loss,value, max=100):
> > return HTML(""" Batch loss :{loss} <progress
> > value='{value}'max='{max}',style='width: 100%'>{value}
> > </progress> """.format(loss=loss,value=value, max=max))
> >
> > model.train()
> > num_of_batches= int(len(train_df) / BATCH_SIZE)
> > print(num_of_batches)
> > NUM_EPOCHS = 1
> > loss_per_10_steps=[]
> > loss_values = []
> > for epoch in range(1,NUM_EPOCHS+1):
> > print('Running epoch: {}'.format(epoch))
> >
> > running_loss=0
> >
> > out = display(progress(1, num_of_batches+1), display_id=True)
> > for i in range(num_of_batches):
> > inputbatch=[]
> > labelbatch=[]
> > new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]
> > for indx,row in new_df.iterrows():
> > input = 'Product: '+row['product_name']
> > labels = row['product_description']
> > inputbatch.append(input)
> > labelbatch.append(labels)
> > inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')["input_ids"]
> > labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors="pt") ["input_ids"]
> > inputbatch=inputbatch.to(dev)
> > labelbatch=labelbatch.to(dev)
> >
> > # clear out the gradients of all Variables
> > optimizer.zero_grad()
> >
> > # Forward propogation
> > outputs = model(input_ids=inputbatch, labels=labelbatch)
> > loss = outputs.loss
> > loss_num=loss.item()
> > logits = outputs.logits
> > running_loss+=loss_num
> > if i%10 ==0:
> > loss_per_10_steps.append(loss_num)
> > out.update(progress(loss_num,i, num_of_batches+1))
> >
> > # calculating the gradients
> > loss.backward()
> >
> > #updating the params
> > optimizer.step()
> >
> > loss_values.append(loss_num)
> > running_loss=running_loss/int(num_of_batches)
> > ```
>
> thanks! `dev` would be equal to `"cuda"` I suppose?
Yeah "cuda" sorry.<|||||>Could you also attach some code for `train_df` and `optimizer`? So that I can fully reproduce :-) <|||||>>
>
> Could you also attach some code for `train_df` and `optimizer`? So that I can fully reproduce :-)
Sure!
```
optimizer = Adafactor(model.parameters(),lr=1e-3,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False)
```
train_df is just a dataframe containing something like the following:
product_name product_description
37245 Test Product 1 Test Description 1
23451 Test Product 2 Test Description 2
Not sure how to attach a file via GitHub my apologies.
<|||||>> @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:
>
> ```
> Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).
> ```
>
> If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.
I tried with some dummy training data and it works for me...not sure what the problem is. Also the error message hints at a wrong `dtype` of either `input_ids` or `labels`...
Could you try to do the following:
```python
inputbatch=inputbatch.to(dev).to(torch.long)
labelbatch=labelbatch.to(dev).to(torch.long)
```
and see if the error persists?<|||||>> ```python
> .to(torch.long)
> ```
never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error<|||||>>
>
> > ```python
> > .to(torch.long)
> > ```
>
> never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error
Great thank you :)
Previous issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.<|||||>> > > ```python
> > > .to(torch.long)
> > > ```
> >
> >
> > never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error
>
> Great thank you :)
>
> Previous issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.
Should be good now - was 100% introduces by this PR -> thanks a lot for spotting it!<|||||>>
>
> > > > ```python
> > > > .to(torch.long)
> > > > ```
> > >
> > >
> > > never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error
> >
> >
> > Great thank you :)
> > Previous issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.
>
> Should be good now - was 100% introduces by this PR -> thanks a lot for spotting it!
Amazing thank you will run through a test this evening. <|||||>@sgugger @LysandreJik - I updated the PR description. It's good to merge for me. Let me know what you think. |
transformers | 8,879 | closed | dropout(): argument 'input' (position 1) must be Tensor, not str With Bert | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: google colab
- Python version: 3
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert
@LysandreJik
@jplu
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am trying to do sentiment analysis using Bert. My code was working perfectly fine and then last night I tried to run it without changing anything and I am getting the following error message:
"dropout(): argument 'input' (position 1) must be Tensor, not str"
I trained my Bert model and saved the bin file. This occurs when I load the bin file into collab and try to predict the sentiment of any text.
## To reproduce
Steps to reproduce the behavior:
1. Loaded my model that was saved in a bin file in google colab
2. Ran the following code:
```python
def conclude_sentiment(text):
  encoded_review = tokenizer.encode_plus(
    text,
    max_length=MAX_LEN,
    add_special_tokens=True,
    return_token_type_ids=False,
    pad_to_max_length=True,
    return_attention_mask=True,
    return_tensors='pt',
  )
  input_ids = encoded_review['input_ids'].to(device)
  attention_mask = encoded_review['attention_mask'].to(device)
  output = model(input_ids, attention_mask)
  _, prediction = torch.max(output, dim=1)
  #print(f'Review text: {text}')
  #print(f'Sentiment : {class_names[prediction]}')
  return class_names[prediction]
```
3. Got an error that says
```
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in dropout(input, p, training, inplace)
    981     return (_VF.dropout_(input, p, training)
    982             if inplace
--> 983             else _VF.dropout(input, p, training))
    984
    985
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not str
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
An output of either 'positive' or 'negative' when a string is passed into the method named 'conclude_sentiment'
| 12-01-2020 16:48:17 | 12-01-2020 16:48:17 | having the **same problem**, what is happening, it was working just fine for the past like 90 days!!
<|||||>Hello! It would be very helpful if you could complete the information related to your environment. If you could have a reproducible code example, that would really be great as well.
It is possible you were affected by the breaking changes from v3.x to v4.x. If this is the case, I invite you to read the [migration notes](https://huggingface.co/transformers/migration.html), or to pin the transformers library to the major version 3: `pip install transformers==3`<|||||>https://github.com/mosh98/Swedish_Sentiment_BERTIL/blob/main/Swe_Bert_Training_Bigger_dataset.ipynb<|||||>I am afraid I see no error in your notebook.<|||||>okej thanks for the tip, pinning it to version 3 did the trick!<|||||>Thanks @LysandreJik,
It works but creates a new error here that says:
Error(s) in loading state_dict for SentimentClassifier:
Unexpected key(s) in state_dict: "bert.embeddings.position_ids".
[Notebook](https://colab.research.google.com/drive/1fEXY3IQ82u41KvwoDOg-oY95RDD1OpKg?usp=sharing)
**After running the code below**:
```
saved_model = torch.load('selective_stock_dataset_state-2.bin')
model = SentimentClassifier(len(class_names))
model.load_state_dict(saved_model)
model = model.to(device)
```
```
class SentimentClassifier(nn.Module):
def __init__(self, n_classes):
super(SentimentClassifier, self).__init__()
self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
self.drop = nn.Dropout(p=0.3)
self.out = nn.Linear(self.bert.config.hidden_size, n_classes)
self.softmax = nn.Softmax(dim=1)
def forward(self, input_ids, attention_mask):
_, pooled_output = self.bert(
input_ids=input_ids,
attention_mask=attention_mask
)
output = self.drop(pooled_output)
output = self.out(output)
return self.softmax(output)
```
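For reference, a v4-compatible variant of the `forward` above (a sketch: in v4, `BertModel` returns a `ModelOutput` object by default, so the tuple unpacking only works if `return_dict=False` is passed):
```python
def forward(self, input_ids, attention_mask):
    outputs = self.bert(
        input_ids=input_ids,
        attention_mask=attention_mask,
        return_dict=False,  # restore the old (sequence_output, pooled_output) tuple
    )
    pooled_output = outputs[1]
    output = self.drop(pooled_output)
    output = self.out(output)
    return self.softmax(output)
```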
Could you advise. Please and thanks <|||||>as he mentioned earlier, try using `pip install transformers==3`<|||||>@mosh98 , I have tried with `pip install transformers==3` and it removed the first error.
But I then get a new error that I was not getting before, which says
`Error(s) in loading state_dict for SentimentClassifier:
Unexpected key(s) in state_dict: "bert.embeddings.position_ids".`
see my notebook here: [notebook](https://colab.research.google.com/drive/1fEXY3IQ82u41KvwoDOg-oY95RDD1OpKg?usp=sharing#scrollTo=iQ93LDzMXO58l)<|||||>Ahh it was solved by changing
`model.load_state_dict(saved_model)`
to
`model.load_state_dict(saved_model, strict=False)`
<|||||>Hi, indeed, this is a different error. We recommend using the `from_pretrained` method (your custom model would need to inherit from `PreTrainedModel` rather than `nn.Module`) rather than using `load_state_dict` to ensure maximum compatibility between checkpoints and architectures, otherwise the state dicts might not be 100% loadable on each custom architecture.
Your workaround using `strict=False` also works!<|||||>"""pip install transformers==3""" doesn't seem to work
<|||||>No need to downgrade the transformers. Just do the following - it's from the migration guide.
```
model = BertModel.from_pretrained("bert-base-cased")
outputs = model(**inputs, return_dict=False)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>`outputs = model(**inputs, return_dict=False)`
or
`model = BertModel.from_pretrained("bert-base-cased",return_dict=False)`<|||||>> `outputs = model(**inputs, return_dict=False)`
>
> or
>
> `model = BertModel.from_pretrained("bert-base-cased",return_dict=False)`
cool, it works.<|||||>> > `outputs = model(**inputs, return_dict=False)`
> > or
> > `model = BertModel.from_pretrained("bert-base-cased",return_dict=False)`
>
> cool, it works.
great! it worked for me too, thanks a million :) |
transformers | 8,878 | closed | Better support for resuming training | # What does this PR do?
This PR adds two things linked to resuming training:
1. It brings full reproducibility when resuming an interrupted training from a checkpoint (i.e., resuming a training from a checkpoint will give the exact same results as a training from the beginning with the same seeding). This was previously not the case because the dataloader shuffle was not triggered `epochs_already_trained` times, so the shuffle of the dataloader was the same as in epoch 0. As a result, full reproducibility only held for trainings resumed from an early checkpoint (during the first epoch).
2. It also adds the option to ignore that data skipping which can take a very long time on a large dataset. This will go faster but yield different results from a training from scratch.
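As a usage illustration (not part of the original description; the model, dataset and checkpoint path below are placeholders), the new behaviour can be exercised roughly like this:
```python
# Sketch only: resume from a checkpoint and opt out of the batch-skipping phase.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="output",
    ignore_data_skip=True,  # don't replay already-seen batches; results may differ slightly
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train("output/checkpoint-282")  # path to a previously saved checkpoint
```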
Fixes #8874 and #8876 | 12-01-2020 16:44:26 | 12-01-2020 16:44:26 | |
transformers | 8,877 | closed | Add a `parallel_mode` property to TrainingArguments | # What does this PR do?
This PR adds a `distributed_env` property to the `TrainingArguments`, making it clear whether we are in:
- a single process (CPU or one GPU)
- a parallel setting (one process but several GPUs)
- a distributed parallel setting (several processes, one per GPU)
- a TPU setting
Fixes #8858
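As a rough usage sketch (added for illustration; the property and enum names follow the final `parallel_mode` naming discussed in the comments below and are assumptions on my part):
```python
from transformers import TrainingArguments
from transformers.training_args import ParallelMode  # assumed location of the enum

args = TrainingArguments(output_dir="out")
if args.parallel_mode == ParallelMode.NOT_DISTRIBUTED:
    print("single process, possibly several GPUs (nn.DataParallel)")
elif args.parallel_mode == ParallelMode.DISTRIBUTED:
    print("one process per GPU (DistributedDataParallel)")
elif args.parallel_mode == ParallelMode.TPU:
    print("TPU setting")
```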
| 12-01-2020 16:20:15 | 12-01-2020 16:20:15 | Given our discussion yesterday, I'm not sure `distributed_env` is fitting. As you convinced me that DP is not distributed when it comes to pytorch conventions, `if self.distributed_env == "dp"` is back to being confusing.
Given that with the exception of tpu, all dp/ddp/mp/pp are SomethingParallel, should it be called `parallel_mode`?
I don't know anything about tpu, so it's hard for me to know where it fits. But it's probably not distributed either. And not parallel either.
So perhaps we call it `compute_env`<|||||>LGTM, @sgugger!
|
transformers | 8,876 | closed | Resume training from checkpoint: not progressing | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.2
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [x] the official example scripts: /examples/language-modeling/run_mlm.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: BERT MLM pre-training with own dataset
## To reproduce
Steps to reproduce the behavior:
1. Run script run_mlm.py, training from scratch, and save a checkpoint.
2. Stop the training.
3. Restore the training from the checkpoint, e.g. with the code below
4. When restoring, the pre-training process is not progressing (since hours).
```cmd
python run_mlm.py --model_type bert --model_name_or_path /bert-base-v2/checkpoint-204516/
--overwrite_output_dir --config_name /bert-base-v2-config/ --tokenizer_name /bert-base-v2-config/
--train_file /train_subset.txt --validation_file /eval_subset.txt --do_train --do_eval --line_by_line
--output_dir /bert-base-v2/ --cache_dir /tmp/ --save_total_limit 300 --num_train_epochs 10
--warmup_steps 10000 --logging_steps 5000 --save_steps 11362
--per_device_train_batch_size 128 --per_device_eval_batch_size 128 --seed 42
```
Output is
```
12/01/2020 15:43:28 - INFO - __main__ - Loading tokenized dataset from file...
12/01/2020 15:47:22 - INFO - __main__ - Done.
[INFO|trainer.py:357] 2020-12-01 15:47:29,458 >> The following columns in the training set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask.
[INFO|trainer.py:357] 2020-12-01 15:47:29,459 >> The following columns in the evaluation set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask.
[INFO|trainer.py:662] 2020-12-01 15:47:32,843 >> ***** Running training *****
[INFO|trainer.py:663] 2020-12-01 15:47:32,843 >> Num examples = 145434960
[INFO|trainer.py:664] 2020-12-01 15:47:32,843 >> Num Epochs = 10
[INFO|trainer.py:665] 2020-12-01 15:47:32,843 >> Instantaneous batch size per device = 128
[INFO|trainer.py:666] 2020-12-01 15:47:32,843 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:667] 2020-12-01 15:47:32,843 >> Gradient Accumulation steps = 1
[INFO|trainer.py:668] 2020-12-01 15:47:32,843 >> Total optimization steps = 11362110
[INFO|trainer.py:681] 2020-12-01 15:47:32,846 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:682] 2020-12-01 15:47:32,846 >> Continuing training from epoch 0
[INFO|trainer.py:683] 2020-12-01 15:47:32,846 >> Continuing training from global step 204516
[INFO|trainer.py:684] 2020-12-01 15:47:32,846 >> Will skip the first 204516 batches in the first epoch
0%| | 0/11362110 [00:00<?, ?it/s]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Would expect the training to restore from 204516 and continue training.
| 12-01-2020 16:06:17 | 12-01-2020 16:06:17 | It is expected that this would take some time, since it has to skip through `204,516` batches before continuing training. It will continue progressing after that skip is done.<|||||>In the PR mentioned above, I'm adding a flag to ignore that step if you're prepared to pay the price of having the training be slightly different from a training from scratch to go faster.<|||||>That would do for my case, thanks!<|||||>With that PR everything worked as expected, thanks for the very quick turnaround!<|||||>Happy to hear!<|||||>@sgugger Sorry to bother, but I am wondering why skipping steps takes computing. I mean, the random_seed is specified, so the trainer just need to find the breakpoint of an epoch and resume, I shouldn't take much time.
So is any part of my understanding wrong?<|||||>Yes, but there is no way to be in the exact same place in the dataloaders (that have randomness with the shuffling) without going through the first epochs and then batches.<|||||>@sgugger thanks for the information, sorry to revive this issue. How long does it usually take to go through the first epochs and then batches? Half of what it took to train until that point, or less?<|||||>It depends on your data and the time needed for your preprocessing. Note that there is a progress bar in the newer versions of Transformers so you can get a sense of the remaining time. You can also skip this with the [flag `ignore_data_skip`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.ignore_data_skip) though the model will train on already seen data in this case.<|||||>I have the same issue,
```
***** Running training *****
Num examples = 2,560,000
Num Epochs = 9,223,372,036,854,775,807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 2
Total optimization steps = 160,000
Number of trainable parameters = 332,891,919
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 77000
Will skip the first 0 epochs then the first 154000 batches in the first epoch.
0%| | 0/160000 [00:00<?, ?it/s]
The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: locale, audio, input_length, sentence. If locale, audio, input_length, sentence are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.
There seems to be not a single sample in your epoch_iterator, stopping training at step 77000! This is expected if you're using an IterableDataset and set num_steps (160000) higher than the number of available samples.
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 1869.2894, 'train_samples_per_second': 1369.504, 'train_steps_per_second': 85.594, 'train_loss': 0.0, 'epoch': 43.01}
0%| | 0/160000 [31:09<?, ?it/s]Saving model checkpoint to /usr/local/bin/source/output
Configuration saved in /usr/local/bin/source/output/config.json
Model weights saved in /usr/local/bin/source/output/pytorch_model.bin
Feature extractor saved in /usr/local/bin/source/output/preprocessor_config.json
tokenizer config file saved in /usr/local/bin/source/output/tokenizer_config.json
Special tokens file saved in /usr/local/bin/source/output/special_tokens_map.json
added tokens file saved in /usr/local/bin/source/output/added_tokens.json
trainer save model!
metric: train_runtime
***** train metrics *****
epoch = 43.01
train_loss = 0.0
train_runtime = 0:31:09.28
train_samples_per_second = 1369.504
train_steps_per_second = 85.594
06/19/2023 08:28:34 - INFO - __main__ - *** Evaluate ***
***** Running Evaluation *****
Num examples: Unknown
Batch size = 8
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: locale, audio, input_length, sentence. If locale, audio, input_length, sentence are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.
Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Automatic Speech Recognition', 'type': 'automatic-speech-recognition'}, 'metrics': [{'name': 'Wer', 'type': 'wer', 'value': 1.0006485084306096}]}
```
max_steps= 160000
last checkpoint=77000
if **ignore_data_skip**=True is set, it can resume training correctly. |
transformers | 8,875 | closed | Fix mlflow parameter overflow | # What does this PR do?
This PR fixes issue #8849, where MLflow logging failed because the logged parameters were too long. The MLflow logger now fetches the limits directly from the MLflow validation utility.
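A condensed sketch of the truncation idea (illustrative only, not the exact diff; it assumes the limit is exposed in `mlflow.utils.validation`):
```python
from mlflow.utils.validation import MAX_PARAM_VAL_LENGTH  # value-length limit enforced by MLflow

def sanitize_params(params: dict) -> dict:
    # Truncate any value that mlflow.log_params would otherwise reject as too long.
    return {k: str(v)[:MAX_PARAM_VAL_LENGTH] for k, v in params.items()}
```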
<!-- Remove if not applicable -->
Fixes #8849
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 12-01-2020 15:42:45 | 12-01-2020 15:42:45 | Thanks!<|||||>Sorry again we let this sit for so long!
So since it's been a long time, the diff has gotten quite messy. Would you mind closing and re-opening a clean PR @noise-field ? Ping me on it and we'll expedite the review. Sorry again.<|||||>Closing as requested |
transformers | 8,874 | closed | Results are different when fine-tuning continues after loading model from checkpoint | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes (device: cuda:0, n_gpu: 1)
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger
@stefan-it
## Information
Model I am using (Bert, XLNet ...): bert-base-cased
The problem arises when using:
* [x] the official example scripts: run_ner_old.py
The tasks I am working on is:
* [x] my own task or dataset: token classification for a rhetoric device
## To reproduce
Steps to reproduce the behavior:
1. Run run_ner_old script and save model after one epoch (282 steps):
```
python3 ./run_ner_old.py \
--data_dir ./data/ \
--labels ./data/labels.txt \
--model_name_or_path bert-base-cased \
--output_dir ./output/ \
--max_seq_length 128 \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--save_steps 282 \
--seed 1 \
--do_train \
--do_eval
```
2. Run ner_old_script from checkpoint-282:
```
python3 ./run_ner_old.py \
--data_dir ./data/ \
--labels ./data/labels.txt \
--model_name_or_path ./output/checkpoint-282 \
--tokenizer bert-base-cased \
--output_dir ./output2/ \
--max_seq_length 128 \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--save_steps 282 \
--seed 1 \
--do_train \
--do_eval
```
3. Compare evaluation results
**First experiment:**
Run the script `run_ner_old.py` as shown above to fine-tune BERT.
I saved the model after the first epoch (282 steps).
**Second experiment:**
Run the script `run_ner_old.py` as shown above to fine-tune BERT, starting from checkpoint-282 from the first experiment:
```
[INFO|trainer.py:662] 2020-12-01 14:35:09,848 >> ***** Running training *****
[INFO|trainer.py:663] 2020-12-01 14:35:09,848 >> Num examples = 4501
[INFO|trainer.py:664] 2020-12-01 14:35:09,848 >> Num Epochs = 2
[INFO|trainer.py:665] 2020-12-01 14:35:09,849 >> Instantaneous batch size per device = 16
[INFO|trainer.py:666] 2020-12-01 14:35:09,849 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:667] 2020-12-01 14:35:09,849 >> Gradient Accumulation steps = 1
[INFO|trainer.py:668] 2020-12-01 14:35:09,849 >> Total optimization steps = 564
[INFO|trainer.py:681] 2020-12-01 14:35:09,851 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:682] 2020-12-01 14:35:09,851 >> Continuing training from epoch 1
[INFO|trainer.py:683] 2020-12-01 14:35:09,851 >> Continuing training from global step 282
[INFO|trainer.py:684] 2020-12-01 14:35:09,851 >> Will skip the first 0 batches in the first epoch
```
This seems right as the training continues from step 282 and it trains one complete epoch ("skip the first 0 batches").
But when I **compare the results**, they are slightly different:
1. experiment: eval_f1 = 0.9226747985188413
2. experiment: eval_f1 = 0.9211328976034858
Also the loss after 500 steps is already different:
1. experiment:
`{'loss': 0.09096851348876953, 'learning_rate': 5.673758865248227e-06, 'epoch': 1.773049645390071}
`
2. experiment:
`
{'loss': 0.010856814384460449, 'learning_rate': 5.673758865248227e-06, 'epoch': 1.773049645390071}
`
## Expected behavior
I would have expected both trained models to produce the same results, since the second experiment does exactly the same thing but in two steps. (The model is saved and loaded between the two epochs.)
The *checkpoint-282* directory consists of the following files:
```
config.json
optimizer.pt
pytorch_model.bin
scheduler.pt
trainer_state.json
training_args.bin
vocab.txt
```
It does not seem that there is any random initialization since I added the seed and the results do not change when running again.
Did I forget to save or load anything?
Cheers | 12-01-2020 15:22:40 | 12-01-2020 15:22:40 | Hi there. The results are slightly different because your dataloader has some randomization (the train dataloader has `shuffle=True` ) and the `Trainer` currently does not go through your dataloader for the past epochs when resuming training. So it trains starting from the global step 282 with the data of the epoch 0 of the initial training (hope that makes sense).
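(A tiny illustration of that point, added here for clarity and not part of the original reply; it only uses pytorch:)
```python
import torch

g = torch.Generator().manual_seed(42)
epoch0_order = torch.randperm(10, generator=g)  # shuffle used in epoch 0
epoch1_order = torch.randperm(10, generator=g)  # shuffle used in epoch 1

# Resuming with a freshly seeded generator and drawing once gives epoch 0's
# order again, not epoch 1's, unless that first draw is replayed.
g_resumed = torch.Generator().manual_seed(42)
first_draw_after_resume = torch.randperm(10, generator=g_resumed)
assert torch.equal(first_draw_after_resume, epoch0_order)
```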
Let me see if we can support full reproducibility without a big drop in performance (cause we don't want to loop through that epoch 0 without doing anything either).<|||||>Hej
thanks for the fast answer!
I tried it out, but the results still differ slightly (leaving out: `--ignore_data_skip`).
Your change in the code should make the results exactly the same when continue training, right?
Is it because of the training data sampler (`RandomSampler`)?
Cheers
<|||||>> Your change in the code should make the results exactly the same when continue training, right?
Yes, and this is enforced by tests in the CI. If your results still differ slightly, there might be another source of randomness not properly seeded that is responsible for those changes. |
transformers | 8,873 | closed | How to pass the attention mask as a param to model forward when using torchscript? | ## Environment info
- `transformers` version:
- Platform: Ubuntu 16.04
- Python version: 3.6.9
- PyTorch version (GPU): 1.3.0+cu100
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
## Information
I am using a Bert model downloaded from Hugging Face. I fine-tuned that model for a two-class classification task and converted it to TorchScript through `jit.trace`.
The following code shows how I got the TorchScript model:
    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained(tokenizer_dir)
    model = BertForSequenceClassification.from_pretrained(model_dir, num_labels=2, torchscript=True)
    model.eval()
    model = model.to("cuda:0")

    input_text = ["test this case", "test test this case"]
    encoding = tokenizer(input_text, return_tensors='pt', padding=True, truncation=False)
    input_ids = encoding['input_ids']
    attention_mask = encoding['attention_mask']

    tokens_tensor = torch.tensor(input_ids)
    tokens_tensor = tokens_tensor.to("cuda:0")
    traced_model = torch.jit.trace(model, tokens_tensor)
    torch.jit.save(traced_model, str(pt_path))
The following code shows how I use the TorchScript model; the input tensor is the same as in the first snippet:

    pt_model = torch.jit.load(model_path)
    pt_model.eval()
    pt_label = pt_model(input_tensor)[0]
For the normal model, I need to pass two params, the input tensor and the attention mask, like:
model(input, attention_mask=attn_mask)
But for the `torchscript`, I can't pass the attention mask to the model.
So, what's the right way to use `torchscript` to do the forward with the attention mask?
Thanks!
| 12-01-2020 14:28:25 | 12-01-2020 14:28:25 | I think you would need to compile your model with both the tokens tensor and the attention mask. Given that the attention mask is the second argument, you can pass it directly when tracing the model:
```py
traced_model = torch.jit.trace(model, [tokens_tensor, attention_mask])
```
then you can do:
```py
model(tokens_tensor, attention_mask)
```<|||||>@LysandreJik It works. Thanks for your help! |
transformers | 8,872 | closed | Deberta Tokenization | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7
### Who can help
@BigBird01 @LysandreJik
## Information
I'd like to use the new deberta model, but it seems that the tokens aren't mapped correctly?
```
from transformers import AutoTokenizer
test_string = 'hello, I am a dog'
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
print('Roberta output is: ', tokenizer.tokenize(test_string))
tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-base')
print('Deberta output is: ', tokenizer.tokenize(test_string))
```
Roberta output is: ['hello', ',', 'ĠI', 'Ġam', 'Ġa', 'Ġdog']
Deberta output is: ['31373', '11', '314', '716', '257', '3290']
I'd expect deberta to give an output similar to roberta, rather than numbers. | 12-01-2020 13:33:03 | 12-01-2020 13:33:03 | @LysandreJik any update on this?<|||||>@yaysummeriscoming To get sub words instead of numbers, you can call `tokenizer.gpt2_tokenizer.decode(tokens)`. Please take a look at [our code](https://github.com/huggingface/transformers/blob/52c9e842854a701a7d1b608600a614278b4407d3/src/transformers/tokenization_deberta.py#L396) for reference.<|||||>That did the trick, thanks! |
transformers | 8,871 | closed | Decrease Longformer window size / computational cost | Hi there,
I would like to use Longformer instead of BERT or RoBERTa for longer documents, e.g., 1024 subword units. My goal is to fit a batch of equal size on the same GPU card for all models. In my understanding, this cannot happen with the default configuration, which uses windows of 512 subword units for local attention. In other words, this is by default more computationally expensive than running BERT or RoBERTa. So I thought the solution would be to decrease the window size proportionally to the increase in input sequence length. This should lead to an equal or smaller amount of computation. I ran the following experiments on a single RTX 2080Ti:
**Train ROBERTA with `batch_size=6` and `max_len=512` (SUCCESS)**
```python
from transformers import TFLongformerModel, LongformerConfig, TFRobertaModel
import tensorflow as tf
import numpy as np
import logging
logging.getLogger("tensorflow").setLevel(logging.ERROR)
logging.getLogger("transformers").setLevel(logging.ERROR)
class Classifier(tf.keras.Model):
def __init__(self, bert_encoder, *args, **kwargs):
super(Classifier, self).__init__(*args, **kwargs)
self.classifier = tf.keras.layers.Dense(2)
self.bert_encoder = bert_encoder
def call(self, inputs):
bert_encodings = self.bert_encoder(inputs)
return self.classifier(tf.squeeze(bert_encodings[0][:, 0:1, :], axis=1))
# Train ROBERTA for 512 TOKENS
roberta = TFRobertaModel.from_pretrained('roberta-base')
roberta_classifier = Classifier(bert_encoder=roberta)
dummy_inputs = np.zeros((6, 512), dtype=np.int32)
dummy_outputs = np.zeros((6, 2), dtype=np.int32)
roberta_classifier.compile(optimizer='adam', loss='categorical_crossentropy')
roberta_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8)
print('Roberta (512) trained successfully!')
```
**Train LONGFORMER with `batch_size=6` and `max_len=512` and `attention_window=512` (SUCCESS)**
```python
# Train LONG-FORMER for 512 TOKENS
config = LongformerConfig.from_pretrained('allenai/longformer-base-4096')
config.attention_window = [512] * 12
longformer = TFLongformerModel(config)
longformer_classifier = Classifier(bert_encoder=longformer)
dummy_inputs = np.zeros((6, 512), dtype=np.int32)
dummy_outputs = np.zeros((6, 2), dtype=np.int32)
longformer_classifier.compile(optimizer='adam', loss='categorical_crossentropy')
longformer_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8)
print('Longformer (512) trained successfully!')
```
**Train LONGFORMER with `batch_size=6` and `max_len=1024` and `attention_window=128` (FAILED-OOM)**
```python
# Train LONG-FORMER for 1024 TOKENS
config = LongformerConfig.from_pretrained('allenai/longformer-base-4096')
config.attention_window = [128] * 12
longformer = TFLongformerModel(config)
longformer_classifier = Classifier(bert_encoder=longformer)
dummy_inputs = np.zeros((6, 1024), dtype=np.int32)
dummy_outputs = np.zeros((6, 2), dtype=np.int32)
longformer_classifier.compile(optimizer='adam', loss='categorical_crossentropy')
longformer_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8)
print('Longformer (1024) trained successfully!')
```
The last script is failing with an OOM issue. The same happens for `attention_window in [32,64]`
@patrickvonplaten do I miss something? Thanks! | 12-01-2020 12:43:49 | 12-01-2020 12:43:49 | Hey @iliaschalkidis,
thanks for your issue! The memory usage in Longformer does not decrease linearly when reducing the attention_window...but I'm a bit surprised that you are experiencing OOM in your set-up...Does the same happen for your in eager mode? I'll try to look into it a bit next week. One thing that would be of great help is if you find time to benchmark the memory usage of `TFLongformer`, for:
- eager mode
- compiled
for different settings of the window size<|||||>Hi @patrickvonplaten, how do you recommend to perform this benchmarking? Any suggestion (best practice/ tool)? I also had the impression that TF2 is in eager execution by default...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,870 | closed | Token classification example only returns labels as -100 for longformer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-1029-aws-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?:4 x V100 (16GB)
- Using distributed or parallel set-up in script?: parallel
Also:
tokenizers==0.9.4
datasets==1.1.3
I should note I had the same issue in transformers 3.5.1 too
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. -->
tokenizers: @mfuntowicz
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I have made very modifications to the token-classification example ([see here](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)) to allow me to use my own custom dataset for NER. I have 3 labels O, B-Org, I-ORG.
When processing my inputs with models like `bert-base-cased`, everything runs smoothly, however, when I make the switch to the `allenai/longformer-base-4096` model, the `tokenize_and_align_labels()` that runs via `datasets.map()` only returns labels of -100 for every token.
## To reproduce
Patch file for original `run_ner.py`
```
20d19
<
44d42
<
199d196
<
206,209c203,204
< text_column_name = "tokens" if "tokens" in column_names else column_names[0]
< label_column_name = (
< f"{data_args.task_name}_tags" if f"{data_args.task_name}_tags" in column_names else column_names[1]
< )
---
> text_column_name = "words"
> label_column_name = "ner"
213,217c208,213
< def get_label_list(labels):
< unique_labels = set()
< for label in labels:
< unique_labels = unique_labels | set(label)
< label_list = list(unique_labels)
---
> def get_label_list(label_lists):
> label_list = list(set(
> [label
> for label_list in label_lists
> for label in label_list]
> ))
227a224
> id_to_label = {i: l for i, l in enumerate(label_list)}
239a237,238
> id2label=id_to_label,
> label2id=label_to_id,
244a244
> add_prefix_space=True,
```
minimal `train.json` data
```json
{
"id": 169,
"words": [
"My", "favourite", "thing", "about", "the", "market", "was", "Manteigaria", "which", "sells", "the", "best", "pasteis", "de", "Nata", "in", "the", "city", "in", "my", "opinion", "."
],
"ner": [
"O", "O", "O", "O", "O", "O", "O", "B-Organisation", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"
]
}
```
Example `train-config.json`
```json
{
"train_file": "./train.json",
"model_name_or_path": "allenai/longformer-base-4096",
"output_dir": "./output",
"max_seq_length": 4096,
"num_train_epochs": 3,
"pad_to_max_length": false,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
"save_steps": 250,
"eval_steps": 250,
"seed": 1,
"do_train": true,
"do_eval": false,
"do_predict": false,
"fp16": true,
"evaluation_strategy": "steps",
"save_total_limit": 1,
}
```
Steps to reproduce the behavior:
1. Apply the patch to the `run_ner.py` file found in the token_classification example on the master branch
2. run `python run_ner.py train-config.json`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
If you print `tokenized_inputs["labels"]` produced by `tokenize_and_align_labels()` you'll see that it is a list of `-100` when using `allenai/longformer-base-4096`. However, if you change this to `bert-base-cased`, it will produce the labels `[[-100, 1, 1, 1, 1, 1, 1, 1, 0, -100, -100, -100, 1, 1, 1, 1, 1, -100, 1, 1, -100, 1, 1, 1, 1, 1, 1, 1, -100]]`, which is correct as `1` is `O` and `0` is `B-Organisation`. (There won't be an `I-Organisation` as this minimal reproducible example doesn't have one). | 12-01-2020 12:40:15 | 12-01-2020 12:40:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,869 | closed | Exporting ALBERT model to onnx increases model size by 7x | I'm trying to export `albert-base-v2` model to onnx using
`python -m transformers.convert_graph_to_onnx --framework pt --model albert-base-v2 --quantize albert.onnx --opset 12`
The original pytorch model size is around 45 MB (https://huggingface.co/albert-base-v2), but the exported model size is around 340 MB.
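For context, a quick back-of-the-envelope check (added as an illustration; the arithmetic assumes the exported graph materialises the shared ALBERT block once per hidden layer):
```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v2")
total = sum(p.numel() for p in model.parameters())
emb = sum(p.numel() for p in model.embeddings.parameters())
shared_block = total - emb  # roughly the single transformer block shared across all 12 layers

print(f"total: {total/1e6:.1f}M params, shared block: {shared_block/1e6:.1f}M params")
# Writing that shared block out 12 times instead of once adds ~11 extra copies,
# which at fp32 is roughly the jump from ~45 MB to the observed ~340 MB file.
```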
How do I keep the model's size same? | 12-01-2020 11:40:04 | 12-01-2020 11:40:04 | @mfuntowicz might have an idea. I'm guessing that what it does is copy each layer, while all the layers have shared weights and should point to the same tensor.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,868 | closed | Transfoxl seq classification | This PR implements Sequence classification for Transformer XL model
`TransfoXLForSequenceClassification` uses the last token in order to do the classification, as other causal models (e.g. GPT, GPT-2) do.
Fixes #7623 (Partially)
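A short usage sketch (added for illustration; the checkpoint id, text and `num_labels` are placeholders):
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLForSequenceClassification

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103", num_labels=2)

input_ids = tokenizer("the movie was great", return_tensors="pt")["input_ids"]
logits = model(input_ids).logits        # classification is read off the last token
predicted_class = torch.argmax(logits, dim=-1)
```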
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik | 12-01-2020 10:09:34 | 12-01-2020 10:09:34 | |
transformers | 8,867 | closed | length_penalty not influencing results (Bart, Pegasus) | Hello,
I am experimenting with the generative parameters of the two models Bart and Pegasus. In particular, I am having trouble with the `length_penalty` parameter, since changing it does not change the output of the model.
I am summarizing two different chapters of a book (# tokens around 1k) and this is the code I am using:
```
model.generate(
b0ch1sec1_text_enc,
min_length = 150,
max_length = 350,
num_beams = 2,
length_penalty = lp,
early_stopping = True)[0]
```
With `lp` sweeping from 0.1 to 2, and the model being either `bart-large-cnn` or `pegasus-large`.
Do you have any idea why the output does not change at all? | 12-01-2020 10:00:02 | 12-01-2020 10:00:02 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 8,866 | closed | different embedding weights for base-uncased with different transformers versions | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0, 3.4.0 and 2.9.0
- Platform:
- Python version: 3.7.0
- PyTorch version: 1.4.0
- Tensorflow version: 2.2.0
## Information
Model I am using: Bert
The problem arises when using my own scripts. I trained a LayoutLM model by using the original Unilm repo (https://github.com/microsoft/unilm/tree/master/layoutlm) and obtained pretty good results (± 0.9 f1 score). When the Huggingface implementation came out, I retrained the model with the same dataset, parameters and seed and got rubbish results (less than 0.2 F1 score). After investigating, I found that the weights of the embeddings of the pretrained model, loaded at the beginning of training, are different for different transformers versions. The weights are also different for the final trained model: a model trained with the original implementation gives different prediction results for the same data when predicting using the Huggingface implementation, due to the weights being different after loading.
## To reproduce
Steps to reproduce the behavior:
Huggingface code:
```
from transformers import LayoutLMConfig, LayoutLMForTokenClassification
pretrained_model_path = "models/base-uncased"
config = LayoutLMConfig.from_pretrained(pretrained_model_path, num_labels=len(25))
model = LayoutLMForTokenClassification.from_pretrained(
pretrained_model_path, from_tf=bool(".ckpt" in pretrained_model_path), config=config
)
print(model.base_model._modules["embeddings"]._modules["word_embeddings"].weight)
"""transformers 4.0.0:
Parameter containing:
tensor([[-0.0211, -0.0056, 0.0198, ..., 0.0119, 0.0074, -0.0048],
[-0.0268, 0.0006, 0.0310, ..., -0.0195, -0.0534, 0.0284],
[ 0.0234, 0.0026, -0.0024, ..., -0.0074, -0.0015, -0.0212],
...,
[-0.0274, -0.0074, 0.0161, ..., -0.0256, 0.0189, -0.0328],
[-0.0350, -0.0304, 0.0087, ..., -0.0349, -0.0086, 0.0229],
[-0.0068, -0.0077, -0.0084, ..., -0.0181, -0.0111, 0.0385]],
requires_grad=True)
"""
"""transformers 3.4.0:
Parameter containing:
tensor([[ 0.0298, -0.0229, -0.0033, ..., 0.0097, -0.0179, -0.0065],
[-0.0098, 0.0150, -0.0283, ..., -0.0424, -0.0031, -0.0135],
[ 0.0122, 0.0038, -0.0066, ..., -0.0261, 0.0167, 0.0176],
...,
[ 0.0037, 0.0001, 0.0096, ..., -0.0037, -0.0018, 0.0067],
[ 0.0274, 0.0076, 0.0065, ..., 0.0084, -0.0230, -0.0011],
[-0.0155, -0.0155, -0.0028, ..., -0.0140, 0.0084, -0.0016]],
requires_grad=True)
"""
```
With original Layoutlm implementation, transformers 2.9.0:
```
from unilm.layoutlm.layoutlm import LayoutlmConfig, LayoutlmForTokenClassification
pretrained_model_path = "models/base-uncased"
config = LayoutlmConfig.from_pretrained(
pretrained_model_path,
num_labels=len(25),
)
model = LayoutlmForTokenClassification.from_pretrained(
pretrained_model_path,
from_tf=bool(".ckpt" in pretrained_model_path),
config=config,
)
print(model.base_model._modules["embeddings"]._modules["word_embeddings"].weight)
"""
Parameter containing:
tensor([[-0.0111, -0.0777, 0.0293, ..., -0.0323, -0.0190, 0.0403],
[-0.0579, -0.0331, -0.0399, ..., -0.0248, -0.0278, -0.0398],
[-0.0261, -0.0383, -0.0225, ..., 0.0011, -0.0803, -0.0019],
...,
[-0.0186, -0.0593, -0.0167, ..., -0.0243, -0.0096, 0.0050],
[-0.0555, -0.0274, 0.0049, ..., -0.0206, -0.0172, -0.0241],
[-0.0328, -0.0788, -0.0211, ..., -0.0187, -0.0497, 0.0444]],
requires_grad=True)
"""
```
## Expected behavior
Get the same weights regardless of the transformers version used.
| 12-01-2020 09:40:28 | 12-01-2020 09:40:28 | Facing the same issue. A reply on this is highly appreciated.<|||||>can [this](https://github.com/huggingface/transformers/issues/8524#issuecomment-753876838) be your solution? Hope it helps...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I had the same issue for `GPT2LMHeadModel`. In my case, I found the solution. Let `hf3_model_dir` be the Hugging Face v3.x model directory that you give to `load_pretrained`. Inside this directory is the saved pytorch model file called `pytorch_model.bin`. Let's load this file directly using pytorch:
```
state_dict = torch.load('pytorch_model.bin')
```
Now check the values of these two entries:
```
state_dict['transformer.wte.weight']
state_dict['lm_head.weight']
```
I found that they were different. However, they should be the same `vocab_size x embedding_size` matrix. Indeed, let's actually load the model:
```
model = transformers.GPT2LMHeadModel.from_pretrained(hf3_model_dir)
```
And check the following values:
```
model.transformer.wte.weight
model.lm_head.weight
```
You will find that they are the same. However,
in Hugging Face v3.x, they are both equal to `state_dict['lm_head.weight']`
in Hugging Face v4.x, they are both equal to `state_dict['transformer.wte.weight']`.
So that's the cause of the problem. To get the same behavior in Hugging Face v4.x as you get in Hugging Face v3.x, I manually set both equal to `state_dict['lm_head.weight']`.<|||||>As a further comment, for models saved under Hugging Face v4.x, `state_dict['transformer.wte.weight']` and `state_dict['lm_head.weight']` are both equal as they should be.
For models saved under Hugging Face v3.x, `state_dict['transformer.wte.weight']` ends up being (I believe) just random garbage that is harmless if reloaded using Hugging Face v3.x but can be very harmful if reloaded using Hugging Face v4.x |
transformers | 8,865 | closed | can the BertModel convert to onnx? whether any one had done sucessfully ? | 12-01-2020 09:18:07 | 12-01-2020 09:18:07 | 2 resources in the thread linked by valhalla: https://discuss.huggingface.co/t/how-to-apply-pruning-on-a-bert-model/1658/5<|||||>> 2 resources in the thread linked by valhalla: https://discuss.huggingface.co/t/how-to-apply-pruning-on-a-bert-model/1658/5
Thank you very much for your warm reply, I will study it. |
|
transformers | 8,864 | closed | AttributeError: 'NoneType' object has no attribute 'from_pretrained' | This code was working yesterday but doesn't work today:
```py
from transformers import AutoTokenizer
AutoTokenizer("Helsinki-NLP/opus-mt-en-fr")
``` | 12-01-2020 01:59:44 | 12-01-2020 01:59:44 | Same here a couple of hours ago<|||||>1. Hi, could you please provide the information related to your environment?
2. When you say it was working yesterday but is not working today, do you mean to say you've upgraded to version v4.0.0 released yesterday? If this is so, you may be obtaining the following error message: `AttributeError: 'NoneType' object has no attribute 'from_pretrained'`. This would be because you do not have `sentencepiece` installed.
3. Are you sure this worked previously? This should never have worked, as `AutoTokenizer` cannot be initialized like this, but has to be instantiated from the `from_pretrained` method:
```py
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
```
which works on v4.0.0 and on `master`, as long as you have SentencePiece installed.<|||||>Putting a better error message in #8881.<|||||>Right, I was using
```py
AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
```
Thanks, `pip install sentencepiece` fixed the issue!
It looks like the tokenizer previously output torch tensors and now outputs lists. Is this intended? It breaks existing code.
Tokenizers can still handle torch tensors, you need to specify that you want them though:
```py
tokenizer(xxx, return_tensors="pt")
```
I guess in your situation it has to do with the `prepare_seq2seq_batch`:
```py
tokenizer.prepare_seq2seq_batch(xxx, return_tensors="pt")
```<|||||>Thanks! |
transformers | 8,863 | closed | Unwanted left shift of target tokens in `get_nll` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4
- Platform: ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
RAG: @patrickvonplaten, @lhoestq
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
I am training RAG on the FEVER dataset, trying to generate one token from `['[SUPPORTS]', '[REFUTES]', '[INCONCLUSIVE]'. My loss function is always zero, I think because of the shift left that occurs in `RagTokenForGeneration.get_nll()`, which I think should only happen if special tokens are included.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Training RAG on FEVER dataset
## To reproduce
Steps to reproduce the behavior:
1. Simply train `RAGTokenForGeneration` on anything, using only one token with no special tokens. Loss is zero
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 11-30-2020 23:04:13 | 11-30-2020 23:04:13 | @patrickvonplaten or @lhoestq might have an idea.<|||||>Hey @JamesDeAntonis yes this is expected. The same behavior would occur for GPT2 if only one token is provided as the labels. You should at least add an EOS token at the end to `labels` (so that you have two labels tokens) to make sure the loss is not zero.
If you cannot do this you will have to fork the repo and manually change the `get_nll()` function.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,862 | closed | TypeError: forward() got an unexpected keyword argument 'past' | TypeError: forward() got an unexpected keyword argument 'past'
```
text1 = request.form['rawtext']
m = text1
text = tokenizer.encode(text1)
myinput, past = torch.tensor([text]), None
logits, past = model(myinput, past = past)
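        # Note added for illustration (not part of the original snippet): on transformers v4.x
        # the `past` keyword was renamed to `past_key_values`, which is what triggers the
        # TypeError below. A roughly v4-compatible version of the call above would be:
        #   outputs = model(myinput, past_key_values=past, return_dict=False)
        #   logits, past = outputs[0], outputs[1]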
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(780)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
for i in range(780):
f = ('Generated {}: {}'.format(i, best_words[i]))
print(f)
```
```
/content/GLPAPP
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Running on http://19bd405035a7.ngrok.io
* Traffic stats available on http://127.0.0.1:4040
127.0.0.1 - - [30/Nov/2020 22:34:16] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/jquery.min.js HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/css/main.css HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/breakpoints.min.js HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/main.js HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/util.js HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/browser.min.js HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/css/fontawesome-all.min.css HTTP/1.1" 200 -
127.0.0.1 - - [30/Nov/2020 22:34:22] "GET /favicon.ico HTTP/1.1" 404 -
[2020-11-30 22:34:23,393] ERROR in app: Exception on /predict [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "<ipython-input-8-a5d8492f7c0c>", line 36, in predict
logits, past = model(myinput, past = past)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'past'
127.0.0.1 - - [30/Nov/2020 22:34:23] "POST /predict HTTP/1.1" 500 -
``` | 11-30-2020 22:36:55 | 11-30-2020 22:36:55 | Hello, it seems you've upgraded your library from a 3.x version to 4.0.0. I invite you to consult the migration guide [here (deprecated attributes)](https://huggingface.co/transformers/migration.html#removed-some-deprecated-attributes) or to pin your `transformers` on version 3: `transformers==3`.<|||||>Thank you! |
transformers | 8,861 | closed | Add warnings for incompatible generation parameters | # What does this PR do?
While testing various generation configurations, I got confused when beam_sample's outputs (i.e. `num_beams > 1, do_sample=True`) failed to change with different settings of `top_k, top_p, temperature`. Then I realized that in my call to `generate()`, `do_sample` was still set to false even though I was cycling through various settings of top_k, top_p and temperature.
Generate (and its helper methods) already contain some parameter compatibility checks. This adds a few more checks:
- `num_beams` must be >= 1
- when `do_sample` is not set, warn the user if any of `top_p, top_k, temperature` are not None (since they will have no effect)
- if `num_beams` is 1 (no beam search), warn user if any of `early_stopping, length_penalty` are not None (since they will have no effect)
- if an invalid set of params `num_beams` and `do_sample` are passed, raise ValueError. Note that since we also add a check for `num_beams < 1`, this final value error will never be raised, but this prevents the `generate` function from falling off the end of the ifelse chain if something is altered in the future.
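A condensed sketch of the kind of checks listed above (illustrative only, not the exact code added in this PR):
```python
import warnings

def check_generation_args(num_beams, do_sample, top_k=None, top_p=None,
                          temperature=None, early_stopping=None, length_penalty=None):
    if num_beams < 1:
        raise ValueError("`num_beams` must be >= 1.")
    if not do_sample and any(v is not None for v in (top_k, top_p, temperature)):
        warnings.warn("`top_k`, `top_p` and `temperature` have no effect when `do_sample=False`.")
    if num_beams == 1 and any(v is not None for v in (early_stopping, length_penalty)):
        warnings.warn("`early_stopping` and `length_penalty` have no effect when `num_beams=1`.")
```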
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
No doc changes necessary
- [ ] Did you write any new necessary tests?
ran tests, but no new functionality created, just warning messages
## Who can review?
Please tag fewer than 3 people.
Text Generation: @patrickvonplaten | 11-30-2020 22:33:15 | 11-30-2020 22:33:15 | Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.
And since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?<|||||>> Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.
>
> And since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?
Regarding reading from config - am I missing something or do these never get checked?
@patrickvonplaten <|||||>> > Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.
> > And since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?
>
> Regarding reading from config - am I missing something or do these never get checked?
>
> @patrickvonplaten
https://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/src/transformers/generation_utils.py#L240<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,860 | closed | Prevent BatchEncoding from blindly passing casts down to the tensors it contains | # What does this PR do?
This PR prevents `BatchEncoding.to` from passing down things which aren't devices to the tensors it contains. Previously it would pass down all the arguments, and as the `to` method in pytorch can also cast the arguments to different types it's used blindly by other packages (e.g. Nvidia's Apex). This caused an issue where when using Apex's AMP support with `O2` or greater it would cast the token indexes from a `LongTensor` to a `HalfTensor` truncating our vocab at 65k and rounding most of the words to the nearest 8th word (if you blindly insert the cast back in in the embedding layer, which the warning says to do).
The doc for `BatchEncoding.to` says it is only for moving the encoding and the tensors it contains between devices, but as the type checking isn't on by default it can behave like a regular pytorch `to` method and accept cast arguments that it passes down to the tensors it contains.
Fixes #6582
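A minimal sketch of the guard (illustrative only; the actual patch lives in `BatchEncoding.to`, and the logging setup here is an assumption):
```python
import logging
import torch

logger = logging.getLogger(__name__)

def to(self, device):
    """Only move between devices; refuse the dtype casts that e.g. Apex O2 passes down blindly."""
    if isinstance(device, (str, torch.device, int)):
        self.data = {k: v.to(device=device) for k, v in self.data.items()}
    else:
        logger.warning(f"Attempting to cast a BatchEncoding to type {str(device)}. This is not supported.")
    return self
```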
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #6582
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
There are no docs or tests changes as the change makes the method conform with its currently documented behaviour.
@LysandreJik
| 11-30-2020 22:05:28 | 11-30-2020 22:05:28 | black complained about the style after the update, so I fixed it and squashed the commits again.<|||||>Thank you @Craigacp! |
transformers | 8,859 | closed | transformers/trainer.py stops after some iterations for iterative dataloaders. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
Trainer: @sgugger
Text Generation: @patrickvonplaten @TevenLeScao
T5: @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
## Information
Hi
I am using the dataloader below; after the first epoch it finishes and the trainer does not continue to max_steps. Could you point me to the issue? I set is_sized_dataset to False. Thank you.
```
class TaskDataLoader:
"""Wrapper around dataloader to keep the task names."""
def __init__(self, task_name, dataset, batch_size=8,
collate_fn=None, drop_last=False, num_workers=0, sampler=None):
self.dataset = dataset
self.task_name = task_name
self.data_loader = DataLoader(self.dataset,
batch_size=batch_size,
sampler=sampler,
collate_fn=collate_fn,
drop_last=drop_last,
num_workers=num_workers)
def __len__(self):
return len(self.data_loader) #self.dataset.num_rows
def __iter__(self):
for batch in self.data_loader:
yield batch
class MultiTaskDataLoader:
"""Given a dictionary of task: dataset, returns a multi-task dataloader
which uses temperature sampling to sample different datasets."""
def __init__(self, tasks_to_datasets, batch_size=8, collate_fn=None,
drop_last=False, num_workers=0, temperature=100.0):
# Computes a mapping from task to dataloaders.
self.task_to_dataloaders = {}
for task, dataset in tasks_to_datasets.items():
dataloader = TaskDataLoader(task, dataset, batch_size,
collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers)
self.task_to_dataloaders.update({task: dataloader})
self.tasknames = list(self.task_to_dataloaders.keys())
# Computes the temperature sampling weights.
self.sampling_weights = self.temperature_sampling(self.dataloader_sizes.values(), temperature)
self.dataiters = {k: cycle(v) for k, v in self.task_to_dataloaders.items()}
def temperature_sampling(self, dataset_sizes, temp):
total_size = sum(dataset_sizes)
weights = np.array([(size / total_size) ** (1.0 / temp) for size in dataset_sizes])
return weights/np.sum(weights)
@property
def dataloader_sizes(self):
if not hasattr(self, '_dataloader_sizes'):
self._dataloader_sizes = {k: len(v) for k, v in self.task_to_dataloaders.items()}
return self._dataloader_sizes
def __len__(self):
return sum(v for k, v in self.dataloader_sizes.items())
def __iter__(self):
outputs = {}
for i in range(len(self)):
taskname = np.random.choice(self.tasknames, p=self.sampling_weights)
dataiter = self.dataiters[taskname]
outputs["batch"] = next(dataiter)
outputs["task"] = taskname
yield outputs
class Trainer():
"""This is the trainer class which is responsible for distributing the data
in case of multiple TPUs/GPUs."""
def __init__(self, dataset_names_to_datasets):
self.dataset_names_to_datasets = dataset_names_to_datasets
self.batch_size = 8
self.local_rank = -1 # this is not -1 in case of multi-gpu
self.collate_fn = None
self.drop_last = False
self.num_workers = 0
def get_sharded_data(self, num_replicas, rank):
"""Returns the sharded data belonging to the given rank."""
sharded_dataset_names_to_datasets = {}
for dataset_name, dataset in self.dataset_names_to_datasets:
sharded_data = dataset.shard(num_replicas, rank)
sharded_dataset_names_to_datasets.update({dataset_name: sharded_data})
return sharded_dataset_names_to_datasets
def get_train_dataset_shards(self):
"""In case of multiprocessing, returns the sharded data for the given rank."""
if is_torch_tpu_available():
if xm.xrt_world_size() > 1:
return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
elif self.local_rank != -1:
return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
else:
return self.dataset_names_to_datasets
def get_train_dataloader(self):
"""Returns the multi-task dataloader, each batch belongs
to one task dataset."""
dataset_names_to_datasets = self.get_train_dataset_shards()
dataloader = MultiTaskDataLoader(dataset_names_to_datasets,
batch_size=self.batch_size,
collate_fn=self.collate_fn,
drop_last=self.drop_last,
num_workers=self.num_workers)
return dataloader
```
 | 11-30-2020 21:34:06 | 11-30-2020 21:34:06 | Could you please tell me what format iterative dataloaders should have for the trainer function? In the current implementation, this just runs to the end of the dataloader's length and then terminates; it does not loop again. Could you explain how I can use trainer.py with iterative dataloaders, please? Thanks.<|||||>It's hard to know what's going on without seeing the command/script you are executing. In particular, the `Trainer` logs a lot of info regarding the number of steps/epochs at the beginning of training that could be useful to debug this.<|||||>Hi @sgugger, I spent really the whole day on this, long hours continuously, and cannot see what is going on; this really needs someone with more expertise. It is hard for me to see the reason. To me it looks like callbacks are changing the dataloader, but I am not sure where it is happening.
<|||||>@sgugger I added the codes in https://github.com/rabeehk/debug , here is how to run:
```
pip install -r requirements.txt
python setup.py develop
cd seq2seq
python finetune_t5_trainer.py configs/mrpc_adapter_local.json
```
The result of running the code is that after epoch 1 the dataloader is no longer called, resulting in the labels not being inside the batch. I made the test case small so it runs fast. Could you have a look? This is my only hope of fixing this issue. Thank you.
```
### epochs_trained, num_train_epochs 0 4000
#### epoch 0
step 0 dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])
### in the loss dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])
@@@ after
#### epoch 1
step 0 dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])
### in the loss dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 250, in <module>
main()
File "finetune_t5_trainer.py", line 183, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 789, in train
tr_loss += self.training_step(model, inputs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 1141, in training_step
loss = self.compute_loss(model, inputs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/t5_trainer.py", line 339, in compute_loss
labels = inputs.pop("labels")
KeyError: 'labels'
0%|
```
<|||||>Hi
Here is the solution. The way the trainer works is with one iterable dataset and `max_steps`. The issue was that `itertools.cycle` caches the elements it has already yielded, so after the first epoch the inputs that had been modified in place were reused on the next pass and the training crashed. It was fixed by defining `cycle` as follows (a short stand-alone demonstration of the caching behaviour is sketched further below):
```
def cycle(iterable):
while True:
for x in iterable:
yield x
```
and iterating over max_steps in the MultiTaskDataLoader.<|||||>I'm really sorry to bother you. Could you please tell me how to modify it specifically? I have also encountered this problem. |
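A short stand-alone demonstration of why the original `itertools.cycle` caused this (minimal sketch):
```python
from itertools import cycle
# itertools.cycle replays the objects it has already yielded, so an in-place
# mutation (like inputs.pop("labels") inside training_step) survives into the
# next "epoch":
batches = [{"input_ids": [1, 2], "labels": [3]}]
cached = cycle(batches)
next(cached).pop("labels")
print(next(cached).keys())  # dict_keys(['input_ids']) -- 'labels' is already gone
# The generator below instead restarts iteration over the underlying object; for
# a DataLoader that means collate_fn rebuilds fresh batch dicts on every pass,
# so pops performed during one epoch cannot leak into the next.
def re_iterating_cycle(iterable):
    while True:
        for x in iterable:
            yield x
```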
transformers | 8,858 | closed | [trainer] add distributed_env to TrainingArguments | As discussed in https://github.com/huggingface/transformers/pull/8823, it's not simple to check whether the downstream code is running under distributed mode or not (currently it requires checking `self.args.local_rank != -1`, which is far from obvious).
So we were discussing adding a flag like `distributed_env` so that the downstream code could do a much simpler, intuitive check.
I'm not sure whether we need just True/False for ddp or whether we also need to have another flag if we are under DP as well?
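For illustration, a rough sketch of what such a flag could look like (the property names here are placeholders for this sketch, not the final API):
```python
from dataclasses import dataclass
@dataclass
class TrainingArgumentsSketch:
    local_rank: int = -1
    n_gpu: int = 1
    @property
    def is_distributed(self) -> bool:
        # DDP: launched with a distributed launcher, one process per device
        return self.local_rank != -1
    @property
    def is_data_parallel(self) -> bool:
        # DP: a single process driving several GPUs
        return self.local_rank == -1 and self.n_gpu > 1
```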
@sgugger | 11-30-2020 21:14:19 | 11-30-2020 21:14:19 | |
transformers | 8,857 | closed | keys_to_ignore_at_inference -> output_keys_to_ignore_at_inference | # What does this PR do?
@patrickvonplaten mentioned in #8633 he was not happy with a name I picked, so I changed it. | 11-30-2020 20:54:08 | 11-30-2020 20:54:08 | Not sure this is really worth the hassle then.<|||||>> Not sure this is really worth the hassle then.
Agree, let's leave it. It's just a personal cosmetic change, so not worth the change. |
transformers | 8,856 | closed | Make the big table creation/check platform independent | # What does this PR do?
As @jplu mentioned in #8813, the check that the big table of models/tokenizers is up-to-date (done in `make quality`) requires all three backends installed (plus tokenizers and sentencepiece).
In passing, it adds aliases `MT5Tokenizer` and `MT5TokenizerFast` (to `T5Tokenizer` and `T5TokenizerFast` respectively) because otherwise the script does not detect the tokenizers associated to this model, cc @patrickvonplaten | 11-30-2020 20:48:52 | 11-30-2020 20:48:52 | |
transformers | 8,855 | closed | KeyError: 'labels' in training_step in transformers/trainer.py | ## Environment info
- `transformers` version: 3.5.1
- Platform:
- Python version: 2.7
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
T5: @patrickvonplaten
examples/seq2seq: @patil-suraj
## Information
I am using the finetune_seq2seq model. The issue arises inside `training_step` in the trainer function, and it gives an error that there is no "labels" key inside the batch; please find the stack trace below. I spent the whole day on this and could not figure out why the dataloader does not return the labels. I would be grateful if you could point me to some possible reasons why this behaviour might happen and help me figure it out. Thanks.
```
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 250, in <module>
main()
File "finetune_t5_trainer.py", line 183, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 784, in train
tr_loss += self.training_step(model, inputs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 1125, in training_step
loss = self.compute_loss(model, inputs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/t5_trainer.py", line 338, in compute_loss
labels = inputs.pop("labels")
KeyError: 'labels'
```
| 11-30-2020 20:19:57 | 11-30-2020 20:19:57 | This happens in fact after the first epoch, could you think of the reason why this is the case? I tested the dataloader alone and it generates the epochs properly for any number of epochs.<|||||>during the first epoch my batches have all info
`batch inside multitask dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])`
After the first epoch, they are missing the labels. My dataloader has an inner dataloader, and I checked that it is not called anymore after epoch 1.
`batch inside multitask dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])`
Thanks a lot in advance. I am really struggling with this issue and would appreciate any help/thoughts on this.
<|||||>Here is the structure of the multi-task dataloader, which is my train_dataloader. Could you point me to what might happen after the first epoch? Could you point me to any changes you introduce to the train dataloader after epoch 1? Thanks
```
# Imports needed to run this snippet (the Trainer wrapper used in __main__ below
# appears to be the user's class from the companion snippet in #8859, not transformers.Trainer):
from itertools import cycle
import numpy as np
from datasets import load_dataset
from torch.utils.data import DataLoader
class TaskDataLoader:
"""Wrapper around dataloader to keep the task names."""
def __init__(self, task_name, dataset, batch_size=8,
collate_fn=None, drop_last=False, num_workers=0, sampler=None):
self.dataset = dataset
self.task_name = task_name
self.data_loader = DataLoader(self.dataset,
batch_size=batch_size,
sampler=sampler,
collate_fn=collate_fn,
drop_last=drop_last,
num_workers=num_workers)
def __len__(self):
return len(self.data_loader)
def __iter__(self):
for batch in self.data_loader:
print("### batch inside taskdataloader ", batch.keys())
yield batch
class MultiTaskDataLoader:
"""Given a dictionary of task: dataset, returns a multi-task dataloader
which uses temperature sampling to sample different datasets."""
def __init__(self, tasks_to_datasets, batch_size=8, collate_fn=None,
drop_last=False, num_workers=0, temperature=100.0):
# Computes a mapping from task to dataloaders.
self.task_to_dataloaders = {}
for task, dataset in tasks_to_datasets.items():
dataloader = TaskDataLoader(task, dataset, batch_size,
collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers)
self.task_to_dataloaders.update({task: dataloader})
self.tasknames = list(self.task_to_dataloaders.keys())
# Computes the temperature sampling weights.
self.sampling_weights = self.temperature_sampling(self.dataloader_sizes.values(), temperature)
self.dataiters = {k: cycle(v) for k, v in self.task_to_dataloaders.items()}
def temperature_sampling(self, dataset_sizes, temp):
total_size = sum(dataset_sizes)
weights = np.array([(size / total_size) ** (1.0 / temp) for size in dataset_sizes])
return weights/np.sum(weights)
@property
def dataloader_sizes(self):
if not hasattr(self, '_dataloader_sizes'):
self._dataloader_sizes = {k: len(v) for k, v in self.task_to_dataloaders.items()}
return self._dataloader_sizes
def __len__(self):
return sum(v for k, v in self.dataloader_sizes.items())
def num_examples(self):
return sum(len(dataloader.dataset) for dataloader in self.task_to_dataloaders.values())
def __iter__(self):
outputs = {}
for i in range(len(self)):
taskname = np.random.choice(self.tasknames, p=self.sampling_weights)
dataiter = self.dataiters[taskname]
#outputs["batch"] = next(dataiter)
#outputs["task"] = taskname
#outputs = next(dataiter)
#outputs["task"] = taskname
outputs = next(dataiter)
print("### batch inside multitask ", outputs.keys())
yield outputs
# Example how this dataloader works.
if __name__ == "__main__":
batch_size = 10
num_shards = 2
rank = 0
dataset1 = load_dataset('glue', 'rte', split="train[:16]")
dataset2 = load_dataset('glue', 'cola', split="train[:32]")
trainer = Trainer({'dataset1': dataset1, 'dataset2': dataset2})
dataloader = trainer.get_train_dataloader()
print("### length ", len(dataloader))
for epoch in range(1000):
for i, batch in enumerate(dataloader): #islice(dataloader, 5):
print("## epoch ", epoch, " i ", i) #batch) #batch)
```
<|||||>solved in #8859 |
transformers | 8,854 | closed | Fix interaction of return_token_type_ids and add_special_tokens | Fix https://github.com/huggingface/transformers/issues/8578
It shouldn't raise a warning if `return_token_type_ids` is set to `False`. @thomwolf am I missing something here? | 11-30-2020 18:57:03 | 11-30-2020 18:57:03 | |
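For reference, a quick illustration of the behaviour this targets (checkpoint chosen arbitrarily):
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("hello world", return_token_type_ids=False)
print("token_type_ids" in enc)  # False -- and no warning should be emitted here
```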
transformers | 8,853 | closed | [CI] skip docs-only jobs take #2 | So we discovered CircleCI has a problem and `pipeline.git.base_revision` is unreliable - not always set - breaking the test. https://github.com/huggingface/transformers/pull/8826#issuecomment-735972196
We had a few PRs incorrectly skipping the jobs, as in this example: https://app.circleci.com/pipelines/github/huggingface/transformers/16541/workflows/17b20230-8d7c-4b36-813c-2681f2c8a977/jobs/128232
It's missing `<< pipeline.git.base_revision >>` in
```
if git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >> | egrep -qv '\.(md|rst)$'
```
resulting in:
```
if git diff --name-only ...5170e5381b9fccdfb9405d665ecee0515efc6453 | egrep -qv '\.(md|rst)$'
```
and hence fails the test. (it's missing the first hash before `...`).
This PR checks that the external variables `pipeline.git.base_revision` and `pipeline.git.revision` are set before we do the test. Should one of them not be set, the whole test is skipped and the job continues normally, regardless of whether it's docs-only or not.
Meanwhile I filed a question about why `pipeline.git.base_revision` is not always set:
https://discuss.circleci.com/t/pipeline-git-base-revision-is-often-empty-which-reliable-variable-to-use/38301
Let's merge it at a time that one of us can monitor the next few PRs in case we need to back it out again.
If you have to back it out - you only need to comment out this line: `circleci step halt` and leave the invocations in place.
@sgugger, @LysandreJik
| 11-30-2020 17:26:40 | 11-30-2020 17:26:40 | Github actions decided to be down right at the moment where I wanted to monitor :disappointed: <|||||>Github actions? We are only doing this on circleCI - so far all seems to be working fine.<|||||>Hah, was so disappointed I let it blind me and think it was all CI. Will continue.<|||||>I'm pretty sure the CI should have run on the latest pipeline, as it did in the ones preceding it: https://app.circleci.com/pipelines/github/huggingface/transformers?branch=conda-ci
Changes were done to .yml files.<|||||>I believe the issue might come from the fact that it is looking at the build commit (d26ca66e2b12c5d5bc30be474d35f6e58dd21808) and comparing it to the previous build's commit (4780c8086a7aa95fc9b610cfe351ec5b226de669). It doesn't find any files, as the previous build commit (4780c8...) doesn't exist anymore, as I force pushed the branch, overwriting that commit's data.
Maybe checking for empty results in the diff would help in that regard?<|||||>OK, I found when we don't have `pipeline.git.base_revision` defined - it happens when PR is opened via github file edit. As I just did here: https://github.com/huggingface/transformers/pull/8884
You can see then what happens: https://app.circleci.com/pipelines/github/huggingface/transformers/16617/workflows/fea80cc9-2093-4053-b3c3-f315632ab3a6/jobs/129069
```
#!/bin/bash -eo pipefail
# pipeline.git.base_revision is not always defined, so only proceed if all external vars are defined
if test -n "" && test -n "32f03035ce5b23abd8a1659f24f04b298319ae78"
then
if git diff --name-only ...32f03035ce5b23abd8a1659f24f04b298319ae78 | egrep -qv '\.(md|rst)$'
then
echo "Non-docs were modified in this PR, proceeding normally"
else
echo "Only docs were modified in this PR, quitting this job"
circleci step halt
fi
else
echo "Can't perform skipping check w/o base_revision defined, continuing the job"
fi
Can't perform skipping check w/o base_revision defined, continuing the job
CircleCI received exit code 0
```
So the workaround worked. The job continued normally.
<|||||>~Oh I always open PRs like that, so it was indeed the culprit for the two other times we saw that happen~.
That does not make sense since the I open the PR on GitHub but the CI runs on a commit. So forget I said anything!<|||||>> I believe the issue might come from the fact that it is looking at the build commit ([d26ca66](https://github.com/huggingface/transformers/commit/d26ca66e2b12c5d5bc30be474d35f6e58dd21808)) and comparing it to the previous build's commit ([4780c80](https://github.com/huggingface/transformers/commit/4780c8086a7aa95fc9b610cfe351ec5b226de669)). It doesn't find any files, as the previous build commit (4780c8...) doesn't exist anymore, as I force pushed the branch, overwriting that commit's data.
>
> Maybe checking for empty results in the diff would help in that regard?
OK, so this is another edge case. So this pipeline thing is totally borked :( Why can't it give us a normal commit range of the PR.
So let's comment out `circleci step halt` and I will work on take #3 that will be much more elaborate. Should I do it or will you? I'm not sure if it's ok to commit directly.<|||||>You can comment it out! Thanks!<|||||>So the proposed logic for take 3 will be:
1. if pipeline.git.base_revision and pipeline.git.revision are defined
2. if git diff --name-only range returns anything
3. if what it returned in 2 is just docs
4. then skip
<|||||>
> You can comment it out! Thanks!
Done.<|||||>> ~Oh I always open PRs like that, so it was indeed the culprit for the two other times we saw that happen~.
> That does not make sense since the I open the PR on GitHub but the CI runs on a commit. So forget I said anything!
I'm not sure what you're saying - I think the point is that CircleCI can't find the branching point when the change is done via github file edit.
Note that `git diff --name-only $(git merge-base --fork-point master)` doesn't work on CirlceCI - otherwise we would have figured out the range ourselves.<|||||>Yes I don't do commit by editing files on GitHub, just the PR part, that's why I scratched what I was saying.<|||||>Thank you for clarifying that, @sgugger. Then this lack of `pipeline.git.base_revision` appears to be random then. |
transformers | 8,852 | closed | Remove deprecated `evaluate_during_training` | # What does this PR do?
Replaces the `evaluate_during_training` in examples using the `Trainer` (as well as integrations and tf_trainer) by the new `evaluation_strategy`.
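In user code, the change boils down to the following sketch (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments
# before (deprecated): TrainingArguments(output_dir="out", evaluate_during_training=True)
args = TrainingArguments(output_dir="out", evaluation_strategy="epoch")  # evaluate at the end of each epoch
```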
Fixes #8792 | 11-30-2020 15:44:11 | 11-30-2020 15:44:11 | Merging this because we need it for the v4.0.0, pinging @jplu so he's aware of the changes made to the TFTrainer. |
transformers | 8,851 | closed | Transfoxl sequence classification | This PR implements sequence classification for the Transformer-XL model.
`TransfoXLForSequenceClassification` uses the last token in order to do the classification, as other causal models (e.g. GPT-1, GPT-2) do.
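For illustration, a minimal sketch of how the new head would be used (a single example, so no padding is involved; the classification head starts untrained):
```python
from transformers import TransfoXLForSequenceClassification, TransfoXLTokenizer
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103", num_labels=2)
inputs = tokenizer("the movie was great", return_tensors="pt")
logits = model(input_ids=inputs["input_ids"]).logits  # classification is based on the last token, as described above
```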
Fixes #7623 (Partially)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
 | 11-30-2020 15:40:35 | 11-30-2020 15:40:35 | Same as GPT-2, this would benefit from also handling padding on the left; I'll work on this in another PR.<|||||>@LysandreJik, I'll raise a new PR; there were some conflicts in this one. |
transformers | 8,850 | closed | Add a direct link to the big table | # What does this PR do?
This PR adds an anchor to the big table of models/tokenizers to be able to generate a direct link to it, and it adds that link in the README. | 11-30-2020 15:25:29 | 11-30-2020 15:25:29 | |
transformers | 8,849 | closed | Some unintended things happen in Seq2SeqTrainer example | I posted this report in the HuggingFace Forum at first, but @BramVanroy kindly told me to post the report here instead of the forum.
The link to the post in the forum: https://discuss.huggingface.co/t/some-unintended-things-happen-in-seq2seqtrainer-example/2361
## Environment info
- `transformers` version: 4.0.0-rc-1
- The latest commit: commit 5ced23dc845c76d5851e534234b47a5aa9180d40
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I used the XSum dataset following the README of `examples/seq2seq`.
## To reproduce
### What seems strange
- The number of data pairs is not correctly recognized.
- MLflow cannot treat the params (too long).
I wasnβt sure if I should divide these into two issues, but in the end, I decided on one.
If it is better to divide them into two, I will modify it.
I first noticed this strangeness when I use a different dataset than the those in the example.
I again follow the README of `examples/seq2seq` to check if my modification causes the problem or not.
Having checked https://github.com/huggingface/transformers/issues/8792, I used `--evaluation_strategy epoch` instead of `--evaluate_during_training`.
### Run official example scripts
```
$ CUDA_VISIBLE_DEVICES=0 python finetune_trainer.py \
--data_dir $XSUM_DIR \
--learning_rate=3e-5 \
--fp16 \
--do_train --do_eval --do_predict \
--evaluation_strategy epoch \
--predict_with_generate \
--n_val 1000 \
--model_name_or_path facebook/bart-large \
--output_dir ./xsum_bart-large/ \
--save_total_limit 5 \
2>&1 | tee tmp.log
```
## Expected behavior
### Log
```
[INFO|trainer.py:667] 2020-11-30 08:10:43,836 >> ***** Running training *****
[INFO|trainer.py:668] 2020-11-30 08:10:43,836 >> Num examples = 204016
[INFO|trainer.py:669] 2020-11-30 08:10:43,836 >> Num Epochs = 3
[INFO|trainer.py:670] 2020-11-30 08:10:43,836 >> Instantaneous batch size per device = 8
[INFO|trainer.py:671] 2020-11-30 08:10:43,836 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:672] 2020-11-30 08:10:43,836 >> Gradient Accumulation steps = 1
[INFO|trainer.py:673] 2020-11-30 08:10:43,836 >> Total optimization steps = 76506
...
mlflow.exceptions.MlflowException: Param value '{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_leng' had length 293, which exceeded length limit of 250
```
### (Reference) Dataset length
```sh
$ cd $XSUM_DIR/
$ wc -l *
11333 test.source
11333 test.target
204017 train.source
204017 train.target
11327 val.source
11327 val.target
453354 total
```
### Details
#### The number of examples shown
At first, I tried to use the dataset with 40,000 pairs for training, but it was shown that `Num examples = 39999`.
I don't know why, so I've checked the example with the XSum dataset.
Checking the line counts, it seems the XSum train set used in the example has 204017 pairs, but it is shown as `Num examples = 204016` above.
I thought the dataset was supposed to start with the first line, but am I mistaken? For example, is the first line treated as a header?
#### MLflow cannot handle the params in this case
As shown above, the length of `param value` exceeds the limit that MLflow can handle.
Do I just need to change the settings of MLflow? Or, should I add some modifications to `param value` to be used in MLflow?
Thank you in advance. | 11-30-2020 10:03:55 | 11-30-2020 10:03:55 | I have never looked at the `finetune_trainer.py` script so I can't reply for the number of examples part.
For the MLFlow problem, I don't understand how the value of this parameter could be longer than 250 (if interpreted has a string) could you print it out for debugging?<|||||>Thank you for your quick response!
First, here is more detail on the error message from the MLflow problem.
I apologize that I didn't give this information at the start of this issue.
```python
Traceback (most recent call last):
File "finetune_trainer.py", line 310, in <module>
main()
File "finetune_trainer.py", line 254, in main
trainer.train(
File "/path/to/transformers/src/transformers/trainer.py", line 713, in train
self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)
File "/path/to/transformers/src/transformers/trainer_callback.py", line 336, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/path/to/transformers/src/transformers/trainer_callback.py", line 374, in call_event
result = getattr(callback, event)(
File "/path/to/transformers/src/transformers/integrations.py", line 502, in on_train_begin
self.setup(args, state, model)
File "/path/to/transformers/src/transformers/integrations.py", line 497, in setup
mlflow.log_params(dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE]))
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/fluent.py", line 470, in log_params
MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[])
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/client.py", line 830, in log_batch
self._tracking_client.log_batch(run_id, metrics, params, tags)
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client.py", line 246, in log_batch
self.store.log_batch(run_id=run_id, metrics=metrics, params=params, tags=tags)
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/store/tracking/file_store.py", line 852, in log_batch
_validate_batch_log_data(metrics, params, tags)
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py", line 221, in _validate_batch_log_data
_validate_param(param.key, param.value)
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py", line 101, in _validate_param
_validate_length_limit("Param value", MAX_PARAM_VAL_LENGTH, value)
File "$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py", line 169, in _validate_length_limit
raise MlflowException(
mlflow.exceptions.MlflowException: Param value '{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_leng' had length 293, which exceeded length limit of 250
```
The error message says the error is caused in line 497 of `integrations.py`.
https://github.com/huggingface/transformers/blob/5ced23dc845c76d5851e534234b47a5aa9180d40/src/transformers/integrations.py#L497
I added logger.info before that.
```python
# debug
logger.info("--- dict --- %s", dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE]))
```
Then, the output is as below:
```python
[INFO|integrations.py:499] 2020-11-30 16:39:51,612 >> --- dict --- {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'use_bfloat16': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'is_encoder_decoder': True, 'is_decoder': False, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 128, 'min_length': 12, 'do_sample': False, 'early_stopping': True, 'num_beams': 4, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 3, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'architectures': ['BartModel', 'BartForConditionalGeneration', 'BartForSequenceClassification'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1, 'LABEL_2': 2}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 0, 'pad_token_id': 1, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': 2, 'task_specific_params': {'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}, 'xla_device': None, '_name_or_path': 'facebook/bart-large', 'classif_dropout': 0.1, 'model_type': 'bart', 'num_hidden_layers': 12, 'vocab_size': 50265, 'd_model': 1024, 'encoder_ffn_dim': 4096, 'encoder_layers': 12, 'encoder_attention_heads': 16, 'encoder_layerdrop': None, 'decoder_layerdrop': None, 'decoder_ffn_dim': 4096, 'decoder_layers': 12, 'decoder_attention_heads': 16, 'max_position_embeddings': 1024, 'init_std': 0.02, 'activation_function': 'gelu', 'scale_embedding': False, 'normalize_embedding': True, 'normalize_before': False, 'add_final_layer_norm': False, 'add_bias_logits': False, 'static_position_embeddings': False, 'attention_dropout': None, 'activation_dropout': 0.1, 'dropout': None, 'classifier_dropout': 0.0, 'extra_pos_embeddings': 2, 'force_bos_token_to_be_generated': False, 'do_blenderbot_90_layernorm': False, 'use_cache': True, 'output_dir': './xsum_bart-large_no_cuda/', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': True, 'do_predict': True, 'model_parallel': False, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'learning_rate': 3e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3.0, 'max_steps': -1, 'warmup_steps': 0, 'logging_dir': 'runs/Nov30_16-39-34_hamo', 'logging_first_step': False, 'logging_steps': 500, 'save_steps': 500, 'save_total_limit': 5, 'no_cuda': True, 'seed': 42, 'fp16': False}
```
The error message seems to indicate the `'task_specific_params'`, so I've checked the length of it.
```
>>> str = "{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}"
>>> len(str)
293
```
Should I have added some processing to `task_specific_params`?
Thank you.
<|||||>Mmm, that's weird. Pining @noise-field as this was the user that added integration with MLFlow.<|||||>Hi, @forest1988
Thank you for the detailed bug description. MLFlow does limit the parameter length (see: mlflow/mlflow#1976).
I think we probably need to stop sending arbitrarily nested parameters as string literals because they are:
- not actually single parameters
- can easily overflow the 250 symbols limit
Another idea would be to skip long parameters and produce a warning like in case of invalid metrics values.
@sgugger what would you suggest would be a better option? I could fix it this week.
<|||||>Maybe we could just skip the args we are trying to send to MLFlow when they get over the limit?<|||||>Hi, I have the same issue. I'm using more or less the standard `run_glue.py` script for finetuning. Most models worked but BART threw the same error as above.
Fortunately, this error happened at the start. But, I wrote my own trainer callback handler which failed only after 1-22 hours in the training process and interrupted the training, because some backend API failed to respond.
I'm not sure whether it might make some sense to just wrap all the callback calls into try-catch blocks so the training will continue in any case?<|||||>Here is my solution https://github.com/huggingface/transformers/issues/8967#issue-758695096<|||||>There was an error where the callback tried to log a value that is too long for MLflow. It was fixed in this PR: #8875<|||||>Hi,
It seems PR #8875 will solve this issue, are there any problems that block the PR from merging?
(Added: I'm sorry for the duplicate comments.)
Currently, I am dealing with this issue temporarily as follows.
(Trainer works without MLflow integration)
```
# remove MLflowCallback temporarily
from transformers.integrations import MLflowCallback
trainer.callback_handler.remove_callback(MLflowCallback)
```<|||||>Oh sorry @noise-field it seems like your PR slipped through the cracks of our review process. In general, don't hesitate to ping the person that reviewed your PR if there is no activity in a week and you believe you addressed every comment.<|||||>I'm experiencing this issue again with the trainer when doing NER with `AutoModelForTokenClassification`. The model config containing the `label2id` and `id2label` fields can be quite long when there are many entity types, and it cannot be split under the current strategy.
Example error when trying to log the `id2label` from model config:
```
873 combined_dict_items = list(combined_dict.items())
874 for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH):
--> 875 self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH]))
876 mlflow_tags = os.getenv("MLFLOW_TAGS", None)
877 if mlflow_tags:
...
RestException: INVALID_PARAMETER_VALUE: Param value '{0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2', 3: 'LABEL_3', 4: 'LABEL_4', 5: 'LABEL_5', 6: 'LABEL_6', 7: 'LABEL_7', 8: 'LABEL_8', 9: 'LABEL_9', 10: 'LABEL_10', 11: 'LABEL_11', 12: 'LABEL_12', 13: 'LABEL_13', 14: 'LABEL_14', 15: 'LABEL_15', 16: 'LABEL_16' had length 316, which exceeded length limit of 250
```
Edit: a work-around is to set `MLFLOW_FLATTEN_PARAMS` to true. This limit has been [increased to 500 in MLFlow](https://github.com/mlflow/mlflow/pull/6358) |
transformers | 8,848 | closed | Fix docstring for language code in mBart | # What does this PR do?
Fixes #8534
## Before submitting
- [X] This PR fixes a typo or improves the docs.
## Who can review?
@patrickvonplaten
| 11-30-2020 08:55:02 | 11-30-2020 08:55:02 | This is great thank you! |
transformers | 8,847 | closed | KeyError: 'mt5' | I am trying to use the google/mt5 model, but I get KeyError: 'mt5'. How do I fix this?
KeyError Traceback (most recent call last)
<ipython-input-2-207aa15555f1> in <module>
----> 1 tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
304 config = kwargs.pop("config", None)
305 if not isinstance(config, PretrainedConfig):
--> 306 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
307
308 if "bert-base-japanese" in str(pretrained_model_name_or_path):
/usr/local/lib/python3.8/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
334
335 if "model_type" in config_dict:
--> 336 config_class = CONFIG_MAPPING[config_dict["model_type"]]
337 return config_class.from_dict(config_dict, **kwargs)
338 else:
KeyError: 'mt5' | 11-30-2020 08:34:31 | 11-30-2020 08:34:31 | pip install transformers==4.0.0rc1 sentencepiece
<|||||>> pip install transformers==4.0.0rc1 sentencepiece
Thank you! You are my hero |
transformers | 8,846 | closed | How to globally change the PYTORCH_PRETRAINED_BERT_CACHE path | Hi, all. I don't have enough disk space under `~/` to download the pre-trained model. When I run others' experiments, I always need to change their code from
```
model = BertForQuestionAnswering.from_pretrained(args.bert_model, \
        cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1))
```
to something like
```
PYTORCH_PRETRAINED_BERT_CACHE = ELSE_WHERE
model = BertForQuestionAnswering.from_pretrained(args.bert_model, \
        cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1))
```
or
```
model = BertForQuestionAnswering.from_pretrained(args.bert_model, \
cache_dir=ELSE_WHERE)
```
For the convenience of those who don't have enough disk under home dir `~/`, I wonder if there is any way to globally change this `PYTORCH_PRETRAINED_BERT_CACHE` value once and for all.
| 11-30-2020 08:30:51 | 11-30-2020 08:30:51 | Hello! You can set the environment variable `TRANSFORMERS_CACHE` to define which location should be used to store weights.<|||||>@LysandreJik Thanks.
Just to make sure I understand your point correctly. If I run `TRANSFORMERS_CACHE=ELSE_WHERE train.sh` in cmd, are all the downloaded pretrained cache files stored under `ELSE_WHERE` rather than `~/.pytorch_pretrained_bert`?
It doesn't work for me. Specifically, I run `TRANSFORMERS_CACHE=./cache_dir bash train.sh` instead of `bash train.sh`, and the cache files are still downloaded to my HOME dir `/homes/jzhoubu/.pytorch_pretrained_bert`. Below is the log.
```
11/30/2020 23:12:36 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /homes/jzhoubu/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
11/30/2020 23:12:42 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /homes/jzhoubu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
11/30/2020 23:12:42 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /homes/jzhoubu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpwf4pab4l
```
<|||||>Ah, given your logs, it seems you're running on a very very old version (`pytorch_pretrained_bert`, which is 1+ years old). While we recommend updating to more recent versions, you should be able to obtain the same behavior by setting the `PYTORCH_PRETRAINED_BERT_CACHE` environment variable instead.
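For example, a minimal sketch of setting it once at the top of your own launcher script (the path is a placeholder):
```python
import os
os.environ["PYTORCH_PRETRAINED_BERT_CACHE"] = "/big_disk/bert_cache"  # placeholder path with enough space
# The legacy package reads this environment variable when it is imported, so set
# it before the import (or export it in the shell before running train.sh).
from pytorch_pretrained_bert import BertForQuestionAnswering
```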
For future issues, please always complete the issue template, the information related to your environment is especially important for us to help you correctly.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,845 | closed | Correct docstring. | Related issue: https://github.com/huggingface/transformers/issues/8837
# What does this PR do?
Updating the PreTrainedTokenizerBase.pad argument default value docstring to show the correct default value.
**Current**
docstring:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470
arg:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes #8837 (issue)
I'm also curious why this method has default `padding=True`? Other methods (prepare_for_model, encode, __call__, encode_plus, batch_encode_plus) have `padding=False`.
Its default means the DataCollatorForLanguageModeling pads input examples which means it can't be simply switched with the default collator in the [example script](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_example_script/%7B%7Bcookiecutter.directory_name%7D%7D/run_%7B%7Bcookiecutter.example_shortcut%7D%7D.py#L287-L306) without breaking the attention mask.
https://github.com/huggingface/transformers/blob/610cb106a216cfb99d840648b576f9502189e4d1/src/transformers/data/data_collator.py#L253
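For reference, a quick illustration of the default being discussed (checkpoint arbitrary, toy ids):
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok.pad([{"input_ids": [101, 7592, 102]}, {"input_ids": [101, 102]}])
print(batch["input_ids"])  # [[101, 7592, 102], [101, 102, 0]] -- padded, because padding defaults to True here
```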
@mfuntowicz
@LysandreJik
@sgugger
| 11-30-2020 07:27:13 | 11-30-2020 07:27:13 | |
transformers | 8,844 | closed | mT5 fine-tuned model generate wrong answer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0-rc-1
- Platform: Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): NA
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): MT5ForConditionalGeneration.from_pretrained('google/mt5-small')
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
KoreanSTS dataset
https://github.com/kakaobrain/KorNLUDatasets
## To reproduce
Steps to reproduce the behavior:
1. fine-tuning Korean STSb dataset on mT5-small model
2. Proceed inference using testset
3. Strange results
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```ruby
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import random
import time
import datetime
import numpy as np
import os
from tqdm.notebook import tqdm
import logging
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import Adafactor, get_linear_schedule_with_warmup, MT5ForConditionalGeneration, T5Tokenizer
from scipy.stats import spearmanr, pearsonr
tokenizer = T5Tokenizer.from_pretrained('google/mt5-small')
model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small', return_dict=True)
GPU_NUM = 4
device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')
torch.cuda.set_device(device) # change allocation of current GPU
print ('Current cuda device ', torch.cuda.current_device()) # check
data_path = "../dataset"
train = os.path.join(data_path,'sts-train.tsv')
test = os.path.join(data_path,'sts-test.tsv')
dev = os.path.join(data_path,'sts-dev.tsv')
train_data = pd.read_csv(train, delimiter='\t', error_bad_lines=False)
test_data = pd.read_csv(test, delimiter='\t', error_bad_lines=False)
dev_data = pd.read_csv(dev, delimiter='\t', error_bad_lines=False)
train_data.score = round(train_data.score*5)/5
train_data = train_data.applymap(str)
train_data['input']=''
for i in range(len(train_data)):
strs_to_join = []
strs_to_join = ['stsb sentence1:', train_data.iloc[i]['sentence1'], 'sentence2:', train_data.iloc[i]['sentence2']]
train_data['input'].iloc[i] = " ".join(strs_to_join)
dev_data.score = round(dev_data.score*5)/5
dev_data = dev_data.applymap(str)
dev_data['input']=''
for i in range(len(dev_data)):
strs_to_join = []
strs_to_join = ['stsb sentence1:', dev_data.iloc[i]['sentence1'], 'sentence2:', dev_data.iloc[i]['sentence2']]
dev_data['input'].iloc[i] = " ".join(strs_to_join)
dev_target = dev_data.score
test_data.score = round(test_data.score*5)/5
test_data = test_data.applymap(str)
test_data['input']=''
for i in range(len(test_data)):
strs_to_join = []
strs_to_join = ['stsb sentence1:', test_data.iloc[i]['sentence1'], 'sentence2:', test_data.iloc[i]['sentence2']]
test_data['input'].iloc[i] = " ".join(strs_to_join)
test_target = test_data.score
train_inputs, train_targets, dev_inputs, dev_targets, test_inputs, test_targets = [],[],[],[],[],[]
for input in train_data.input:
tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
train_inputs.append(tokenized_inputs)
for target in train_target:
tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
train_targets.append(tokenized_targets)
for input in dev_data.input:
tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
dev_inputs.append(tokenized_inputs)
for target in dev_target:
tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
dev_targets.append(tokenized_targets)
for input in test_data.input:
tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
test_inputs.append(tokenized_inputs)
for target in test_target:
tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
test_targets.append(tokenized_targets)
train_input_ids = torch.cat(train_inputs, dim=0)
train_labels = torch.cat(train_targets, dim=0)
dev_input_ids = torch.cat(dev_inputs, dim=0)
dev_labels = torch.cat(dev_targets, dim=0)
test_input_ids = torch.cat(test_inputs, dim=0)
test_labels = torch.cat(test_targets, dim=0)
train_dataset = TensorDataset(train_input_ids, train_labels)
dev_dataset = TensorDataset(dev_input_ids, dev_labels)
test_dataset = TensorDataset(test_input_ids, test_labels)
batch_size = 16
train_dataloader = DataLoader(
train_dataset, # The training samples.
sampler = RandomSampler(train_dataset), # Select batches randomly
batch_size = batch_size # Trains with this batch size.
)
dev_dataloader = DataLoader(
dev_dataset, # The validation samples.
sampler = SequentialSampler(dev_dataset), # Pull out batches sequentially.
batch_size = batch_size # Evaluate with this batch size.
)
test_dataloader = DataLoader(
test_dataset, # The validation samples.
sampler = SequentialSampler(test_dataset), # Pull out batches sequentially.
batch_size = batch_size # Evaluate with this batch size.
)
model.cuda()
params = list(model.named_parameters())
optimizer = Adafactor(model.parameters(),
lr = 1e-3, # args.learning_rate - default is 5e-5, our notebook had 2e-5
eps=(1e-30, 1e-3),
relative_step = False
)
epochs = 30
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0, # Default value in run_glue.py
num_training_steps = total_steps)
predictions_all=[]
seed_val = 0
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
training_stats = []
total_t0 = time.time()
for epoch_i in tqdm(range(0, epochs)):
# Training
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
t0 = time.time()
total_train_loss = 0
model.train()
for step, batch in tqdm(enumerate(train_dataloader)):
if step % 50 == 0 and not step == 0:
elapsed = format_time(time.time() - t0)
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
b_input_ids = batch[0].to(device)
b_labels = batch[1].to(device)
model.zero_grad()
output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)
loss = output.loss
logits = output.logits
total_train_loss += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
avg_train_loss = total_train_loss / len(train_dataloader)
training_time = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epcoh took: {:}".format(training_time))
# Validation
print("")
print("Running Validation...")
t0 = time.time()
model.eval()
total_eval_loss = 0
nb_eval_steps = 0
for batch in tqdm(dev_dataloader):
b_input_ids = batch[0].to(device)
b_labels = batch[1].to(device)
with torch.no_grad():
output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)
loss = output.loss
logits = output.logits
total_eval_loss += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
avg_val_loss = total_eval_loss / len(dev_dataloader)
validation_time = format_time(time.time() - t0)
print(" Validation Loss: {0:.2f}".format(avg_val_loss))
print(" Validation took: {:}".format(validation_time))
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Training Time': training_time,
'Validation Time': validation_time
}
)
# test
print('Predicting labels for {:,} test sentences...'.format(len(test_input_ids)))
model.eval()
predictions = []
for batch in tqdm(test_dataloader):
b_input_ids = batch[0].to(device)
with torch.no_grad():
outputs = model.generate(b_input_ids)
predictions.append(outputs)
print('DONE.')
predictions_all.append(predictions)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
for i in range(10):
output = model.generate(test_input_ids[i].cuda().reshape(1,-1))
print(tokenizer.decode(output[0]))
```
><pad> <extra_id_0></s>
<pad> <extra_id_0>.</s>
<pad> <extra_id_0>.</s>
<pad> <extra_id_0></s>
<pad> <extra_id_0>ν©λλ€.</s>
<pad> <extra_id_0></s>
<pad> <extra_id_0>.</s>
<pad> <extra_id_0>.</s>
<pad> <extra_id_0>.</s>
<pad> <extra_id_0>.</s>
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Thank you for sharing this so that we can use T5 and mT5 with PyTorch.
1. I fine-tuned the Korean STSB dataset on mt5-small, but the result didn't come out the way I wanted; it came out in a strange shape.
There are about 5,700 examples in the training dataset.
I wonder if there was a mistake in the training process, or whether the dataset was too small, or whether the model was simply under-trained.
2. Next, when inferencing using mT5(T5), what is the difference between proceeding with model.generate() and doing with model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)?
| 11-30-2020 05:05:24 | 11-30-2020 05:05:24 | Hey @JejuWayfarer,
it would be awesome if you could post such a question on the forum since it's not really a specific bug report, but more a question/problem on a case-specific training script. Could you maybe post your question on this thread - I'm sure you'll have more luck of getting a good answer there.
This could be a good thread: https://discuss.huggingface.co/t/mt5-t5v1-1-fine-tuning-results/2098 or open a new one :-) <|||||>Thank you so much :) I need to use the forum. I will ask there.<|||||>> ## Environment info
> * `transformers` version: 4.0.0-rc-1
> * Platform: Linux
> * Python version: 3.7.9
> * PyTorch version (GPU?): 1.4.0
> * Tensorflow version (GPU?): NA
> * Using GPU in script?: yes
> * Using distributed or parallel set-up in script?: no
>
> ### Who can help
> @patrickvonplaten
>
> ## Information
> Model I am using (Bert, XLNet ...): MT5ForConditionalGeneration.from_pretrained('google/mt5-small')
>
> The problem arises when using:
>
> * [ ] the official example scripts: (give details below)
> * [x] my own modified scripts: (give details below)
>
> The tasks I am working on is:
>
> * [ ] an official GLUE/SQUaD task: (give the name)
> * [x] my own task or dataset: (give details below)
> KoreanSTS dataset
> https://github.com/kakaobrain/KorNLUDatasets
>
> ## To reproduce
> Steps to reproduce the behavior:
>
> 1. fine-tuning Korean STSb dataset on mT5-small model
> 2. Proceed inference using testset
> 3. Strange results
>
> ```ruby
> import pandas as pd
> %matplotlib inline
> import matplotlib.pyplot as plt
> import random
> import time
> import datetime
> import numpy as np
> import os
> from tqdm.notebook import tqdm
> import logging
> import matplotlib.pyplot as plt
> import seaborn as sns
>
> import torch
> import torch.nn as nn
> import torch.optim as optim
> from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
>
> from transformers import Adafactor, get_linear_schedule_with_warmup, MT5ForConditionalGeneration, T5Tokenizer
> from scipy.stats import spearmanr, pearsonr
>
> tokenizer = T5Tokenizer.from_pretrained('google/mt5-small')
> model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small', return_dict=True)
>
> GPU_NUM = 4
> device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')
> torch.cuda.set_device(device) # change allocation of current GPU
> print ('Current cuda device ', torch.cuda.current_device()) # check
>
> data_path = "../dataset"
> train = os.path.join(data_path,'sts-train.tsv')
> test = os.path.join(data_path,'sts-test.tsv')
> dev = os.path.join(data_path,'sts-dev.tsv')
>
> train_data = pd.read_csv(train, delimiter='\t', error_bad_lines=False)
> test_data = pd.read_csv(test, delimiter='\t', error_bad_lines=False)
> dev_data = pd.read_csv(dev, delimiter='\t', error_bad_lines=False)
>
> train_data.score = round(train_data.score*5)/5
> train_data = train_data.applymap(str)
> train_data['input']=''
> for i in range(len(train_data)):
> strs_to_join = []
> strs_to_join = ['stsb sentence1:', train_data.iloc[i]['sentence1'], 'sentence2:', train_data.iloc[i]['sentence2']]
> train_data['input'].iloc[i] = " ".join(strs_to_join)
>
>
> dev_data.score = round(dev_data.score*5)/5
> dev_data = dev_data.applymap(str)
> dev_data['input']=''
> for i in range(len(dev_data)):
> strs_to_join = []
> strs_to_join = ['stsb sentence1:', dev_data.iloc[i]['sentence1'], 'sentence2:', dev_data.iloc[i]['sentence2']]
> dev_data['input'].iloc[i] = " ".join(strs_to_join)
> dev_target = dev_data.score
>
>
> test_data.score = round(test_data.score*5)/5
> test_data = test_data.applymap(str)
> test_data['input']=''
> for i in range(len(test_data)):
> strs_to_join = []
> strs_to_join = ['stsb sentence1:', test_data.iloc[i]['sentence1'], 'sentence2:', test_data.iloc[i]['sentence2']]
> test_data['input'].iloc[i] = " ".join(strs_to_join)
> test_target = test_data.score
>
> train_inputs, train_targets, dev_inputs, dev_targets, test_inputs, test_targets = [],[],[],[],[],[]
>
> for input in train_data.input:
> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
> train_inputs.append(tokenized_inputs)
>
> for target in train_target:
> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
> train_targets.append(tokenized_targets)
>
> for input in dev_data.input:
> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
> dev_inputs.append(tokenized_inputs)
>
> for target in dev_target:
> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
> dev_targets.append(tokenized_targets)
>
> for input in test_data.input:
> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids
> test_inputs.append(tokenized_inputs)
>
> for target in test_target:
> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids
> test_targets.append(tokenized_targets)
>
> train_input_ids = torch.cat(train_inputs, dim=0)
> train_labels = torch.cat(train_targets, dim=0)
>
> dev_input_ids = torch.cat(dev_inputs, dim=0)
> dev_labels = torch.cat(dev_targets, dim=0)
>
> test_input_ids = torch.cat(test_inputs, dim=0)
> test_labels = torch.cat(test_targets, dim=0)
>
>
> train_dataset = TensorDataset(train_input_ids, train_labels)
> dev_dataset = TensorDataset(dev_input_ids, dev_labels)
> test_dataset = TensorDataset(test_input_ids, test_labels)
>
>
> batch_size = 16
> train_dataloader = DataLoader(
> train_dataset, # The training samples.
> sampler = RandomSampler(train_dataset), # Select batches randomly
> batch_size = batch_size # Trains with this batch size.
> )
> dev_dataloader = DataLoader(
> dev_dataset, # The validation samples.
> sampler = SequentialSampler(dev_dataset), # Pull out batches sequentially.
> batch_size = batch_size # Evaluate with this batch size.
> )
> test_dataloader = DataLoader(
> test_dataset, # The validation samples.
> sampler = SequentialSampler(test_dataset), # Pull out batches sequentially.
> batch_size = batch_size # Evaluate with this batch size.
> )
>
> model.cuda()
>
> params = list(model.named_parameters())
>
> optimizer = Adafactor(model.parameters(),
> lr = 1e-3, # args.learning_rate - default is 5e-5, our notebook had 2e-5
> eps=(1e-30, 1e-3),
> relative_step = False
> )
>
> epochs = 30
> total_steps = len(train_dataloader) * epochs
> scheduler = get_linear_schedule_with_warmup(optimizer,
> num_warmup_steps = 0, # Default value in run_glue.py
> num_training_steps = total_steps)
>
> predictions_all=[]
> seed_val = 0
>
> random.seed(seed_val)
> np.random.seed(seed_val)
> torch.manual_seed(seed_val)
> torch.cuda.manual_seed_all(seed_val)
>
> training_stats = []
> total_t0 = time.time()
>
> for epoch_i in tqdm(range(0, epochs)):
> # Training
> print("")
> print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
> print('Training...')
>
> t0 = time.time()
> total_train_loss = 0
>
> model.train()
>
> for step, batch in tqdm(enumerate(train_dataloader)):
>
> if step % 50 == 0 and not step == 0:
> elapsed = format_time(time.time() - t0)
>
> print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
>
> b_input_ids = batch[0].to(device)
> b_labels = batch[1].to(device)
>
> model.zero_grad()
>
> output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)
> loss = output.loss
> logits = output.logits
>
> total_train_loss += loss.item()
> loss.backward()
>
> torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
> optimizer.step()
> scheduler.step()
>
> avg_train_loss = total_train_loss / len(train_dataloader)
> training_time = format_time(time.time() - t0)
> print("")
> print(" Average training loss: {0:.2f}".format(avg_train_loss))
> print(" Training epcoh took: {:}".format(training_time))
>
>
> # Validation
> print("")
> print("Running Validation...")
>
> t0 = time.time()
>
> model.eval()
>
> total_eval_loss = 0
> nb_eval_steps = 0
>
> for batch in tqdm(dev_dataloader):
> b_input_ids = batch[0].to(device)
> b_labels = batch[1].to(device)
>
> with torch.no_grad():
> output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)
> loss = output.loss
> logits = output.logits
>
> total_eval_loss += loss.item()
>
> logits = logits.detach().cpu().numpy()
> label_ids = b_labels.to('cpu').numpy()
>
> avg_val_loss = total_eval_loss / len(dev_dataloader)
> validation_time = format_time(time.time() - t0)
> print(" Validation Loss: {0:.2f}".format(avg_val_loss))
> print(" Validation took: {:}".format(validation_time))
>
> training_stats.append(
> {
> 'epoch': epoch_i + 1,
> 'Training Loss': avg_train_loss,
> 'Valid. Loss': avg_val_loss,
> 'Training Time': training_time,
> 'Validation Time': validation_time
> }
> )
>
> # test
> print('Predicting labels for {:,} test sentences...'.format(len(test_input_ids)))
> model.eval()
> predictions = []
>
> for batch in tqdm(test_dataloader):
> b_input_ids = batch[0].to(device)
>
> with torch.no_grad():
> outputs = model.generate(b_input_ids)
> predictions.append(outputs)
> print('DONE.')
>
> predictions_all.append(predictions)
>
> print("")
> print("Training complete!")
>
> print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
>
>
> for i in range(10):
> output = model.generate(test_input_ids[i].cuda().reshape(1,-1))
> print(tokenizer.decode(output[0]))
> ```
>
> > <extra_id_0>
> > <extra_id_0>.
> > <extra_id_0>.
> > <extra_id_0>
> > <extra_id_0>ν©λλ€.
> > <extra_id_0>
> > <extra_id_0>.
> > <extra_id_0>.
> > <extra_id_0>.
> > <extra_id_0>.
>
> ## Expected behavior
> Thank you for sharing so you can use T5 and mT5 using pytorch.
>
> 1. I fine-tuned the Korean STSB dataset on mt5-small. But the result didn't come out the way I wanted it to come out in a strange shape.
> There are about 5700 training datasets.
> I wonder if there was a mistake in the learning process, or because the data set was insufficient, or because it was less learned.
> 2. Next, when inferencing using mT5(T5), what is the difference between proceeding with model.generate() and doing with model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)?
I have encountered the same question. Do you have any idea? thank you |
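For reference, a short sketch of the difference between the two calls: `generate()` decodes autoregressively (the model feeds its own predictions back in), while a plain forward pass with `decoder_input_ids` teacher-forces a target you supply and returns logits you can score. The model name and inputs below are illustrative.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

input_ids = tokenizer("stsb sentence1: A man walks. sentence2: A person is walking.",
                      return_tensors="pt").input_ids

# generate(): autoregressive inference, used when you do not know the target
pred_ids = model.generate(input_ids, max_length=4)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))

# forward pass with decoder_input_ids: teacher forcing, one logits vector per supplied
# position, useful for computing a loss or scoring a known target rather than decoding
decoder_input_ids = tokenizer("3.8", return_tensors="pt").input_ids
logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, return_dict=True).logits
```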
transformers | 8,843 | closed | "BertForMaskedLM - pretrained model" cannot resize vocab output size |
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [0] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [0] my own task or dataset: (give details below)
## To reproduce
I resized the embedding dimension, but the output dimension does not change. Please refer to the code below:
```
from transformers import BertTokenizer, BertForMaskedLM
from transformers import LineByLineTextDataset
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
###### tokenizer
tokenizer_path = './data/transformer_tokenizer_add_entitymasking_token.pt/'
tokenizer = BertTokenizer.from_pretrained(tokenizer_path)
model.bert.resize_token_embeddings(len(tokenizer))
model.cls.predictions.decoder.out_features = len(tokenizer)
out[0].shape, label.shape
>>>
(torch.Size([2, 20, 30522]), torch.Size([2, 20])) # 30522 should be len(tokenizer) 30544
```
| 11-30-2020 02:51:27 | 11-30-2020 02:51:27 | I solved it with the following code:
```
with torch.no_grad():
replace_linear = torch.nn.Linear(in_features=768, out_features=len(tokenizer))
replace_linear.weight[:30522,:].copy_(model.cls.predictions.decoder.weight)
model.cls.predictions.decoder = replace_linear
model.cls.predictions.decoder = model.cls.predictions.decoder.requires_grad_(True)
``` |
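For reference, a simpler route is to call `resize_token_embeddings` on the full `BertForMaskedLM` model rather than on `model.bert`, so the tied MLM decoder is resized together with the input embeddings. A minimal sketch, with illustrative added tokens:
```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["[ENT1]", "[ENT2]"])          # illustrative new tokens

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))       # resizes embeddings and the tied decoder

inputs = tokenizer("hello [ENT1]", return_tensors="pt")
logits = model(**inputs, return_dict=True).logits
print(logits.shape[-1], len(tokenizer))             # both should report the new vocabulary size
```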
transformers | 8,842 | closed | T5 generations for pretraining objective degenerate | ## The issue
I am using a pretrained T5 model to generate missing spans as in the pretraining objective. However, I'm finding that these generations deteriorate for longer sequences (usually after around the 25th span or so). Below is an example of this deterioration on a sequence (from the IMDB dataset) where 15% of the tokens have been randomly masked with sentinel tokens. Given that the T5 model was pretrained using sequences of up to 512 tokens with 15% of tokens masked, shouldn't it be possible to obtain good generations on sequences like the one below? Why are generations like this one deteriorating? Thank you!
## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-4.15.0-45-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
### Who can help
@patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "t5-base"
t5_tokenizer = T5Tokenizer.from_pretrained(model_name)
t5_model = T5ForConditionalGeneration.from_pretrained(model_name)
t5_model = t5_model.to(device)
original_sentence = "ROCK STAR is a well-told Hollywood-style rendition of the tale based on fact actually on how Ripper became Rob Halford's replacement for Judas Priest. Mark Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something he has been known to do since the release of BOOGIE NIGHTS. Stephen Herek, no stranger to musically-themed movies, takes the audience through the wonders of the breakneck lifestyle of an extinct species, the Hair-Metal Rock God. Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His likable character quickly wins over the heart of the viewer, who wants to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over. The only real complaint with the story is that the supporting cast, namely the other members of the band, were not fleshed out, or even introduced, properly. More interaction with these life-long Rock musicians would have amplified and solidified Izzy's new surroundings. Naturally, ROCK STAR is filled with great music. Rabin's score, the Steel Dragon's original work and plenty of 80's-style Metal hits makes this soundtrack a must-have! Let's all hope that films like ROCK STAR not only give a credibility to a style of music that helped define a generation but also spark a very-needed revival.</s>"
sentence = "ROCK STAR is a well-told Hollywood-style rendition<extra_id_0> the tale <extra_id_1> on<extra_id_2> on<extra_id_3> Ri<extra_id_4> became Rob Hal<extra_id_5>'s replacement for Ju<extra_id_6> Priest.<extra_id_7> Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something<extra_id_8>he has been known to do<extra_id_9> the release of BOOGIE NIGHTS. Stephen Herek<extra_id_10> no stranger to musically-themed<extra_id_11>, takes the<extra_id_12> through the<extra_id_13>s<extra_id_14> break<extra_id_15> of<extra_id_16> extinct<extra_id_17>, the<extra_id_18>-Metal Rock<extra_id_19> Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His<extra_id_20>likable character quickly<extra_id_21> the heart of the viewer, who<extra_id_22> to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over<extra_id_23> The only real complaint with the<extra_id_24> is that the supporting<extra_id_25>,<extra_id_26>namely the other members of<extra_id_27> band, were not fleshed out, or even introduced, properly<extra_id_28> More interaction with these life-long<extra_id_29> musicians would have amplified and solidified Izzy's new surroundings<extra_id_30> Naturally,<extra_id_31>CK STAR is filled<extra_id_32> great music. Rabin's score, the Steel Dragon<extra_id_33>s original work<extra_id_34> of 80's-style Metal<extra_id_35> makes this soundtrack<extra_id_36>a must-have<extra_id_37> all hope that films like ROCK STAR not only give a credibility<extra_id_38> a style of music that helped define a generation but also spark a very-needed revival<extra_id_39></s>"
encoded = t5_tokenizer.encode(sentence)
print("original sentence: ", original_sentence)
print("\nmasked sentence: ", sentence)
print("\nnum tokens masked sentence: ", len(encoded))
encoded_tensor = torch.LongTensor(encoded).unsqueeze(0).to(device)
eos_token_id = t5_tokenizer.encode("<extra_id_40>")[0]
batch = t5_model.generate(encoded_tensor, early_stopping = True, max_length = 300, eos_token_id = eos_token_id, no_repeat_ngram_size = 2, num_beams = 1, num_return_sequences = 1)
for b in batch:
print("\noutput: ")
print(t5_tokenizer.decode(b, skip_special_tokens = False))
```
output:
> original sentence: ROCK STAR is a well-told Hollywood-style rendition of the tale based on fact actually on how Ripper became Rob Halford's replacement for Judas Priest. Mark Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something he has been known to do since the release of BOOGIE NIGHTS. Stephen Herek, no stranger to musically-themed movies, takes the audience through the wonders of the breakneck lifestyle of an extinct species, the Hair-Metal Rock God. Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His likable character quickly wins over the heart of the viewer, who wants to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over. The only real complaint with the story is that the supporting cast, namely the other members of the band, were not fleshed out, or even introduced, properly. More interaction with these life-long Rock musicians would have amplified and solidified Izzy's new surroundings. Naturally, ROCK STAR is filled with great music. Rabin's score, the Steel Dragon's original work and plenty of 80's-style Metal hits makes this soundtrack a must-have! Let's all hope that films like ROCK STAR not only give a credibility to a style of music that helped define a generation but also spark a very-needed revival.</s>
>
> masked sentence: ROCK STAR is a well-told Hollywood-style rendition<extra_id_0> the tale <extra_id_1> on<extra_id_2> on<extra_id_3> Ri<extra_id_4> became Rob Hal<extra_id_5>'s replacement for Ju<extra_id_6> Priest.<extra_id_7> Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something<extra_id_8>he has been known to do<extra_id_9> the release of BOOGIE NIGHTS. Stephen Herek<extra_id_10> no stranger to musically-themed<extra_id_11>, takes the<extra_id_12> through the<extra_id_13>s<extra_id_14> break<extra_id_15> of<extra_id_16> extinct<extra_id_17>, the<extra_id_18>-Metal Rock<extra_id_19> Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His<extra_id_20>likable character quickly<extra_id_21> the heart of the viewer, who<extra_id_22> to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over<extra_id_23> The only real complaint with the<extra_id_24> is that the supporting<extra_id_25>,<extra_id_26>namely the other members of<extra_id_27> band, were not fleshed out, or even introduced, properly<extra_id_28> More interaction with these life-long<extra_id_29> musicians would have amplified and solidified Izzy's new surroundings<extra_id_30> Naturally,<extra_id_31>CK STAR is filled<extra_id_32> great music. Rabin's score, the Steel Dragon<extra_id_33>s original work<extra_id_34> of 80's-style Metal<extra_id_35> makes this soundtrack<extra_id_36>a must-have<extra_id_37> all hope that films like ROCK STAR not only give a credibility<extra_id_38> a style of music that helped define a generation but also spark a very-needed revival<extra_id_39></s>
>
> num tokens masked sentence: 337
>
> output:
> <extra_id_0> of<extra_id_1> of the man who<extra_id_2> the day of his death<extra_id_3> the<extra_id_4>m<extra_id_5>e<extra_id_6>das<extra_id_7> Steven<extra_id_8> that<extra_id_9> since<extra_id_10>o,<extra_id_11> films<extra_id_12> audience<extra_id_13> 'rock<extra_id_14>aga' of a<extra_id_15>-up<extra_id_16> the<extra_id_17> CK STAR band<extra_id_18> newest<extra_id_19> band. Steven<extra_id_20> incredibly<extra_id_21> captures<extra_id_22> is eager<extra_id_23>.<extra_id_24> film<extra_id_25> cast<extra_id_26> and<extra_id_27> the -Metal Rock, the band's sailor, and the members of Izzy' emcees, who were <extra_id_1><extra_id_20><extra_id_19>.-n " de an (s, in<extra_id_10> thetgrae and pro also le of'lr to ex si not on<extra_id_5><extra_id_3> I<extra_id_7>" ensembleΒ β Β» be last for fiia =/ den?<extra_id_26> pour as --) 2 $: + 1 S un former dis spa<extra_id_17> tub root will at both second<extra_id_25> is no bout muscle hard des<extra_id_21>re<extra_id_23> baseball facialw mi& * [...;
| 11-30-2020 02:21:38 | 11-30-2020 02:21:38 | I guess it's quite normal that the quality degenerates the longer the sequence gets, especially since the output comes from `generate()`...I don't really think that this poses a problem here<|||||>The model for some reason does not want to generate anything after <extra_id_27>, and the same behavior (i.e. problems after 27) occurs for t5-small.
The reason you're seeing gibberish after 27 is that the model has already generated an EOS token (id == 1). At this point the model has said "I'm done generating. I think the sequences has ended". However, since you told it to use <extra_id_40> as EOS, it continues to try to produce tokens even after producing token id == 1. But the model doesn't know what to do after creating an EOS so you get gibberish.
If you don't tell it to use a different EOS token, then it will simply stop generating after hitting <extra_id_27>.
I tried specifying that token id == 1 is a bad word so that the model won't generate it, but that also doesn't fix the problem.
Still, I do think it is quite odd that the model cannot generate more than 27 masked tokens.
Is it possible that this sort of task was only ever done as pretraining? So then the model would always have had teacher forcing, which means that it would never have to "predict" so many tokens into the future for this task.
If your goal is to fill in many blanks, then you could adapt in one of the following ways:
- masking one token at a time in the full sentence
- starting with the long input sentence and then appending the correct outputs up to the current extra_id. So e.g. for extra_id_2 you would have (+/- the trailing token) :
long_input_sentence </s> <eid 0> of <eid 1> based <eid 2>
and then the model will generate the next tokens. At each point you would use the next <eid> as the stopping token (a rough sketch of this idea follows below).
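A rough sketch of that span-by-span idea, but seeding the decoder directly (as noted in the edit below) instead of extending the encoder input. It reuses the names from the script above; the seed text and loop length are illustrative, and this is not verified to cure the degeneration:
```python
import torch

seed_text = "<extra_id_0> of <extra_id_1> based on <extra_id_2>"        # spans accepted so far
seed_ids = t5_tokenizer.encode(seed_text, add_special_tokens=False)
decoder_ids = torch.tensor([[t5_model.config.decoder_start_token_id] + seed_ids]).to(device)
stop_id = t5_tokenizer.convert_tokens_to_ids("<extra_id_3>")            # next sentinel closes the span

for _ in range(20):                                                     # greedy continuation
    out = t5_model(input_ids=encoded_tensor, decoder_input_ids=decoder_ids, return_dict=True)
    next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
    if next_id.item() == stop_id:
        break

print(t5_tokenizer.decode(decoder_ids[0]))
```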
-- edit --
I tried the second method and it does not seem to work. It might be because this is an encoder-decoder model, so we need to be seeding the decoder with each additional generation, rather than extending the input sequence. This is possible in the model's forward method but I don't know how to do it with generate.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,841 | closed | Don't warn that models aren't available if Flax is available. | # What does this PR do?
Disables the "Neither PyTorch nor TensorFlow >= 2.0 have been found" warning if Flax has been found.
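A simplified sketch of the intended guard, using the availability helpers from `transformers.file_utils` (the message wording here is illustrative):
```python
from transformers.file_utils import is_flax_available, is_tf_available, is_torch_available

if not (is_torch_available() or is_tf_available() or is_flax_available()):
    print(
        "None of PyTorch, TensorFlow >= 2.0 or Flax have been found. "
        "Models won't be available and only tokenizers, configuration "
        "and file/data utilities can be used."
    )
```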
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-30-2020 01:07:13 | 11-30-2020 01:07:13 | |
transformers | 8,840 | closed | Diverse number of return sequences for greedy search and sampling generation | # What does this PR do?
A new option, `diverse_sequences`, is proposed for cases when one wants genuinely different sequences to be generated (a conversational bot, for example). For greedy search, each returned sequence starts from one of the top `num_return_sequences` first tokens. For sampling, the `num_return_sequences` first tokens are drawn from a multinomial distribution.
The default `diverse_sequences=False` leaves generation exactly as it was before this PR.
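For context, this is how the proposed option would be invoked; the `diverse_sequences` flag only exists on this PR's branch, not in a released version:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=20,
    num_return_sequences=4,
    diverse_sequences=True,   # proposed here: the 4 returned sequences start from different first tokens
)
```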
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
GPT2: @patrickvonplaten
Text Generation: @TevenLeScao
T5: @patrickvonplaten
| 11-29-2020 20:52:12 | 11-29-2020 20:52:12 | Hey @LSinev,
thanks for your PR! Since the `generate()` refactor we are really trying not to add any use-case-specific code to the individual `generate()` methods. Previously this quickly led to unmaintainable code (especially once lots of those `if` statements pile up), so we can't merge the PR as it is in this state. It would be ideal if we could just add a new `LogitsPreprocessor` class or a `LogitsWarperClass`.
If this is not sufficient, we have to think a bit more about how to add this PR.
One thing, I don't really understand is how greedy search with `diverse_sequences=True` is different from Beam Search with `num_return_sequences > 1` -> it seems to be the same thing for me...Also could you add some links/pointers (paper, blog, other codebase) that makes use of this method? <|||||>> It would be ideal if we could just add a new LogitsPreprocessor class or a LogitsWarperClass.
Ok. I will check if it is possible (but this can move `if` statements inside, as I have to check processing of first token somehow).
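For what it's worth, a rough sketch of how the first-token handling could live in such a class. Depending on the `transformers` version, the base class may need to be imported from `transformers.generation_logits_process`, and wiring the processor into `generate()` is not shown; the class itself is an assumption, not released API:
```python
import torch
from transformers import LogitsProcessor


class DistinctFirstTokenProcessor(LogitsProcessor):
    """At the first generation step, force row i of the batch to start with its own chosen token."""

    def __init__(self, first_token_ids, prompt_length):
        self.first_token_ids = first_token_ids    # one token id per expanded sequence in the batch
        self.prompt_length = prompt_length

    def __call__(self, input_ids, scores):
        if input_ids.shape[-1] == self.prompt_length:      # only touch the first generated token
            mask = torch.full_like(scores, float("-inf"))
            rows = torch.arange(scores.shape[0], device=scores.device)
            mask[rows, self.first_token_ids] = 0.0
            scores = scores + mask
        return scores
```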
> how greedy search with `diverse_sequences=True` is different from Beam Search with `num_return_sequences > 1` -> it seems to be the same thing for me...
Never thought about this. Will check.
> Also could you add some links/pointers (paper, blog, other codebase) that makes use of this method?
Nothing openly available as far as I know. Because of `transformers`' popularity, if such a possibility is not implemented, few developers will try these ideas. The main use case is additional ranking of generated sequences. As it stands, nothing prevents the output from containing exactly the same sequences. It can also be used with probabilities of final sequences from the second head of GPT2DoubleHeadsModel, for example (https://github.com/huggingface/transformers/issues/5164). <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,839 | closed | [Needs Discussion] [WIP] [Docs] Clean tokenizer doc api and add fast tokenizers | # What does this PR do?
Currently, most fast tokenizers do not have their docstrings in the doc. In this PR I want to add all the fast tokenizer docs. Also, I think that the main slow tokenizer `__call__` method should also be added to the docs. @sgugger - before proceeding with all other tokenizer docs, I'd like to hear your opinion on which functions should be included in the docs and which should not.
I'd like to add the `__call__` function to all slow tokenizer docs as well as for all fast tokenizer docs. @sgugger what other functions do you think I should add to the fast tokenizer doc?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-29-2020 15:58:30 | 11-29-2020 15:58:30 | Yes, the person that added the fast sentencepiece tokenizers did not document them in the same PR...
As for the `__call__` method (and the `encode` one), the idea was to refer to the doc of the superclass, but it makes sense to have them directly accessible for each tokenizer. If we add it, we should also add `encode` I think.<|||||>> Yes, the person that added the fast sentencepiece tokenizers did not document them in the same PR...
> As for the `__call__` method (and the `encode` one), the idea was to refer to the doc of the superclass, but it makes sense to have them directly accessible for each tokenizer. If we add it, we should also add `encode` I think.
Ok great, I'll add `__call__` and `encode` to all tokenizers. Do you think I should add `encode_plus` and `batch_encode_plus` then as well? Or would that clutter the docs too much in your opinion?<|||||>I think `encode_plus` and `batch_encode_plus` are more or less deprecated and should not be in the docs. |
transformers | 8,838 | closed | RuntimeError: found torch.cuda.HalfTensor expected torch.cuda.FloatTensor while fine-tuning RAGSequence-base with custom data | ## Environment info
- `transformers` version: 4.0.0-rc-1 (master)
- Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.5
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
RAG: @patrickvonplaten, @lhoestq
## Information
I am fine-tuning RAGSequence-base according to the fine-tuning examples, along with a custom knowledge dataset (<10 lines)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Generated the faiss index and the custom embeddings.
2. Executed `finetune_rag.py` with the same parameters as the given shell script. (except reducing epoch to 1 for test)
3. After going through the specified epoch, it abruptly ends with the following runtime error.
```python
Traceback (most recent call last):
File "transformers/examples/rag/finetune_rag.py", line 512, in <module>
main()
File "transformers/examples/rag/finetune_rag.py", line 507, in main
trainer.test()
File "/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
File "/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 763, in __test_using_best_weights
results = self.fit(model)
File "/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 445, in fit
results = self.accelerator_backend.train()
File "/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 148, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 269, in ddp_train
model = self.trainer.precision_connector.connect(model)
File "/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect
model, optimizers = self.backend.connect(model, self.trainer.optimizers)
File "/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 37, in connect
model, optimizers = self.configure_apex(amp, model, optimizers, self.trainer.amp_level)
File "/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 102, in configure_apex
model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level)
File "/lib/python3.6/site-packages/apex/amp/frontend.py", line 358, in initialize
return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
File "/lib/python3.6/site-packages/apex/amp/_initialize.py", line 171, in _initialize
check_params_fp32(models)
File "/lib/python3.6/site-packages/apex/amp/_initialize.py", line 87, in check_params_fp32
name, param.type()))
File "/lib/python3.6/site-packages/apex/amp/_amp_state.py", line 32, in warn_or_err
raise RuntimeError(msg)
RuntimeError: Found param model.rag.question_encoder.question_encoder.bert_model.embeddings.word_embeddings.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model
before passing it, no matter what optimization level you choose.
```
To add, I am using Nvidia's Apex library for --fp16 training.
## Expected behavior
fine-tuning to complete, with generated models in the output directory.
Have spent a couple hours tinkering with no clue how to proceed, any help would be appreciated. Thank you! | 11-29-2020 15:17:06 | 11-29-2020 15:17:06 | I cannot really reproduce when just running the fine-tuning script @ritvik1512 could you provide us maybe with a complete code example to reproduce the error? A short colab would also be very helpful!
Also pinging @lhoestq here since he knows more about RAG fine-tuning<|||||>I haven't experienced issues using --fp16 for apex. A code example to reproduce the error would be welcome indeed<|||||>Thanks for the quick response!
I used the following parameters while running `finetune.py`
```python
python finetune_rag.py \
--data_dir $DATA_DIR \
--output_dir $OUTPUT_DIR \
--model_name_or_path $MODEL_NAME_OR_PATH \ #(facebook/rag-sequence-base)
--model_type rag_sequence \
--gpus 4 \
--fp16 \
--index_name custom \
--passages_path $PASSAGE_PATH \ #(an extremely short knowledge source with 2 entries for test)
--index_path $INDEX_PATH \ #(corresponding index)
--do_predict \
--do_train \
--n_val -1 \
--val_check_interval 0.25 \
--train_batch_size 4 \ #(reduced from 8 to avoid OOM)
--eval_batch_size 1 \
--max_source_length 128 \
--max_target_length 25 \
--val_max_target_length 25 \
--test_max_target_length 25 \
--label_smoothing 0.1 \
--dropout 0.1 \
--attention_dropout 0.1 \
--weight_decay 0.001 \
--adam_epsilon 1e-08 \
--max_grad_norm 0.1 \
--lr_scheduler polynomial \
--learning_rate 3e-05 \
--num_train_epochs 1 \ #(set at 1 for testing)
--warmup_steps 500 \
--gradient_accumulation_steps 1
```
I will try putting together a colab example very soon, in the meantime let me know if the above snippet helps, thanks!<|||||>@patrickvonplaten @lhoestq apologies for a delayed response. I did try putting together a Google Colab environment [(here)](https://colab.research.google.com/drive/1LlWS6tWWp1Oo4ygUE_J53bBTzUxWlF6J?usp=sharing) replicating my local and with **no** --fp16 I could not get the training to end, at around 20% of the way `tcmalloc: large alloc` warnings show up and the runtime resets with all RAM used.
Meanwhile, using --fp16, the training fails with an `IndexError: list index out of range` before it even starts.
By the way, I follow the exact code as above (with --fp16) on my local machine, and while the training there does go through all the epochs, it ends up giving the error mentioned at the top.
Therefore I am kind of at a loss as to where the issue is and how to proceed.
Please let me know if you see any possible ways out, thanks!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,837 | closed | Inconsistent PreTrainedTokenizerBase.pad argument default value & docstring | The docstring states the argument `padding` has a default of `False` but its default is `True`
docstring:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470
arg:
https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472
This causes issues when using `DataCollatorForLanguageModeling` with an already padded dataset as it resets the attention mask. | 11-29-2020 14:50:45 | 11-29-2020 14:50:45 | Seems to have been added in this commit:
https://github.com/huggingface/transformers/commit/f3065abdb8805f5beaed9ff1e92ce874e655f5c9#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1R2146-R2168
As part of merge https://github.com/huggingface/transformers/pull/6110<|||||>Hi, indeed! The docs should be changed to reflect the method signature. Do you want to open a PR? |
transformers | 8,836 | closed | Add utility function for retrieving locally cached models | # What does this PR do?
Adds implementation of a small utility function for retrieving a list of locally cached models, discussed in #8803
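For context, a sketch of what such a helper could look like with the current cache layout, where each cached file has a JSON sidecar recording its URL and etag. The function name is hypothetical and the actual implementation in this PR may differ:
```python
import json
import os

from transformers.file_utils import TRANSFORMERS_CACHE


def list_cached_model_weights(cache_dir=TRANSFORMERS_CACHE):
    """Hypothetical helper: return the URLs of cached *.bin weight files."""
    urls = []
    for fname in os.listdir(cache_dir):
        if fname.endswith(".json"):                      # metadata sidecar of a cached file
            with open(os.path.join(cache_dir, fname)) as meta_file:
                meta = json.load(meta_file)
            if meta.get("url", "").endswith(".bin"):
                urls.append(meta["url"])
    return urls
```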
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| 11-29-2020 13:32:41 | 11-29-2020 13:32:41 | No problem just pushed the fix now. <|||||>@LysandreJik Not sure what exactly caused the flax test suite on this to crash. It looks like the docker image crashed.<|||||>I think you need to run the `make style` command on your branch to fix the styling issues. The other test failures seem spurious.<|||||>Thanks! |
transformers | 8,835 | closed | cannot run "examples/language-modeling/run_mlm.py" | ## Environment info
- `transformers` version:
- Platform: cmd
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert)
The problem arises when using:
* [o] the official example scripts: (give details below)
* [] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [o] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
python examples/language-modeling/run_mlm.py
>>
Traceback (most recent call last):
File "examples/language-modeling/run_mlm.py", line 30, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
| 11-29-2020 11:48:57 | 11-29-2020 11:48:57 | You need to install the `datasets` library: https://github.com/huggingface/datasets
```
pip install datasets
```<|||||>Thanks! |
transformers | 8,834 | closed | Allow none-tensor fields in BatchEncoding | # What does this PR do?
This PR allows `BatchEncoding` to have non-tensor fields. This is useful, for example, when doing multi-task learning: I can add a task name (a string) to the batch and use it to decide the computation later on.
Without this PR, I cannot use `.to('cuda')` if there is str in the batch.
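A minimal illustration of the use case described above (the field name is arbitrary):
```python
import torch
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tokenizer(["a multi-task example"], return_tensors="pt")
batch["task_name"] = "nli"      # plain string used later to pick the head/loss

# Without this PR, .to() tries to move every value and fails on the string;
# with it, non-tensor fields would simply be carried along unchanged.
if torch.cuda.is_available():
    batch = batch.to("cuda")
```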
I don't know who to tag. @LysandreJik
| 11-29-2020 02:53:12 | 11-29-2020 02:53:12 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,833 | closed | AutoTokenizer can't find model/tokenizer config.json | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.19.0-8-amd64-x86_64-with-debian-10.3
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): XLM-Roberta, but I've noticed this with other models as well
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.run `tokenizer = AutoTokenizer.from_pretrained(REF_MODEL)`
2. restart the notebook, for example
3.run `tokenizer = AutoTokenizer.from_pretrained(REF_MODEL)` again
The following error occurs:
```
file xlm-roberta-large/config.json not found
---------------------------------------------------
OSError Traceback (most recent call last)
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
387 resume_download=resume_download,
--> 388 local_files_only=local_files_only,
389 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
961 # File, but it doesn't exist.
--> 962 raise EnvironmentError("file {} not found".format(url_or_filename))
963 else:
OSError: file xlm-roberta-large/config.json not found
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-11-b51d77705f76> in <module>
----> 1 tokenizer = AutoTokenizer.from_pretrained(f'{REF_MODEL}')
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
304 config = kwargs.pop("config", None)
305 if not isinstance(config, PretrainedConfig):
--> 306 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
307
308 if "bert-base-japanese" in str(pretrained_model_name_or_path):
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
331 {'foo': False}
332 """
--> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
334
335 if "model_type" in config_dict:
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
398 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
399 )
--> 400 raise EnvironmentError(msg)
401
402 except json.JSONDecodeError:
OSError: Can't load config for 'xlm-roberta-large'. Make sure that:
- 'xlm-roberta-large' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'xlm-roberta-large' is the correct path to a directory containing a config.json file
```
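If a corrupted cache entry turns out to be the cause, forcing a fresh download sometimes clears this kind of intermittent failure (a workaround sketch, not a root-cause fix):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", force_download=True)
```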
## Expected behavior
I think that it should load it smoothly
| 11-29-2020 00:39:51 | 11-29-2020 00:39:51 | Hello! Do you have a notebook handy so that we can see and try to reproduce the error?<|||||>Thanks for the quick reply!
I'm doing it in a private repo, I'll try to reproduce it and export it to a public repo asap :)<|||||>I just checked it again and it seems to work smoothly now 🤗
I'm closing this and if this happens again in the future, I'll open it again :)
My bad.
|
transformers | 8,832 | closed | [MT5] Add use_cache to config | # What does this PR do?
Because two PRs happened in parallel I forgot to add `use_cache` to the MT5 config. Thanks a lot for spotting it @jplu ! This model would have been pretty slow at generation for a while otherwise.
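A quick way to confirm the change once merged (the attribute and default are the ones described above):
```python
from transformers import MT5Config

config = MT5Config.from_pretrained("google/mt5-small")
print(config.use_cache)   # True -> past key/values are reused during generate(), keeping decoding fast
```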
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-28-2020 18:33:10 | 11-28-2020 18:33:10 | |
transformers | 8,831 | closed | logging.set_verbosity_error() displays dict instead of NotebookTrainingTracker | ## Environment info
- `transformers` version: 4.0.0-rc-1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): `distilbert-base-uncased`
The problem arises when using:
* [] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Following the [docs](https://huggingface.co/transformers/main_classes/logging.html#logging) I was looking for a way to turn off the warnings that `transformers` shows when loading a new model and believe that `logging.set_verbosity_error()` should do the trick.
However, when working in a _Jupyter notebook environment_, I find that setting the logging level to error produces unexpected output from the `Trainer`, namely that I get a `dict` like
```
{'loss': 0.33437405395507813, 'learning_rate': 1.308411214953271e-06, 'epoch': 0.9345794392523364}
{'eval_loss': 0.509843111038208, 'eval_matthews_correlation': 0.5011235129840701, 'epoch': 1.0}
{'epoch': 1.0}
```
instead of the progress bar and table of metrics:

I encountered the problem in my own experiments, but have also been able to reproduce it in @sgugger's tutorial on the GLUE tasks: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
The task is GLUE
## To reproduce
Steps to reproduce the behavior:
1. Set the logging verbosity to _error_ in the first cell of the notebook, i.e. with
```
# Turn off warnings
import transformers
transformers.logging.set_verbosity_error()
```
2. Load and encode dataset
3. Configure trainer
4. Run training
```
# With logging.set_verbosity_error() we lose the metrics table :(
trainer.train()
# Output
{'loss': 0.33437405395507813, 'learning_rate': 1.308411214953271e-06, 'epoch': 0.9345794392523364}
{'eval_loss': 0.509843111038208, 'eval_matthews_correlation': 0.5011235129840701, 'epoch': 1.0}
{'epoch': 1.0}
TrainOutput(global_step=535, training_loss=0.34615044994889016)
```
I have trimmed down @sgugger's tutorial to create a reproducible example: https://colab.research.google.com/gist/lewtun/21d44a20f94f480dfa2891f587323ffd/logging-bug-in-trainer.ipynb
## Expected behavior
Changing the logging level should not interfere with the display of the progress bar or table of metrics in Jupyter notebooks.
| 11-28-2020 15:25:58 | 11-28-2020 15:25:58 | Hi there, I'm afraid this is not a bug but how the default of `disable_tqdm` behaves. As shown in the [docs](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments), it defaults to `False` if your verbosity level is at warn or lower, `True` otherwise. So you need to pass along `disable_tqdm=False` to override the default when using this logging level.<|||||>Sorry for the silly oversight!
I saw the `disable_tqdm` flag but didn't realise that "progress bars" also referred to the table of metrics. Would a small clarification in the docs be warranted (I'm happy to do it)?<|||||>Yes we can definitely make the docstring clearer. |
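For completeness, a minimal sketch of the combination discussed in this thread (the `output_dir` value and the rest of the training setup are placeholders, not taken from the original notebook):
```python
import transformers
from transformers import TrainingArguments

transformers.logging.set_verbosity_error()  # hide the model-loading warnings

training_args = TrainingArguments(
    output_dir="out",     # placeholder
    disable_tqdm=False,   # keep the notebook progress bar and metrics table despite the low verbosity
)
```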
transformers | 8,830 | closed | Longform QA demo breaks after clearing cache | ## Environment info
Browser: Chrome, running on windows.
Running demo at: http://35.226.96.115:8080/
Linked from https://github.com/huggingface/transformers/tree/master/examples/longform-qa
### Who can help
@sgugger
## Information
I had clicked "Clear Cache" in the app, and when I did another search, errors came up in the browser. Subsequent runs also produce errors in the browser.
RuntimeError: Error in void faiss::gpu::allocMemorySpaceV(faiss::gpu::MemorySpace, void**, size_t) at gpu/utils/MemorySpace.cpp:26: Error: 'err == cudaSuccess' failed: failed to cudaMalloc 8987501056 bytes (error 2 out of memory)
Traceback:
File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/yacine/Code/transformers/examples/longform-qa/eli5_app.py", line 78, in <module>
passages, gpu_dense_index, es_client = load_indexes()
File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/yacine/Code/transformers/examples/longform-qa/eli5_app.py", line 58, in load_indexes
wiki40b_gpu_index_flat.add(wiki40b_passage_reps) # TODO fix for larger GPU
File "/home/yacine/anaconda3/lib/python3.7/site-packages/faiss/__init__.py", line 138, in replacement_add
self.add_c(n, swig_ptr(x))
File "/home/yacine/anaconda3/lib/python3.7/site-packages/faiss/swigfaiss.py", line 4245, in add
return _swigfaiss.GpuIndexFlat_add(self, arg2, x)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected to have the results of the ELI5 search | 11-28-2020 13:38:36 | 11-28-2020 13:38:36 | This seems to be an out-of-memory error! @yjernite might know what's up.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,829 | closed | Attempt to fix Flax CI error(s) | - Increased the tolerance when comparing Flax and PyTorch output (_~0.00058 on my dev box_)
- Removed the `jit` parametrization when running `test_multiple_sentences` because it leads to instabilities
- Introduced subtests making explicit what we're doing by enabling / disabling JIT. | 11-28-2020 11:36:14 | 11-28-2020 11:36:14 |
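A rough sketch of the kind of check these bullet points describe (not the actual test code; the tolerance and helper structure are illustrative only):
```python
import numpy as np

def check_flax_matches_pt(test_case, run_flax, pt_output, atol=1e-3):
    # run_flax(jit=...) is assumed to run the Flax model with or without jax.jit;
    # pt_output is assumed to be a PyTorch tensor.
    for use_jit in (False, True):
        with test_case.subTest(jit=use_jit):  # named subtests make the JIT / non-JIT paths explicit
            fx_output = run_flax(jit=use_jit)
            diff = np.abs(np.asarray(fx_output) - pt_output.detach().numpy()).max()
            test_case.assertLessEqual(diff, atol)  # relaxed tolerance (observed gap was ~6e-4)
```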
transformers | 8,828 | closed | token-classification: use is_world_process_zero instead of is_world_master() | Hi,
I just found some leftovers of the `is_world_master()` function in the token classification example.
As this method has been removed, the following error message is thrown when using the `do_prediction` option:
```bash
Traceback (most recent call last):
File "run_ner.py", line 394, in <module>
main()
File "run_ner.py", line 372, in main
if trainer.is_world_master():
AttributeError: 'Trainer' object has no attribute 'is_world_master'
```
This PR fixes it and uses the new `is_world_process_zero()` method instead! | 11-28-2020 01:14:40 | 11-28-2020 01:14:40 | /cc @sgugger :hugs: |
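As a sketch of the change this PR describes (the surrounding prediction-writing code is illustrative, not a verbatim excerpt of `run_ner.py`):
```python
def write_predictions(trainer, output_path, lines):
    # Old, now-removed API:  if trainer.is_world_master(): ...
    # Replacement:
    if trainer.is_world_process_zero():
        with open(output_path, "w") as writer:
            writer.writelines(lines)
```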
transformers | 8,827 | closed | error: sentencepiece 0.1.94 is installed but sentencepiece==0.1.91 is required by {'transformers'} | ## Environment info
- `transformers` version: 3.5.1
- Platform: google cloud
- Python version: 3.7
- PyTorch version (GPU?): TPU, 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
examples/distillation: @VictorSanh
examples/seq2seq: @patil-suraj
## Information
I am using the requirements.txt file inside examples, and when installing it, it fails with this error:
error: sentencepiece 0.1.94 is installed but sentencepiece==0.1.91 is required by {'transformers'}
Here is my setup script, as specified in the requirements of transformers 3.5.1 for running the examples. Thank you.
```
install_requires=[
'sentencepiece != 0.1.92',
'transformers==3.5.1',
'tensorboard',
'scikit-learn',
'seqeval',
'psutil',
'sacrebleu',
'rouge-score',
'tensorflow_datasets',
'pytorch-lightning==1.0.4',
'matplotlib',
'git-python==1.0.3',
'faiss-cpu',
'streamlit',
'elasticsearch',
'nltk',
'pandas',
'datasets',
'fire',
'pytest',
'conllu',
'tf-nightly',
'google-cloud-storage',
],
```
| 11-27-2020 21:50:33 | 11-27-2020 21:50:33 | The error tells you it wants SentencePiece 0.1.91, can you install that version instead?
```
pip install -U sentencepiece==0.1.91
```
We should update the requirements.txt file to reflect this. Do you want to open a PR?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
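To illustrate the fix suggested in the reply above, a sketch of how the pinned entry in the setup script could look (only the relevant entries are shown; the versions follow the error message in this issue):
```python
install_requires = [
    "sentencepiece==0.1.91",  # transformers 3.5.1 expects exactly this version
    "transformers==3.5.1",
    # ... remaining entries unchanged ...
]
```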
transformers | 8,826 | closed | [CI] implement job skipping for doc-only PRs | Let's save some time and money. This PR:
* [x] skips most jobs when the only change is in `\.(md|rst)$` files.
I tested this with various types of files and it seems to do the right thing. But if we merge let's monitor that I didn't miss some use case and we end up with broken master if some CI jobs didn't run.
- pros: obvious
- cons: I don't like that the skipped CI job status appears as completed normally, even though it didn't quite run. Let's hope circleci comes up with some better way of indicating that the job was skipped.
---------------
how it was done:
`git merge-base --fork-point master` to get the commit range didn't work at all, even though that's what we use for the `fixup` `Makefile` target.
Other suggestions I found didn't work either.
At the end I found https://circleci.com/docs/2.0/pipeline-variables/ to get the correct commit range:
```
git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >>
```
and now all is good.
**credits**: the `circle step halt` idea comes from this blog https://yu-ishikawa.medium.com/reusable-a-circleci-command-to-halt-if-no-changed-target-files-e87c6b0af82b
@LysandreJik, @sgugger | 11-27-2020 21:43:37 | 11-27-2020 21:43:37 | That's a great idea!<|||||>Btw this is why GitHub Actions are so cool: this is built-in<|||||>ok, this is good now.<|||||>> Btw this is why GitHub Actions are so cool: this is built-in
This among others :) the only pain point is the lack of anchors, which would be a godsend given our current YAML files.<|||||>There is a problem with the test, which seems to skip the tests as soon as there is at least one doc file, even if code files have also been modified. #8852 gives an example of this happening.
I have commented out the line `skip-job-on-doc-only-changes` in [this commit](https://github.com/huggingface/transformers/commit/08e707633ca5e48b3c0d068522ccac36e623b09d) to have the CI work while waiting for a fix on your side @stas00.<|||||>@sgugger, can you show me a a specific example of this behavior? Looking at PR you linked and other commits since the skip rule has been merged I don't see this happening.
For example, https://github.com/huggingface/transformers/commit/75f8100fc77e4124aa643c45c4a4943cd5ee47cd has both docs and code files and it has the skip rule activated - and none of the jobs were skipped, e.g. here is one job from that PR:
https://app.circleci.com/pipelines/github/huggingface/transformers/16543/workflows/ac01a5da-d9dc-4e23-8ee7-9e562263f030/jobs/128245
Thank you!
p.s. while we are sorting this new one out, you don't need to comment out all the invocation of the command, just the 'circleci halt' line in the command.<|||||>Arg, I force-pushed my rebase so we lost the commit where the problem was happening. I assure you that PR had all tests skipped except `build_doc`.<|||||>I totally believe you what you saw, but this must have been some edge case that I haven't accounted for and I need to see what it was. As I have shown in the 2 links in the comment above yours the rule did work correctly for a PR with 2 docs and one py file, so what you suggested that it skips as soon as there is at least one doc doesn't seem to be the case.
Do you know at least what files were involved in that commit where the undesired skip has occurred? or perhaps it's still in your local branch?
The logic is simple:
1) it gets the modified file names
2) it then removes any docs that match `\.(md|rst)$`
3)
- a. if there are any files left, we have non-docs - normal behavior ensues
- b. if there are no files left, we have only docs - and it skips
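The same three steps, sketched in Python purely for illustration (the real implementation is the shell one-liner in `.circleci/config.yml`):
```python
import re

def only_docs_changed(changed_files):
    # 1) take the modified file names, 2) drop anything matching .md/.rst,
    # 3) skip the job only if nothing is left.
    non_docs = [f for f in changed_files if not re.search(r"\.(md|rst)$", f)]
    return not non_docs
```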
<|||||>I've deleted that branch but there was only one commit with the 9 files you see in the PR.<|||||>OK, I created a PR https://github.com/huggingface/transformers/pull/8853 with the exact same files by just reverting your commit 553029909620455e040a49032a9c45f6a5f0cd52 for the sake of the test (not intending to merge it) - plus `.circleci/config.yml` to restore the skipping rule - none of the checks has been skipped.
Moreover, you said:
> There is a problem with the test, which seems to skip the tests as soon as there is at least one doc file
but your commit had no doc files.
Do you want to try to re-enable the rule and monitor to catch a potential edge case that you saw but we no longer know what it was? And if you run into it and I will monitor too, let's make sure to save the branch so that we could reproduce the problem.
To quickly disable the skip just this line needs to be commented out:
https://github.com/huggingface/transformers/blob/dfec84db3fdce1079f01f1bc8dfaf21db2ccaba1/.circleci/config.yml#L19
The only tricky part with monitoring is that it won't affect older branches that weren't rebased or created after the skip was enabled.
Oh and I apologize if this causes a temporary potential hurdle in normal PR process - hopefully we will sort it out quickly and overall things will be better in the long run.<|||||>If that helps, [here](https://github.com/huggingface/transformers/pull/8850/commits/5170e5381b9fccdfb9405d665ecee0515efc6453) is another commit with rst, md and py files where the tests were all skipped:
The corresponding PR is #8850<|||||>Ah, great! That helped a lot, @sgugger - Thank you for finding it!
It appears to be a bug in circleCI (https://app.circleci.com/pipelines/github/huggingface/transformers/16541/workflows/17b20230-8d7c-4b36-813c-2681f2c8a977/jobs/128232)
It's missing `<< pipeline.git.base_revision >>` in
```
if git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >> | egrep -qv '\.(md|rst)$'
```
resulting in:
```
if git diff --name-only ...5170e5381b9fccdfb9405d665ecee0515efc6453 | egrep -qv '\.(md|rst)$'
```
and hence fails the test. (it's missing the first hash before `...`).
Back to the drawing board.<|||||>Can you think of why these few commits could be missing `pipeline.git.base_revision` - was there something special about those?<|||||>I have no idea, but if CircleCI is flaky like this, I guess we won't be able to use this to determine whether the commit contains only doc files or not...<|||||>We still can, by checking whether `pipeline.git.base_revision` is defined, and never skip if it's not. If that's the best we can do, it won't always save resources.
But let me research first why is it not defined at times.<|||||>Workaround: https://github.com/huggingface/transformers/pull/8853 |
transformers | 8,825 | closed | Model parallel tests should return, not pass in non model parallel se… | …ttings.
`pass` does not skip the test; `return` does. | 11-27-2020 21:39:11 | 11-27-2020 21:39:11 | |
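A small sketch of the difference (the GPU-count guard is illustrative, not the actual test):
```python
import unittest
import torch

class ModelParallelTest(unittest.TestCase):
    def test_model_parallel_forward(self):
        if torch.cuda.device_count() < 2:
            return  # leaves the test early; a bare `pass` here would fall through and keep executing
        self.assertTrue(True)  # placeholder for the real model-parallel assertions
```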
transformers | 8,824 | closed | suggest a numerical limit of 50MB for determining @slow | This is a follow up to https://github.com/huggingface/transformers/issues/7250 which adds a guideline to when making a test `@slow` based on the download requirements if any.
The suggested value is >50MB, and we can adjust it later if it's too large or small.
Fixes: https://github.com/huggingface/transformers/issues/7250
@LysandreJik, @sgugger | 11-27-2020 20:48:51 | 11-27-2020 20:48:51 | |
transformers | 8,823 | closed | [s2s trainer] fix DP mode | This PR:
* [x] fixes https://github.com/huggingface/transformers/issues/8822 which currently crashes under multigpu and w/o an explicit ddp mode
* [x] adds tests
* [x] makes `finetune_trainer.py` executable/runnable
@patrickvonplaten, @sgugger | 11-27-2020 20:03:24 | 11-27-2020 20:03:24 | moving the discussion out of the review commentary as it disappears as soon as it's resolved, so it's best to discuss it in the normal comments as this is what this PR is trying to solve.
-------------
Oh, I see - thank you for catching that. So I didn't solve the actual problem, but just had the luck of hiding it under the carpet.
The problem is that the `distributed=...` is wrong here - it is currently coded to expect ddp when `distributed==True` and not dp. dp doesn't have `get_world_size()`/etc and so it fails, so should that arg be called `dpp` instead of `distributed`? But in any case the correct solution is then:
```
self.train_dataset.make_sortish_sampler(
self.args.per_device_train_batch_size, distributed=self.args.local_rank != -1)
```
or re-coded to handle dp too? I don't know the initial intention - should it support `sortish_sampler` under dp or not?
we need to know whether to:
1. recode `make_sortish_sampler` to support dp (can't use `get_world_size()`/etc)
2. recode `make_sortish_sampler` to change its `distributed` arg to `dpp`, so that it only does the special case for dpp.
And somewhat unrelated to the actual bug, I'd like to repeat the request at https://github.com/huggingface/transformers/issues/8822 - let's have a simple flag so that the downstream code knows which mode it is under and not via checking ranks and n_gpus which is very confusing and error-prone.<|||||>Here is where the problem happens with dp:
https://github.com/huggingface/transformers/blob/9995a341c9d68a9963d86c506d17330b3ad813f9/examples/seq2seq/utils.py#L361-L368
So `dist.is_available()` returns `True` under `dp`, but `dist.get_world_size()` fails, since it only works under `dpp` and requires `torch.distributed.init_process_group()` which doesn't get called under `dp`.<|||||>In `DataParallel` mode, you don't need to do anything to your datalaoder (only in DistributedDataParallel where you need to split the batches across the various processes somehow) so you should make a regular datalaoder in that case.
In general, the only proper way to detect if you are in distributed data parallel is to look at the test `local_rank != -1` as `torch.distributed` can give you false information there. I agree it would all be much easier if the training arguments contained something that directly gives the distributed environment. <|||||>> In `DataParallel` mode, you don't need to do anything to your datalaoder (only in DistributedDataParallel where you need to split the batches across the various processes somehow) so you should make a regular datalaoder in that case.
Great, so then should we change the signature to make it clear ddp is wanted and not any distributed:
```
- def make_sortish_sampler(self, batch_size, distributed=False, shuffle=True, **kwargs):
+ def make_sortish_sampler(self, batch_size, ddp=False, shuffle=True, **kwargs):
```
and adjust the invocations accordingly?
> In general, the only proper way to detect if you are in distributed data parallel is to look at the test `local_rank != -1` as `torch.distributed` can give you false information there. I agree it would all be much easier if the training arguments contained something that directly gives the distributed environment.
Great. Should we create a feature request for that?
<|||||>I think there is a misunderstanding on the terminology: `DataParallel` is not distributed: distributed means launching several processes with the same script. The package `torch.distributed` does not return anything useful for `DataParallel` and ddp stands for *distributed* data parallel, so leaving that argument as distributed seems better to me.
> Great. Should we create a feature request for that?
We can do that, yes.<|||||>If you stick to the specific implementation, yes, dpp is the only distributed mode. But logically it doesn't make sense. DP is just as distributed as DPP, just isn't using the `torch.distributed`, so it's not a very clear distinction and will lead to such confusions all over.
As an example if you look at this function usage pattern it's mostly `dataset.make_sortish_sampler(batch_size, distributed=self.hparams.gpus > 1)` which clearly implies for any multi gpu mode (and erroneously so).<|||||>I disagree, in the sense that code use PyTorch should stick with the PyTorch naming conventions. They chose to have a not distributed `DataParallel`, so we should honor that in our naming as well. In Distributed data parallel, you have to use a `DistributedSampler` (but not in `DataParallel`) etc. Those are all *parallel* modes (as you're training with multiple GPUs) but only one is *distributed*.<|||||>That is a reasonable choice to follow. I'm only flagging how this leads to coding errors when a developer assumes that n_gpu> 1 == ddp. So perhaps some extra support is needed there.<|||||>Let's see how it goes once we add the "distributed_env" to `TrainingArguments`!<|||||>@sgugger, please kindly review at your convenience - I addressed all the issues you have raised - all should be good - CI failures are unrelated. Thank you!<|||||>> Perfect, thanks a lot for humoring me and my annoying comments :-)
On the contrary, your comments were excellent and to the point.
I was just slow on getting your point of view since in my mind if we solve a problem on multiple gpus it's distributed across multiple-gpus, regardless of the way it's implemented. But here distributed means distributed across multiple processes. Different semantics.<|||||>So this is probably wrong too:
```
# examples/seq2seq/finetune.py:
sampler = dataset.make_sortish_sampler(batch_size, distributed=self.hparams.gpus > 1)
```
But that's code based on PL.
@patil-suraj, maybe you could have a look when you start working on this one? I suspect that it should do a different check for distributed and not check the number of gpus. Let me know if you prefer that I open a separate issue.
<|||||>Dunno how PL works.<|||||>> Let's see how it goes once we add the "distributed_env" to `TrainingArguments`!
Added a feature request: https://github.com/huggingface/transformers/issues/8858<|||||>Thank you HuggingFace Team and @stas00 , I cannot express how much I appreciate your efforts. |
transformers | 8,822 | closed | [s2s finetune_trainer] a mess around distributed | Currently `examples/seq2seq/finetune_trainer.py` bails with multigpu and w/o an explicit ddp mode invoked w/ `-m torch.distributed.launch` - it tries to get the world size thinking it's under ddp, when it's actually under dp.
```
Traceback (most recent call last):
File "finetune_trainer.py", line 310, in <module>
main()
File "finetune_trainer.py", line 254, in main
trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 595, in train
train_dataloader = self.get_train_dataloader()
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 390, in get_train_dataloader
train_sampler = self._get_train_sampler()
File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/seq2seq_trainer.py", line 124, in _get_train_sampler
self.train_dataset.make_sortish_sampler(
File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/utils.py", line 156, in make_sortish_sampler
return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-deepspeed/examples/seq2seq/utils.py", line 368, in __init__
num_replicas = dist.get_world_size()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 671, in get_world_size
return _get_group_size(group)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 233, in _get_group_size
default_pg = _check_default_pg()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 225, in _check_default_pg
raise RuntimeError("Default process group is not initialized")
RuntimeError: Default process group is not initialized
```
The problem is that the HF trainer doesn't have a very clear way of distinguishing the different dist modes. There are a bunch of different checks in different places and no simple single flag to tell the downstream code which mode it is in, leading to such bugs.
I sent a fix PR, with:
```
distributed=(self.args.n_gpu > 1 and self.args.local_rank != -1),
```
but it just shows how fragile the downstream code is because there is no loud and clear flag :(
I propose to set a new attribute `self.distributed_mode={None|dp|ddp}`, perhaps in `_setup_devices` in `training_args.py`?
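Roughly something like this (a sketch only; the attribute name and values are part of the proposal, not an existing API):
```python
def parallel_mode(args):
    # `local_rank` and `n_gpu` come from TrainingArguments.
    if args.local_rank != -1:
        return "ddp"   # launched via `-m torch.distributed.launch`
    if args.n_gpu > 1:
        return "dp"    # single process, model wrapped in torch.nn.DataParallel
    return None        # single GPU or CPU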
@patrickvonplaten, @sgugger, @LysandreJik
| 11-27-2020 20:02:21 | 11-27-2020 20:02:21 | Hi, thanks @stas00 , I would be grateful to integrate this fix, I am currently dealing with this issue and using this script. thanks.<|||||>Once https://github.com/huggingface/transformers/pull/8823 is merged (hopefully Monday), it will be in master, but feel free to use that branch until then.
<|||||>Awesome, I am really thankful, it helps me a lot.
<|||||>@rabeehk, here is a quick update - as @sgugger pointed out my fix wasn't correct if you wanted the sortish_sampler and we are trying to figure out how to fix it correctly. If all you want is to use sortish_sample with dpp only then the correct fix is most likely this:
```
self.train_dataset.make_sortish_sampler(
self.args.per_device_train_batch_size, distributed=self.args.local_rank != -1)
```
please watch the development in https://github.com/huggingface/transformers/pull/8823<|||||>OK, the right fix has been merged into master https://github.com/huggingface/transformers/pull/8823 so just update the master and you should have it working, @rabeehk
|
transformers | 8,821 | closed | Shared vocabulary with EncoderDecoderModel | # π Feature request
Currently, two separate models are instantiated as encoder/decoder when using this model class. It would be useful in a lot of fine-tuning applications (i.e. summarization) to share the same embeddings between the encoder / decoder classes -- is this something that could be supported with the library? | 11-27-2020 19:46:17 | 11-27-2020 19:46:17 | I believe the `EncoderDecoderConfig` class has an argument `tie_encoder_decoder` which can be used to share weights between the encoder and decoder.
Is this what you're looking for?
@patrickvonplaten <|||||>Thanks @LysandreJik; it's close enough to what I'm looking for. Closing this issue.<|||||>Hey @bayanbatn,
We are currently working on a function (see #8224) that ties the encoder and decoder word embeddings automatically... for now, what one can do is simply set it yourself via
```python
model.decoder.word_embeddings = model.encoder.word_embeddings
``` |
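For reference, a sketch of how the `tie_encoder_decoder` flag mentioned earlier in this thread could be set on an `EncoderDecoderConfig`; whether it ties the weights automatically depends on the library version, so the manual assignment above remains the safe fallback:
```python
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

enc = BertConfig()
dec = BertConfig(is_decoder=True, add_cross_attention=True)
config = EncoderDecoderConfig.from_encoder_decoder_configs(enc, dec)
config.tie_encoder_decoder = True  # share encoder/decoder weights, per the comment above
model = EncoderDecoderModel(config=config)
```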
transformers | 8,820 | closed | Update README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 11-27-2020 19:33:22 | 11-27-2020 19:33:22 | [https://drive.google.com/file/d/1DDIs0MsvmpJU402o1v7eM-8BS3ACbcKV/view?usp=drivesdk]() <|||||>#
Duplicate of # |
transformers | 8,819 | closed | [Examples] fix few typos in help messages and arguments | # What does this PR do?
- fix typos in help message
- consistently use `gpus`, instead of `n_gpu` (followed by #6315)
(it's not working because not fully converted)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@VictorSanh | 11-27-2020 14:33:17 | 11-27-2020 14:33:17 | Hi @baeseongsu
thanks for opening this. I am actually working on a fairly big PR to revamp these distillation scripts. I'll integrate your modifications directly there to have everything in a single place!
I **hope** to do this by end of week
Victor |
transformers | 8,818 | closed | Slower training time per batch for increasing dataset size | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0-rc-1
- `tokenizers` version: 0.9.4
- `datasets` version: 1.1.3
- Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.2
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, V100
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @sgugger @LysandreJik
## Information
Model I am using: BERT, language modelling task.
The problem arises when using:
* [ x] the official example scripts: /examples/language-modeling/run_mlm.py
The tasks I am working on is:
* [ x] my own task or dataset: BERT MLM pre-training with own dataset
I need to pre-train a BERT base model from scratch with own dataset.
The dataset has millions of lines, each line is a short document.
I am experiencing slower training time given increasing size of the dataset.
To debug the problem, I have already tried to split original datasets into several smaller files (issue https://github.com/huggingface/datasets/issues/610), switch on/off the caching mechanism, but no improvements.
What could it be? I am not able to find the origin of the problem. Thanks a lot!
## To reproduce
Steps to reproduce the behavior:
1. run_mlm.py with increasing dataset sizes.
2. This results in slower training time for each batch.
Below some stats, each batch is 128, and I am running run_mlm.py with --line_by_line option. The increase seems not linear.
| Number lines in dataset | seconds per batch |
|---|---|
| 100k | 0.16 |
| 10M | 0.25 |
| 100M | 3 |
BERT parameters:
```
Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 18,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"type_vocab_size": 2,
"vocab_size": 32000
}
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect that the training time for each batch will remain constant given different datasets' sizes.
| 11-27-2020 14:20:12 | 11-27-2020 14:20:12 | Mmm, this looks like a problem in Datasets, @lhoestq @thomwolf ?<|||||>Could you check if the same speed differences appear if you're iterating through the `tokenized_datasets` ?
For example doing
```python
for i in range(0, len(tokenized_datasets), batch_size):
batch = tokenized_datasets[i:i + batch_size]
```
If so, please open an issue on the Datasets repo so we can investigate<|||||>I have run the following code for both dataset 10M rows and 100M rows, and the speed is a bit slower for the 100M dataset compared to the 10M dataset.
However, when training BERT, I have much higher differences (e.g. in the timing above is 3s Vs. 0.25s).
```python
print("--- Starting test for 10M batches ---")
num_batches = 10000000
batch_size = 128
import time
start_time = time.time()
for i in range(0, num_batches, batch_size):
batch = tokenized_datasets['train'][i:i + batch_size]
end_time = time.time() - start_time
print("--- %3.3f seconds per 10M batches ---" % (end_time))
```
| Number of lines in dataset | Seconds |
|---|---|
| 10M | 241 |
| 100M | 303 |<|||||>It looks like the major slowdown comes from somewhere else then.
It could either be from the PyTorch DataLoader but my best guess would be the PyTorch RandomSampler.
For big datasets the sampler takes a ton of RAM as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-731725078, and it could slow down your training significantly.
Could you run the same experiment with the dataloader and the random sampler to make sure ?<|||||>To debug I have replaced RandomSampler with SequentialSampler in Trainer class, https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py at line 374.
With SequentialSampler it works as expected, with no slower time.
How to fix RandomSampler now? <|||||>`RandomSampler` comes from PyTorch, so you can open an issue there. You will most likely need to implement your own random-ih sampler that goes faster than PyTorch.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>This is still an issue today. I got a 10x speed up in training on a larger dataset, by switching to the sequential sampler (due to the RandomSampler bottleneck). Monkeypatch here to switch to the sequential sampler in case it's useful:
```python
import transformers.trainer as trainer
from transformers.trainer import SequentialSampler
def sampler_monkey_patch(dataset, generator):
return SequentialSampler(dataset)
trainer.RandomSampler = sampler_monkey_patch
```
Versions this was used with:
```
transformers==4.26.1
pytorch==1.13.1
datasets==2.10.1
python==3.9.16
```
To detect if this is an issue for you, it's useful to compare the rate at which samples are processed (and gpu utilization), for a small dataset slice versus a large one.<|||||>You can also use an IterableDataset :
```python
train_dataset = train_dataset.to_iterable_dataset()
```
PS : pass `num_shards=` with a factor of `num_workers` to distribute the data evenly across DataLoader workers
PS2 : for distributed, see the "Distributed" section at https://huggingface.co/docs/datasets/use_with_pytorch |
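Pulling the workarounds above together, a subclass-based variant of the sequential-sampler trick (a sketch; `_get_train_sampler` is an internal `Trainer` hook whose signature may differ between versions):
```python
from torch.utils.data import SequentialSampler
from transformers import Trainer

class SequentialSamplerTrainer(Trainer):
    # Same idea as the monkeypatch above, but scoped to a single Trainer subclass.
    def _get_train_sampler(self):
        return SequentialSampler(self.train_dataset)
```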
transformers | 8,817 | closed | cache reuse | I have downloaded wikitext by `load_dataset("wikitext", "wikitext-2-raw-v1")`, and get the cache file in `.cache/huggingface/datasets`,
then I tried to copy the `huggingface/datasets` folder to the lab server to reuse it, but that fails.
What's the proper way to reuse the cache downloaded on another pc? | 11-27-2020 14:09:01 | 11-27-2020 14:09:01 | You should use [`save_to_disk`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=save#datasets.DatasetDict.save_to_disk) and [`load_from_disk`](https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=save_to_disk#datasets.load_from_disk):
```python
import datasets
# Will download and preprocess the dataset
data = datasets.load_dataset("wikitext", "wikitext-2-raw-v1")
# Save it in a folder that you can copy
data.save_to_disk("PATH/TO/FOLDER")
# On the other machine - reload the ready to use dataset from the copied folder
data = datasets.load_from_disk("PATH/TO/FOLDER")
```<|||||>thanks! |
transformers | 8,816 | closed | [Flax test] Add require pytorch to flix flax test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes Flaky CI.
Currently the flax tests are failing which IMO is because of a missing `require_torch` in the flax test.
This PR should fix it -> @mfuntowicz could you take a look?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 11-27-2020 13:23:52 | 11-27-2020 13:23:52 | |
transformers | 8,815 | closed | Fixed typo in README.md of bert-base-greek-uncased-v1 | # What does this PR do?
The tokenizer called at the input_ids var of Example 2 is currently encoding text_1. This PR is changing the input to text_2.
Motivation and context for this change: I am considering using this model for a uni assignment and it took me a while to understand why the example code was yielding the wrong results. Hopefully, the next person who is eager to try these examples will not get confused by that typo.
- [x] This PR fixes a typo or improves the docs.
documentation: @sgugger
| 11-27-2020 12:51:11 | 11-27-2020 12:51:11 | Thanks! Looks good to me, just pinging @iliaschalkidis for information/validation<|||||>Wow, thanks for the fix @mdermentzi. If I recall correctly, I was trying things in an initial scratch python script that was using a universal `text` variable back to back and I though it would be better to rename those in 3 different variables to make it clearer. It seems I was quite unwary... <|||||>No worries @iliaschalkidis! Just a small typo. ;) Thank you for publishing this model. I am so happy I can play with it for my current uni project. |
transformers | 8,814 | closed | I can not find a Linear layer in the end of Multi-Head Attention layer like Figure 2 right, could someone help me solve it | 11-27-2020 12:32:46 | 11-27-2020 12:32:46 | A bit more context would be appreciated here |
|
transformers | 8,813 | closed | Fix check copies | # What does this PR do?
The target `make quality` fails when generating the new model table when at least one of the optional packages TF, PT or Flax is not installed. We should not force to have everything installed to do a simple quality check, this can be added to an extra target such as `make full-quality` or something like this.
The fix does a condition checking and replace the raised error by a simple warning. | 11-27-2020 10:15:23 | 11-27-2020 10:15:23 | Replacing the error by a warning defeats the purpose of the check as it will make the CI pass when it should fail. We can see if we want to move it in another command, I'm just afraid it will make the failures in the CI (and what the corresponding fixes are) less understandable to the user.<|||||>Let me think about this over the weekend, I'll try to find a solution by Monday :-)<|||||>Awesome! I let you handle this, so I'm closing the PR. |
transformers | 8,812 | closed | Ctrl for sequence classification | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik
-->
| 11-27-2020 08:58:02 | 11-27-2020 08:58:02 | Same as GPT-2, this would benefit from also handling padding on the left; I'll work on this in another PR. |
transformers | 8,811 | closed | HuggingFace pipeline sentiment analysis giving wrong results. | I am just starting with hugging face and following the official doc. When using sentiment analysis pipeline I am getting incorrect output. I am not sure what's the reason behind it.
```
from transformers import pipeline
classifier=pipeline('sentiment-analysis')
text='This is just a statement'
a=classifier(text)
print(a)
```
Giving output as:
[{'label': 'NEGATIVE', 'score': 0.9583144783973694}]
I have changed the input with different sentences it is having problem with neutral statements but is able to predict the positive and negative statements, which have words like {"awesome","good","bad"}.
Statements I have tried and respective output:
1. 'Today is thursday' {'label': 'POSITIVE', 'score': 0.987697184085846}
2. 'Give me my water bottle' {'label': 'NEGATIVE', 'score': 0.855629563331604}
3. 'Its raining outside' {'label': 'POSITIVE', 'score': 0.8293998837471008}
4. 'You are awesome' {'label': 'POSITIVE', 'score': 0.9998681545257568}
5. 'I hate you' {'label': 'NEGATIVE', 'score': 0.9991129040718079}
| 11-27-2020 06:17:15 | 11-27-2020 06:17:15 | Hey @vishwa30 - what exactly do you mean by incorrect output? -> The model obviously doesn't always classify the input correctly. Since this issue doesn't seem to be a bug, could you maybe look in the forum whether people asked a similar question before and if not post this one there? :-)
https://discuss.huggingface.co/<|||||>@patrickvonplaten Sure! I will do that. Thanks!! |
transformers | 8,810 | closed | typo | s/FSTM/FSMT/ | 11-26-2020 22:38:15 | 11-26-2020 22:38:15 | |
transformers | 8,809 | closed | [model loading] remove pointless log entries | This PR removes 2 IMO-pointless log entries that literally say "all is well".
Log entries are useful for debugging problems, but just add to the noise that make more difficult to see useful entries, when they state the obvious, no?
@sgugger, @LysandreJik
| 11-26-2020 22:34:22 | 11-26-2020 22:34:22 | @LysandreJik, I can totally see your point.
But this one says the same thing twice:
```
f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n"
f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at {pretrained_model_name_or_path}.\n"
```
They aren't exactly the same but they say twice that there was no problem in loading the model
So as with your excellent example the following would be fitting:
```
"Loading model {name} at {path}"
(any exceptions go here)
"Model loaded"
```
Would you be open if changed this PR to follow this strategy?<|||||>They don't really say the same thing, I would see the first one as:
```
Checking if all checkpoints weights were used in the model ...
All checkpoint weights were used.
[...]
Checking if all weights of the model were initialized by the checkpoint ...
All model weights are initialized.
```
I think both serve a purpose:
1. Is the checkpoint meant for that architecture, or was it trained for another one.
2. Is the model perfectly initialized from that checkpoint, or will it require some fine-tuning.<|||||>The full current log for point 2 is:
```
logger.info(
f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at {pretrained_model_name_or_path}.\n"
f"If your task is similar to the task the model of the checkpoint was trained on, "
f"you can already use {model.__class__.__name__} for predictions without further training."
)
```
So that log's line 2+3 are still there. I didn't suggest to remove those.
But I feel that this is bordering on splitting hairs (from my side), so I will just let it be.
Your explanation of its purpose makes sense, @LysandreJik. I'd have just tuned it up to make it more factual and less verbose ...
My issue is that I read those logs and not all of them feel like very readable to me...
|
transformers | 8,808 | closed | Fix dpr<>bart config for RAG | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
There was a big silent bug in the interaction between DPR and BERT. Bert added a new config parameter that DPR does not have -> so a "normal" dpr config crashes when used with BERT. This is actually a very big bug and was introduced in #8276 as pointed out by @lhoestq - thanks!
Two things went wrong here.
1) We should be more careful in general when introducing new config parameters and calling them via `config.<new_parameter>` especially for models like BERT that can be used with other configs.
2) The DPR tests should have caught that, but instead of using a normal DPR config, a BERT-like DPR config was used in the tests, which is dangerous because it is exactly what fails to catch errors like this.
This PR fixes 1) and 2) by calling the config in the case of the newly introduces parameter only with `getattr(config, <param_name>, <default_value>)` **and** adds the config functionality to DPR as well (DPR also should have this functionality over BERT's positional embedding). IMO `getattr(config, <param_name>, <default_value>)` should be used for models like BERT in general because they could be used and wrapped in many different ways.
Also the DPR test is fixed to use a DPR config instead of a BERT config.
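A minimal sketch of the defensive pattern described above (the parameter name and default are illustrative; the point is the `getattr` fallback for configs that predate the new field):
```python
def resolve_position_embedding_type(config):
    # Older config classes (e.g. DPR's) may not define the attribute added in #8276,
    # so fall back to a default instead of crashing on `config.<new_parameter>`.
    return getattr(config, "position_embedding_type", "absolute")
```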
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| 11-26-2020 21:37:34 | 11-26-2020 21:37:34 | |
transformers | 8,807 | closed | [s2s finetune trainer] potpurri of small fixes | This PR makes a bunch of small readability improvements around finetune trainer instructions and script missing `\` - no code changes.
@sgugger, @patrickvonplaten | 11-26-2020 20:26:37 | 11-26-2020 20:26:37 |