repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 9,007 | closed | Fix link to stable version in the doc navbar | # What does this PR do?
Currently, the link to the stable version in the navigation bar of the docs does not work properly; this PR fixes that. | 12-09-2020 14:10:00 | 12-09-2020 14:10:00 | |
transformers | 9,006 | closed | Diverse beam search 2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Copy of #8627 because the branch got messed up.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-09-2020 13:47:19 | 12-09-2020 13:47:19 | @ayushtiku5 I tweeted about it and tagged you on the tweet: https://twitter.com/PatrickPlaten/status/1336681238485229568 - hope that's fine with you :-) <|||||>@ayushtiku5 can you please check if `HammingDiversityLogitsProcessor` and `PrefixConstrainedLogitsProcessor` can be sped up with functions like `torch.scatter`, `torch.gather`, `torch.masked_fill`, `torch.index_fill`, `torch.index_add`, `torch.index_copy`? I believe there is room for improvement in
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L406-L409 and https://github.com/huggingface/transformers/blob/master/src/transformers/generation_logits_process.py#L471 (in the same way as https://github.com/huggingface/transformers/pull/9557 and https://github.com/huggingface/transformers/pull/9600), but I have no experience using these processors to create good enough examples for speed testing and corner-case research. |
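For reference, a minimal sketch of the kind of vectorization suggested in the comment above, assuming only that the per-batch loop boils down to counting previously chosen tokens; the names and shapes are illustrative, not the library's actual code (the real processor also restricts the counts to previously finished beam groups, which is omitted here):

```python
import torch

batch_size, group_size, vocab_size = 2, 4, 32000  # illustrative shapes
diversity_penalty = 0.5

scores = torch.randn(batch_size * group_size, vocab_size)
# tokens already chosen by previous beam groups, one row per batch element
previous_group_tokens = torch.randint(0, vocab_size, (batch_size, 6))

# a single scatter_add_ replaces a Python loop of per-batch counting calls
token_frequency = torch.zeros(batch_size, vocab_size)
token_frequency.scatter_add_(
    1, previous_group_tokens, torch.ones_like(previous_group_tokens, dtype=torch.float)
)

# broadcast each batch element's penalty over its group of beams
scores = scores - diversity_penalty * token_frequency.repeat_interleave(group_size, dim=0)
```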
transformers | 9,005 | closed | Add the code_search_net datasets tag to CodeBERTa model cards | # What does this PR do?
TL;DR
Related to this PR on `huggingface/datasets`: https://github.com/huggingface/datasets/pull/1288
## Who can review?
@julien-c
| 12-09-2020 12:12:41 | 12-09-2020 12:12:41 | |
transformers | 9,004 | closed | Add MP Net 2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Copy of #8971, which had to be closed because of problems with the git history.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-09-2020 10:27:35 | 12-09-2020 10:27:35 | @patrickvonplaten Never mind, just using this PR is OK. I am fine with our work being merged into master quickly. <|||||>@jplu @LysandreJik @sgugger , we all gave our thumbs-up in the old PR. It's a bit unfortunate that the authorship is slightly changed here, but the PR should be ready to merge. <|||||>Squashed commit & cherry-picked on the `master` branch so that the authorship is kept in df2af6d. Closing.<|||||>@StillKeepTry thanks a lot for all of your hard work on this PR! Glad to welcome MPNet to the library!<|||||>Thanks to every reviewer for helping review our work. :) |
transformers | 9,003 | closed | Turn attentions and hidden-states into a tensor | # What does this PR do?
This PR turns the `all_attentions` and `all_hidden_states` values into tensors instead of tuples. This change is needed to properly allow dict outputs in TF serving, because the value of each key cannot be anything other than a TF tensor.
Here is a simple piece of code to reproduce the issue:
```
from transformers import TFBertModel, BertConfig
import tensorflow as tf
config = BertConfig.from_pretrained("bert-base-cased", output_attentions=True)
model = TFBertModel.from_pretrained("bert-base-cased", config=config)
tf.saved_model.save(model, "my_model")
```
Gets the error:
```
ValueError: Got a dictionary containing non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:0' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:1' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:2' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:3' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:4' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:5' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:6' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:7' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:8' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:9' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:10' shape=(None, 12, None, None) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:11' shape=(None, 12, None, None) dtype=float32>) for key attentions in the output of the function __inference_serving_15889 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
``` | 12-09-2020 09:46:12 | 12-09-2020 09:46:12 | After an offline discussion we decided to proceed differently. Then closing this PR. |
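For context, a minimal sketch of the kind of conversion the PR above describes (the shapes are illustrative only): stacking the per-layer tuple into a single tensor so that a SavedModel signature accepts it.

```python
import tensorflow as tf

# a tuple of per-layer attention tensors, as the models currently return them
all_attentions = tuple(tf.zeros((1, 12, 8, 8)) for _ in range(12))

# stacking gives one tensor per output key, which TF serving signatures require
stacked_attentions = tf.stack(all_attentions, axis=0)
print(stacked_attentions.shape)  # (12, 1, 12, 8, 8): (num_layers, batch, num_heads, seq_len, seq_len)
```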
transformers | 9,002 | closed | Add TFRag | # What does this PR do?
This is a reopened PR of the TFRag draft (https://github.com/huggingface/transformers/pull/8892),
which somehow seems broken and not accessible at the moment.
## Things done
- `TFRagModel`
- `TFRagSequenceForGeneration`
- `TFRagTokenForGeneration`
- beam_search generation
- "Work-around" example in graph mode training (The full graph-mode training need of `.numpy()` for retriever calling, and this doesn't work on graph mode) --> using `context_input_ids` in place of `input_ids`
- Complete test on `TFRag`
## Things not yet done ...
- Integrate with T5 as generator <-- In the next PR
## Who can review?
@jplu @patrickvonplaten | 12-09-2020 09:42:39 | 12-09-2020 09:42:39 | @jplu Thanks so much for your kind reviews! I will improve the code as you suggested.
@patrickvonplaten I have confirmed that my TF and PyTorch implementations have equivalent `generate` output with `num_beams=1` (greedy) on all (15) test cases.
Nevertheless, I just confirmed that the labels in the official test file are based on beam search with `num_beams=4`, and this TFRag does not have `beam_search` yet, since I am not sure whether I should wait for the `tf_generation` refactor.
In the current PyTorch test, if we explicitly set `num_beams=1` we get exactly the same result as my TF implementation.
So to pass the current official test completely I will have to adapt `beam_search` for TFRag, which I will try :)
For now, I slightly modified the test cases to match the PyTorch greedy output (we can revert back later when I finish `beam_search`)<|||||>This is very much WIP -> need some more time for the `save/from_pretrained()`<|||||>**UPDATED** Dec 23, 2020: Finished `TFRagSequenceForGeneration`
---
Sorry, I forgot to mention that the latest update is still very much WIP (phase 2, after phase 1, which covered the core parts of `TFRagModel` and `TFRagToken`), so it is not ready for review.
As discussed with Patrick, there is still an "output_truncation" issue (perhaps due to caching & the TFBart refactor) that Patrick will help take a look at.
**Still to finish in Phase 2** (the reason the TFRagToken tests fail at the moment):
- [x] bugs in the new test `test_rag_token_inference_nq_checkpoint()`
- [x] `beam_search` of `TFRagTokenForGeneration`
- [x] `TFRagSequenceForGeneration` <-- finished!
- [ ] Apply jplu's comments to clean up the code
which will take some more time :) :) <|||||>Hi @jplu , @patrickvonplaten , regarding the graph issue, I just successfully made a [colab notebook](https://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing) capable of training `TFRagSequenceForGeneration` in graph mode using `context_input_ids` as inputs (instead of `input_ids`)...
Hopefully this is a reasonable work-around for TFRag training; a rough sketch of the idea is below.
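A hedged sketch of the pattern, not the exact notebook code: the keyword names (`context_input_ids`, `context_attention_mask`, `doc_scores`, `labels`) mirror the PyTorch RAG API and are assumptions for the TF port, and `question_encoder`, `retriever`, `model`, and `optimizer` are assumed to be built beforehand. Retrieval, which needs `.numpy()`, runs eagerly up front, and only the step that consumes `context_input_ids` is compiled.

```python
import tensorflow as tf

def build_retrieval_inputs(question_input_ids, question_encoder, retriever):
    # eager: the retriever call needs .numpy(), so it stays outside tf.function
    question_hidden_states = question_encoder(question_input_ids)[0]
    docs = retriever(question_input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf")
    doc_scores = tf.squeeze(
        tf.matmul(tf.expand_dims(question_hidden_states, 1), docs["retrieved_doc_embeds"], transpose_b=True), 1
    )
    return docs["context_input_ids"], docs["context_attention_mask"], doc_scores

@tf.function  # the forward/backward pass itself can then stay in graph mode
def train_step(model, optimizer, context_input_ids, context_attention_mask, doc_scores, labels):
    with tf.GradientTape() as tape:
        outputs = model(
            None,  # input_ids can be omitted once context inputs and doc_scores are precomputed
            context_input_ids=context_input_ids,
            context_attention_mask=context_attention_mask,
            doc_scores=doc_scores,
            labels=labels,
        )
        loss = tf.reduce_mean(outputs.loss)
    optimizer.apply_gradients(zip(tape.gradient(loss, model.trainable_variables), model.trainable_variables))
    return loss
```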
Now `TFRagToken` has the same output_truncation [issue as PyTorch's #9098](https://github.com/huggingface/transformers/pull/9098). If Patrick helps me solve this, I will be able to finish the beam search, which should complete all the main parts of TFRag.
(After this, the remaining work is to clean up the code as suggested by you guys, complete all the fast tests, and fix the ugly hacks [i.e. pass `test_rag_token_inference_nq_checkpoint` without hacking])<|||||>Hey @ratthachat,
great work! I'll take care of the new (and hopefully last :D) `save_/from_pretrained()` issue today. And I'll also make sure that `TFRagTokenGeneration` works properly for `greedy_search`! I'll leave `beam_search` to you then :-) <|||||>Okay, I fixed the `from_pretrained` bug when `from_pt=True` and also fixed `greedy search` for `TFRagToken`. Thanks for the very detailed error descriptions @ratthachat.
I think the only thing left to do for now is to correctly implement `beam_search` for `TFRagToken`. As I understood it, you'd like to give it a try. Let me know if you want me to tackle this or if you need help :-)
I extended your "load from pt" test slightly to make sure that weights are now always (I really hope so :D) correctly loaded. Also, I added one test for `greedy_search` for `RagToken`<|||||>Hi @patrickvonplaten , I agree and all T5 related tests are deleted.
Tests related to TFBart are still not passed which likely due to TFBart bug.
More precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`
differently if we provide `decoder_input_ids`
```
texts = "My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable."
texts2 = "My friends are cool."
inputs = tokenizer([texts], max_length=1024, return_tensors='tf')
inputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')
input_ids=inputs['input_ids']
input_ids2=inputs2['input_ids']
out = model(input_ids,decoder_input_ids=None)
print(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)
out = model(input_ids,decoder_input_ids=input_ids2)
print(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)
```
If we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)
So likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**.
(shape of `out.encoder_last_hidden_state.shape` is not as expected)<|||||>> Hi @patrickvonplaten , I agree and all T5 related tests are deleted.
>
> Tests related to TFBart are still not passed which likely due to TFBart bug.
> More precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`
> differently if we provide `decoder_input_ids`
>
> ```
> texts = "My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable."
> texts2 = "My friends are cool."
> inputs = tokenizer([texts], max_length=1024, return_tensors='tf')
> inputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')
>
> input_ids=inputs['input_ids']
> input_ids2=inputs2['input_ids']
> out = model(input_ids,decoder_input_ids=None)
> print(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)
>
> out = model(input_ids,decoder_input_ids=input_ids2)
> print(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)
> ```
>
> If we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)
> So likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**.
> (shape of `out.encoder_last_hidden_state.shape` is not as expected)
Hey @ratthachat,
> Hi @patrickvonplaten , I agree and all T5 related tests are deleted.
>
> Tests related to TFBart are still not passed which likely due to TFBart bug.
> More precisely, TFBart vs. Torch's Bart forward pass return `generator_enc_last_hidden_state.shape`
> differently if we provide `decoder_input_ids`
>
> ```
> texts = "My friends are cool but they eat too many carbs. I really want them to be healthy, so I buy them vegetable."
> texts2 = "My friends are cool."
> inputs = tokenizer([texts], max_length=1024, return_tensors='tf')
> inputs2 = tokenizer([texts2], max_length=1024, return_tensors='tf')
>
> input_ids=inputs['input_ids']
> input_ids2=inputs2['input_ids']
> out = model(input_ids,decoder_input_ids=None)
> print(out.encoder_last_hidden_state.shape) # RETURN (1, 27, 1024)
>
> out = model(input_ids,decoder_input_ids=input_ids2)
> print(out.encoder_last_hidden_state.shape) # RETURN (1, 7, 1024)
> ```
>
> If we run the same snippet in Pytorch, they will both return `(1, 27, 1024)` . (tested on both official released and master)
> So likely this is TFBart bug, **and it makes 4 TFRag fast-tests fail**.
> (shape of `out.encoder_last_hidden_state.shape` is not as expected)
Hey @ratthachat,
sorry to answer so late! I don't think that there is a TFBart bug to be honest. When inputting `decoder_input_ids` the behavior you described above is expected and is the same for PyTorch.
If you copy & paste the following code on master:
```python
#!/usr/bin/env python3
from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
input_str_short = "this is a string"
input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids)[0].shape
tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids)[0].shape
assert output_shape == tf_output_shape, "Output shapes have to be the same"
```
you'll see that no assertion error is thrown.
I think you might have to adapt a couple of TFRag tests to make it work. There also might be a small chance that you have to rebase your current PR to master because there is a weird version of Bart in this PR (but I doubt that a bit to be honest).
Please let me know if you need help for the tests! Think we are almost finished !!! :-) <|||||>Hi @patrickvonplaten , sorry that I did not explain clear enough.
The test I exactly adapted from Pytorch test the shape of `last_hidden_states` not the shape of `logits` .
Ie. please see
https://github.com/ratthachat/transformers/blob/tfrag-draft-new/tests/test_modeling_tf_rag.py#L375
```
self.assertEqual(
outputs.generator_enc_last_hidden_state.shape,
(n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size),
)
```
From your example change output from `[0] (logits)` to `[2] (last_hidden_states)` , **we indeed got assertion error**.
```
from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
input_str_short = "this is a string"
input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids)[2].shape
tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids)[2].shape
assert output_shape == tf_output_shape, "Output shapes have to be the same"
```
```
AssertionError Traceback (most recent call last)
<ipython-input-17-04a80fc987f7> in <module>()
14 tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids)[2].shape
15
---> 16 assert output_shape == tf_output_shape, "Output shapes have to be the same"
AssertionError: Output shapes have to be the same
```
BTW, about the rebase, I really want to do it, but I could not solve the merging conflicts .<|||||>> Hi @patrickvonplaten , sorry that I did not explain clear enough.
> The test I exactly adapted from Pytorch test the shape of `last_hidden_states` not the shape of `logits` .
>
> Ie. please see
> https://github.com/ratthachat/transformers/blob/tfrag-draft-new/tests/test_modeling_tf_rag.py#L375
>
> ```
> self.assertEqual(
> outputs.generator_enc_last_hidden_state.shape,
> (n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size),
> )
> ```
>
> From your example change output from `[0] (logits)` to `[2] (last_hidden_states)` , **we indeed got assertion error**.
>
> ```
> from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
> from transformers import AutoTokenizer
>
> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
> tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
>
> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>
> input_str_short = "this is a string"
> input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
>
> output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids)[2].shape
>
> tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids)[2].shape
>
> assert output_shape == tf_output_shape, "Output shapes have to be the same"
> ```
>
> ```
> AssertionError Traceback (most recent call last)
> <ipython-input-17-04a80fc987f7> in <module>()
> 14 tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids)[2].shape
> 15
> ---> 16 assert output_shape == tf_output_shape, "Output shapes have to be the same"
>
> AssertionError: Output shapes have to be the same
> ```
>
> BTW, about the rebase, I really want to do it, but I could not solve the merging conflicts .
Hey @ratthachat, please note that [2] are the `hidden_states` and not the `last_hidden_state`. The last hidden_state is as expected the same.
```python
from transformers import AutoModel, TFAutoModel
from transformers import AutoTokenizer
model = AutoModel.from_pretrained("facebook/bart-base")
tf_model = TFAutoModel.from_pretrained("facebook/bart-base")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
input_str_short = "this is a string"
input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids).last_hidden_state.shape
tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids).last_hidden_state.shape
assert output_shape == tf_output_shape, "Output shapes have to be the same"
```<|||||>I'm not 100% sure if the merge was completely correct, but let's first focus on making all TFRag tests pass. The other test we can fix later :-) <|||||>I apologize @patrickvonplaten . I think I am now a bit confused.
In TFRag, `model.generator` is created using `TFAutoModelForSeq2SeqLM` (the same as Pytorch).
https://github.com/ratthachat/transformers/blob/tfrag-draft-new/src/transformers/models/rag/modeling_tf_rag.py#L519
https://github.com/ratthachat/transformers/blob/tfrag-draft-new/src/transformers/models/rag/modeling_rag.py#L356
In the above example, you used `TFAutoModel`, so it's not the same.
So if I change to `TFAutoModelForSeq2SeqLM` and check `encoder_last_hidden_state.shape`
**[all the fast tests check this `encoder_last_hidden_state.shape` attribute]**,
I still get an assertion error.
I am not sure what's going on, and I may be missing something simple here. I apologize again.
```
from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
input_str_short = "this is a string"
input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids).encoder_last_hidden_state.shape
tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids).encoder_last_hidden_state.shape
print(output_shape, tf_output_shape)
assert output_shape == tf_output_shape, "Output shapes have to be the same"
```
```
torch.Size([1, 6, 768]) (1, 17, 768)
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-4-9790bfa59614> in <module>()
16 # print(output.keys(), tf_output.keys())
17
---> 18 assert output_shape == tf_output_shape, "Output shapes have to be the same"
AssertionError: Output shapes have to be the same
```<|||||>Patrick, another issue is that after rebase, there is an error on `load_weight_prefix` which we invented for TFRag's name, so now the basic building block does not work.
`TypeError: ('Keyword argument not understood:', 'load_weight_prefix') `
<|||||>> Patrick, another issue is that after rebase, there is an error on `load_weight_prefix` which we invented for TFRag's name, so now the basic building block does not work.
>
> `TypeError: ('Keyword argument not understood:', 'load_weight_prefix') `
Yeah sorry, I made a quick & dirty rebase so there might be errors! It would be awesome if you could fix them (if there are easy to fix)<|||||>> from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
> from transformers import AutoTokenizer
>
> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
> tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
>
> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>
> input_str_short = "this is a string"
> input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
>
> output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids).encoder_last_hidden_state.shape
> tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids).encoder_last_hidden_state.shape
>
> print(output_shape, tf_output_shape)
>
> assert output_shape == tf_output_shape, "Output shapes have to be the same"
I see thanks for the very descriptive error description! You're completely right -> that's a bug, great catch! I'll fix<|||||>> > from transformers import AutoModelForSeq2SeqLM, TFAutoModelForSeq2SeqLM
> > from transformers import AutoTokenizer
> > model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
> > tf_model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
> > tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
> > input_str_short = "this is a string"
> > input_str_long = "this is a veeeeeeery veeeeeeeeery long string!!!"
> > output_shape = model(input_ids=tokenizer(input_str_short, return_tensors="pt").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="pt").input_ids).encoder_last_hidden_state.shape
> > tf_output_shape = tf_model(input_ids=tokenizer(input_str_short, return_tensors="tf").input_ids, decoder_input_ids=tokenizer(input_str_long, return_tensors="tf").input_ids).encoder_last_hidden_state.shape
> > print(output_shape, tf_output_shape)
> > assert output_shape == tf_output_shape, "Output shapes have to be the same"
>
> I see thanks for the very descriptive error description! You're completely right -> that's a bug, great catch! I'll fix
Ok merged it: https://github.com/huggingface/transformers/pull/9944. Could you merge master once again into your PR and see whether the tests work now? :-)
```bash
git fetch upstream master
git merge upstream/master
```
I don't think there will be any merge conflicts. Lemme know if you need any help :-) <|||||>Thanks so much Patrick. Tomorrow, I will try my best to fix the "load_weights_prefix" issue and will come back ❤️ <|||||>@patrickvonplaten I am sorry - bad news.
Even though now I think all tests should be passed, I could not find a way to fix the `load_weights_prefix` issue above arised after conflict fixing 2 days ago.
It seems that this `load_weights_prefix` is sent as `kwarg` to all required functions correctly but failed with Keras not allowing this argument .
(I attach the full `TraceBack` below).
At first, I thought that this might be due to the recent TF2.4 upgrade, but I tried downgrade back to TF2.3 and still got the same error. Could you please help take a look?
To reproduce, simply initiate the model:
```
from transformers import RagTokenizer, RagRetriever
from transformers.models.rag.modeling_tf_rag import TFRagModel, TFRagSequenceForGeneration, TFRagTokenForGeneration
PATH = "facebook/rag-token-nq"
tokenizer = RagTokenizer.from_pretrained(PATH)
retriever = RagRetriever.from_pretrained(PATH, index_name="exact", use_dummy_dataset=True)
model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', "facebook/bart-base", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)
```
Produced the following TraceBack:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-a201b416721e> in <module>()
1
----> 2 model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', "facebook/bart-base", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)
8 frames
/usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_tf_rag.py in from_pretrained_question_encoder_generator(cls, question_encoder_pretrained_model_name_or_path, generator_pretrained_model_name_or_path, retriever, *model_args, **kwargs)
366 name="generator",
367 load_weight_prefix=cls.load_weight_prefix,
--> 368 **kwargs_generator,
369 )
370
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1105 if type(config) in TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING.keys():
1106 return TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(
-> 1107 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1108 )
1109 raise ValueError(
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1245
1246 # Instantiate model.
-> 1247 model = cls(config, *model_args, **model_kwargs)
1248
1249 if from_pt:
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, load_weight_prefix, *inputs, **kwargs)
1246 def __init__(self, config, load_weight_prefix=None, *inputs, **kwargs):
1247 super().__init__(config, *inputs, **kwargs)
-> 1248 self.model = TFBartModel(config, load_weight_prefix=load_weight_prefix, name="model")
1249 self.use_cache = config.use_cache
1250 # final_bias_logits is registered as a buffer in pytorch, so not trainable for the the sake of consistency.
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, *inputs, **kwargs)
1139 class TFBartModel(TFBartPretrainedModel):
1140 def __init__(self, config: BartConfig, *inputs, **kwargs):
-> 1141 super().__init__(config, *inputs, **kwargs)
1142
1143 self.model = TFBartMainLayer(config, name="model")
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in __init__(self, config, *inputs, **kwargs)
629
630 def __init__(self, config, *inputs, **kwargs):
--> 631 super().__init__(*inputs, **kwargs)
632 if not isinstance(config, PretrainedConfig):
633 raise ValueError(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
455 self._self_setattr_tracking = False # pylint: disable=protected-access
456 try:
--> 457 result = method(self, *args, **kwargs)
458 finally:
459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
260 # self.non_trainable_weights
261 generic_utils.validate_kwargs(kwargs, {'trainable', 'dtype', 'dynamic',
--> 262 'name', 'autocast'})
263 super(Model, self).__init__(**kwargs)
264 # By default, Model is a subclass model, which is not in graph network.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'load_weight_prefix')
```<|||||>> @patrickvonplaten I am sorry - bad news.
> Even though now I think all tests should be passed, I could not find a way to fix the `load_weights_prefix` issue above arised after conflict fixing 2 days ago.
>
> It seems that this `load_weights_prefix` is sent as `kwarg` to all required functions correctly but failed with Keras not allowing this argument .
> (I attach the full `TraceBack` below).
>
> At first, I thought that this might be due to the recent TF2.4 upgrade, but I tried downgrade back to TF2.3 and still got the same error. Could you please help take a look?
>
> To reproduce, simply initiate the model:
>
> ```
> from transformers import RagTokenizer, RagRetriever
> from transformers.models.rag.modeling_tf_rag import TFRagModel, TFRagSequenceForGeneration, TFRagTokenForGeneration
>
> PATH = "facebook/rag-token-nq"
> tokenizer = RagTokenizer.from_pretrained(PATH)
> retriever = RagRetriever.from_pretrained(PATH, index_name="exact", use_dummy_dataset=True)
>
> model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', "facebook/bart-base", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)
> ```
>
> Produced the following TraceBack:
>
> ```
> ---------------------------------------------------------------------------
> TypeError Traceback (most recent call last)
> <ipython-input-10-a201b416721e> in <module>()
> 1
> ----> 2 model = TFRagModel.from_pretrained_question_encoder_generator('facebook/dpr-question_encoder-single-nq-base', "facebook/bart-base", generator_from_pt=True, question_encoder_from_pt=True, retriever=retriever)
>
> 8 frames
> /usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_tf_rag.py in from_pretrained_question_encoder_generator(cls, question_encoder_pretrained_model_name_or_path, generator_pretrained_model_name_or_path, retriever, *model_args, **kwargs)
> 366 name="generator",
> 367 load_weight_prefix=cls.load_weight_prefix,
> --> 368 **kwargs_generator,
> 369 )
> 370
>
> /usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
> 1105 if type(config) in TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING.keys():
> 1106 return TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(
> -> 1107 pretrained_model_name_or_path, *model_args, config=config, **kwargs
> 1108 )
> 1109 raise ValueError(
>
> /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
> 1245
> 1246 # Instantiate model.
> -> 1247 model = cls(config, *model_args, **model_kwargs)
> 1248
> 1249 if from_pt:
>
> /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, load_weight_prefix, *inputs, **kwargs)
> 1246 def __init__(self, config, load_weight_prefix=None, *inputs, **kwargs):
> 1247 super().__init__(config, *inputs, **kwargs)
> -> 1248 self.model = TFBartModel(config, load_weight_prefix=load_weight_prefix, name="model")
> 1249 self.use_cache = config.use_cache
> 1250 # final_bias_logits is registered as a buffer in pytorch, so not trainable for the the sake of consistency.
>
> /usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_tf_bart.py in __init__(self, config, *inputs, **kwargs)
> 1139 class TFBartModel(TFBartPretrainedModel):
> 1140 def __init__(self, config: BartConfig, *inputs, **kwargs):
> -> 1141 super().__init__(config, *inputs, **kwargs)
> 1142
> 1143 self.model = TFBartMainLayer(config, name="model")
>
> /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in __init__(self, config, *inputs, **kwargs)
> 629
> 630 def __init__(self, config, *inputs, **kwargs):
> --> 631 super().__init__(*inputs, **kwargs)
> 632 if not isinstance(config, PretrainedConfig):
> 633 raise ValueError(
>
> /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
> 455 self._self_setattr_tracking = False # pylint: disable=protected-access
> 456 try:
> --> 457 result = method(self, *args, **kwargs)
> 458 finally:
> 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
>
> /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
> 260 # self.non_trainable_weights
> 261 generic_utils.validate_kwargs(kwargs, {'trainable', 'dtype', 'dynamic',
> --> 262 'name', 'autocast'})
> 263 super(Model, self).__init__(**kwargs)
> 264 # By default, Model is a subclass model, which is not in graph network.
>
> /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
> 776 for kwarg in kwargs:
> 777 if kwarg not in allowed_kwargs:
> --> 778 raise TypeError(error_message, kwarg)
> 779
> 780
>
> TypeError: ('Keyword argument not understood:', 'load_weight_prefix')
> ```
Should be fine now - can you check? <|||||>@patrickvonplaten all fast and slow tests now pass 😄
Just one thing: in the slow test `test_rag_token_generate_batch`, my Colab P100 ran out of memory when using all 15 inputs.
If I reduce the number of inputs to 5, the test passes
(any 5 of the 15 pass, indicating that all outputs are correct).<|||||>@ratthachat - you've really done an amazing job here! The PR looks very nice to me overall.
One thing, I'd like to change before trying to merge is to delete the `_generate_beam_search` and `_generate_no_beam_search` methods and use the default ones instead. I can definitely help you get this done here. Do you know what the differences are in `_generate_beam_search` that you added to `modeling_tf_rag.py` compared to the one in `modeling_tf_utils.py`? Happy to help you here
Apart from that, I only left a couple of nits.<|||||>Hi Patrick, thanks for all your super kind help all this time! 😄 ❤️
I improved the docstrings as suggested.
About `_generate_beam_search` and `_generate_no_beam_search`, there are actually exactly **20 lines of differences**.
I made a notebook to show the **20 lines of differences** -- Version 1 (`TFRag`) and Version 2 (`generation_tf_utils.py`):
https://www.kaggle.com/ratthachat/generate-beam-and-no-searches/
(Please click "version" on the top right and see the `diff`.)
Mainly, I simply fixed both functions to accept `**kwargs` arguments from `TFRag` (in particular `kwargs["encoder_outputs"]`).
However, I did not directly change/PR `generation_tf_utils.py`, for two reasons:
(1) I am not sure whether these changes would affect other TF models, and I don't have enough resources to check them all.
(2) As we once discussed, there will be a big 'generation refactor' in `generation_tf_utils.py`, like PyTorch's, soon, and that would be a great chance to fix & test TFRag together with the other TF models.
What do you think?
<|||||>All the slow tests passed for TFBart & TFRag -> PR is ready for review. Started running the whole suite of SLOW tests, just to be sure - will report any unexpected behavior here.
@LysandreJik @sgugger @jplu - it would be great if you can review the PR.<|||||>> I am very much not in favor of changing the core modeling utils to fit the needs of one model, so really dislike the change related to `from_pretrained`. I understand the problems of scope, but it seems to me that it's only there to be able to write the convenience init `from_encoder_decoder_pretrained` and the like which is not strictly necessary (one can load the encoder and decoder out of the model then instantiate it by passing the encoder and the decoder, it's just three lines of code instead of one).
>
> I would favor this option for now while we look into different solutions for the weight loading.
I also think that it's not clean at all to introduce this hack to be able to load the pretrained weights correctly, but there is no way around it really.
It is not possible to load the encoder and decoder separately and to pass them into init if one wants to save the model afterward. Even with the current hack this is not possible because there is no way of knowing what the correct scope of the overall model is when the submodel is loaded separately. Try this in the branch of this PR:
```python
from transformers import TFBartForConditionalGeneration, TFDPRQuestionEncoder, TFRagModel, RagRetriever
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
encoder = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
rag = TFRagModel(question_encoder=encoder, generator=generator, retriever=retriever)
rag.save_pretrained("rag_temp")
new_rag = TFRagModel.from_pretrained("rag_temp", retriever=retriever) # ERROR => the weights are randomly initialized here
```
This means that ```from_encoder_decoder_pretrained``` is not just a convenience function, but actually the only way to correctly load two "sub" models into a composite model class. Otherwise the weights are not saved correctly and can then not be loaded again.
Also one other thing I want to point out here is that the change/hack added to `TFPretrainedModel`'s `from_pretrained` method in `modeling_tf_utils.py` is not just done for RAG but will make composite TF model classes such as `TFEncoderDecoderModel` possible.
At the same time, I also think it's a big hack that is not beautiful in any way.
At the moment, I don't really see another solution here because we cannot just overwrite the `from_pretrained(...)` method for TFRag since the `load_weight_prefix` is passed to BART's and DPR's `from_pretrained` methods. What we could do instead, however, is to add a whole new `from_pretrained_with_prefix(....)` function to `modeling_tf_utils.py` instead of changing the existing method.
What do you think? @sgugger @LysandreJik & @jplu <|||||>> At the moment, I don't really see another solution here because we cannot just overwrite the from_pretrained(...) method for TFRag since the load_preifx_weight is passed to BART's and DPR's from_pretrained's method. What we could do instead however is to add a whole new from_pretrained_with_prefix(....) function to modeling_tf_utils.py instead of changing the existing method.
From a short-term view, I don't see another solution either. We clearly have an issue with weight naming for Seq2Seq models, and the workaround we have had until now (the `tf.compat.v1...`) reaches its limits with RAG, as it requires changes where it should not.
For me this has to be rethought, because I think that all the conversion issues have to be handled in the `modeling_tf_pytorch_utils.py` script and not elsewhere, and we should stop forcing TF to have the same names as PT and instead handle how to convert "proper" TF weight names to "proper" PT weight names (and the other way around). I think that if we continue without a better design for this, we end up with a more complex implementation that is harder to understand. We should also keep in mind that the day TF removes the V1 compatibility, none of these models will work as expected. Hence, I would accept this as a temporary solution, but we clearly need to review this TF naming part thoroughly.<|||||>As discussed a bit offline, @LysandreJik will take a final review and then we'll merge the approach of this PR.
We should integrate the suggestion from @sgugger which is to change `prefix` to a private function arg `_prefix` to keep the design flexible for future changes.
Once @LysandreJik has done the review, I'll do a last refactor & then we can merge I think @ratthachat :-)<|||||>Thanks so much everyone for the merge! Especially @jplu, who gave insightful comments on several earlier versions, and @patrickvonplaten, who has collaborated and greatly helped in all aspects of this work!!
(Honestly, it would not have been possible without Patrick's help.)
About the single mismatched generated answer ("step by step" vs. "evolution"), I will investigate this point further. Strangely, in earlier versions all tests passed, meaning all outputs were equivalent. |
transformers | 9,001 | open | 🌟 CTRLsum | # 🌟 New model addition
## Model description
>Current summarization systems yield generic summaries that are disconnected from users’ preferences and expectations. To address this limitation, we present **CTRLsum**, a novel framework for controllable summarization.
>
> Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system through textual input in the form of a set of keywords or descriptive prompts.
Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time without requiring additional human annotations or pre-defining a set of control aspects during training.
We quantitatively demonstrate the effectiveness of our approach on three domains of summarization datasets and five control aspects:
> 1) entity-centric
> 2) length-controllable summarization
> 3) contribution summarization on scientific papers
> 4) invention purpose summarization on patent filings
> 5) question-guided summarization on news articles in a reading comprehension setting
>
> Moreover, when used in a standard, uncontrolled summarization setting, CTRLsum achieves state-of-the-art results on the CNN/DailyMail dataset.
## Open source status
* [x] the model implementation is available: https://github.com/salesforce/ctrl-sum
* [x] the model weights are available: _Download link available in the README of the repo_
* [x] who are the authors: @jxhe @muggin
| 12-09-2020 08:12:29 | 12-09-2020 08:12:29 | I ported this model for easy use in Hugging Face Transformers. Try using the code below!
### 1. Create models and tokenizers
```python
>>> from transformers import AutoModelForSeq2SeqLM, PreTrainedTokenizerFast
>>> model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-cnndm")
>>> # model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-arxiv")
>>> # model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/ctrlsum-bigpatent")
>>> tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/ctrlsum-cnndm")
>>> # tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/ctrlsum-arxiv")
>>> # tokenizer = PreTrainedTokenizerFast.from_pretrained("hyunwoongko/ctrlsum-bigpatent")
```
### 2. Unconditioned summarization
```python
>>> data = tokenizer("My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs", return_tensors="pt")
>>> input_ids, attention_mask = data["input_ids"], data["attention_mask"]
>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5))[0]
'</s>My name is Kevin. I loved dogs from 1996.</s>'
```
### 3. Conditioned summarization
- You can input condition token using `TOKEN => CONTENTS` structure
```python
>>> data = tokenizer("today plan => My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs", return_tensors="pt")
>>> input_ids, attention_mask = data["input_ids"], data["attention_mask"]
>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5))[0]
"</s> Today, I'm going to walk on street with my dogs. I loved dogs from 1996</s>"
```
### 4. Prompt summarization
- You can also input `decoder_input_ids` for input prompt.
```python
>>> data = tokenizer("Q:What is my name? A: => My name is Kevin. I love dogs. I loved dogs from 1996. Today, I'm going to walk on street with my dogs", return_tensors="pt")
>>> input_ids, attention_mask = data["input_ids"], data["attention_mask"]
>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attention_mask, num_beams=5, decoder_input_ids=tokenizer("Q:What is My name? A:", return_tensors="pt")["input_ids"][:, :-1]))[0]
'<s>Q:What is My name? A: Kevin.</s>'
``` |
transformers | 9,000 | closed | ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds | ## Environment info
- `transformers` version:4.0.0
- Platform:Google Colab
- Python version:3
- Tensorflow version (GPU?):2.3.0
- Using GPU in script?:No
### Who can help
@patrickvonplaten
@patil-suraj
@jplu
## Information
I am following the URLs below and want to run fine-tuning on mT5.
https://huggingface.co/transformers/training.html
https://huggingface.co/transformers/model_doc/mt5.html
Model I am using (mT5):
```
from transformers import MT5Model, T5Tokenizer, TFMT5Model
model = TFMT5Model.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
```
```
from transformers import BertTokenizer, glue_convert_examples_to_features
import tensorflow as tf
import tensorflow_datasets as tfds
data = tfds.load('glue/mrpc')
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
```
```
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
model.fit(train_dataset, epochs=2, steps_per_epoch=115)
```
The output produced:
```
ValueError Traceback (most recent call last)
<ipython-input-9-650f77977ac3> in <module>()
3 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
4 model.compile(optimizer=optimizer, loss=loss)
----> 5 model.fit(train_dataset, epochs=2, steps_per_epoch=115)
6 # model.fit({"inputs": train_dataset},epochs=2, steps_per_epoch=115)
7 # model.fit(train_dataset)
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_tf_t5.py:1094 call *
decoder_outputs = self.decoder(
/usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_tf_t5.py:642 call *
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds
``` | 12-09-2020 04:57:17 | 12-09-2020 04:57:17 | analogs to #8923<|||||>this one is probably safe to close - probably your link was satisfactory, @patrickvonplaten |
transformers | 8,999 | open | AlbertTokenizer handles special tokens incorrectly | If I download the pretrained vocab `https://huggingface.co/albert-base-v1/resolve/main/spiece.model` to the local file system and use the following snippet, the tokenizer does not handle the special tokens properly:
```
tokenizer = AlbertTokenizer('spiece.model')
tokenizer.tokenize('[CLS] Hello World ! [SEP]')
['▁[', 'cl', 's', ']', '▁hello', '▁world', '▁', '!', '▁[', 's', 'ep', ']']
```
If I use `from_pretrained` to load the vocab, it works well:
```
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
tokenizer.tokenize('[CLS] Hello World ! [SEP]')
['[CLS]', '▁hello', '▁world', '▁', '!', '[SEP]']
``` | 12-09-2020 04:40:58 | 12-09-2020 04:40:58 | I'm taking a look at why this is, it seems that we have a differing behavior between `from_pretrained` and the initialization method. I tried loading that `spiece.model` directly with `from_pretrained` and it behaves normally, and so does pointing to a directory containing solely that file.
I'm taking a look and will come back to you.<|||||>@LysandreJik the problem is that `unique_no_split_tokens` is not initialised when you create a tokenizer from `__init__`.
See: https://stackoverflow.com/questions/64631665/what-is-the-difference-in-robertatokenizer-and-from-pretrained-way-of-initia/64640570#64640570<|||||>Indeed, thanks for investigating @cronoik. Do you want to open a PR with a fix?<|||||>Yes, I can but I would like to discuss this before because it affects the core of the library and all tokenizers (I have only checked the slow tokenizers yet, but it probably applies to the fast tokenizers as well.).
When a user calls `.from_pretrained`, the tokenizer is created with `__init__` in the `._from_pretrained` method of the `PreTrainedTokenizerBase` class ([line 1868](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1868)). The problem is now, that `._from_pretrained` does some magic from line [1881](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1881) to [1909](https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/tokenization_utils_base.py#L1909), that is not executed when you create the tokenizer from `__init__` directly.
So, simply said all I need to do is to move this magic to the `__init__` method and remove it from the `._from_pretrained`?
@LysandreJik <|||||>Pinging @thomwolf for advice on tokenizers loading methods.<|||||>This issue has been stale for 1 month.<|||||>@LysandreJik Maybe the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) should be updated to at least tell the people that the recommended way to initialize a tokenizer is `from_pretrained` and that is not guaranteed that `__init__` will work properly?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It is closed without solving the issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Pinging @SaulLu here, as I also encountered this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
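For readers hitting the `AlbertTokenizer` issue above before a library-side fix lands, here is a minimal workaround sketch based on the `unique_no_split_tokens` diagnosis in the thread. It assumes the slow tokenizer and is not guaranteed to reproduce every detail of `from_pretrained`:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer("spiece.model")
# __init__ does not register the special tokens as no-split tokens, so do it by hand.
tokenizer.unique_no_split_tokens = list(tokenizer.all_special_tokens)

print(tokenizer.tokenize("[CLS] Hello World ! [SEP]"))
# expected to keep '[CLS]' and '[SEP]' whole, as in the from_pretrained output above
```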
transformers | 8,998 | open | Marge - Pre-training via Paraphrasing | # 🌟 New model addition
## Model description
| 12-09-2020 03:40:31 | 12-09-2020 03:40:31 | model weights available?<|||||>No. :( |
transformers | 8,997 | closed | [wip] [ci] doc-job-skip take #4.5 dry-run via github direct edit | This is take 4.5 on attempting to find a reliable way to get a list of modified files of this PR. It's identical to https://github.com/huggingface/transformers/pull/8980 but this PR was created from github UI direct file edit, so as we can see, it doesn't provide `CIRCLE_PR_NUMBER` - Nothing bad happens, but the check can't be done since we have no information to work with :(
It also happens with PR's made from a non-personal branch, https://github.com/huggingface/transformers/pull/9015
And the result is that the check is completely skipped as it has no data to work with:
https://app.circleci.com/pipelines/github/huggingface/transformers/17118/workflows/48285d78-cb04-4feb-87f8-77cb02ac2593/jobs/134493
Hoping that circlePR will fix that bug.
This PR:
* [x] tests a PR submission from non-personal forked repo
* [x] switches to `head.user.login` for the username to checkout the branch with - using PR username as it's in the master will not work if the branch is coming from a non-forked repo (original that is). (could also use `.head.repo.full_name` for the whole thing at once.)
For now I will let this PR sit for a while and add other fixes if we find more edge cases.
| 12-09-2020 00:47:46 | 12-09-2020 00:47:46 | |
transformers | 8,996 | closed | Remove use of deprecated method in Trainer HP search | # What does this PR do?
Somehow this one slip in the cracks and was forgotten when we removed old deprecated method. This might warrant a release patch if we don't do a new release soon.
Fixes #8995 | 12-09-2020 00:24:32 | 12-09-2020 00:24:32 | |
transformers | 8,995 | closed | AttributeError: 'Trainer' object has no attribute 'is_world_master' | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-4.15.0-96-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- Ray version: 1.0.1.post1
### Who can help
@sgugger — Would you be able to offer any insight?
## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification
The problem arises when using:
* [ ] the official example scripts:
* [ x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [ x] my own task or dataset:
## To reproduce
The error occurs when I try to run a hyperparameter search for my finetuning step using Ray Tune. I am able to successfully finetune the BertForSequenceClassification model normally — the error only arises when running hyperparameter search.
```
config = BertConfig.from_pretrained(pretrained_model_path, num_labels=num_labels, finetuning_task ='text-classification')
def model_init():
return BertForSequenceClassification.from_pretrained(pretrained_model_path, config=config)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
model_init=model_init
)
trainer.hyperparameter_search(
direction="minimize",
backend="ray",
n_trials=20,
keep_checkpoints_num=1,
resources_per_trial = {'gpu':1, 'cpu':1}
)
```
## Expected behavior
I am trying to run Ray Tune from Huggingface as per these instructions: https://huggingface.co/blog/ray-tune
If anyone has any insight as to what could be causing this error, it would be greatly appreciated, thank you!
| 12-08-2020 23:51:30 | 12-08-2020 23:51:30 | you may replace is_world_master by is_world_process_zero.<|||||>Any solutions to this? I am not able to use ray tune backend in any way<|||||>If you get this error still it's probably because you're using mismatched library versions/example versions. Using the latest examples or a more recent version (v4.1.x) should patch this.
If it doesn't, then please open a new issue and fill in the issue template so that we may help you. Thank you. |
transformers | 8,994 | closed | DistilBert PyTorch to TensorFlow conversion - input sequence length is max 5 tokens for tensorflow | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I load the PyTorch model as a TensorFlow model and then save it to the TensorFlow SavedModel format:
`tf_model = TFDistilBertModel.from_pretrained(model_name, from_pt=True)`
`tf.saved_model.save(tf_model, path)`
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am attempting to take a Sentence Transformer (ST) model I trained in PyTorch and use it in TensorFlow.js. The code above is specifically converting the DistilBert model ST uses to a TensorFlow SavedModel format. Then, I load the SavedModel format into a TFJS Graph Model format and write the pooling layer. When implementing a forward pass in JS, I noticed that the input sequence length must be 5 tokens (instead of 128). I checked the SavedModel format (.pb file) to rule out an issue from TF to TFJS and noticed that the shapes all have 5 where 128 should be.
## To reproduce
Steps to reproduce the behavior:
Run the above code on the DistilBert model we trained (not very reproducible). These are the contents of the folder:
* config.json
* pytorch_model.bin
* sentence_bert_config.json (this file has the max_seq_length=128 parameter - I tried adding it to config.json, but it doesn't work)
* special_tokens_map.json
* tokenizer_config.json
* vocab.txt
This is the output of the script:
```
2020-12-08 23:32:19.317202: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.677265: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-12-08 23:32:20.677408: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.678158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2020-12-08 23:32:20.678182: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.680072: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-12-08 23:32:20.681791: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-12-08 23:32:20.682130: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-12-08 23:32:20.684002: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-12-08 23:32:20.685065: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-12-08 23:32:20.688448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-08 23:32:20.688575: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.689406: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.690132: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-08 23:32:20.690351: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-12-08 23:32:20.713215: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2499995000 Hz
2020-12-08 23:32:20.713451: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5572187ea750 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-08 23:32:20.713477: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-12-08 23:32:20.878776: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.879646: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557218b0f200 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-12-08 23:32:20.879675: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2020-12-08 23:32:20.879895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.880628: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2020-12-08 23:32:20.880665: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:20.880697: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-12-08 23:32:20.880712: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-12-08 23:32:20.880726: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-12-08 23:32:20.880744: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-12-08 23:32:20.880761: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-12-08 23:32:20.880780: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-08 23:32:20.880853: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.881644: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:20.882347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-12-08 23:32:20.882390: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-12-08 23:32:21.437267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-08 23:32:21.437314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-12-08 23:32:21.437328: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-12-08 23:32:21.437554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:21.438350: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-12-08 23:32:21.439124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 12367 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5)
2020-12-08 23:32:21.678399: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-12-08 23:32:21.874045: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
All PyTorch model weights were used when initializing TFDistilBertModel.
All the weights of TFDistilBertModel were initialized from the PyTorch model.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFDistilBertModel for predictions without further training.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafd1a97fd0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafd0056e10>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc051ea10>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc0535690>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc04cb3d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7fafc04d9f50>, because it is not built.
WARNING:tensorflow:From /home/ubuntu/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /home/ubuntu/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
```
## Expected behavior
The TensorFlow model should have a maximum input sequence length of 128, not 5. | 12-08-2020 23:38:09 | 12-08-2020 23:38:09 | Hello!
As the TF models are implemented right now, this is the normal behavior: if you want a different size, you have to set it manually yourself before creating the saved model. It is planned to make the sequence length variable when creating a saved model, but we don't know when. If you don't know how to do it, I can show you how :)<|||||>Hi Julien! Thanks for your response. Could you please show me how to manually set that?<|||||>To do that you can run the following lines:
```
from transformers import TFDistilBertModel, DistilBertTokenizer
import tensorflow as tf
tf_model = TFDistilBertModel.from_pretrained(model_name, from_pt=True)
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
inputs = tokenizer("My test sentence", padding="max_length", max_length=128, return_tensors="tf")
tf_model._saved_model_inputs_spec = None
tf_model._set_save_spec(inputs)
tf.saved_model.save(tf_model, path)
```<|||||>Thanks! |
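As a quick sanity check of the export discussed in the issue above, here is a hedged sketch; it assumes the same `path` used when saving and that TensorFlow registered the default serving signature:

```python
import tensorflow as tf

loaded = tf.saved_model.load(path)
serving_fn = loaded.signatures["serving_default"]
# The named TensorSpecs show the exported sequence dimension (128 vs. 5).
print(serving_fn.structured_input_signature)
```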
transformers | 8,993 | closed | Templates overhaul 1 | Re-opening of https://github.com/huggingface/transformers/pull/8981 after history was messed up. | 12-08-2020 22:49:45 | 12-08-2020 22:49:45 | |
transformers | 8,992 | closed | New squad example | Reopening from #8924 since the rebase gave too big a diff. | 12-08-2020 19:11:09 | 12-08-2020 19:11:09 | |
transformers | 8,991 | closed | fixes #8968 | # What does this PR do?
One of the 3.X releases introduced output objects that replaced the previously returned tuples. This PR updates the transformers notebook to reflect that update.
Fixes #8968
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten | 12-08-2020 17:22:24 | 12-08-2020 17:22:24 | The failed circleci test is not related to my PR. :)<|||||>Hi @cronoik! There's been a mistake done yesterday, and the history of your branch was messed up by mistake (see the file changes, +/-). Do you mind closing this PR and opening another one? No need to do anything on the branch, just closing this one and opening a new one should be enough. Thank you! |
transformers | 8,990 | closed | [Flax] Serialization, Design changes | This PR is a proposition for some changes of the flax design. @mfuntowicz - I'm trying to get a better understanding of how to best use the flax library with Transformers' philosophy. Would be super happy about some comments from your side :-)
1) I think Flax's `from_pretrained()` should default to the flax serialization and not the PyTorch one. Flax's serialization didn't work previously and the model was loaded from PyTorch by default. This PR changes this to Flax and makes `from_pretrained()` and `save_pretrained()` work. I uploaded Bert's and Roberta's flax model weights to the model hub (I noticed that I accidentally overwrote an existing Flax `bert-base-cased` - hope that was fine @mfuntowicz - it doesn't break anything on master since PT was loaded by default)
2) Not sure why we have the `model_class` class attribute in Flax - I don't think we need it, no? @mfuntowicz - It would be nice to avoid it for simplicity IMO.
3) I added a `FlaxBertPretrainedModel` class, just as it's done for PyTorch. IMO, ideally we should stay as close as possible to the design of PyTorch. Not sure at all if something like this could work:
```python
class FlaxBertForMaskedLM(FlaxBertPretrainedModel):
    def __init__(self, config, state, seed, **kwargs):
        self.bert = FlaxBertModel(config, state[self.base_model_prefix], seed, **kwargs)  # pass bert relevant

    @nn.compact
    def __call__(....):
        last_hidden_states = self.bert(hidden_states)[0]
        logits = FlaxBertLMPredictionHead(vocab_size=self.vocab_size, name="mlm", dtype=self.dtype)(last_hidden_states)
```
=> What do you think @mfuntowicz ?
Would be awesome if we could do some flax library design discussions here @mfuntowicz @LysandreJik @sgugger @thomwolf | 12-08-2020 16:34:08 | 12-08-2020 16:34:08 | I think I'll take a Flax 101 class before reviewing this PR.<|||||>Requires more discussion |
transformers | 8,989 | closed | Make `ModelOutput` pickle-able | # What does this PR do?
To be pickle-able or deep-copyable, `ModelOutput`s need to have all fields with a default. This was already the case on the TF side, just doing it on the PT side as well.
Fixes #8978 | 12-08-2020 16:24:35 | 12-08-2020 16:24:35 | |
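A rough illustration of why the defaults in the PR above matter (a hypothetical toy class, not the library code): `ModelOutput` subclasses `OrderedDict`, and copying or pickling an `OrderedDict` subclass re-creates it by calling the class with no arguments before re-inserting the items, so every declared field needs a default.

```python
import copy
from collections import OrderedDict


class ToyOutput(OrderedDict):
    def __init__(self, last_hidden_state=None):  # the default makes ToyOutput() valid
        super().__init__()
        if last_hidden_state is not None:
            self["last_hidden_state"] = last_hidden_state


copy.deepcopy(ToyOutput(last_hidden_state=[1.0]))  # reconstruction calls ToyOutput() and then restores the items
```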
transformers | 8,988 | closed | [WIP] Add Tapas (bis) | This is a clean branch based on #8113 which is up-to-date with master. To do:
- [x] Make sure all tests pass (currently 44 passed, 4 skipped for `test_modeling_tapas.py`) cc @LysandreJik
- [x] `make style` & `make quality`
- [x] Investigating forward/backward pass => weird issue with fine-tuning already-finetuned WTQ checkpoint, I guess people should just not do it
- [x] Add notebooks to show how to use:
- `tapas-base-finetuned-sqa`: https://colab.research.google.com/drive/1zMW-D2kYrpDA-cvpNJ-ctGD-tDXWebZa?usp=sharing
- `tapas-base-finetuned-tabfact`: https://colab.research.google.com/drive/1Ug6gzPFgf3J0dR-0f4spt0eyPS10dD1l?usp=sharing
Once they all pass, I'll start uploading more checkpoints to the model hub.
| 12-08-2020 16:17:45 | 12-08-2020 16:17:45 | Closing this PR and opening a new one on the same branch due to Github issues. |
transformers | 8,987 | closed | Tensor arrays | # What does this PR do?
This PR turns the `all_attentions` and `all_hidden_states` values into tensors instead of a tuple. This update is to properly allow the dict outputs in TF serving, because the value of each key cannot be anything other than a TF tensor. | 12-08-2020 16:02:20 | 12-08-2020 16:02:20 |
transformers | 8,986 | closed | Checking output format + check raises ValueError | Just making sure we're not changing the format when we apply `function_to_apply` | 12-08-2020 15:49:04 | 12-08-2020 15:49:04 | |
transformers | 8,985 | closed | Remove value error | # What does this PR do?
This PR updates the behavior of the inputs. We should not raise an error if the name is not among the parameters, but act as if there was no name; this is more elegant and less annoying.
| 12-08-2020 13:01:58 | 12-08-2020 13:01:58 | Could you elaborate on the use case? It seems dangerous and magical to me. When passing a parameter to a function that is not in the signature, the user gets a `ValueError`.<|||||>Sure! an EagerTensor doesn't have a `.name` attribute so we assume for that case that the values are given in the parameters order. That's ok because we don't have the choice, but why not having the same behavior in case someone decides to name the tensors as he wishs.
This is very picky, and I won't fight at all if not accepted ahah<|||||>Mmm, but in this test we're not eager tensors since there is a `.name` attribute, or am I missing something?<|||||>While I was trying to explain this, a use case came to my mind, and indeed this behavior is not correct for an edge use case:
```
from transformers import AutoTokenizer, TFBertForSequenceClassification, BertConfig
import tensorflow as tf
import datasets
config = BertConfig.from_pretrained("bert-base-cased", num_labels=6)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ds = datasets.load_dataset('emotion')
encoded_train = ds['train'].map(lambda examples: tokenizer(examples['text'], truncation=True, padding='max_length', max_length=128), batched = True)
encoded_train.set_format(type='tensorflow', columns=['input_ids', 'attention_mask', 'label'])
features_train = {x: encoded_train[x].to_tensor(default_value=0, shape=[None, 128]) for x in ['input_ids', 'attention_mask']}
train_ds = tf.data.Dataset.from_tensor_slices((features_train, encoded_train["label"])).batch(16)
input_ids = tf.keras.Input(shape=(128,), dtype='int32', name="input_ids")
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32', name="attention_mask")
transformer = TFBertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=6)
encoded = transformer([input_ids, attention_mask])
logits = encoded[0]
model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = logits)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy')])
model.fit(train_ds, epochs=1, steps_per_epoch=1)
```
We get:
```
ValueError: The tensor named IteratorGetNext:1 does not belong to the authorized list of names ['input_ids', 'attention_mask', 'token_type_ids', 'position_ids', 'head_mask', 'inputs_embeds', 'output_attentions', 'output_hidden_states', 'return_dict', 'labels', 'training'].
```
Which is normal because `.fit()` wraps the dataset into an iterator, and then the tensors are renamed accordingly. Thanks @sgugger for asking the question :)<|||||>Thanks for explaining, I understand better now :-)<|||||>Ok, just realized it is even worse: the inputs get an ID, here `IteratorGetNext:1` and `IteratorGetNext:0`, but the order of the list is never guaranteed. I'm trying to think of a fix for this.<|||||>Ok, as long as we are naming the inputs according to the parameters, the order is safe. For example:
```
input_ids = tf.keras.Input(shape=(128,), dtype='int32', name="input_ids")
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32', name="attention_mask")
model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = ...)
```
Is perfectly fine and works as expected, but:
```
input_ids = tf.keras.Input(shape=(128,), dtype='int32')
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32')
model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = ...)
```
Brings an undefined behavior into the order.
Nevertheless, there is still an issue. Let's imagine this case:
```
input_embeds = tf.keras.Input(shape=(768,), dtype='float32')
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32')
model = tf.keras.models.Model(inputs = [input_embeds, attention_mask], outputs = ...)
```
Won't work because internally, the `input_ids` parameter will take the value of the `input_embeds` input. This can be solved by integrating the names of each parameter directly inside the model, but we cannot do this because of a bug in TF <= 2.4, and will be solved in the TF 2.5 release. So as long as this release is not out, we cannot fix this, so we have to live with this bug, even though this is an edge use case.
What do you think?<|||||>I think we should document that this does not work and encourage users to use named inputs then.<|||||>I have completed the documentation of the `input_processing` function. Does-it sounds enough as explanation for you?<|||||>LGTM!<|||||>LGTM! @LysandreJik feel free to merge if the PR gets your approval! |
transformers | 8,984 | closed | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): | Hi
When I evaluate finetune_trainer.py on translation datasets like wmt16-en-cs, I always get this error after calling the evaluate function. I am using version 3.5.1 of transformers on 1 GPU. This issue is really blocking me and happens for all translation datasets I tried. Could you give me some ideas on this? Thanks
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
| 12-08-2020 10:41:07 | 12-08-2020 10:41:07 | Hello! We really cannot help you with this little information. Please respect the issue template with your environment information, the exact command you used to launch the script, and the full stack trace.
Thank you for your understanding.<|||||>@rabeehk Did you manage to solve this? Experiencing the same issue fine-tuning mT5 on toy data.<|||||>yes I did managed, the issue was I needed to set a longer max_length for the decoder.<|||||>
if you want to debug your codes, go to the place where huggingface computes the final metrics, like bleu, ... and there you can check that prediction max length and targets max-length need to match<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,983 | closed | BertConfig.id2label use list instead of "int: string" dict | In https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L262, use list instead of "int: string" dict maybe is better.
When we use easydict in place of dict, there will be bugs when loading the result of `.to_dict()` into another easydict object, because an int object cannot be a key in easydict.
Solve method:
use ```["LABEL_{}".format(i) for i in range(num_labels)]``` to replace ```self.id2label = {i: "LABEL_{}".format(i) for i in range(num_labels)}```. | 12-08-2020 09:24:06 | 12-08-2020 09:24:06 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,982 | closed | [Example] Fix the argument name mismatch in the distillation example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This past [PR](https://github.com/huggingface/transformers/pull/6315) replaced the argument name `n_gpu` of the distillation example with `gpus`. This causes a crash in running the example since the rest of the example code still uses the old argument name (`n_gpu`).
This PR solves the issue and gets the distillation example to run.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-08-2020 07:49:35 | 12-08-2020 07:49:35 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,981 | closed | Model templates overhaul | This is the first PR to make the model templates better. It improves the templates themselves, as well as the testing tools around them:
- Re-instantiates the tests in the CI, this time as a separate test.
- Respects the library style, and tests it. These tests ensure that the templates have not diverged from the code base, especially due to the `# Copied from ...`.
- Implements a decoder model, with support for cross attentions, to be used in the encoder-decoder framework.
- Implements the same decoder model in TensorFlow
- Implements multiple types of position embeddings, similarly to BERT.
- Tests every new feature.
- Tokenizer separation between slow and fast
- Soft dependency on `cookiecutter`
- Adds easily tweakable integration tests for both PyTorch and TensorFlow
Things left for overhaul 2 & 3:
- General tokenizer improvements, I'm not happy with their current state and it's not tested. I find it surprisingly difficult to have a template for a tokenizer that is general enough, so I'm probably going to try to cover as much possible use-cases as possible
- Encoder-decoder w/ @patrickvonplaten
Things to improve for this overhaul (1):
- Probably speeding up the model templates test. It's running on github actions right now, and it's quite slow (8 minutes) even though the downloads are cached. Possible options are
- use CircleCI instead
- Cache the whole environment
- Probably others, thinking about it
- The test runs on each commit + when the branch is opened. That's unnecessary. | 12-08-2020 05:21:03 | 12-08-2020 05:21:03 | Thank you all for your reviews. Will do the last changes and update from `LMHead` to `CausalLM`.
@jplu the GA machines are indeed less beefy than the CircleCI ones (and there's a reason for that, we pay CircleCI but not GA).<|||||>Closing PR and opening it again because the history is messed up. |
transformers | 8,980 | closed | [wip] [ci] doc-job-skip take #4 dry-run | This is take 4 on attempting to find a reliable way to get a list of modified files of this PR.
Spent the whole day trying many different ideas, none worked. github API for
`https://api.github.com/repos/${CIRCLE_USERNAME}/${CIRCLE_REPO_NAME}/pulls/${CIRCLE_PR_NUMBER}`
is broken. It gives a bogus `base.sha` at times, e.g. when someone force-pushes into master and then you end up with a base.sha which has nothing to do with the fork. on github website everything is valid, but github API gives bogus info.
So after many attempts I give up on trying to get a reliable way via the SHA information provided via github or circleCI.
The only solution that seems to work is to replicate user's original branch on their fork. Cost is about 3secs.
To do that one has to clone user's repo, switch to their branch and find the branching point identical to what `make fixup` does. This is what the latest incarnation of this PR does.
This PR doesn't enable the skipping yet; it just reports what it would have done and dumps the list of modified files so that we can check whether we hit edge cases that this incarnation doesn't cover.
Please have a look and see if it looks safe to merge and then monitor for a while and if all seems in order then we can enable skipping.
This PR will not be able to handle PRs originating from a github direct file edit, as can be seen from https://github.com/huggingface/transformers/pull/8997, since CircleCI fails to pass the PR number to the job in this situation :( The whole skip check is skipped in that case and the job continues normally - we just don't get the saving on direct doc PRs. I'm still trying to see if CircleCI should be providing this data, since according to https://developer.github.com/webhooks/event-payloads/#pull_request this hook should be sending the PR number to CircleCI.
When I had the idea first little did I know that this trivial one-liner on user-side (we use it in `make fixup`) will turn out to be such a complicated and unreliable thing on CI.
@LysandreJik | 12-08-2020 05:00:07 | 12-08-2020 05:00:07 | trying to self-heal PR<|||||>I guess continuing over here: https://github.com/huggingface/transformers/pull/8997<|||||>ok, succeeded at restoring this PR after force pushed mess up.
Thanks to this recipe: https://gist.github.com/robertpainsi/2c42c15f1ce6dab03a0675348edd4e2c<|||||>let's please monitor new PRs closely, so that it doesn't somehow break the job while testing things. thank you.<|||||>> Crazy that even the GitHub API is unreliable on that front.
I haven't seen it with my own eyes (other than when force pushed history of master was rewritten yesterday), but I found more than one report of it being unreliable on various forums.
It's possible that they are referring to the situation when someone force pushes into the PR branch, thus changing the local history which could impact the forking/branching point (== `base.sha`), but github API continues to report the original `base.sha` for awhile - I think based on reports due to caching.
So this workaround goes into the user's branch and derives an up-to-date branching point from it - at the cost of needing to clone their forked repo.
transformers | 8,979 | closed | [training] SAVE_STATE_WARNING was removed in pytorch | `SAVE_STATE_WARNING` has been removed from pytorch 3 days ago: pytorch/pytorch#46813
I had to add redundant ()'s to avoid a terrible auto-formatter outcome.
Fixes: #8232
@sgugger, @LysandreJik | 12-08-2020 02:42:29 | 12-08-2020 02:42:29 | Thank God they removed that horrible thing! Instead of the complex parsing, maybe a `try`/`except` would be cleaner?<|||||> no horrible formatting that way! good idea - done.<|||||>@stas00, could it be possible to apply your fix to the code of version `3.5.X` and release the next **minor** version (like v3.5.2)?<|||||>@LysandreJik, your help is needed here. I don't know anything about how old branches maintenance is done.
This PR was merged in 4x series and @vyshkant is requesting this fix applied to 3.5.x for the next release.
Thank you.<|||||>No we won't do fixes for old versions, either upgrade to v4 or use PyTorch < 1.8 if you want to stick to v3.5. |
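For illustration, a sketch of the `try`/`except` approach mentioned in the discussion above. It assumes the constant's old home in `torch.optim.lr_scheduler`; on PyTorch builds where it was removed, the fallback is just an empty string:

```python
try:
    from torch.optim.lr_scheduler import SAVE_STATE_WARNING  # removed in newer PyTorch
except ImportError:
    SAVE_STATE_WARNING = ""  # harmless fallback so downstream warning filtering keeps working
```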
transformers | 8,978 | closed | Deepcopy and pickling fails for modeling_outputs | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: N/A
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers.modeling_outputs import BaseModelOutput
>>> import torch
>>> import copy
>>> x = BaseModelOutput(last_hidden_state=torch.ones(1,))
>>> z = copy.deepcopy(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/lib/python3.8/copy.py", line 263, in _reconstruct
y = func(*args)
TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
>>> import pickle
>>> obj = pickle.dumps(x)
>>> obj_loaded = pickle.loads(obj)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No failures when using deepcopy or pickle.dumps/pickle.loads | 12-08-2020 02:07:18 | 12-08-2020 02:07:18 | Pinging @sgugger, the king of model outputs!<|||||>That was very quick :) Thank you @sgugger and @LysandreJik ! |
transformers | 8,977 | closed | BertForMaskedLM train | I have a question
When training using BertForMaskedLM, is the train data as below correct?
- token2idx
```
<pad> : 0, <mask>: 1, <cls>:2, <sep>:3
```
- max len : 8
- input token
```
<cls> hello i <mask> cats <sep>
```
- input ids
```
[2, 34,45,1,56,3,0,0]
```
- attention_mask
```
[1,1,1,1,1,1,0,0]
```
- labels
```
[-100,-100,-100,64,-100,-100,-100,-100]
```
I wonder if I should also assign -100 to labels for padding token. | 12-08-2020 01:33:59 | 12-08-2020 01:33:59 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
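Not an official answer from the thread, but a short sketch of the usual convention for the question above. It relies on PyTorch's `CrossEntropyLoss` ignoring targets equal to -100 (the id values below are the ones from the question):

```python
import torch

input_ids = torch.tensor([[2, 34, 45, 1, 56, 3, 0, 0]])  # <cls> ... <mask> ... <sep> <pad> <pad>
labels = torch.tensor([[-100, -100, -100, 64, -100, -100, -100, -100]])
# Only position 3 (the masked token) contributes to the loss; padding positions,
# like every other non-masked position, are set to -100 and therefore ignored.
```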
transformers | 8,976 | closed | Check table as independent script | Separated the table check from `check_copies.py` as for the template I need to manage the copies without managing the table. | 12-07-2020 23:03:34 | 12-07-2020 23:03:34 | |
transformers | 8,975 | closed | Update quicktour docs to showcase the use of truncation | # What does this PR do?
Currently, running the tokenizer batch example on https://huggingface.co/transformers/quicktour.html gives an error
```
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
```
This PR fixes the above by passing the `max_length` param explicitly (instead of depending on it having a default, which might not be the case for all models).
The fix also adds clarity to the statement in the docs above this example
> If your goal is to send them through your model as a batch, you probably want to pad them all to the same length, truncate them to the maximum length the model can accept and get tensors back
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| 12-07-2020 22:56:36 | 12-07-2020 22:56:36 | |
transformers | 8,974 | closed | Add option to only check copies | 12-07-2020 22:48:36 | 12-07-2020 22:48:36 | Closing in favor of #8976 |
|
transformers | 8,973 | closed | Small fix to the run clm script | # What does this PR do?
@LysandreJik pointed out that the scripts will fail with a cryptic error if the tokenizer `model_max_len` is huge and no `block_size` is set. This PR fixes this by clipping the `block_size` to 1024 (when no value is passed). | 12-07-2020 22:13:50 | 12-07-2020 22:13:50 | |
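A hedged sketch of the clipping behavior described above (illustrative only; the variable names are assumptions, not the script's exact code):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
block_size_arg = None  # stand-in for the script's block_size argument

if block_size_arg is None:
    # some tokenizers report a huge sentinel model_max_length, so cap the default
    block_size = min(tokenizer.model_max_length, 1024)
else:
    block_size = min(block_size_arg, tokenizer.model_max_length)
print(block_size)
```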
transformers | 8,972 | closed | Removed unused `encoder_hidden_states` and `encoder_attention_mask` | # What does this PR do?
This PR removes unused `encoder_hidden_states` and `encoder_attention_mask` from MobileBert forward methods. These are use for decoder models, but MobileBert does not include a cross-attention mechanism.
Fixes https://github.com/huggingface/transformers/issues/8969
## Who can review?
albert, bert, XLM: @LysandreJik
| 12-07-2020 20:27:00 | 12-07-2020 20:27:00 | @LysandreJik I had to remove some tests that were testing the decoder mode for MobileBert.
One test still fails (flax), the error seems unrelated to this PR unless I am missing something? |
transformers | 8,971 | closed | MPNet: Masked and Permuted Pre-training for Language Understanding | # Model addition
[MPNet](https://arxiv.org/abs/2004.09297)
## Model description
MPNet introduces a novel self-supervised objective named masked and permuted language modeling for language understanding. It inherits the advantages of both the masked language modeling (MLM) and the permuted language modeling (PLM) to addresses the limitations of MLM/PLM, and further reduce the inconsistency between the pre-training and fine-tuning paradigms.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-07-2020 19:02:47 | 12-07-2020 19:02:47 | @patrickvonplaten I have added an integration test with a pre-trained weight in [https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240](https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240)
<|||||>> @patrickvonplaten I have added an integration test with a pre-trained weight in https://github.com/StillKeepTry/transformers/blob/dfc18d59da04c38723553354ee1799ce204f52c8/tests/test_modeling_mpnet.py#L240
That's awesome thanks a lot!<|||||>Think you need to run `make style` and then the last test should pass as well :-)<|||||>@jplu
I have updated the inputs handling in the TF file now, and rebase and fix the conflicting files.
Besides, I have used `make style` multiple times but it still reports an error in `check_code_quality`. And I have checked and it seems the problem is not from my added part in the [https://github.com/StillKeepTry/transformers/blob/master/src/transformers/__init__.py](https://github.com/StillKeepTry/transformers/blob/master/src/transformers/__init__.py), despite it reports an error. <|||||>@LysandreJik Thanks for pointing out this problem. I have fixed it.<|||||>The `make style` issue is probably because of the isort version installed, maybe you can try uninstalling black/isort and doing the following at the root of the repo:
```
pip uninstall black isort
pip install -e .[quality]
```
If you want I can run `make style` and push on your branch so that it's ready to be merged.<|||||>@LysandreJik Thank you. You are right. It is because I have installed black and isort before. <|||||>Great, thanks! The quality test really doesn't like you haha!
This time I think it's because of the "# Copied from xxx ..." which still uses the old scheme (like `transformers.modeling_roberta.RobertaLMHead`) instead of the new scheme (like `transformers.models.roberta.modeling_roberta.RobertaLMHead`).<|||||>It seems ok now :) ...<|||||>@jplu I have fixed your comments now.<|||||>Thanks!!
@sgugger @patrickvonplaten @LysandreJik I'm seeing that the `TFMPNetForPreTraining` and `MPNetForPreTraining` are missing from the TF and PT file. Should they be added? Otherwise it is fine for me :)<|||||>> Thanks!!
>
> @sgugger @patrickvonplaten @LysandreJik I'm seeing that the `TFMPNetForPreTraining` and `MPNetForPreTraining` are missing from the TF and PT file. Should they be added? Otherwise it is fine for me :)
I observe that some models also lack `TFXXXForPreTraining` and `XXXForPreTraining`. I am willing to add them in the next stage. <|||||>Hey @StillKeepTry,
we are super sorry, we had a problem yesterday with git and this is why your git history is cluttered with wrong commits earlier. I cleaned your PR and pushed it to a new branch on master here: https://github.com/huggingface/transformers/pull/9004 .
It should include all the commits you had earlier. I think we all gave our thumbs-up, so we could merge the other pull request to master (which would require the least amount of work from your side).
However if you want to be the main author of the PR (which is 100% understandable and which is what I would want!), can you do the following steps to open a new clean PR which was exactly like before:
In your repo (https://github.com/StillKeepTry/transformers), assuming that the remote to the original hugging face repo (https://github.com/huggingface/transformers.git) is called `upstream`:
```
$ git fetch upstream
$ git checkout upstream/master
$ git checkout -b add_mp_net_new
# now we'll cherry pick all of your commits
$ git cherry-pick 7361516^..78dcc71
$ git push
# => now you should be able to open new PR with exactly the commits you had previously
```
Lemme know if you need help doing this (or if you don't mind merging https://github.com/huggingface/transformers/pull/9004 - but it would be fairer to you if you're also officially the main author!).
Big sorry again!<|||||>@patrickvonplaten Never mind, just use your PR. I am ok if our work can be merged into the master quickly. <|||||>Hello! I can not see the data collator for permuted and masked language models. Was it added also inside HuggingFace? There is an already proposed way to do this collator inside the trainer?
Thanks!<|||||>@gaceladri we have an example for permutation language modeling, check it out here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_plm.py<|||||>Hi @LysandreJik, thank you for your kind response. This data collator that you pointed me out, is the collator from permuted language model used in XLNet right? I am unsure that this is a collator to replicate MPNet that mask tokens, not indices and also do the permutation. Sure that I am misunderstanding something... |
transformers | 8,970 | closed | Copyright | # What does this PR do?
This PR adds a copyright in any file missing, or fixes the copyright in some of the files to include HuggingFace when missing. We should be vigilant when new files are added to make sure they get one @LysandreJik and @patrickvonplaten
I've excluded the examples folder as I'll do the copyright addition along with the cleaning. | 12-07-2020 18:26:04 | 12-07-2020 18:26:04 | Merging to avoid merge conflicts. Can address comments in a follow-up PR. |
transformers | 8,969 | closed | MobileBERT decoder capabilities | The current input parameters for MobileBERT indicate that the model may be used in a decoder setting. However, the model architecture does not contain a cross-attention mechanism and several inputs to the model are effectively never used: `encoder_hidden_states` and `encoder_attention_mask`.
This can be seen in:
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L247, where these 2 inputs are not used
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L330, where these inputs are just passed to the previous forward function (where they have no impact)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L496, where these parameters are not used (not even passed to the `MobileBertAttention`)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L552 where they are passed to the `MobileBertLayer` described above (therefore without impact)
- https://github.com/huggingface/transformers/blob/de6befd41f3986c68f4af302761b627cb6519eb7/src/transformers/models/mobilebert/modeling_mobilebert.py#L847 where they will trigger some reshaping of the attention mask, but eventually not get used.
I believe these unused inputs make the code more difficult to follow and potentially misleading (I don't believe the model can actually be used as a decoder).
Would you be generally supportive of a cleanup of the MobileBERT architecture to reflect its current capabilities? I'd be happy to share a PR but I wanted to check your general thoughts on this.
Thank you,
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
(I did not find anyone listed for MobileBERT, but since this is relevant for package maintenance I believe you may be the right person for this.)
| 12-07-2020 17:33:04 | 12-07-2020 17:33:04 | You are right, these are never used. These seem to have been added by mistake during the original implementation. I would be all for a cleanup, as long as it doesn't touch anything other than these attributes. |
transformers | 8,968 | closed | 02-transformers.ipynb - output from model only strings 'last_hidden_state', 'pooler_output' | Running in NVIDIA Docker Container: nvcr.io/nvidia/pytorch:20.11-py3
Pytorch version: 1.8.0a0+17f8c32
transformers version: 4.0.0-rc-1
Python version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21) [GCC 7.3.0]
When running through transformers/notebooks/02-transformers.ipynb, I see the following output at this point:
```python
outputs, pooled = model(tokens_pt)
print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
```
output for this part:
```
AttributeError Traceback (most recent call last)
<ipython-input-47-cda4654dfa83> in <module>
20 #outputs = model_outs[0]
21 #pooled = model_outs[1]
---> 22 print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
AttributeError: 'str' object has no attribute 'shape'
```
This is because the values of `outputs` and `pooled` are the strings `'last_hidden_state'` and `'pooler_output'`, respectively.
However, the following change (comment out original line and replace it with version where `model` output is captured in a single object whose 0 and 1 indices are accessed) produces the desired result:
```python
#outputs, pooled = model(tokens_pt)
model_outs = model(tokens_pt)
outputs = model_outs[0]
pooled = model_outs[1]
print("Token wise output: {}, Pooled output: {}".format(outputs.shape, pooled.shape))
```
I am not sure if this is a PyTorch version thing or what, but was hoping to either get some insight or alert you to something coming up when upgrading to the latest pytorch. Thank you. | 12-07-2020 17:20:01 | 12-07-2020 17:20:01 | Exactly same issue while using transformers. I'm using Pytorch 1.7. The solution presented solved the issue |
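(For reference: in `transformers` v4 the models return a dict-like `ModelOutput` by default, and tuple-unpacking a dict yields its keys, which is exactly why the two strings show up. A minimal sketch of the two ways around it — the checkpoint name here is only illustrative:)
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")
tokens_pt = tokenizer("This is an input example", return_tensors="pt")

# Option 1: keep the ModelOutput and access the fields by name
out = model(**tokens_pt)
print(out.last_hidden_state.shape, out.pooler_output.shape)

# Option 2: ask for plain tuples so the old unpacking style keeps working
outputs, pooled = model(**tokens_pt, return_dict=False)
print(outputs.shape, pooled.shape)
```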
transformers | 8,967 | closed | EncoderDecoderModel works poorly with Mlflow | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
## Description
Mlflow keeps track of model config parameters, EncoderDecoderConfig has `encoder` and `decoder` parameters which are basically configs of encoder and decoder. They are converted first `to_dict` and then to string `str()` by mlflow resulting in a long string that can not be fit. The `MAX_PARAM_VAL_LENGTH` of Mlflow is set to 250 and AFAIK can not be changed.
This result in an error:
`MlflowException: Param value '{'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'use_bfloat16': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'is_encoder_decoder': False, 'is_decoder': False, 'add_cross_attention': Fa' had length 1669, which exceeded length limit of 250`
My solution for training phase is:
```python
class DummyClass:
    def to_dict(self):
        return {}

model.config.encoder = DummyClass()
model.config.decoder = DummyClass()
```
@patrickvonplaten | 12-07-2020 17:18:10 | 12-07-2020 17:18:10 | Thanks for your proposed solution @alexyalunin !
I don't really think that this is a problem on our side. I think MLFlow should better handle this no? <|||||>Probably, but since you use mlflow inside your library people might expect it working with your models. I won't open the issue in mlflow repo, I leave it to someone who encounters this error again. @patrickvonplaten You can close this issue then. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,966 | closed | Make loss function an init parameter | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I would like to request an optional init parameter of `*Model` (e.g. BertModel) that allows the user to provide their own loss function for training. If it is None, it will fall back to the default implementations.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Main motivation is ease of use for the user.
Say for example, I want to change the default `CrossEntropyLoss` in a non-binary classification with `BertForSequenceClassification`, I have to override `forward` in a subclass or write my own class.
Now I will have to do this for all model types (RoBERTa, XLNet ...) I want to also check out. Because of subclassing I won't be able to use `AutoModelForSequenceClassification.from_pretrained`.
Suppose this is an init parameter with default `None`. Any model can be instantiated, and in the `forward` method the default loss functions will be used unless a custom loss function (factory) is explicitly provided.
```python
def forward(...):
    # ...
    loss = None
    if labels is not None:
        if self.num_labels == 1:
            # We are doing regression
            loss_fct = MSELoss()
            loss = loss_fct(logits.view(-1), labels.view(-1))
        else:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
to
```python
def __init__(..., loss_fct_cls=None):
    # ....
    self.loss_fct_cls = loss_fct_cls

def forward(...):
    # ...
    loss = None
    if labels is not None:
        if self.num_labels == 1:
            # We are doing regression
            loss_fct = MSELoss() if not self.loss_fct_cls else self.loss_fct_cls()
            loss = loss_fct(logits.view(-1), labels.view(-1))
        else:
            loss_fct = CrossEntropyLoss() if not self.loss_fct_cls else self.loss_fct_cls()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    # ....

# usage
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config, loss_fct_cls=torch.nn.BCEWithLogitsLoss)
```
The user still has to be careful to use a loss function that matches the number of labels, for example, but they can transition between models more easily. The overhead in performance and in adapting an optional custom loss to each model should not be that high. It is just another hyperparameter that further allows for customization in using the models.
This might also allow for easier multi-class multi-label classifications, as it now is more for multi-class single-label, isn't it?
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
In case the feature request will not immediately be denied because of reasons (not sure which?), I can start extending the existing models to allow for using an optional loss function.
I'm just not sure what the final parameter name should be because changing it can probably be done with `sed`/search-replace but doing it right the first time is just being efficient. I'm also not sure whether to store it in the config object or in the model as an attribute. (For performance reasons, it would still cache it as a model property for slightly faster access but I did not design the whole library, so I might be wrong on my thoughts.)
| 12-07-2020 16:43:24 | 12-07-2020 16:43:24 | While I understand where you're coming from, why not define the loss function outside of the model, like it's generally done for PyTorch models? If you do not pass the labels, you get the logits from the model. You can use these logits with your labels to compute any loss you want.<|||||>Ok. Good I asked first because I did not know this general practice. 😄
So, if I want to use the `Trainer`, I will then be required to only override:
https://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/src/transformers/trainer.py#L1114-L1120
→ I would need to split the labels from the input, feed it into the model as usual and then compute the loss afterwards manually. Or just ignore the default computed loss and compute my own loss myself and override it.
My own loss computation can then still be like this:
https://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/src/transformers/models/bert/modeling_bert.py#L1383-L1391
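Spelled out, that would give a `Trainer` subclass roughly like this (a sketch only — the class weights and the binary setup are placeholder assumptions):
```python
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        labels = inputs.pop("labels")      # keep the labels out of the forward pass
        outputs = model(**inputs)          # without labels, the model just returns logits
        logits = outputs.logits
        # placeholder: class-weighted cross-entropy instead of the default loss
        loss_fct = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0], device=logits.device))
        return loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
```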
I think this is even easier. Thank you. Not sure why I did not see this ...<|||||>Yes, the trainer has a `compute_loss` method that is simple to override for that exact purpose. Glad you're satisfied with the outcome, and thanks for opening such a detailed issue.<|||||>I stumbled over this exact method when the `Trainer` was introduced but did not realize that the models still return the raw logits that I then can use for custom loss computation ...
Well, I try to research before opening issues, and looking through the source code often helps understand some details but the code base keeps growing and changing, so it's sometimes hard to keep up and not miss some obvious things.
😄 |
transformers | 8,965 | closed | Remove sourcerer | # What does this PR do?
Removes sourcerer from the readme
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
documentation: @sgugger
| 12-07-2020 16:12:59 | 12-07-2020 16:12:59 | |
transformers | 8,964 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add model card.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-07-2020 15:48:27 | 12-07-2020 15:48:27 | |
transformers | 8,963 | closed | PegasusTokenizer requires the SentencePiece library but it was not found in your environment | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Google Colab
- Python version: 3.6.9
### Who can help
tokenizers: @mfuntowicz
Pegasus: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
```
!pip install -U transformers
!pip install sentencepiece
from transformers import PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
```
Error:
```
ImportError Traceback (most recent call last)
<ipython-input-7-12d68b5e397b> in <module>()
1 from transformers import PegasusTokenizer
----> 2 tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
/usr/local/lib/python3.6/dist-packages/transformers/utils/dummy_sentencepiece_objects.py in from_pretrained(self, *args, **kwargs)
54 @classmethod
55 def from_pretrained(self, *args, **kwargs):
---> 56 requires_sentencepiece(self)
57
58
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in requires_sentencepiece(obj)
459 name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
460 if not is_sentencepiece_available():
--> 461 raise ImportError(SENTENCEPIECE_IMPORT_ERROR.format(name))
462
463
ImportError:
PegasusTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.
```
| 12-07-2020 15:42:05 | 12-07-2020 15:42:05 | Hey @marcoabrate,
I tried to reproduce your error in this colab without success: https://colab.research.google.com/drive/1nBCEtP773LplNodOSw5OBW-rJ84gizW5?usp=sharing can you check again ?<|||||>You are right @patrickvonplaten
To reproduce
```
!pip install -U transformers
from transformers import PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
!pip install sentencepiece
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
```
Very weird<|||||>Having the same issue, although solely in my Jupyter notebook. Running the code from a file works fine.<|||||>Probably it's the way the Jupyter kernel works. Indeed, if you restart the kernel and install `sentencepiece` first, it works.<|||||>As @marcoabrate said, I restarted the kernel in my project and without code changes everything started working<|||||>I can confirm that this solution is still valid today. I encountered the same issue today.
I solved it by adding `!pip install sentencepiece` and then fully restarting the Jupyter environment and rerunning.<|||||>I am unable to install the SentencePiece library; this is the error I get when I run pip3 install sentencepiece:
error: legacy-install-failure
<|||||>Restarting the kernel and using
!pip install Transformers==3.2.0 instead of !pip install Transformers, worked for me |
transformers | 8,962 | closed | Use word_ids to get labels in run_ner | # What does this PR do?
As #8958 pointed out, the current way labels are computed in the `run_ner` script using offset mappings does not work for sentencepiece-based tokenizers. This PR fixes that using the `.word_ids` method which is more elegant and more reliable.
In passing, it adds an early check that the tokenizer is fast (otherwise the script just doesn't work).
Fixes #8958
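The new alignment logic looks roughly like this (a sketch using the names from the script, e.g. `label_to_id` and the `label_all_tokens` flag):
```python
tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
labels = []
for i, word_labels in enumerate(examples["ner_tags"]):
    previous_word_id = None
    label_ids = []
    for word_id in tokenized_inputs.word_ids(batch_index=i):
        if word_id is None:                    # special tokens are ignored by the loss
            label_ids.append(-100)
        elif word_id != previous_word_id:      # first sub-token of a word gets the word's label
            label_ids.append(label_to_id[word_labels[word_id]])
        else:                                  # other sub-tokens: same label or -100
            label_ids.append(label_to_id[word_labels[word_id]] if label_all_tokens else -100)
        previous_word_id = word_id
    labels.append(label_ids)
tokenized_inputs["labels"] = labels
```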
| 12-07-2020 15:11:10 | 12-07-2020 15:11:10 | |
transformers | 8,961 | closed | Optional layers | # What does this PR do?
This PR adds the possibility to have optional layers in the models thanks to the new input/output process. Here the pooling layer is created or not for the BERT/ALBERT/Longformer/MobileBERT/Roberta models. The keys to ignore when loading for these layers have been updated at the same time. | 12-07-2020 15:11:10 | 12-07-2020 15:11:10 | Thanks for working on this @jplu. I think we should take the opportunity to think about this issue: https://github.com/huggingface/transformers/issues/8793.
The problem with the `add_pooling_layer` option and how it's currently done in PyTorch models is that when doing a training initialized from a model checkpoints that *contains the pooling layer*, like `bert-base-cased`:
```py
model = BertForMaskedLM.from_pretrained("bert-base-cased")
# Fine-tune the model on an MLM task
```
we're losing the pooling layer doing so. It's not a big deal here as we're doing an MLM task, however, if we want to use that model for a downstream task:
```py
model.save_pretrained("bert-base-cased-finetuned-mlm")
classifier_model = BertForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mlm")
```
we're now having a classifier model that has a randomly initialized pooling layer, whereas the weights that were stored in the `bert-base-cased` original checkpoint would have been better than a randomly initialized layer.
The issue is that right now, we have no way of specifying if we want to keep the pooling layer or not in such a setup. I would argue that controlling it from the configuration would really be useful here, rather than setting it to `add_pooling_layer=False` in architectures that do not need it.
cc @jplu @sgugger @patrickvonplaten <|||||>Indeed, it starts to be more complicated than we thought at the beginning, but the case you are raising is a very good one!!
I think that controlling this from the config to have the same behavior would be more flexible, I +1 this proposal!<|||||>> Thanks for working on this @jplu. I think we should take the opportunity to think about this issue: #8793.
>
> The problem with the `add_pooling_layer` option and how it's currently done in PyTorch models is that when doing a training initialized from a model checkpoints that _contains the pooling layer_, like `bert-base-cased`:
>
> ```python
> model = BertForMaskedLM.from_pretrained("bert-base-cased")
> # Fine-tune the model on an MLM task
> ```
>
> we're losing the pooling layer doing so. It's not a big deal here as we're doing an MLM task, however, if we want to use that model for a downstream task:
>
> ```python
> model.save_pretrained("bert-base-cased-finetuned-mlm")
> classifier_model = BertForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mlm")
> ```
>
> we're now having a classifier model that has a randomly initialized pooling layer, whereas the weights that were stored in the `bert-base-cased` original checkpoint would have been better than a randomly initialized layer.
>
> The issue is that right now, we have no way of specifying if we want to keep the pooling layer or not in such a setup. I would argue that controlling it from the configuration would really be useful here, rather than setting it to `add_pooling_layer=False` in architectures that do not need it.
>
> cc @jplu @sgugger @patrickvonplaten
I remember that we were thinking about adding a config param for `add_pooling_layer` for PT: https://github.com/huggingface/transformers/pull/7272 and decided not to. I still think the cleaner solution is to **not** add a config param because it's a very weird use-case IMO. Why wouldn't the user just use a `BertForPreTraining` model for his use case? But I'm also fine with adding a config param instead. It's not a big deal to me...but in this case I'd definitely prefer to not add it to the general `PretrainedConfig`, but to each model's config.<|||||>Good point regarding the `BertForPreTraining`. I think this is a use-case (you want to keep a layer from another architecture) where you would want to build your own architectures for that, to have complete control over the layers.
I think we might be missing some documentation on how to do that, and on how creating an architecture that inherits from `PreTrainedModel` works, but this is a discussion for another time.
Ok to keep it this way.<|||||>LGTM for me! |
transformers | 8,960 | closed | TFBertModel NOT learning at all! | Hi, i am trying to implement a simple Keras model where the first inputs are the input_ids and the attention_mask and then i have a `TFBertModel.from_pretrained('bert-base-uncased')` layer to extract the word embeddings and everything compiles okay, but when I train the model I get a constant accuracy of 0.5 (it is a binary classification problem).
Here is how I've defined my model:

And I am using `BertTokenizer.from_pretrained('bert-base-uncased')` to prepare the dataset. I might also have a problem with how i feed the data to the model, I am not sure, so here is a scr of that too:

| 12-07-2020 14:30:25 | 12-07-2020 14:30:25 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 8,959 | closed | FileNotFoundError: [Errno 2] No such file or directory: 'cached_train_BertTokenizer_180.lock' | I want to train the model bert-base-german-cased on some documents, but when I try to run run_ner.py with the config.json it tells me, that it can't find the file mentioned above.
I don't quite know what's the issue here, because it worked the last time I tried. Do I have to tell the model it shouldn't use any cached files? I tried that with the overwrite_cache flag.
Does anyone have a clue what could be the problem? | 12-07-2020 14:16:54 | 12-07-2020 14:16:54 | Hi! Could you provide the information related to your environment, as well as the command that you used to launch the script, as it's requested in the issue template? Thank you.<|||||>Yes sure!
- `transformers` version: 3.5.1
- Platform: Linux-5.9.1-kd-cluster-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Model I am using: BERT, specifically "bert-base-german-cased"
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Traceback:
`Traceback (most recent call last):
File "run_ner.py", line 324, in <module>
main()
File "run_ner.py", line 187, in main
TokenClassificationDataset(
File "/home/IAIS/tschmude/bert_remote/examples/token-classification/utils_ner.py", line 240, in __init__
with FileLock(lock_path):
File "/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py", line 323, in __enter__
self.acquire()
File "/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py", line 271, in acquire
self._acquire()
File "/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/filelock.py", line 384, in _acquire
fd = os.open(self._lock_file, open_mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/tschmude/PycharmProjects/smart-sentencing/examples/token-classification/Data processing scripts/Data_Preprocessed/cached_train_BertTokenizer_180.lock'
## Expected behavior
I'm running `python run_ner.py Data/config.json` to train the model for custom NER recognition. I have a couple self defined labels. It has worked before, but I can't quite tell what has changed since then. I already deleted cached .lock files that I could find.
<|||||>Would you mind providing the `config.json` as well, given that it contains your launch command? Thank you!<|||||>Sure, this is my config.json:
```json
{
"data_dir": "/home/tschmude/PycharmProjects/smart-sentencing/examples/token-classification/Data processing scripts/Data_Preprocessed",
"labels": "./Data/labels.txt",
"model_name_or_path": "bert-base-german-cased",
"output_dir": "./Data/Models",
"task_type": "NER",
"max_seq_length": 180,
"num_train_epochs": 6,
"per_device_train_batch_size": 48,
"learning_rate": 0.001,
"seed": 1,
"overwrite_cache": true,
"fp16": true,
"do_train": true,
"do_predict": true,
"do_eval": true
}
```
<|||||>Issue solved... it had to do with a dumb typo in the path, sorry for the confusion!<|||||>No problem, glad you solved your issue! |
transformers | 8,958 | closed | run_ner.py with xlm-roberta-base raises an IndexError in tokenize_and_align_labels | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
`transformers` version: 4.0.0 (and the example scripts from git master aka 72d6c9c6)
- Platform: Linux-4.19.0-12-amd64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: True (but we don't get that far)
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
examples/token-classification: @stefan-it
documentation: @sgugger
-->
git blame says @sgugger
## Information
Model I am using (Bert, XLNet ...): xlm-roberta-base
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `python3 run_ner.py --model_name_or_path xlm-roberta-base --task_name ner --dataset_name conll2003 --label_all_tokens --do_train --do_eval --output_dir finetuning-output`
Crashes with the following stacktrace:
```
Traceback (most recent call last):
File "run_ner.py", line 394, in <module>
main()
File "run_ner.py", line 292, in main
tokenized_datasets = datasets.map(
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 286, in map
{
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 287, in <dictcomp>
k: dataset.map(
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1239, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/vitt/.conda/envs/cuda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1210, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_ner.py", line 277, in tokenize_and_align_labels
current_label = label_to_id[label[label_index]]
IndexError: list index out of range
```
From a little debugging, the problem seems to be that this code assumes there are only as many sequences with `offset[0] == 0 and offset[1] != 0` as there are words in the original input (and thus as there are labels):
https://github.com/huggingface/transformers/blob/72d6c9c68ba19b2e991b0d7a32989410399b33f5/examples/token-classification/run_ner.py#L276-L278
However, the SentencePiece tokenizer may split input words to sequences starting with a single `'▁'` token. Then, the offset mapping for '▁' will be `(0, 1)` and for the following token `(0, x)` (E.g. '.' in the CONLL data ⇒ `['▁', '.']` with offsets `[(0, 1), (0, 1)]` or ['NACCO'] ⇒ `('▁', (0, 1)), ('NAC', (0, 3)), ('CO', (3, 5))`.
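A quick way to see the failing assumption (the token and offset values are the ones quoted above; the exact output may vary slightly with the tokenizer version):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=True)
enc = tok(["NACCO"], is_split_into_words=True, return_offsets_mapping=True)
print(enc.tokens())           # the single word is split into '▁', 'NAC', 'CO' (plus special tokens)
print(enc["offset_mapping"])  # both '▁' and 'NAC' start at character 0, so the old check counts two "word starts"
print(enc.word_ids())         # word_ids() maps every piece back to word 0 unambiguously
```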
(Could this use `tokenized_inputs.word_ids()` instead?) | 12-07-2020 14:10:17 | 12-07-2020 14:10:17 | Thanks for flagging! Yes, using `word_ids` is probably a better idea in this case, I did that in the PR mentioned above. If you want to review it, I'd be happy to take your comments into account! |
transformers | 8,957 | closed | Update README.txt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-07-2020 13:11:19 | 12-07-2020 13:11:19 | |
transformers | 8,956 | closed | Update README.txt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-07-2020 04:15:13 | 12-07-2020 04:15:13 | Closed in favor of #8957 |
transformers | 8,955 | closed | shutil.Error: Destination path '/home/ubuntu/.cache/huggingface/transformers/transformers' already exists | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 1.7.0
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
@sgugger
-->
## Information
Model I am using (MT5ForConditionalGeneration.):
The problem arises when using:
I am trying to run my script importing Mt5
```
In Transformers v4.0.0, the default path to cache downloaded models changed from '~/.cache/torch/transformers' to '~/.cache/huggingface/transformers'. Since you don't seem to have overridden and '~/.cache/torch/transformers' is a directory that exists, we're moving it to '~/.cache/huggingface/transformers' to avoid redownloading models you have already in the cache. You should only see this message once.
Traceback (most recent call last):
File "__main__.py", line 87, in <module>
from data_science.recommenders.content_recommender.context_similarity import Context_Similarity
File "/home/ubuntu/parth/trell-ds-framework/data_science/recommenders/content_recommender/context_similarity.py", line 5, in <module>
from sentence_transformers import SentenceTransformer
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/__init__.py", line 3, in <module>
from .datasets import SentencesDataset, SentenceLabelDataset
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/datasets.py", line 12, in <module>
from . import SentenceTransformer
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/sentence_transformers/SentenceTransformer.py", line 10, in <module>
import transformers
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/integrations.py", line 5, in <module>
from .trainer_utils import EvaluationStrategy
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/trainer_utils.py", line 25, in <module>
from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
File "/home/ubuntu/venv_trellai/lib/python3.6/site-packages/transformers/file_utils.py", line 227, in <module>
shutil.move(old_default_cache_path, default_cache_path)
File "/usr/lib/python3.6/shutil.py", line 548, in move
raise Error("Destination path '%s' already exists" % real_dst)
shutil.Error: Destination path '/home/ubuntu/.cache/huggingface/transformers/transformers' already exists
```
## To reproduce
I am using transformers==4.0.0 and I get this error, but when installing transformers==4.0.0rc1 the error doesn't show. Is there any reason for this?
| 12-07-2020 04:12:13 | 12-07-2020 04:12:13 | This was a bug added in v4, it's fixed on master so if you install from source, you should be fine.<|||||>I think alternatively you could also just delete the cache:
```
rm -rf /home/ubuntu/.cache/huggingface/transformers/transformers
```
but then you'll have to re-download all models<|||||>yeah, it worked!<|||||>@patrickvonplaten since this means that the cache has already been moved to `.cache/huggingface/transformers`, I think deleting the cache `.cache/torch/transformers` makes more sense, as you won't have to delete all the models you had in the initial cache, only those that were redownloaded when you went back to an older version.<|||||>Why does transformers throw an error? If the directory exists, why not just throw a warning? I think this is a bug and should be fixed.<|||||>Please read the full conversation:
> This was a bug added in v4, it's fixed on master so if you install from source, you should be fine. |
transformers | 8,954 | closed | Fine-tuning on Language Model using two tasks | Hi,
I'm reading the language modeling example on the documentation: https://huggingface.co/transformers/v2.0.0/examples.html#language-model-fine-tuning
It seems that the fine-tuning is done based on the Masked Language Modelling (MLM) loss, while in the BERT paper the LM fine-tuning is done by optimizing two tasks: 1) Masked Language Modeling, and 2) Next Sentence Prediction. I'm looking for the second part in Huggingface's implementation, but it seems that this part is either not implemented or I'm missing something? | 12-07-2020 02:42:25 | 12-07-2020 02:42:25 | Hi, you're not missing anything, this part is not implemented in the examples, as fine-tuning a model using only MLM yields similar downstream results to fine-tuning a model with both tasks.
However, we have the `BertForPreTraining` architecture which is implemented, and which can train a model using the two objectives. You would have to tweak the example scripts to manage this case, however.<|||||>Also, we try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks!<|||||>> Also, we try to keep the github issues for bugs/feature requests.
> Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
>
> Thanks!
Sure, thank you for clarifying! |
transformers | 8,953 | closed | Wrong shape output for loss of TFGPT2LMHeadModel | ## Environment info
- `transformers` version: 4.0.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
The following in a Python interpreter:
```python
import tensorflow as tf
import transformers
model = transformers.models.gpt2.TFGPT2LMHeadModel.from_pretrained('gpt2')
input_ids = tf.ones((1, 1024), dtype=tf.int32)
labels = tf.ones((1, 1024), dtype=tf.int32)
print(model(input_ids, labels=labels, return_dict=True, training=True).loss.shape)
```
Outputs
```
TensorShape([1023])
```
It seems the loss output is dependent on batch size:
```python
labels = tf.ones((2, 1024), dtype=tf.int32)
input_ids = tf.ones((2, 1024), dtype=tf.int32)
print(model(input_ids, labels=labels, return_dict=True, training=True).loss.shape)
```
Outputs
```
TensorShape([2046])
```
## Expected behavior
According to the docs (https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel), the loss is of shape `(1,)`. However, this is not the shape that is returned. | 12-06-2020 21:48:00 | 12-06-2020 21:48:00 | Hello !
This is an error in the documentation. TF doesn't apply a mean across all the values so you basically get a loss of shape 1023 (sequence length - 1 because of the right shift). Thanks for having spotted this!<|||||>This issue has been stale for 1 month. |
transformers | 8,952 | closed | batch_sampler with trainer.py would not set the epoch | Dear Huggingface team
If one uses a batch_sampler instead of a sampler, then the part of trainer.py where you call set_epoch on the dataloader's sampler does not work; likewise, if one defines a custom sampler for a certain application, as I do, it is never called in trainer.py.
I was wondering if the code could be made more general to include these cases.
thanks.
Best
Rabeeh | 12-06-2020 21:14:00 | 12-06-2020 21:14:00 | Hi there.
To use custom sampling (either through a sampler or a batch sampler), users are expected to subsclass `Trainer` and override the `get_train_dataloader`/`get_eval_dataloader` methods to suit their needs.
Note that those changes might then not be compatible with distributed training/TPU training.<|||||>Hi there
Yes, that's correct, but looking into the train() method of the trainer.py class, the user needs to override the whole train() function for such cases, and this is just for setting the epoch for another type of sampler. It would be very nice if the train() method allowed a custom sampler. Thanks.
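For reference, the override suggested in the previous comment might look roughly like this for the single-GPU case (a sketch; it deliberately ignores the distributed/TPU handling mentioned above):
```python
from torch.utils.data import BatchSampler, DataLoader, RandomSampler
from transformers import Trainer

class CustomSamplerTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        # swap in any sampler or batch_sampler here; calling set_epoch on it each
        # epoch remains your responsibility if it needs one
        batch_sampler = BatchSampler(
            RandomSampler(self.train_dataset),
            batch_size=self.args.train_batch_size,
            drop_last=False,
        )
        return DataLoader(self.train_dataset, batch_sampler=batch_sampler, collate_fn=self.data_collator)
```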
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,951 | closed | vocab_file and merges_file still required params for loading serialized tokenizers | e.g.
https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/gpt2/tokenization_gpt2_fast.py#L122-L132
Both of these should probably be optional params now. Setting `vocab_file` and `merges_file` to `None` while specifying a `tokenizer_file` works, but seems messy. | 12-06-2020 19:46:46 | 12-06-2020 19:46:46 | Indeed, we could do that, and then add a check below to ensure that we get a correct error message. Do you want to open a PR with the fix?<|||||>I wasn't planning on doing a PR because I wasn't sure of the scope of changes needed (e.g. every tokenizer `__init__` would need to be changed) and it also seems like there isn't any documentation for serialized tokenizers at all in `transformers`, so I assumed you were getting to that.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,950 | closed | Fix Code quality issues | This pull request fixes some of the code quality issues raised by DeepSource on my fork of this repository.
I have already fixed some issues using DeepSource's Autofix.
Take a quick look at all the issues caught by DeepSource for this repository [here](https://deepsource.io/gh/withshubh/transformers/issues/?category=recommended).
### Summary of fixes
- Remove unnecessary use of comprehension
- Remove unused imports
- Use literal syntax instead of function calls to create data structure
- Remove unnecessary generator
- Remove unnecessary `return` statement
You can also have a look at the [configuration file](https://github.com/withshubh/transformers/blob/deepsource/.deepsource.toml) I used for DeepSource Analysis.
### Using DeepSource to continuously analyze your repository
- Merge this PR. I have included a `.deepsource.toml` in this PR, which you can use to configure your analysis settings.
- Install DeepSource on your repository [here](https://deepsource.io/signup).
- Activate analysis [here](https://deepsource.io/gh/huggingface/transformers/).
Feel free to merge this PR if you wish to fix the issues.✨ | 12-06-2020 19:19:14 | 12-06-2020 19:19:14 | Hello! Rather than fixing code issues, this looks like an opinionated take on the code structure. We like using list/dict comprehensions as we find them explicit, what doesn't `DeepSource` like about these?
We already have code quality setup, with `black`, `isort`, `flake8` and our own tools for code maintainability. I think this is enough already, and don't see the advantage of adding another code quality tool.
Why should we add DeepSource to our code quality stack?<|||||>Hi @LysandreJik :wave:
> Hello! Rather than fixing code issues, this looks like an opinionated take on the code structure. We like using list/dict comprehensions as we find them explicit, what doesn't `DeepSource` like about these?
DeepSource suggests these because the alternative can give a minor performance boost:
Example:
```
In [3]: timeit.timeit(stmt="{num: square for num, square in zip(first_hundred_nums, first_hundred_squares)}", globals=globals())
Out[3]: 5.606797965000624
In [4]: timeit.timeit(stmt="dict(zip(first_hundred_nums, first_hundred_squares))", globals=globals())
Out[4]: 4.588974316000531
```
Also, the inbuilt functions `all()` and `any()` in Python support short-circuiting (evaluation stops as soon as the overall return value of the function is known), but this behavior is lost if you use a list comprehension.
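A tiny illustration of that short-circuiting point (sketch):
```python
def is_even(n):
    print("checking", n)
    return n % 2 == 0

any(is_even(n) for n in [1, 2, 3, 4])    # generator expression: stops as soon as 2 is seen
any([is_even(n) for n in [1, 2, 3, 4]])  # list comprehension: all four calls run before any() looks at the result
```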
> We already have code quality setup, with `black`, `isort`, `flake8` and our own tools for code maintainability. I think this is enough already, and don't see the advantage of adding another code quality tool.
> Why should we add DeepSource to our code quality stack?
- DeepSource internally runs black/isort/flake8 checks too. In addition to that: if your codebase is strictly following the conventions from these tools, instead of failing the checks and depending on the contributors to fix it, DeepSource Transformers can do this for you (commit to the same PR with the fixes). Also, you won't need to look after the version upgrades for these tools.
- Fix the issues. DeepSource can automatically fix some of the issues it detects (also includes some flake8 issues) with just a click. Read more about this [here](https://deepsource.io/blog/code-formatting-on-autopilot/).
- DeepSource's own code quality checks.
- Option to analyze only modified code: DeepSource will show you only newly introduced code quality issues for a changeset. Read more about it [here](https://deepsource.io/blog/release-granular-diffs/).<|||||>Hi @LysandreJik :wave:
Please have a look :eyes: <|||||>Hey! @LysandreJik @mfuntowicz :wave:
Please have a look at this! :eyes: <|||||>I vote to stay with our current tooling for now.
We'll follow your work at DeepSource and could reconsider it in a few month, ie. the summer but no need to ping us more for now.
Thanks. |
transformers | 8,949 | closed | Adds flashcards to Glossary & makes small corrections | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Flashcards have been made using the glossary as a starting point and are now linked at the start of the glossary. Other small corrections & standardisations have also been made for consistency.
This pull requests follows from the discussion in issue #8932
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-06-2020 16:13:34 | 12-06-2020 16:13:34 | Thanks very much for the feedback, the relevant updates have been made as requested in a1295b7!<|||||>@sgugger Is there anything else that is required for this pull request to be merged?<|||||>@sgugger is there anything else that needs to be done on our side for this pull request to be merged?<|||||>Hi @darigovresearch, sorry I missed your previous ping. We have just added a [community page](https://huggingface.co/transformers/master/community.html) in the documentation and we would actually prefer to put the links to your flashcards there if that's okay.
Sorry again for the delay!<|||||>Hi @sgugger, no worries the updates have been made to the community.md file, the glossary.rst file still contains the corrections but has removed reference to the flashcards.
Is there anything else you need for this to be merged?<|||||>Nope, that's perfect! Thanks a lot for your patience.<|||||>@sgugger no worries & thanks for merging it!
When checking the page it appears that the rendering works in the .md file but not the final page - https://huggingface.co/transformers/master/community.html
Not sure what it could be, any thoughts?
Potentially add an extra blank line after the heading?

<|||||>Yes I just tested locally and it was the new line missing. I added it in [this commit](https://github.com/huggingface/transformers/commit/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2) (directly on master).<|||||>Great, thanks for the heads up and for the help! |
transformers | 8,948 | closed | Add model card | # What does this PR do?
Adds a model card.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | 12-06-2020 15:48:28 | 12-06-2020 15:48:28 | |
transformers | 8,947 | closed | Fix QA pipeline on Windows | # What does this PR do?
As reported on the [forum](https://discuss.huggingface.co/t/pipeline-example-in-the-doc-throws-an-error-question-answering/2632), there is a problem in the current pipeline on Windows. The problem's root is that numpy int arrays have a different default on Linux and Windows, the current snippet:
```
import numpy as np
x = np.array([1, 2, 3])
x.dtype
```
will print `dtype('int64')` on Linux/MacOS but `dtype('int32')` on Windows. So this means that just doing `torch.tensor(some_numpy_array)` may result in a tensor of dtype `int32` which PyTorch does not like. For future reference, the error:
```
Expected tensor for argument #1 'xxx' to have scalar type Long; but got torch.IntTensor instead
```
is usually a clear indicator of this behavior happening.
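The workaround is simply to upcast such tensors before feeding them to the model, along these lines (a sketch with made-up feature values):
```python
import numpy as np
import torch

features = {"input_ids": np.array([101, 7592, 102]), "attention_mask": np.array([1, 1, 1])}
tensors = {k: torch.tensor(v) for k, v in features.items()}
# On Windows the arrays above default to int32, which embedding lookups reject
tensors = {k: v.long() if v.dtype == torch.int32 else v for k, v in tensors.items()}
```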
The PR fixes the QA pipeline by casting the tensors to long if they have the int type. | 12-06-2020 15:13:54 | 12-06-2020 15:13:54 | |
transformers | 8,946 | closed | Error during validation Trainer step | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (Dataloaders)
@sgugger
## Information
I'm using BERT for sequence classification. I have built my own PyTorch dataset with my data. During training there is no problem, but when it starts evaluation it gives an error with the following message:
```
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial)
801
802 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control)
--> 803 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
804
805 if self.args.tpu_metrics_debug or self.args.debug:
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch)
863 metrics = None
864 if self.control.should_evaluate:
--> 865 metrics = self.evaluate()
866 self._report_to_hp_search(trial, epoch, metrics)
867
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys)
1278 # self.args.prediction_loss_only
1279 prediction_loss_only=True if self.compute_metrics is None else None,
-> 1280 ignore_keys=ignore_keys,
1281 )
1282
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only, ignore_keys)
1387 losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
1388 if logits is not None:
-> 1389 preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
1390 if labels is not None:
1391 labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index)
82 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}."
83 if isinstance(tensors, (list, tuple)):
---> 84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in <genexpr>(.0)
82 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}."
83 if isinstance(tensors, (list, tuple)):
---> 84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index)
84 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
85 elif isinstance(tensors, torch.Tensor):
---> 86 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
87 elif isinstance(tensors, np.ndarray):
88 return numpy_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in torch_pad_and_concatenate(tensor1, tensor2, padding_index)
45 def torch_pad_and_concatenate(tensor1, tensor2, padding_index=-100):
46 """Concatenates `tensor1` and `tensor2` on first axis, applying padding on the second if necessary."""
---> 47 if len(tensor1.shape) == 1 or tensor1.shape[1] == tensor2.shape[1]:
48 return torch.cat((tensor1, tensor2), dim=0)
49
IndexError: tuple index out of range
```
## To reproduce
Here is the code I used:
```
args = TrainingArguments("/content/drive/MyDrive/SNOMED/TrainingLog",
learning_rate = 0.0003,
num_train_epochs = 10,
per_device_train_batch_size = 32,
per_device_eval_batch_size = 32,
evaluation_strategy = "epoch",
label_names = labels,
disable_tqdm = False,
dataloader_num_workers = 6,
load_best_model_at_end = True,
metric_for_best_model = "accuracy",
greater_is_better = True)
print("\nDEVICE:",args.device)
callbacks = [EarlyStoppingCallback(2,0.8)]
trainer = Trainer(model,
args = args,
train_dataset = trainDataset,
eval_dataset = validationDataset,
tokenizer = tokenizer,
callbacks = callbacks,
compute_metrics = accuracy)
trainer.train()
```
Both datasets have the same structure. Each item has the `BatchEncoding.data` dict, with a field 'label' added.
## Expected behavior
It should do the evaluation step correctly.
| 12-06-2020 13:49:05 | 12-06-2020 13:49:05 | Hi there! The code is incomplete as we have no idea of what your dataset and model is. From the error message it looks like the problem is in the logits, so we would need the model to be able to reproduce the error.<|||||>Here is the full code:
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification,Trainer, TrainingArguments
import json
from torch.utils.data import Dataset, DataLoader
import pandas as pd
from transformers.trainer_callback import EarlyStoppingCallback
class dataset(Dataset):
def __init__(self,data,labels,tokenizer):
self.data = data
self.labels = labels
self.tokenizer= tokenizer
def processText(self,text):
return self.tokenizer(text, truncation=True)
def __len__(self):
return len(self.data.index)
def __getitem__(self,i):
row = self.data.iloc[i]
x = self.processText(self.data.iloc[i]['x']).data
try:
y = self.labels.index(self.data.iloc[i]['y'])
except:
y = len(self.labels) - 1
x['label'] = y
return x
def getLabels(data,nLabels):
serie = data.pivot_table(index=['y'], aggfunc='size')
labelsList = serie.sort_values(ascending=False).index.values.tolist()
return labelsList[0:nLabels-1] + ["OTHER"]
def accuracy(evalPrediction):
yPred = evalPrediction.predictions
yTrue = evalPrediction.label_ids
return {'accuracy':(yPred == yTrue).mean()}
df = pd.read_csv("/content/drive/MyDrive/SNOMED/Biopsias_HUPM_2010-2018_mor_codes-v1.csv",low_memory=False)
df = df[["Diagnostico", "CodOrgano"]]
data = df.rename(columns = {'Diagnostico':'x','CodOrgano':'y'})
data = data.dropna().reset_index(drop=True)
#df = df.iloc[:1000,:]
index = df.index
N = len(index)
P = 0.7
limit = round(N*P)
trainData = data.iloc[:limit,:]
validationData = data.iloc[limit:,:]
nLabels = 51
labels = getLabels(data,nLabels)
model = AutoModelForSequenceClassification.from_pretrained('dccuchile/bert-base-spanish-wwm-uncased',num_labels = nLabels)
tokenizer = AutoTokenizer.from_pretrained('dccuchile/bert-base-spanish-wwm-uncased',model_max_length = 128, use_fast=True)
trainDataset = dataset(trainData,labels,tokenizer)
validationDataset = dataset(validationData,labels,tokenizer)
args = TrainingArguments("/content/drive/MyDrive/SNOMED/TrainingLog",
learning_rate = 0.0003,
num_train_epochs = 10,
per_device_train_batch_size = 32,
per_device_eval_batch_size = 32,
evaluation_strategy = "epoch",
label_names = labels,
disable_tqdm = False,
dataloader_num_workers = 6,
load_best_model_at_end = True,
metric_for_best_model = "accuracy",
greater_is_better = True)
print("\nDEVICE:",args.device)
callbacks = [EarlyStoppingCallback(2,0.8)]
trainer = Trainer(model,
args = args,
train_dataset = trainDataset,
eval_dataset = validationDataset,
tokenizer = tokenizer,
callbacks = callbacks,
compute_metrics = accuracy)
trainer.train()
```
Here is the notebook where it can be checked easily: [https://colab.research.google.com/drive/1VCacM-CDl2xrIFfwsrkmEh-D0IswK61D?usp=sharing](url)
I'm not sure, but does the model need `return_dict = True`?<|||||>One thing that may be linked to this is the `label_names = labels` in your training arguments. `label_names` is the name(s) of the field containing your labels. In this case, the default (which is `["labels"]`) is what you want, so you should leave it as is.<|||||>I changed my dataset to save the label on "labels" and it worked. It was a really silly problem, thank you so much!!<|||||>The same silly problem happens on me, thx a lot!!!!!!!!!!!!!😵‍💫
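For reference, a minimal sketch of the two changes that resolved this, mirroring the dataset class posted above (hypothetical code, not the author's actual diff): leave `label_names` at its default (i.e. do not pass it to `TrainingArguments`) and emit the target under the key `"labels"`:
```
# Replaces __getitem__ in the dataset class above; note the "labels" key.
def __getitem__(self, i):
    x = self.processText(self.data.iloc[i]['x']).data
    try:
        y = self.labels.index(self.data.iloc[i]['y'])
    except ValueError:
        y = len(self.labels) - 1
    x['labels'] = y  # the Trainer's default label_names is ["labels"]
    return x
```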
transformers | 8,945 | open | Sparse Transformer | # 🌟 New model addition
## Model description
Sparse Transformers (https://openai.com/blog/sparse-transformer/) are one of the two most efficient transformers for long-range problems, according to Google's Long Range Arena paper (https://arxiv.org/pdf/2011.04006.pdf); Big Bird is the other one.
The original Sparse Transformers work shows great results on text, images, and audio. Further OpenAI work Jukebox (https://openai.com/blog/jukebox/) uses Sparse Transformers to generate incredibly long raw music audio with style transfer. Lastly https://proceedings.icml.cc/static/paper_files/icml/2020/6095-Paper.pdf uses Sparse Transformers to achieve state-of-the-art CIFAR performance.
## Open source status
* [x] the model implementation is available:
latest version, for CIFAR: https://github.com/openai/distribution_augmentation
original, but not maintained: https://github.com/openai/sparse_attention
Alternate implementation from FAIR: https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sparse_multihead_attention.py
* [x] the model weights are available:
https://github.com/openai/distribution_augmentation (CIFAR work) has model weights available, as described in the README: https://openaipublic.blob.core.windows.net/distribution-augmentation-assets/models/c10-15m-baseline.npz
Jukebox is open-source and has model weights, but is a larger pipeline that includes VQ-VAEs so it may not be of interest for a transformers-only library.
* [x] who are the authors: @rewonc @myleott @cclauss
| 12-06-2020 11:10:33 | 12-06-2020 11:10:33 | Hi there,
Happy to consult on anything. The sparse attention kernels included above
are very fast, but require building blocksparse -- not sure if this will
work for you all.
Rewon
<|||||>cc'ing @madlag for info |
transformers | 8,944 | open | how to use EncoderDecoderModel to do en-de translation? | I have trained an EncoderDecoderModel from Hugging Face to do an English-German translation task. I tried to overfit a small dataset (100 parallel sentences), and used `model.generate()` and then `tokenizer.decode()` to perform the translation. However, the output seems to be proper German, but it is definitely not the correct translation.
Here is the code for building the model:
```
encoder_config = BertConfig()
decoder_config = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = EncoderDecoderModel(config=config)
```
Here is the code for testing the model:
```
model.eval()
input_ids = torch.tensor(tokenizer.encode(input_text)).unsqueeze(0)
output_ids = model.generate(input_ids.to('cuda'), decoder_start_token_id=model.config.decoder.pad_token_id)
output_text = tokenizer.decode(output_ids[0])
```
Example input: "iron cement is a ready for use paste which is laid as a fillet by putty knife or finger in the mould edges ( corners ) of the steel ingot mould ."
Ground truth translation: "iron cement ist eine gebrauchs ##AT##-##AT## fertige Paste , die mit einem Spachtel oder den Fingern als Hohlkehle in die Formecken ( Winkel ) der Stahlguss -Kokille aufgetragen wird ."
What the model outputs after training for 100 epochs: "[S] wenn sie den unten stehenden link anklicken, sehen sie ein video uber die erstellung ansprechender illustrationen in quarkxpress", which is total nonsense.
Where is the problem? | 12-06-2020 10:58:57 | 12-06-2020 10:58:57 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks!
cc @patrickvonplaten who might have an idea.<|||||>This blog post should also help on how to fine-tune a warm-started Encoder-Decoder model: https://huggingface.co/blog/warm-starting-encoder-decoder . But as @LysandreJik said the forum is the better place to ask.<|||||>@patrickvonplaten the blog post mentions about a notebook link for machine translation task but on clicking, it redirects to the blog only. I think there might be some mistake while adding the notebook link. Can you please share the translation task notebook on WMT dataset?<|||||>Hey @zmf0507 - yeah I sadly haven't found the time yet to do this notebook<|||||>@patrickvonplaten please let me know here when you make one. Despite being so popular, hugging-face doesn't provide any tutorial/notebook for machine translation. I think a lot of people might be looking for similar resources. Will help much. Thanks<|||||>We have now one for mBart: https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb -> will try to make one for Encoder Decoder as well when I find time :-) <|||||>sure. thanks a lot :)<|||||>@patrickvonplaten is there any encoder-decoder notebook made for translation task ? thanks <|||||>I'm sadly not finding the time to do so at the moment :-/
I'll put this up as a "Good First Issue" now in case someone from the community finds time to make such a notebook.
A notebook for EncoderDecoderModel translation should look very similar to this notebook: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Leveraging_Pre_trained_Checkpoints_for_Encoder_Decoder_Models.ipynb - one only has to change the summarization dataset with a translation dataset<|||||>@patrickvonplaten thanks for the update.
Can you tell if there is any work on keyphrase generation /keywords generation (seq2seq task) using hugging-face ? I am looking for such tutorials and examples where I can try and play around keyphrase generation. This task is not mentioned on hugging-face notebooks page as well.
Please let me know<|||||>My best advice would be to ask this question on the [forum](https://discuss.huggingface.co/) - I sadly don't know of any work related to this<|||||>@patrickvonplaten : Here's my [attempt](https://gist.github.com/parambharat/6870f3a32537f5febac70f7fd876e90c) that modifies the condensed version of [BERT2BERT.ipynb](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) to use the wmt dataset, BLEU4 score for the en-de translation task. <|||||>> We have now one for mBart: https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb -> will try to make one for Encoder-Decoder as well when I find time :-)
Inferring the model training details from BERT2BERT for CNN daily mail is not sufficient, we experimented with an MT model with the must-c data for en-fr , however the prediction were almost random and it was not able to understand the core meaning of its input sequence.<|||||>If anyone has a complete notebook based on the Encoder-Decoder model for MT, please share. Thank you.<|||||>Has anyone performed the translation task correctly using bert2bert ? TAT<|||||>@xueqianyi - maybe you have more luck on https://discuss.huggingface.co/ ? <|||||>Just an extra comment here: With bert2bert, it's not very helpful for MT, as BERT is only trained on English data.<|||||>Hi there, I'm a Data Science grad student at Luddy. I was looking to contribute to open source in my free time and came across this issue. I did put a rough notebook together, linking it [here](https://colab.research.google.com/drive/1uaXsyu3S7LizulA3m6Fp__F9Fxu5AU97?usp=sharing) @xueqianyi @CharizardAcademy. I would love to polish it to the standard upheld in the HF community if its indeed helpful.
Just some comments (I did NOT spend a lot of time on this, so your observations MIGHT differ):
1) The translation quality depends a lot on model capacity, though even using base BERT, the translations are fairly decent and definitely not gibberish. Tweaking the decoding parameters will help too.
2) I've trained only on 1M examples due to compute constraints, but I believe some multiples higher might work out better. I trained with 0.1M and 0.5M examples, I saw consistent improvements to the BLEU score on every increase.
3) Length of the tensors fed into the model (post-tokenization) have an impact on the translation quality too. Specifically max_length=64 and higher results in a lot of repetitions especially for short sentences because this particular dataset (1M subset) has most examples below 32 tokens (95%) (hence I recommend spending sometime tweaking the decoding parameters, no_repeat_ngram_size, max_length, length_penality etc in particular).
4) Also, the model seems to think President Obama and President Bush are the same person, EVERYTIME. xD |
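For anyone landing here, a minimal sketch of the warm-starting setup recommended in the comments above (the checkpoint choice and token-id settings are illustrative, following the warm-starting blog post rather than this thread's code):
```
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
# Warm-start encoder and decoder from pretrained weights instead of a randomly
# initialized EncoderDecoderModel(config=config) as in the original post.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased", "bert-base-multilingual-cased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```
Fine-tuning then proceeds on (source, target) pairs exactly as in the summarization notebook linked above, with the translation dataset swapped in.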
transformers | 8,943 | closed | Why BertSelfAttention reshape Q,K,V from 3-D tensor to 4-D tensor | # 🌟 New model addition
## Model description
https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py
```
def transpose_for_scores(self, x):
    new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
    x = x.view(*new_x_shape)
    return x.permute(0, 2, 1, 3)

query_layer = self.transpose_for_scores(mixed_query_layer)
key_layer = self.transpose_for_scores(mixed_key_layer)
value_layer = self.transpose_for_scores(mixed_value_layer)
```
## Open source status
Question:
1. Why must we transpose Q, K, V from a 3-D tensor to a 4-D tensor?
2. What happens if we just use the 3-D Q, K, V directly in `torch.matmul`?
| 12-06-2020 09:15:53 | 12-06-2020 09:15:53 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
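An illustrative sketch (not from the thread) of what the 4-D reshape buys: a separate head dimension so every head's attention scores come out of one batched matmul, whereas a plain 3-D matmul would mix all heads into a single score matrix:
```
import torch

batch, seq_len, hidden = 2, 8, 768
num_heads, head_size = 12, 64  # 12 * 64 == 768

q = torch.randn(batch, seq_len, hidden)
k = torch.randn(batch, seq_len, hidden)

q4 = q.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
k4 = k.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
scores = torch.matmul(q4, k4.transpose(-1, -2))   # (batch, heads, seq, seq): one score matrix per head
print(scores.shape)

scores_3d = torch.matmul(q, k.transpose(-1, -2))  # (batch, seq, seq): all heads collapsed together
print(scores_3d.shape)
```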
transformers | 8,942 | closed | NER Pipeline Issue | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
@stefan-it
I am trying to pass multiple sentences to the NER pipeline, but it fails with the following error message:
```ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.```
Code to reproduce:
```
nlp = pipeline("ner")
nlp(["Some dummy text", "some more dummy text"])
```
Also, the output data structure is wrong, e.g.:
nlp(["City New York","City New York"])
This should return a list of dicts as per the documentation, but it returns only a single dict.
```[{'word': 'City', 'score': 0.6329959034919739, 'entity': 'I-LOC', 'index': 1},
{'word': 'New', 'score': 0.5934403538703918, 'entity': 'I-LOC', 'index': 2},
{'word': 'York', 'score': 0.728114128112793, 'entity': 'I-LOC', 'index': 3}]
``` | 12-06-2020 08:50:54 | 12-06-2020 08:50:54 | Btw, this works fine.
```
from transformers import pipeline
ner = pipeline("ner", grouped_entities=True)
sequence = """
Hugging Face Inc. is a company based in New York City.
Its headquarters are in DUMBO, therefore very close to the Manhattan Bridge which is visible from the window.
"""
output = ner(sequence)
print(output)
```
```
[{'entity_group': 'ORG', 'score': 0.9970663785934448, 'word': 'Hugging Face Inc'}, {'entity_group': 'LOC', 'score': 0.9993778467178345, 'word': 'New York City'}, {'entity_group': 'LOC', 'score': 0.9571147759755453, 'word': 'DUMBO'}, {'entity_group': 'LOC', 'score': 0.983814150094986, 'word': 'Manhattan Bridge'}]
```<|||||>@devansvd That's just a single sequence, I want to pass multiple sequences i.e. list of strings.<|||||>@albertnanda, @devansvd, @LysandreJik, I still get this issue in `v4.2.0`, even if `padding=True` and `truncation=True`. I tried all variants of padding and truncation with and without `grouped_entities=True` and got the same error as above. Did you figure out a solution besides feeding in the narratives one by one?
```
nlp = pipeline("ner", model=MODEL_NAME, tokenizer=TOKENIZER_NAME, grouped_entities=True)
results = nlp(narratives, padding=True, truncation=True)
```<|||||>Hello! Could you try again on the `master` branch and let us know if it works? https://github.com/huggingface/transformers/pull/10184 was recently merged and it should fix the issue. Thanks!<|||||>@LysandreJik This works, but this runs the model sequentially over the list of text. Can we add batching support. It would be way faster then. Without it, this change has little significance, the only thing it does is save 1 line of code i.e
```
[nlp(text) for text in texts]
```<|||||>Sure, we could look in adding batching support, that would indeed make things much faster! Would you like to try your hand at it?<|||||>Sure, let me see if I can add batching support.<|||||>Working on batching for NER pipeline in this PR - https://github.com/huggingface/transformers/pull/11251<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
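Until batching lands in the pipeline itself, a rough sketch of batching manually with the tokenizer and model (the checkpoint is the pipeline's default NER model; the token-to-label mapping here is simplified and ignores sub-word merging):
```
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "dbmdz/bert-large-cased-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

texts = ["Hugging Face Inc. is based in New York City.", "Some more dummy text"]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
preds = logits.argmax(-1)

for i in range(len(texts)):
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][i])
    labels = [model.config.id2label[p.item()] for p in preds[i]]
    # drop padding / special tokens before printing
    print([tl for tl in zip(tokens, labels) if tl[0] not in tokenizer.all_special_tokens])
```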
transformers | 8,941 | closed | Error running source code -- import | Hi, I have downloaded the source code of transformers and tried to run modeling_utils.py.
However, it seems that there are a lot of importing problems.
Am I running it in a wrong way?
| 12-06-2020 08:42:30 | 12-06-2020 08:42:30 | Hi, what are you trying to do?
The `modeling_utils.py` is an internal file that defines objects to be used by models. I invite you to read the documentation, especially the [quick tour](https://huggingface.co/transformers/quicktour.html).<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,940 | closed | failure to use conda-forge apex with torch1.6 and --amp_backend='apex' + --fp16_opt_level O1 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Linux-4.4.0-1111-aws-x86_64-with-debian-stretch-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@stas00
## Information
Model I am using: BART
The problem arises when using:
torch 1.6 + conda-forge apex w/ --fp16 --amp_backend='apex' + --fp16_opt_level O1 to run finetune.py
```
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 429, in fit
self.accelerator_backend.setup(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 53, in setup
model = self.trainer.precision_connector.connect(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect
model, optimizers = self.backend.connect(model, self.trainer.optimizers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 38, in connect
self.trainer.reinit_scheduler_properties(optimizers, self.trainer.lr_schedulers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/optimizers.py", line 143, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 74, in __init__
self.optimizer.step = with_counter(self.optimizer.step)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 56, in with_counter
instance_ref = weakref.ref(method.__self__)
AttributeError: 'function' object has no attribute '__self__'
```
The tasks I am working on is:
* [ ] Finetune BART on XSUM
## To reproduce
Steps to reproduce the behavior:
It is the same issue mentioned in https://github.com/huggingface/transformers/issues/8403#issuecomment-724787083
If you control find "\_\_self\_\_"
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 12-05-2020 20:25:39 | 12-05-2020 20:25:39 | @XiangLi1999, thank you for making a separate issue for this.
(note: I edited your post to format the error traceback and also edited the link to point to the relevant comment (https://github.com/huggingface/transformers/issues/8403#issuecomment-724787083) - if you click in the right upper corner of the comment - you will see an option to copy a link to that comment and not just thread.)
Yes, I saw that error but didn't have a chance to try to understand the cause at that time - I had a closer look now and this seems to be a `pytorch-lightning` bug - so you might have to ask via their issue tracker.
I can suggest two possible solutions:
1. Download pytorch-nightly which has the leak fixed so you can use native amp no problem.
`pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U`
2. There is also a relatively new `finetune_trainer.py` in the same directory, which uses HF trainer. Have a look and perhaps it'd work better for you.
Please let me know if one of the proposed solutions addresses your needs.
And feel free to file a bug report with pytorch-lightning if you'd like to follow that use case through.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,939 | closed | sorry I mistakenly submitted a issue twice. Plz ignore (help delete) this one. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
- `transformers` version: 3.2.0
- Platform: Linux-4.4.0-1111-aws-x86_64-with-debian-stretch-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@stas00
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
*--pt16 --fp16_opt_level O1 + conda-forge apex w/ --fp16 --amp_backend='apex' to run finetune.py
Got this error:
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 429, in fit
self.accelerator_backend.setup(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 53, in setup
model = self.trainer.precision_connector.connect(model)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect
model, optimizers = self.backend.connect(model, self.trainer.optimizers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 38, in connect
self.trainer.reinit_scheduler_properties(optimizers, self.trainer.lr_schedulers)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/pytorch_lightning/trainer/optimizers.py", line 143, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 74, in __init__
self.optimizer.step = with_counter(self.optimizer.step)
File "/home/ubuntu/anaconda3/envs/nightly/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 56, in with_counter
instance_ref = weakref.ref(method.__self__)
AttributeError: 'function' object has no attribute '__self__'
This is the same error reported in one of the reply in https://github.com/huggingface/transformers/issues/8403...
The tasks I am working on is:
* [ ] to finetune BART on XSUM.
| 12-05-2020 20:19:06 | 12-05-2020 20:19:06 | |
transformers | 8,938 | closed | MobileBertForSequenceClassification outputs super-high logits | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Ubuntu 20.04
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?): None
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): MobileBertForSequenceClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: please see an example below
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: simple text classification using documents from sec.gov
## To reproduce
Steps to reproduce the behavior:
I am training a whole-text classifier with MobileBert using MobileBertForSequenceClassification:
```{python}
from transformers import MobileBertForSequenceClassification, \
MobileBertTokenizerFast
ARCH = 'google/mobilebert-uncased'
model = MobileBertForSequenceClassification.from_pretrained(ARCH).cuda()
tokenizer = MobileBertTokenizerFast.from_pretrained(ARCH)
x = tokenizer(['def hello(): return "world"', 'This is some test'],
max_length=512,
truncation=True,
return_tensors='pt',
padding='longest')
with torch.no_grad():
l = model(**x.to(model.device)).logits
```
Resulting model outputs are extremely high:
```
tensor([[ 3289181.7500, -2371234.0000],
[ 3198336.7500, -1882639.8750]])
```
Loading model and tokenizer with Auto- classes gives the same result.
When using the pooled output from MobileBertModel with a custom linear head (BatchNorm1d+Dropout+Linear) everything works fine.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect logits to be near [-3, 3], but not in 6-7 digits.
| 12-05-2020 19:50:03 | 12-05-2020 19:50:03 | I see the same behavior when trying it out with IMDB classification.
I solved it by passing `classifier_activation=True` for the `from_pretrained` function.
[Documentation](https://huggingface.co/transformers/model_doc/mobilebert.html#mobilebertconfig) says it is `True` by default, however it does not seem like it.
[EDIT] Apparently this changes the behavior of the pooling layer <|||||>@hfawaz Thank you for solving this issue, it's very helpful<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
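A sketch of the workaround described in the comments above, assuming the `classifier_activation` flag behaves as reported there (it is passed through `from_pretrained` to the config and changes the pooling layer's behaviour):
```
import torch
from transformers import MobileBertForSequenceClassification, MobileBertTokenizerFast

ARCH = "google/mobilebert-uncased"
model = MobileBertForSequenceClassification.from_pretrained(
    ARCH, num_labels=2, classifier_activation=True
)
tokenizer = MobileBertTokenizerFast.from_pretrained(ARCH)

x = tokenizer(["This is some test"], return_tensors="pt", padding="longest")
with torch.no_grad():
    logits = model(**x).logits
print(logits)  # expected to be in a sane range rather than 6-7 digit values
```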
transformers | 8,937 | closed | Gradients of BERT layer outputs to inputs | I am trying to find the gradient of the output of a layer of BERT to its inputs, token wise. But I keep getting the error saying: 'RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.' Below is the code snippet:
```
for count, data in enumerate(iter(data_loader)):
    input_ids = torch.squeeze(data['input_ids'], dim=0)
    attention_mask = torch.squeeze(data['attention_mask'], dim=0)
    last_hidden_state, pooled_output, hidden_states = bert_model(input_ids=input_ids, attention_mask=attention_mask)
    bert_layer_i_output = hidden_states[i][0]
    print(bert_layer_i_output.shape)
    bert_layer_j_output = hidden_states[j][0]
    # print(torch.autograd.grad(bert_layer_j_output, bert_layer_i_output, retain_graph=True, create_graph=True))
    for k in range(bert_layer_i_output.shape[0]):
        gradient = torch.autograd.grad(bert_layer_j_output[k], bert_layer_i_output[k], grad_outputs=torch.ones_like(bert_layer_j_output[k]))
        print(gradient.shape)
        print(torch.norm(gradient))
        break
    break
```
Below is the stack trace of the error:
```
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    202     return Variable._execution_engine.run_backward(
    203         outputs, grad_outputs_, retain_graph, create_graph,
    204         inputs, allow_unused)
    205
    206
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
Am i doing something wrong? Ideally both the tensors should be part of the same computational graph right? | 12-05-2020 17:42:09 | 12-05-2020 17:42:09 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
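For completeness, an illustrative sketch (not from the thread) that avoids the error by differentiating the full hidden-state tensors rather than indexed slices; the model name and layer indices are placeholders:
```
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

enc = tokenizer("a short example sentence", return_tensors="pt")
hidden_states = model(**enc).hidden_states  # tuple: embeddings + one tensor per layer

h_i, h_j = hidden_states[3], hidden_states[7]  # placeholder layer indices
# Differentiate w.r.t. the whole tensor: slicing h_i creates a new node that the
# graph of h_j never used, which is what triggers the "not used in the graph" error.
grads = torch.autograd.grad(h_j, h_i, grad_outputs=torch.ones_like(h_j))[0]
print(grads.shape, torch.norm(grads))
```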
transformers | 8,936 | closed | Unexpected behavior when using TFRoberta model inside tf.keras model | ## Environment
- `transformers` version: tried with both 4.0.0 and 3.5.0
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using (Bert, XLNet ...): TFRoberta
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
Sentence classification
## To reproduce
I am trying to import a pretrained TFRoberta model and extend it with a few layers for classification using tensorflow keras. When I directly use transformers model (Method 1), the model trains well and reaches a validation accuracy of 0.93 after 1 epoch. However, when trying to use the model as a layer within a tf.keras model (Method 2), the model can't get above 0.32 accuracy. As far as I can tell based on the documentation, the two approaches should be equivalent. My goal is to get Method 2 working so that I can add more layers to it instead of directly using the logits produced by the transformers' classifier head but I'm stuck at this stage.
```
import tensorflow as tf
from transformers import TFRobertaForSequenceClassification
```
Method 1:
```
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
```
Method 2:
```
input_ids = tf.keras.Input(shape=(128,), dtype='int32')
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32')
transformer = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
encoded = transformer([input_ids, attention_mask])
logits = encoded[0]
model = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = logits)
```
Rest of the code for either method is identical,
```
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy')])
```
## Expected behavior
Similar validation loss and accuracy for both methods.
| 12-05-2020 16:33:18 | 12-05-2020 16:33:18 | Hello !
Thanks for reporting this. Can you provide a colab in order for us to reproduce your use case?<|||||>Thanks @jplu . Please see the [Colab notebook](https://colab.research.google.com/drive/1qDpqEc4qbeuoQjVEpDx88dXIXnJ6bwXh?usp=sharing).
As you can see, both model1 and model2 have the exact same number of parameters and are initialized using the same pretrained roberta-base model. Yet, first one trains well and reaches val_accuracy of 0.9350 after one epoch while the second one (using transformers model within a tf.keras model) is stuck.<|||||>This issue does not seem to be isolated to TFRoberta. Just tried with TFDistilBertForSequenceClassification and the outcome is similar. Using the transformers model directly works fine whereas embedding it within a tf.keras model (while adding just an input layer and passing the logits directly to output) fails.<|||||>@amir-ghasemi Can you try on the master version with this update, and let me know if you still get the issue:
```
input_ids = tf.keras.Input(shape=(128,), dtype='int32')
attention_mask = tf.keras.Input(shape=(128, ), dtype='int32')
transformer = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=6)
encoded = transformer({"input_ids": input_ids, "attention_mask": attention_mask})
logits = encoded[0]
model = tf.keras.models.Model(inputs = {"input_ids": input_ids, "attention_mask": attention_mask}, outputs = logits)
```<|||||>Thanks @jplu ! Tried with the master branch and feeding the input using the dict. That did the trick! Closing the issue. |
transformers | 8,935 | closed | phrase level tokenizer | # 🚀 Feature request
A tokenizer to encode sentences at the phrase level
## Motivation
The transformers tokenizers always tokenize the sentence at the word (or character) level, which might be fine for English, but not necessarily for Chinese. For example, "sports" has a single meaning in English, but when its translation 运动 is split into 运 and 动, we have no idea what it means. There are also many technical terms in Chinese that span more than one character and should not be split.
The tokenizer has `additional_special_tokens`, but I am not sure it can solve the phrase-level tokenization problem.
| 12-05-2020 16:14:18 | 12-05-2020 16:14:18 | I wrote a custom tokenizer that overrides the `_tokenize` method; it seems to work well for Chinese.
```
from transformers import *
import jieba
jieba.initialize()
class CostoumToken(BertTokenizer):
    def __init__(self, vocab_path, pre_token_method=lambda x: " ".join(jieba.cut(x, HMM=False))):
        super().__init__(vocab_path)
        self.pre_token_method = pre_token_method

    def _tokenize(self, text):
        text = self.pre_token_method(text)
        split_tokens = text.split(" ")
        return split_tokens
```
##############################################################
testing
##############################################################
```
token.tokenize("中国很好")
out:['中国', '很', '好']
token.encode("中国很好")
out:[2, 13587, 2422, 1861, 3]
token.decode([2, 13587, 2422, 1861, 3])
out:'[CLS] 中国 很 好 [SEP]'
```
<|||||>Have you tried playing around with the [`tokenize_chinese_chars` argument of the `BertTokenizer`?](https://huggingface.co/transformers/model_doc/bert.html?highlight=tokenize_chinese_chars#transformers.BertTokenizer)<|||||>@LysandreJik thanks for your replay,I test it ,and get the answer:
```
token.tokenize("中国很好")
out:['中', '国', '很', '好']
```
it seems that BertTokenizer always tokenize the sentence on word level<|||||>what task do you apply?
i had tried the idea as you implement in your CostoumToken
to a keyword find algorithm in
https://github.com/MaartenGr/KeyBERT
to find chinese keywords .
the conclusion seems meaningful, as expected
the phrase level bert embedding encoded keep the semantic from char level.
if you take task such as phrase fill or prediction ?
<|||||>@svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:
[https://github.com/ZhuiyiTechnology/WoBERT](url)
In order to find the true phase in the dictionary,the dictionary must have thses phases where phases like "中国“ should be treated as a whole.
I am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.
<|||||>> @svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:
>
> [https://github.com/ZhuiyiTechnology/WoBERT](url)
>
> In order to find the true phase in the dictionary,the dictionary must have thses phases where phases like "中国“ should be treated as a whole.
>
> I am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.
>
>
i will try this project later.<|||||>> @svjack here is the code of tensorflow version to a phase level chinese pre_trained bert:
> [https://github.com/ZhuiyiTechnology/WoBERT](url)
> In order to find the true phase in the dictionary,the dictionary must have thses phases where phases like "中国“ should be treated as a whole.
> I am trying to replace some phrases and use the phase level pretrained model to predict which phase in the dictionary can replace them,it seems hard to realise on the word level.
>
it simply tokenize text by jieba firstly and serlize it and use this as vocab_file like transformers project do, you can also set this param in BertTokenizer class init step, but a problem make me confused is
tokenizer conclusion is not unique for the best, but with a probability evidence.
and select the "best" as result, but when it comes to text with different as trained input may induce a different tokenized list with same substring contain it. but its
also the "best" . So this char to word embedding average can not be go back to retrieve best combine of chars i.e. phares in chinese . which is not suitable in nlp argument task.<|||||>many sentence piece tokenizer such as xlm can tackle this kind of problems.<|||||>@svjack thanks for your work.It's a common problem to split chinese phases,lots of researchers are still arguing char based or phase based split in chinese NLP.I will try xlm.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,934 | closed | Updating outdated fairseq checkpoint to HF script | The current fairseq checkpoint to HF script is outdated, not being compatible with the newly introduced `hydra` config and fairseq's new PyTorch hub interface. In addition to this, added one more argument (`--data-dir`) for custom RoBERTa models, and modified the `--classification_head` argument to take in a string rather than `store_true`. This is to reflect (a more likely case) of custom classification heads, rather than the most popular (and already available) `mnli` head. Added an import of `os` (if it counts as a "dependency").
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @stefan-it @myleott @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-05-2020 12:41:49 | 12-05-2020 12:41:49 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,933 | closed | Relative Attention Bias not initialized for T5ForConditionalGeneration in version 4.0.0 | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.70 (True)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using: T5ForConditionalGeneration, specifically the "allenai/unifiedqa-t5-large" checkpoint from the model hub
The problem arises when I try to load the checkpoint following standard loading procedures under `transformers==4.0.0`. The same doesn't happen in version `3.5.0`.
## To reproduce
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name_or_path = "allenai/unifiedqa-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# Warnings in version 4.0, not in 3.5.0 an preceding ones
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)
```
```
Some weights of the model checkpoint at allenai/unifiedqa-t5-large were not used when initializing T5ForConditionalGeneration: ['decoder.blo
ck.0.layer.1.EncDecAttention.relative_attention_bias.weight']
- This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another
architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly ident
ical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
A consistent behavior across versions, either always or never raising the warning at loading time.
| 12-05-2020 12:17:10 | 12-05-2020 12:17:10 | Hey @gsarti,
yeah there was a tiny bug in T5 previously. T5 actually never had relative positional encodings for the EncoderDecoderLayer, so the unnecessary weight was deleted after 3.5.0. This should not really affect the performance however and is no problem now, see: https://github.com/huggingface/transformers/pull/8518<|||||>Thank you for the clarification! |
transformers | 8,932 | closed | Documentation License Query | We were looking to make some educational material based on the documentation & it wasn't clear what the license is for the documentation.
We see that the repository as a whole is under an Apache license, but the docs pages are not explicit about what license the docs are under, either via a dedicated license page or in the footer.
Could you please advise?
We have made some other educational material from docs pages for reference (see the flashcards section of https://www.darigovresearch.com/) | 12-05-2020 10:01:54 | 12-05-2020 10:01:54 | The documentation is also under the Apache License (version 2.0). Hope that helps!
Would love some flashcards for Transformers :-)<|||||>@sgugger thanks for letting us know! An initial set based on your glossary will follow shortly as a start.
Is there any way to update the docs so that the license is in the footer?
I am happy to make a pull request if given context. We think having it there will encourage other people as well to make other educational content based on the docs.
Also could you take a look at this issue as it may be relevant but was auto closed?
https://github.com/huggingface/transformers/issues/6140<|||||>I'll work on adding the copyright to individual files and the footer of the docs on Monday. For the issue you mention, I'm not sure what you mean: this points to complete text of the Apache v2 license (as is done for TensorFlow for instance, see [here](https://github.com/tensorflow/tensorflow/blob/master/LICENSE). The snippet is then copied on the code files with the year and authors filled properly (and soon-to-be doc files).<|||||>We believe the original post meant that lines 179-188 of your license file suggests that in the License file you need to add or adjust line 190 to have the copyright year & the name of your organisation. The tensorflow license has done this as the first line of their license file.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,931 | closed | Fix typo for `modeling_bert` import resulting in ImportError | # What does this PR do?
Self-explanatory ;) - Fixes a typo resulting in an `ImportError` in the convert RoBERTa from fairseq to HF - Hope it helps!
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-05-2020 06:58:57 | 12-05-2020 06:58:57 | |
transformers | 8,930 | closed | [seq2seq] document the caveat of leaky native amp | the native amp leak will be fixed in pt18 (already available in pt-nightly) - this PR documents this caveat and proposes to use apex for pt < 1.8.
@sgugger | 12-04-2020 22:04:09 | 12-04-2020 22:04:09 | |
transformers | 8,929 | closed | Don't pass in token_type_ids to BART for GLUE | # What does this PR do?
Without this fix, training a `BARTForSequenceClassification` model with `run_pl_glue.py` gives `TypeError: forward() got an unexpected keyword argument 'token_type_ids'`, because BART does not have token_type_ids. I've solved this issue in the same way as it's solved for the "distilbert" model, and I can train BART models on SNLI without errors now.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| 12-04-2020 21:05:21 | 12-04-2020 21:05:21 | Thanks for the fix @ethanjperez ! Looks good to me! |
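For context, a hedged sketch of the kind of guard this fix adds in `run_pl_glue.py` (paraphrased as a standalone helper, not the exact diff):
```
def build_inputs(batch, model_type):
    """Skip token_type_ids for models that do not have them (e.g. DistilBERT, BART)."""
    inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
    if model_type not in ("distilbert", "bart"):
        inputs["token_type_ids"] = batch[2] if model_type in ("bert", "xlnet", "albert") else None
    return inputs
```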
transformers | 8,927 | closed | run_glue.py fails with RoBERTa but succeeds with other models | ## Environment info
I'm following this instructions: https://github.com/huggingface/transformers/tree/master/examples, meaning I installed the library from source.
- `transformers` version: 4.1.0.dev0
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0
- Using GPU in script?: Yes, Tesla K80
- Using distributed or parallel set-up in script?: running `CUDA_VISIBLE_DEVICES=0 python run_glue.py`
-->
## The problem
I'm running the official `run_glue.py` code, with the command and arguments given here: https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-pytorch-version
When I use BERT - it **succeeds**.
For example, BERT:
```CUDA_VISIBLE_DEVICES=0 python run_glue.py --task_name cola --output_dir results/normal/bert/cola/ --cache_dir cache/normal/bert --model_name_or_path bert-base-cased --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir```
And I receive a score that makes sense:
```
[d]$ cat /transformers/examples/text-classification/results/normal/bert/cola/eval_results_cola.txt
eval_loss = 0.518086314201355
eval_matthews_correlation = 0.572739655014278
epoch = 3.0
```
When I use RoBERTa, it **fails** with a stack trace:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py --task_name cola --output_dir results/normal/roberta/cola/ --cache_dir cache/normal/roberta --model_name_or_path roberta-base --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir
```
The error message:
```python
[INFO|trainer.py:674] 2020-12-04 20:23:30,937 >> Total optimization steps = 804
0%| | 0/804 [00:00<?, ?it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [33,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [33,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...
Traceback (most recent call last):
File "/transformers/examples/text-classification/run_yonatan.py", line 464, in <module>
main()
File "/transformers/examples/text-classification/run_yonatan.py", line 399, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/transformers/src/transformers/trainer.py", line 767, in train
tr_loss += self.training_step(model, inputs)
File "/transformers/src/transformers/trainer.py", line 1096, in training_step
loss = self.compute_loss(model, inputs)
File "/transformers/src/transformers/trainer.py", line 1120, in compute_loss
outputs = model(**inputs)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 1029, in forward
return_dict=return_dict,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 717, in forward
return_dict=return_dict,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 450, in forward
output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 368, in forward
output_attentions=output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 302, in forward
output_attentions,
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/src/transformers/models/roberta/modeling_roberta.py", line 184, in forward
mixed_query_layer = self.query(hidden_states)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/transformers/trans_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
0%| | 0/804 [00:00<?, ?it/s]
```
I've searched the existing issues but didn't find a relevant solution (https://github.com/huggingface/transformers/issues?q=CUBLAS_STATUS_ALLOC_FAILED).
What am I missing?
Thanks | 12-04-2020 20:25:24 | 12-04-2020 20:25:24 | When **importing** transformers (instead of using the source) the problem does not occur.<|||||>Pinging the Trainer master @sgugger!<|||||>This looks like a problem with the CUDA initialization in your environment. The command runs fine on my side.
> When **importing** transformers (instead of using the source) the problem does not occur.
What do you mean exactly by this?<|||||>> This looks like a problem with the CUDA initialization in your environment. The command runs fine on my side.
>
> > When **importing** transformers (instead of using the source) the problem does not occur.
>
> What do you mean exactly by this?
The note here: https://github.com/huggingface/transformers/tree/master/examples#important-note
suggests installing the library from source. When I do that (with git clone) it doesn't work - I receive the error described here.
On the other hand, when I use the 'transformers' from 'pip install transformers', it does work.
I'm not sure if this specific difference causes the error only in my environment or not. <|||||>This issue has been stale for 1 month. |
transformers | 8,926 | closed | [ci] skip doc jobs - circleCI is not reliable - disable skip for now | We can't do reliable skipping if we can't get a reliable range of changes, and CircleCI is all over the place.
e.g. in this PR https://github.com/huggingface/transformers/pull/8918 it changed `pipeline.git.base_revision` **on every commit**, resulting in only the changes from the last commit appearing as changes for the whole PR. This is very bad, since the PR could be failing tests, but a last commit that touches only doc files will make it appear that everything is green, which could be very misleading.
I wasn't able to reproduce this yet another edge case (see attempts below), but we clearly have that happened in #8918
So this PR disables the magic until I hope I get a solution from circleCI devs which we are discussing via their support.
I'm leaving the printouts in place to continue diagnosing the issue.
It could also be that we won't be able to do that at all if we don't find a reliable way to get such simple information from CircleCI; in that case I will remove it completely.
Thank you for bearing with me, as this is a nice-to-have but not an essential feature.
@LysandreJik, @sgugger | 12-04-2020 16:58:41 | 12-04-2020 16:58:41 | Thanks for investigating this!<|||||>The thread discussing getting a reliable range of changes on CircleCI is here:
https://discuss.circleci.com/t/pipeline-git-base-revision-is-completely-unreliable/38301
|
transformers | 8,925 | closed | Fix TF T5 only encoder model with booleans | This model was not adapted to the new inputs processing. | 12-04-2020 16:44:04 | 12-04-2020 16:44:04 | |
transformers | 8,924 | closed | Add new SQUAD example | # What does this PR do?
This PR adds a new example for SQUAD (v1 and v2) for simple models (e.g., not the XLNet/XLM more complex version, another example will follow for those) using the datasets library and all the features of the fast tokenizer to simplify considerably the preprocessing and the post-processing.
I've compared the new version to the old one and did not find major differences when:
- fine-tuning a model on SQUAD v1 or v2 with the old and new script
- evaluation an existing model fine-tuned on SQUAD v1 or v2 with the old and new script
The only difference I found was when evaluating an existing model fine-tuned on SQUAD v1 and evaluating it on SQUAD v2. For those, the new script is a bit less good at predicting the null answers (but those models have crappy results on SQUAD v2 anyway, they are just a bit more crappy).
Further plans are:
- add a subclass of Trainer for QA so that the evaluation is done directly with `trainer.evaluate()`
- add a script for XLNet/XLM | 12-04-2020 15:12:18 | 12-04-2020 15:12:18 | I'd like the new examples scripts to stay fairly focused on one problem (at the cost of potentially have some duplicate codes) so they're easy to understand (and tweak) by users. We don't support any kind of datasets either (if your QA dataset has fields with names slightly different than SQUAD for instance), users are supposed to adapt the relevant lines in the code to their needs.
So with that in mind, I'd definitely prefer a separate script :-) |
transformers | 8,923 | closed | ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: **4.0.0**
- Platform: **google colab**
- Python version: 3
- PyTorch version (GPU?): 1.7.0+cu101
### Who can help
@patrickvonplaten
@patil-suraj
## Information
Model I am using (T5):
The problem arises when using:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import sentencepiece
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt').input_ids
outputs = model(input_ids=input_ids, labels=labels)
```
```python
input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids  # Batch size 1
outputs = model.generate(input_ids)
```
```python
import torch
traced_model = torch.jit.trace(model, input_ids )
torch.jit.save(traced_model, "traced_t5.pt")
```
As mentioned in the [article](https://huggingface.co/transformers/torchscript.html#saving-a-model), I tried to convert the model to `torchscript`.
The `T5ForConditionalGeneration` model does not support the `trace` function for converting the model to `torchscript`.
The output produced:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-e37c13fee7bc> in <module>()
1 import torch
----> 2 traced_model = torch.jit.trace(model, input_ids )
3 torch.jit.save(traced_model, "traced_t5.pt")
7 frames
/usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
774 else:
775 err_msg_prefix = "decoder_" if self.is_decoder else ""
--> 776 raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
777
778 if inputs_embeds is None:
ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds
```
I got the same issue when converting a question-generation T5 model to `torchscript`, and the issue is [here](https://github.com/patil-suraj/question_generation/issues/52)
| 12-04-2020 11:32:20 | 12-04-2020 11:32:20 | @patrickvonplaten might be able to help<|||||>Seq2Seq models are a bit special - they also need `decoder_input_ids` as the error message states. Since torchscript however does not allow keyword arguments we need to provide positional arguments and therefore it's mandatory to also provide the 2nd argument being the `attention_mask` (for the encoder).
The following is what you are looking for (I think):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
torch.jit.save(traced_model, "traced_t5.pt")
```<|||||>Thank you for the solution, your code works perfectly fine for creating a `torchscript` model.
But I have one more question, the generated `traced_t5.pt` model doesn't seem to have the `model.generate()` method.
How do I get the `token ids` output from this newly created model (using only `model()`)?
Also, in general, how can we get the output (token ids) from a T5 model without using the `generate()` method?<|||||>Yeah, I don't think our `generate()` method is torchscriptable yet :-/ You should take a look at the `greedy_search` method to see how the `generate()` method can be implemented by hand :-)
Greedy search: https://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/generation_utils.py#L622
Generate:
https://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/generation_utils.py#L296
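As an illustration of the suggestion above, here is a minimal greedy decoding loop written by hand; it is a sketch, not the library's `generate()` implementation, and the maximum length of 20 is an arbitrary assumption. With a jit-traced model the call would use positional arguments and the logits would typically be the first element of the returned tuple (something to verify against your traced model):
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: Thank you!", return_tensors="pt").input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
# T5 starts decoding from the pad token (its decoder_start_token_id).
decoder_input_ids = torch.full(
    (input_ids.shape[0], 1), model.config.decoder_start_token_id, dtype=torch.long
)
for _ in range(20):  # assumed maximum generation length for this sketch
    logits = model(
        input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids
    ).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick at the last position
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    if (next_token == model.config.eos_token_id).all():
        break
print(tokenizer.batch_decode(decoder_input_ids, skip_special_tokens=True))
```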
<|||||>thank you, I'll look into it. <|||||>@Ki6an : Were you able to figure out how to make use of `greedy_search` to do the work which `generate` does? If so can I request you to share that as a gist?<|||||>@karrtikiyerkcm have a look at [FastT5](https://github.com/Ki6an/fastT5) library, it implements both greedy and beam search for T5. <|||||>Thanks @Ki6an , I was trying something similar for Pegasus Models for the summarisation task.<|||||>@Ki6an Hello, the input_id I inputed is 64*100 (Batch_size,max_sequence), why the size of
the `T5ForConditionalGeneration.generate` result is 100? Where is the batch size?
<|||||>anyone know if it's still not possible to use torchscript with generate?<|||||>@patrickvonplaten @Ki6an Hi, what should ```decoder_input_ids``` be if my input text is ```translate English to German: Thank you!```? I'm using this for inference. For decoder models like BERT and GPT, all I need to do is use Tokenizer to get the ```input_ids``` which will be passed into the models. But I'm not sure how that works for encoder-decoder models like T5 here.<|||||>> @patrickvonplaten @Ki6an Hi, what should `decoder_input_ids` be if my input text is `translate English to German: Thank you!`? I'm using this for inference. For decoder models like BERT and GPT, all I need to do is use Tokenizer to get the `input_ids` which will be passed into the models. But I'm not sure how that works for encoder-decoder models like T5 here.
Hi, I wanted to follow up on this. I have the same question.<|||||>> Seq2Seq models are a bit special - they also need `decoder_input_ids` as the error message states. Since torchscript however does not allow keyword arguments we need to provide positional arguments and therefore it's mandatory to also provide the 2nd argument being the `attention_mask` (for the encoder).
>
> The following is what you are looking for (I think):
>
> ```python
> from transformers import T5Tokenizer, T5ForConditionalGeneration
> import torch
>
> tokenizer = T5Tokenizer.from_pretrained('t5-small')
> model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)
> input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
> attention_mask = input_ids.ne(model.config.pad_token_id).long()
> decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
>
> traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
> torch.jit.save(traced_model, "traced_t5.pt")
> ```
Hi, can you please provide an example of how to use this JIT-traced T5 model for inference?
I tried using it but it requires decoder_input_ids.. is there any way of doing inference without the decoder_input_ids? |
transformers | 8,922 | closed | Add comet | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-04-2020 08:27:41 | 12-04-2020 08:27:41 | Hi, what is this?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,921 | closed | TransfoXL Slow Test Fails | This test needs to be fixed:
```
pytest -s tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103
```
@patrickvonplaten pinging myself.
cc @jplu (for notice) | 12-04-2020 07:59:56 | 12-04-2020 07:59:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,920 | closed | Patch model parallel test | Patches the model parallel tests. | 12-03-2020 21:09:41 | 12-03-2020 21:09:41 | |
transformers | 8,919 | closed | BertModel outputs string instead of tensor | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Linux-4.15.0-46-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.7
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
Sorry, no idea.
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
```
import transformers
from transformers import BertModel, BertTokenizer
PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
PATH_OF_CACHE = "some_path"
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME,cache_dir = PATH_OF_CACHE)
sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.'
encoding_sample = tokenizer.encode_plus(
sample_txt,
max_length=32,
add_special_tokens=True, # Add '[CLS]' and '[SEP]'
return_token_type_ids=False,
padding=True,
truncation = True,
return_attention_mask=True,
return_tensors='pt', # Return PyTorch tensors
)
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME,cache_dir = PATH_OF_CACHE)
last_hidden_state, pooled_output = bert_model(
encoding_sample['input_ids'],
encoding_sample['attention_mask']
)
print([last_hidden_state,pooled_output])
```
I'm getting this very odd behaviour where the outputs are two strings named after the variables:
```
(env) mwon@sebruno2:~/data-mwon/paperChega/src_classificador$ python test.py
['last_hidden_state', 'pooler_output']
```
## Expected behavior
I expected the output to be a tensor with the hidden state of the last layer.
| 12-03-2020 20:23:18 | 12-03-2020 20:23:18 | Hi! Indeed, model outputs cannot be unpacked this way. It is mentioned in the [documentation](https://huggingface.co/transformers/main_classes/output.html#transformers.file_utils.ModelOutput). You can retrieve the items by unpacking them like this if you use the `.to_tuple()` method.<|||||>Oh, ok. Thanks and sorry for missing that. <|||||>No problem, happy to help! |
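To make the `.to_tuple()` remark above concrete, here is a small sketch based on the public `ModelOutput` API (the sentence and checkpoint are arbitrary, and the exact tuple contents depend on the model configuration):
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")
inputs = tokenizer("When was I last outside?", return_tensors="pt")
outputs = model(**inputs)
# Option 1: access the fields by name on the ModelOutput object.
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output
# Option 2: convert to a plain tuple before unpacking.
last_hidden_state, pooled_output = outputs.to_tuple()
print(last_hidden_state.shape, pooled_output.shape)
```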
transformers | 8,918 | closed | Put Transformers on Conda | Puts transformers on conda, on the `huggingface` channel. Installation can be done as:
```
conda install -c huggingface transformers
```
Will push a build on the channel on every new tag. | 12-03-2020 19:01:38 | 12-03-2020 19:01:38 | |
transformers | 8,917 | closed | Fix move when the two cache folders exist | # What does this PR do?
When doing local checkouts of PRs that predate the cache move, we end up with the two cache folders existing and the automatic move fails. This PR fixes that.
| 12-03-2020 15:43:54 | 12-03-2020 15:43:54 | |
transformers | 8,916 | closed | Impossible to use sentencepiece | Hi,
I explicitly installed both the latest version of transformers (v4.0.0) and SentencePiece (v0.1.84), as specified in the release history:

And when I try to import the MarianMT tokenizer, I have the following issue:

So, any idea why I'm getting that issue?
Best Regards,
Leman
| 12-03-2020 15:27:09 | 12-03-2020 15:27:09 | Have you tried restarting the kernel after installing `sentencepiece`?<|||||>Yes, I did, with:
!pip install --upgrade sentencepiece<|||||>Is it possible for you to share your notebook so that I may take a look?<|||||>I've just relaunched my notebook now, I don't have any issue now.
Thank you for your help
Regards<|||||>Glad it works for you now! |
transformers | 8,915 | closed | Avoid erasing the attention mask when double padding | # What does this PR do?
There is currently a bug when padding the same inputs twice:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my name is Sylvain!", padding="max_length", max_length=32)
>>> print(inputs["attention_mask"])
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> inputs = tokenizer.pad(inputs, padding="max_length", max_length=32)
>>> print(inputs["attention_mask"])
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
This is done when using the `DataCollatorWithPadding` inside a `Trainer` (which is the default) when the samples have already been padded. This PR fixes that by honoring the current `attention_mask` when no padding is necessary. | 12-03-2020 15:26:42 | 12-03-2020 15:26:42 | |
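For reference, a rough sketch of the idea behind the fix follows; this is a hypothetical helper written for illustration, not the actual patch to the tokenizer code:
```python
def pad_single(encoded, max_length, pad_token_id):
    """Pad one example to max_length while keeping any attention_mask it already has."""
    input_ids = list(encoded["input_ids"])
    attention_mask = list(encoded.get("attention_mask", [1] * len(input_ids)))
    difference = max_length - len(input_ids)
    if difference > 0:
        input_ids += [pad_token_id] * difference
        attention_mask += [0] * difference
    # When difference == 0, the existing attention_mask is returned untouched
    # instead of being rebuilt as all ones (the bug shown above).
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```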
transformers | 8,914 | closed | Tweak wording + Add badge w/ number of models on the hub | it seemed pertinent to display this here but maybe we can also add it to some other places | 12-03-2020 13:40:07 | 12-03-2020 13:40:07 | [Tooling comment] For some reason `python utils/check_copies.py --fix_and_overwrite` fails on my Python with the following:
```
(.env) ipro:transformers gibbon$ python utils/check_copies.py --fix_and_overwrite
Traceback (most recent call last):
File "utils/check_copies.py", line 432, in <module>
check_model_table(args.fix_and_overwrite)
File "utils/check_copies.py", line 414, in check_model_table
new_table = get_model_table_from_auto_modules()
File "utils/check_copies.py", line 328, in get_model_table_from_auto_modules
spec = importlib.util.spec_from_file_location(
AttributeError: module 'importlib' has no attribute 'util'
```<|||||>I think it's better at the top rather than the end of the list (I don't think a user will read until the end of the list TBH). We could even put it further up the README!
transformers | 8,913 | closed | Fine-tune with custom data | 1. What is the difference between `run_squad.py` & `run_squad_trainer.py` ?
I have a SQuAD-like dataset. 2. What script should I use for fine-tuning with my own dataset? | 12-03-2020 10:55:09 | 12-03-2020 10:55:09 | `run_squad.py` is more complete right now, as `run_squad_trainer.py` can't do evaluation (yet! it will be possible in a few days).
We try to keep the GitHub issues for bugs/feature requests.
For next time, could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Okay sure. I don't know about the forum - sorry for that. Thank you @LysandreJik |
transformers | 8,912 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-03-2020 07:32:11 | 12-03-2020 07:32:11 | This doesn't seem correct |
transformers | 8,911 | closed | Help to run an Example Code (it's a bug maybe ?) | ## Environment info
- `transformers` version: 4.0.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
I don't know :(
## Information
Model I am using (Bert, XLNet ...): fmikaelian/camembert-base-fquad
The problem arises when using:
* [x] the official example scripts: (give details below)
When using the default example script, I get this error:
```
Traceback (most recent call last):
File ".\test.py", line 5, in <module>
nlp({
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\pipelines.py", line 1874, in __call__
start, end = self.model(**fw_args)[:2]
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 1286, in forward
outputs = self.roberta(
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 687, in forward
embedding_output = self.embeddings(
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 117, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\sparse.py", line 124, in forward
return F.embedding(
File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\functional.py", line 1852, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)
```
## Expected behavior
I don't know what I'm missing, but this example code gives me this error and I don't know how to resolve it, because the error comes from the lib. If someone can help me :)
```
from transformers import pipeline
nlp = pipeline('question-answering', model='fmikaelian/camembert-base-fquad', tokenizer='fmikaelian/camembert-base-fquad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
Edit: I have this issue with all 'question-answering' pipelines | 12-03-2020 00:36:27 | 12-03-2020 00:36:27 | Hmmm, this is weird, I can't reproduce with a very similar environment:
```py
- `transformers` version: 4.0.0
- Platform: Linux-5.9.11-arch2-1-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
```
It outputs the following:
```
{'score': 0.26648467779159546, 'start': 90, 'end': 106, 'answer': 'peintre français'}
```
Differing element here seems to be Windows vs Linux. Do you mind trying to install the library from `master` and telling me if that works?
You can try it with:
```
pip install git+https://github.com/huggingface/transformers
```<|||||>For what it's worth, I ran into the same issue using a GitHub Actions workflow with the following environment:
```python
- `transformers` version: 4.0.0
- Platform: Windows Server 2019
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (No)
```
I modified the workflow to run against the master branch and the issue appears to be resolved there.<|||||>Yes, this command solves my issue, thank you :)
transformers | 8,910 | closed | "No log" when training RobertaForSequenceClassification using Trainer | When training, for the first few logging steps I get "No log".
Looks like this:
Step | Training Loss | Validation Loss | Accuracy | F1
-- | -- | -- | -- | --
150 | No log | 0.695841 | 0.503277 | 0.410575
300 | No log | 0.696622 | 0.488860 | 0.298561
450 | No log | 0.694300 | 0.499345 | 0.356902
What does this mean? My classifier is performing poorly and I am wondering if this is related. I am finetuning roberta-base using 3k question-answer pairs, 50% positively labelled, 50% negatively labelled.
Thanks,
Bryan
| 12-02-2020 22:02:35 | 12-02-2020 22:02:35 | Could you provide the environment information as mentioned in the issue template, alongside the a reproducible that outputs this so that we may check what's going on? Thank you.<|||||>Hi @BryanWBear ,
I am facing this issue too. In the meantime, did you find a solution?
Thank you so much in advance!<|||||>the default `logging_steps` in `TrainingArguments` is set to `500` steps, so no loss is reported before 500 steps<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
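To make the `logging_steps` answer above concrete, here is a minimal sketch; the argument values and output path are illustrative assumptions:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",   # assumed path for the sketch
    logging_steps=50,      # report the training loss every 50 steps instead of the default 500
    per_device_train_batch_size=32,
    num_train_epochs=3,
)
```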
transformers | 8,909 | closed | FlaxBertModel examples (and fast attention) | Are there any examples that show how to use the FlaxBertModel?
Would it be possible to replace the current SelfAttention module with the one proposed here https://github.com/google-research/google-research/tree/master/performer/fast_self_attention? | 12-02-2020 21:26:54 | 12-02-2020 21:26:54 | It should work - we'll run some experiments soon :-) https://github.com/huggingface/transformers/pull/8358 @TevenLeScao<|||||>Ok, great!
For the moment I'm running some experiments myself with FlaxBertModel, but I'm getting unexpected behavior: it seems that the Flax implementation is slower than the Torch one. I tried to run this simple code in Google Colab with a GPU environment:
```python
!pip install --upgrade pip
!pip install --upgrade jax jaxlib==0.1.57+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install flax
!pip install transformers
import flax
import jax
from transformers import BertModel
from transformers import FlaxBertModel
from transformers import AutoTokenizer
import time
baseModel ='bert-base-uncased'
torchBert = BertModel.from_pretrained(baseModel)
flaxBert = FlaxBertModel.from_pretrained(baseModel)
tokenizer = AutoTokenizer.from_pretrained(baseModel)
torchInput = tokenizer.encode("Random sentence", truncation=True, padding=True, return_tensors='pt')
flaxInput = tokenizer.encode("Random sentence", truncation=True, padding=True, return_tensors='jax')
start_time = time.time()
for _ in range(10):
torchBert(torchInput)
print("torch - ", time.time()-start_time)
start_time = time.time()
for _ in range(10):
flaxBert(flaxInput)
print("flax - ", time.time()-start_time)
```
and I'm getting the following output:
```
torch - 0.6615538597106934
flax - 5.129613161087036
```
What am i missing?<|||||>You should probably use `jax.jit` to speed it up<|||||>Indeed, as @patrickvonplaten says, using `jax.jit` on `flaxBert` will speed things up considerably. This will first shape-compile the function using XLA, and every time you call the function again (provided the shapes are the same), it will run the compiled version directly. I've demonstrated in this Colab: https://colab.research.google.com/drive/1davNsnV34KDZOyJ9i8zZfvxAVjBJC4dp?usp=sharing
Make sure you set the accelerator to GPU/TPU! (Runtime -> Change runtime type)
Here's a summary:
```
>>> %timeit torchBert(torchInput)
1 loop, best of 5: 75.3 ms per loop
>>> %timeit flaxBert(flaxInput)
1 loop, best of 5: 1.41 s per loop
>>> %timeit jitted_flax(flaxInput)
100 loops, best of 5: 11.2 ms per loop
```
Note that this excluded the compilation time for the first time we called `jitted_flax`. Including this will increase the overall execution time, but since it has to be done only once this is negligible as you execute this function more often.
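The `jitted_flax` function used in these timings is not shown in the thread; it was presumably created along the following lines (an assumption based on the comment, reusing `flaxBert` and `flaxInput` from the snippet earlier in this issue):
```python
import jax

# Shape-compile the Flax forward pass once; later calls with the same input shape reuse it.
jitted_flax = jax.jit(lambda input_ids: flaxBert(input_ids))
_ = jitted_flax(flaxInput)  # first call triggers XLA compilation
_ = jitted_flax(flaxInput)  # subsequent calls run the compiled version
```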
To learn more about JAX's jit, this quickstart is quite useful: https://jax.readthedocs.io/en/latest/notebooks/quickstart.html<|||||>Thank you for your comment @marcvanzee. Indeed, after @patrickvonplaten's reply I checked the Flax and JAX documentation more carefully, confirming that JIT compilation could solve the performance issues. It still surprises me though that PyTorch is quite fast even without JIT compilation while the same is not true for Flax. Frankly, I didn't even know that JIT existed in PyTorch, so I'd be curious too to see how it compares to Flax.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,908 | closed | Question: What's the difference between tokenizer_utils, tokenizer_utils_base & tokenizer_utils_fast | As titled. Thanks! | 12-02-2020 19:53:56 | 12-02-2020 19:53:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,907 | closed | Unexpected situation when freezing BertForMaskedLM | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes and No (It happens in both conditions)
- Using distributed or parallel set-up in script?: Yes and No (It happens in both conditions)
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load pretrained BertForMaskedLM
```
from transformers import BertForMaskedLM
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
```
2. Check whether gradients in the cls.predictions.decoder layer are calculated
```
print(model.cls.predictions.decoder.weight.requires_grad)
```
Result:
```
True
```
3. Check again after only freezing the bert layer
```
for param in model.bert.parameters():
param.requires_grad = False
print(model.cls.predictions.decoder.weight.requires_grad)
```
Result:
```
False
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It only happens with BertForMaskedLM.
If I try to freeze only the BertModel, cls.predictions.decoder is also frozen.
But as expected, cls.predictions.transform is not frozen.
The exception only occurs in cls.predictions.decoder.
I don't know if this is the behavior you intended, but it is an unexpected situation for anyone who tries to freeze only the BertModel.
 | 12-02-2020 17:12:22 | 12-02-2020 17:12:22 | I think this is probably so because the `cls.predictions.decoder` is a linear layer, which is tied to the embeddings layer. They're pointing to the same weights, so freezing one of those would result in freezing the other one.
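As a small check of the tying described above (a sketch; the attribute paths follow the model structure shown in the issue):
```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
decoder_weight = model.cls.predictions.decoder.weight
embedding_weight = model.bert.embeddings.word_embeddings.weight
# Both attributes point at the same tensor, so toggling requires_grad on one side
# is visible on the other.
print(decoder_weight is embedding_weight)
print(decoder_weight.data_ptr() == embedding_weight.data_ptr())
```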