repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 8,606 | closed | converting REALM tensorflow checkpoints to pytorch | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <No>
### Who can help
@jplu, @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
## To reproduce
Steps to reproduce the behavior:
1. Download checkpoint from the http link https://console.cloud.google.com/storage/browser/realm-data/cc_news_pretrained/embedded/.
2. python convert_bert_original_tf2_checkpoint_to_pytorch.py \
--tf_checkpoint_path="./cc_news_pretrained/embedder/encoded" \
--bert_config_file="./bert_config.json" \
--pytorch_dump_path="./pytorch"
The checkpoint file has the following entries which are probably internal developer files(?):
model_checkpoint_path: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
all_model_checkpoint_paths: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
1)When I set tf_checkpoint_path to the directory containing the checkpoint, I get the error :
tensorflow.python.framework.errors_impl.NotFoundError: /cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded; No such file or directory
2)When I set tf_checkpoint_path to the checkpoint file encode.ckpt.data-00000-of-00001, I get the error:
env/lib/python3.6/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./cc_news_pretrained/embedder/encoded/encode.ckpt.data-00000-of-00001
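For debugging this kind of failure it helps to list what the checkpoint actually contains before running any conversion script. A hedged sketch (the path is the one from this report; the reader API is standard TensorFlow, nothing REALM-specific):
```python
import tensorflow as tf

# Point at the checkpoint *prefix* (encoded.ckpt), not the .data-00000-of-00001
# shard and not the directory; that is what both errors above stem from.
ckpt_prefix = "./cc_news_pretrained/embedder/encoded/encoded.ckpt"

# Print every variable stored in the checkpoint with its shape, e.g. to see
# whether it only holds the precomputed retriever embeddings ("block_emb")
# or also the BERT encoder weights the conversion script expects.
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)
```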
Thanks!
| 11-17-2020 23:11:41 | 11-17-2020 23:11:41 | I changed --tf_checkpoint_path="./cc_news_pretrained/embedder/encoded" to
--tf_checkpoint_path="./cc_news_pretrained/embedder/encoded/encoded.ckpt"
and used _convert_bert_original_tf_checkpoint_to_pytorch.py_.
I see the following messages :
Loading TF weight block_emb with shape [13353718, 128]
Skipping block_emb
Building PyTorch model from configuration: BertConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 30522
}
Traceback (most recent call last):
File "convert_tf_checkpoint_to_pytorch.py", line 78, in
args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path
File "convert_tf_checkpoint_to_pytorch.py", line 44, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/transformers/modeling_bert.py", line 155, in load_tf_weights_in_bert
pointer.shape == array.shape
File "/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 772, in getattr
type(self).name, name))
torch.nn.modules.module.ModuleAttributeError: 'BertForPreTraining' object has no attribute 'shape'
<|||||>Following the suggestion in #393, I hacked transformers/src/transformers/modeling_bert.py, and I now see the following:
Converting TensorFlow checkpoint from ./cc_news_pretrained/embedder/encoded/encoded.ckpt
Loading TF weight block_emb with shape [13353718, 128]
Skipping block_emb
Initialize PyTorch weight ['block_emb']
Building PyTorch model from configuration: BertConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 30522
}
Save PyTorch model to /gstore/home/madabhuc/hayEnv/pytorch/pytorch.bin
Skipping and initializing 'block_emb' tells me that I have lost the weights info from the checkpoint. I don't believe this is correct.<|||||>Initializing did copy the weights from the checkpoint.<|||||>When I try to open another REALM TensorFlow checkpoint, I get the following error message:
transformers/modeling_bert.py", line 135, in load_tf_weights_in_bert
pointer = getattr(pointer, "bias")
File "/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'
@sgugger , @jplu , @LysandreJik : Any ideas ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>@mchari , do you manage to convert the REALM pre-trained TF models into pytorch models?<|||||>No, I didn't....
|
transformers | 8,605 | closed | Add Harry Potter Model Card | # What does this PR do?
We made this model that creates new Harry Potter fanfiction based off of popular stories. We hope this will be a fun and useful tool. | 11-17-2020 20:08:46 | 11-17-2020 20:08:46 | That's really cool, thanks for sharing!<|||||>[model page](https://huggingface.co/ceostroff/harry-potter-gpt2-fanfiction) |
transformers | 8,604 | closed | Remove deprecated | # What does this PR do?
This PR removes old deprecated arguments and adjust tests/examples accordingly.
Co-authored with @LysandreJik. | 11-17-2020 20:07:53 | 11-17-2020 20:07:53 | Hi, I noticed that https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py#L294 and https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py#L302 are still referencing is_world_master() instead of using is_world_process_zero(). I have made a change to run_ner_old.py locally and running the process again to see if it fixes the issue for me (AttributeError: 'Trainer' object has no attribute 'is_world_master')<|||||>We're not actively maintaining the old examples scripts anymore and they're still there for people using older versions of transformers. The good fix would be to add a minimum version to the README of that example.<|||||>I'm at loss here - which ones are old examples and which ones are new?
https://github.com/huggingface/transformers/issues/8792<|||||>If there should be an old version of something to be used with an old version of something - why not send users to the branch of that release that they want to use - they will end up with the version of examples that work for that release. And master examples should be working with master version of the code, IMHO. Does it make sense?
If there are fixes to the old branch's examples, not pertaining to master, the fix can go into that branch.<|||||>Yes it all makes sense. Please be patient while I clean up the examples folder, as it's a long work. I promise this will all be clean when I'm done :-) |
transformers | 8,603 | closed | TFTrainer & Eager mode | Is eager mode really not supported with TFTrainer? It's telling me that it is using `tf.gradients`, which is not supported with eager mode.
If that's true, then I maybe it could be displayed a lot more prominently in your documentation ... I wasted so much time implementing custom functions, π€¦ββοΈ
@sgugger
| 11-17-2020 19:50:41 | 11-17-2020 19:50:41 | cc @jplu for TFTrainer.<|||||>Hello @JulesGM!
Yes, the TF Trainer doesn't support eager execution, partially because of the usage of `tf.gradients`: sometimes when loading a model for a specific task, not all the layers are used, which produces some None values when computing the gradients. `tf.gradients` allows us to ignore these None values.
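For reference, here is a minimal sketch of what an eager-compatible training step could look like while still tolerating the None gradients (the zero-filling strategy and all names below are assumptions for illustration, not the TFTrainer implementation):
```python
import tensorflow as tf

def train_step(model, optimizer, loss_fn, features, labels):
    # tf.GradientTape works in eager mode; unused layers simply yield None grads.
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    # Replace None entries (unused layers) with zeros so apply_gradients does not
    # complain, mimicking the effect of ignoring them in tf.gradients.
    grads = [
        g if g is not None else tf.zeros_like(v)
        for g, v in zip(grads, model.trainable_variables)
    ]
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```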
I'm really sorry that this is not detailed enough in the documentation; we are currently working on an improved version of the TF Trainer that you can find [here](https://github.com/huggingface/transformers/pull/8264), and it should be much easier and more convenient to use. Sorry once again for the inconvenience you have encountered with the TFTrainer until now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,602 | closed | New TF model inputs | # What does this PR do?
This PR improves the way the inputs are handled in the TensorFlow models:
- It should be easier now to write a TensorFlow model
- The order of the inputs are now better handled, mostly when using Keras Tensors
- Replace all the occurrences of `inputs` by `input_ids` to make easier to understand what this parameter is for and to align with PyTorch input names and the Tokenizers outputs.
@LysandreJik @sgugger @patrickvonplaten let me know what you think about this new input processing, it is not finished yet but any comment will be helpful for me :) | 11-17-2020 19:33:38 | 11-17-2020 19:33:38 | @sgugger Thanks for these useful comments, I should have addressed them as you proposed. All the modification have been directly integrated in the new `input_processing` function. can you please also check the `modeling_tf_bart.py`, I have updated the documentation, let me know if I did something wrong.
@patrickvonplaten can you please check the same BART file as I have done some updates in order to make it able to properly handle the `return_dict=True` and `return_dict=False` my updates don't affect the usual behaviour of the model even if the tests are ok.<|||||>@LysandreJik @sgugger @patrickvonplaten Do you think it would be interesting to have the same thing for the outputs? knowing that for graph mode compliance we will need to have such or such output behaviour depending the state we are (eager or not). <|||||>> Remember to do separate PRs for separate issues please, a lot of changes in TFBart here are unrelated to the main focus of the PR.
Do you prefer that I move the Bart changes into another PR and keep only the inputs changes? I don't mind to do this :)<|||||>I have also updated the TF template.<|||||>> Do you prefer that I move the Bart changes into another PR and keep only the inputs changes? I don't mind to do this :)
Since @patrickvonplaten approved, I think they're okay here for this time (unless he says otherwise ;-) ) <|||||>I would prefer the same, it looks a bit too big to be added at this point of the release. I should have addressed all the comments. Let me know if I missed some. |
transformers | 8,601 | closed | Accessing gradients of Bart hidden states | The forums suggested that this be filed as a bug report:
https://discuss.huggingface.co/t/finding-gradients-in-zero-shot-learning/2033/5
The solution to the problem was solved on SO:
https://stackoverflow.com/questions/64823332/gradients-returning-none-in-huggingface-module/64866990#64866990
The question and answer are reproduced below. Filling as an issue as we should be able to compute gradients on output without a monkey-patch. It looks like the `transpose` is causing it.
## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.27
- Python version: 3.8.1
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: CPU & GPU
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import pipeline
import torch
model_name = 'facebook/bart-large-mnli'
nlp = pipeline("zero-shot-classification", model=model_name)
responses = ["I'm having a great day!!"]
hypothesis_template = 'This person feels {}'
candidate_labels = ['happy', 'sad']
nlp(responses, candidate_labels, hypothesis_template=hypothesis_template)
```
This works well! The output is:
```
{'sequence': "I'm having a great day!!",
'labels': ['happy', 'sad'],
'scores': [0.9989933371543884, 0.0010066736722365022]}
```
What I'd like to do, however, is look at the gradients of the input tokens to see which tokens are important. This is in contrast to looking at the attention heads (which is also another viable tactic). Trying to rip apart the internals of the module, I can get the logits and embedding layers:
```
inputs = nlp._parse_and_tokenize(responses, candidate_labels, hypothesis_template)
predictions = nlp.model(**inputs, return_dict=True, output_hidden_states=True)
predictions['logits']
tensor([[-3.1864, -0.0714, 3.2625],
[ 4.5919, -1.9473, -3.6376]], grad_fn=<AddmmBackward>)
```
This is expected, as the label for "happy" is index 0 and the entailment index for this model is 2, so the value of 3.2625 is an extremely strong signal. The label for "sad" is 1 and the contradiction index is 0, so the value of 4.5919 is also the correct answer.
Great! Now I should be able to look at the first embedding layer and check out the gradient with respect to the happy entailment scalar:
```
layer = predictions['encoder_hidden_states'][0]
layer.retain_grad()
predictions['logits'][0][2].backward(retain_graph=True)
```
Unfortunately, `layer.grad` is `None`.
## [Solution from StackOverflow](https://stackoverflow.com/a/64866990/249341)
I was also very surprised of this issue. Although I have never used the library I went down and did some debugging and found out that the issue is coming from the library transformers. The problem is comming from from this [line][1] :
encoder_states = tuple(hidden_state.transpose(0, 1) for hidden_state in encoder_states)
If you comment it out, you will get the gradient just with some dimensions transposed.
This issue is related to the fact that Pytorch Autograd does not do very well on inplace operations as mentioned [here][2].
So to recap the solution is to comment line 382 in *`modeling_bart.py`*.
You will get the gradient with this shape T x B x C instead of B x T x C, but you can reshape it as you want later.
[1]: https://github.com/huggingface/transformers/blob/1073a2bde5d608f9891d6da6df7b63921dca1b71/src/transformers/modeling_bart.py#L382
[2]: https://discuss.pytorch.org/t/encounter-the-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/836/5
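An alternative workaround that avoids editing the library is to grab the tensor entering the first encoder layer with a forward hook and retain its gradient. This is only a sketch: the `nlp.model.model.encoder.layers[0]` path and the resulting gradient shape are assumptions about this Bart version.
```python
captured = {}

def grab_hidden(module, module_inputs, module_output):
    hidden = module_inputs[0]      # hidden states fed into the first encoder layer
    hidden.retain_grad()           # non-leaf tensor, so retain_grad() is required
    captured["hidden"] = hidden

hook = nlp.model.model.encoder.layers[0].register_forward_hook(grab_hidden)
preds = nlp.model(**inputs, return_dict=True)
preds["logits"][0][2].backward()
token_grads = captured["hidden"].grad  # may come out as (seq_len, batch, dim) here
hook.remove()
```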
| 11-17-2020 17:32:11 | 11-17-2020 17:32:11 | @joeddav - feel free to ping me again if you're too busy. Leaving it up to you for now :-) <|||||>Hey thanks for opening the detailed issue. As I mentioned this is a Bart issue, nothing specific to zero shot, so I've renamed it to get the right eyes on it.
The problem here is that the hidden states are transposed _after_ they're passed forward in the computation graph (with the exception of the last encoder layer), which means that the hidden states returned are no longer upstream from the logits in the graph and therefore don't have any gradient information. I'm not sure I see a trivial fix though - any ideas @patrickvonplaten? We could just do the transposes inside `EncoderLayer.forward` instead but would the superfluous transpose ops slow things down?<|||||>At the very least, having an option to return the value _before_ the transpose would allow access to the gradients.
transformers | 8,600 | closed | Fix check repo utils | # What does this PR do?
This PR fixes the `check_repo` script with the recent repo reorganization. | 11-17-2020 17:20:43 | 11-17-2020 17:20:43 | |
transformers | 8,599 | closed | Tokenizers should be framework agnostic | The `prepare_seq2seq_batch` method should not return PyTorch tensors by default. It does not in the base class, and all our tokenizer methods should be agnostic to the framework.
Updated Marian, Pegasus, mBART, FSMT that had `return_tensors="pt"` and RAG that had `return_tensors="np"`.
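For downstream code this just means asking for the tensor type explicitly; a small sketch with placeholder texts:
```python
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
batch = tok.prepare_seq2seq_batch(
    src_texts=["Hello world"],   # placeholder inputs
    tgt_texts=["Hallo Welt"],
    return_tensors="pt",         # now requested explicitly instead of being the default
)
```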
The documentation for these methods was inconsistent, added the docstrings via the decorator where it was needed. | 11-17-2020 16:54:45 | 11-17-2020 16:54:45 | Can't ask you to review here @stas00 but would love your review as FSMT is impacted by this.<|||||>LGTM |
transformers | 8,598 | closed | Vectorize RepetitionPenaltyLogitsProcessor to improve performance | # What does this PR do?
This PR replaces the nested loops in the [`RepetitionPenaltyLogitsProcessor`](https://github.com/huggingface/transformers/blob/a1bbcf3f6c20e15fe799a8659d6b7bd36fdf11ed/src/transformers/generation_logits_process.py#L147-L155) with a vectorized implementation to provide speedups on long sequences of roughly 3 orders of magnitude on GPUs and 2 orders of magnitude on CPUs.
Fixes # [8596](https://github.com/huggingface/transformers/issues/8596)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten | 11-17-2020 16:39:28 | 11-17-2020 16:39:28 | ```
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
```
Seems like a flake. Tests pass locally. |
transformers | 8,597 | closed | BART & FSMT: fix decoder not returning hidden states from the last layer | # What does this PR do?
The activations from the last decoder layer accidentally were not a part of the output.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@stas00
| 11-17-2020 16:33:04 | 11-17-2020 16:33:04 | yay, a first fsmt user that found an issue! Thank you!
OK, here I literally copied the bart implementation where it didn't have that line you added:
https://github.com/huggingface/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/models/bart/modeling_bart.py#L627-L629
So most likely if this is indeed a bug then it affects many `transformers` models.
Now let us diagnose what's going on. I see that the `x` is stored in the loop above at the beginning of a layers iteration:
https://github.com/huggingface/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/models/bart/modeling_bart.py#L597-L600
Looking closely, the current code doesn't add the `x` from the last iteration of the `for idx, decoder_layer in enumerate(self.layers)` loop, which is clearly a bug. We have an off-by-one problem here.
The only thing I'm not sure about is whether we need the `x` before the loop, if not then `all_hidden_states += (x,)` needs to be moved to the end of the loop. If we do need it, then your change is due.
Either way, I'd code it differently: I'd add `x` before the loop starts if it is needed, and then add it for each layer once we have a new `x` defined in the loop.
Adding it after the loop is likely to cause other bugs in the future where the wrong x will be added.
Could you please share the use case so that we could write a test for it? Or if you could write the test that's even better - either way works.
I didn't have a use case for this myself so relied on `transformers` common tests to catch this.
Thank you!<|||||>So this is what I propose, which does the same as your PR, but respects the locality rule better, if that makes sense.
```
# XXX: do we need to save this hidden state?
if output_hidden_states:
all_hidden_states += (x,)
for idx, decoder_layer in enumerate(self.layers):
dropout_probability = random.uniform(0, 1)
if self.training and (dropout_probability < self.layerdrop):
continue
layer_state = past_key_values[idx] if past_key_values is not None else None
x, layer_self_attn, layer_past, layer_cross_attn = decoder_layer(
x,
encoder_hidden_states,
encoder_attn_mask=encoder_padding_mask,
decoder_padding_mask=decoder_padding_mask,
layer_state=layer_state,
causal_mask=decoder_causal_mask,
output_attentions=output_attentions,
)
# add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
if output_hidden_states:
all_hidden_states += (x,)
```
@patrickvonplaten, how should we proceed - solve this for fsmt and then replicate to other copy-cats - or solve it at once in a new PR - and need to create a new common test I suppose. I, unfortunately, have no perms to make suggestions directly in the code. so passing the flag to you if the former.<|||||>Thanks, Stas @stas00!
I implemented a fix the way I did just to be consistent with how the analogous code is written in other places (e.g. FSMTEncoder, BERT model, etc.):
https://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/bert/modeling_bert.py#L491-L492
However, I would also personally prefer adding contextualized embedding before the loop first and then collecting hidden states at the end of the loop, just like you described. It just has to be changed for all the models in the repo if we want to keep the codebase consistent.
The test might check that the size of the list with output hidden states aligns in shape with what we expect it to be based on the model configuration. It would catch the error and be general enough for many usecases. It is just that it is a job for a bigger PR if we want to cover all the models in the repo.
Regarding whether to return ```decoder``` input uncontextualized embeddings, GPT2 already does it (GPT2 can be viewed as a transformer decoder):
https://github.com/huggingface/transformers/blob/5cf9c79665266e49cf498839da90d7aeeff21c3a/src/transformers/models/gpt2/modeling_gpt2.py#L618-L620
Also, decoder input embeddings from layer 0 get fed into further complex layers analogously to how it is done for encoders. And for all the encoders in the lib (like BERT) we do return the outputs from this layer. So I would vote for not dropping it for the decoder.
<|||||>Well, based on the research that you shared, it's easy then - keep them all.
So we just need to decide whether to:
1. a. keep the current implementation in most (all?) modules where the incoming states are stored first and then the last state is stored as sort of an afterthought and potentially is forgotten which is the case with every bart-copy, b. and fix `modeling_bart` and every other module that copied it to add the missing state.
2. or recode it in a more clean way as I suggested [here](https://github.com/huggingface/transformers/pull/8597#issuecomment-729119379) and you concurred with me, which will fix the bug on the way and prevent it from re-appearing in the future.
Since I wasn't there when the code was written and since it impacts the whole project let's see what @LysandreJik, @patrickvonplaten, @sgugger think.
Thank you for the detailed answer, the research, and the suggestion on how to write the missing test, @maksym-del! <|||||>I would avoid changing the existing code since it produces the desired output, I think we can all employ our time to do more meaningful contributions to the library :-) I don't think one implementation is better than the other in the sense you have to remember to either add the first hidden state or the last.
On the models that do not produce the desired outputs, you can fix it the way you prefer. The modeling files don't need to do everything the exact same way and since you're the contributor fixing things, you get to choose which one you like better. What interests me more however is how this got the tests passing, since the number of hidden states is tested and we're discovering there is one missing, a things the common tests should have caught.<|||||>While I disagree about your suggestion to two ways being equal, since the current implementation is a bug waiting to occur, should some code be added after the loop and before the last layer's hidden state is added, especially with all the code copying. I am in agreement with the rest.
To clarify, you're saying:
- Do not change anything in models that don't have this bug.
- You can change things in models that do have this bug besides fixing the bug (i.e. all bart copy-cats)
> What interests me more however is how this got the tests passing, since the number of hidden states is tested and we're discovering there is one missing, a things the common tests should have caught.
My intuition is that since it counts, it counted the "incoming" hidden state as one of the layer hidden states. If this is a common test, then the correct models should have failed this test instead. But will need to look at the actual test to tell for sure.
<|||||>@maksym-del thanks so much for finding this bug -> you are correct this should be corrected.
I think we should do two things here (@maksym-del let me or @stas00 know if you need help here):
1. Apply the same change to `modeling_bart.py`
2. Improve the test (this might be a bit more difficult, but I'll help you if needed :-)):
- If you compare the common test of the hidden states output: https://github.com/huggingface/transformers/blob/0ad45e108d156e24b0cbd0fe0f5a27a4e7a3c1c3/tests/test_modeling_common.py#L653 with the common test of the attention output: https://github.com/huggingface/transformers/blob/0ad45e108d156e24b0cbd0fe0f5a27a4e7a3c1c3/tests/test_modeling_common.py#L295 you can see that the test of the attention output does an extra check for `is_encoder_decoder=True` models while the hidden states test does not. This is why this bug was unnoticed -> so we should add a `if config.is_encoder_decoder:` clause to the hidden states test that checks that the decoder also has the correct number of layers and that those hidden states have the correct size.
If you have trouble adding the test ping me or @stas00 again and we'll finish the PR for you!
Thanks for spotting the bug :-)
<|||||>Thanks a lot for rebasing this! I think the only thing left to do now is to add a test as explained above :-) <|||||>Thanks, @patrickvonplaten , @stas00 and @sgugger !
I added the test and think this PR is ready to be merged. <|||||>Unrelated to this PR, as it's replicating the existing approach, but won't it be simpler to replace:
```
x = x.transpose(0, 1)
all_hidden_states += (x,)
x = x.transpose(0, 1)
```
with just:
```
all_hidden_states += (x.transpose(0, 1),)
```
@patrickvonplaten, replying inside my comment:
this doesn't work. `x` needs to be kept in the graph `x.transpose(0, 1)` would return a new view on the tensor which is not in the graph anymore<|||||>@patrickvonplaten - I edited your editing of my comment to make it readable. otherwise it made no sense as you made it look like I was saying something and then saying that it is not so.
Thank you for the clarification!
p.s. github doesn't send notification on edits, so this is probably not the most ideal way to reply ;)<|||||>> @patrickvonplaten - I edited your editing of my comment to make it readable. otherwise it made no sense as you made it look like I was saying something and then saying that it is not so.
>
> Thank you for the clarification!
>
> p.s. github doesn't send notification on edits, so this is probably not the most ideal way to reply ;)
Oh, I'm sorry. I meant to reply to your comment :D |
transformers | 8,596 | closed | Speed up repetition penalty logits processor | # 🚀 Feature request
Hey Team, Thanks for the great work on the project to date! This is more of an enhancement so putting it here instead of as a bug.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
The [`RepetitionPenaltyLogitsProcessor`](https://github.com/huggingface/transformers/blob/a1bbcf3f6c20e15fe799a8659d6b7bd36fdf11ed/src/transformers/generation_logits_process.py#L147-L155) which is used to enforce the repetition penalty when generating tokens from a Seq2Seq head is extremely slow for long sequences due to the nested loops.
A vectorized implementation will be much much faster.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Here's a minimal example to reproduce the slow behavior:
```
import torch
from transformers import RepetitionPenaltyLogitsProcessor
import timeit
def vectorized(input_ids, scores, penalty):
score_range = torch.arange(scores.shape[0])
score = scores[score_range[:, None], input_ids]
score[score >= 0] = score[score >= 0] / penalty
score[score < 0] = score[score < 0] * penalty
scores[score_range[:, None], input_ids] = score
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
rep_proc = RepetitionPenaltyLogitsProcessor(2.0)
print(f"Existing impl time for 10 iterations on CPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=10)}")
print(f"Vectorized impl time for 10 iterations on CPU = {timeit.timeit(lambda: vectorized(input_ids, scores, 2.0), number=10)}")
if torch.cuda.is_available():
input_ids = input_ids.cuda()
scores = scores.cuda()
print(f"Existing impl time for 10 iterations on GPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=10)}")
print(f"Vectorized impl time for 10 iterations on GPU = {timeit.timeit(lambda: vectorized(input_ids, scores, 2.0), number=10)}")
```
Here's the speedups on CPU and GPU with the vectorized version:
```
Existing impl time for 10 iterations on CPU = 23.23520456800179
Vectorized impl time for 10 iterations on CPU = 0.035849231004249305
```
```
Existing impl time for 10 iterations on GPU = 42.0977192690043
Vectorized impl time for 10 iterations on GPU = 0.008036320999963209
```
These numbers are from a machine with [email protected] and an Nvidia T4 GPU.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I'll have a PR up for this shortly.
@patrickvonplaten pinging you on this for your thoughts because I saw your last few commits on this code.
Thanks! | 11-17-2020 15:58:53 | 11-17-2020 15:58:53 | PR is up [8598](https://github.com/huggingface/transformers/pull/8598)<|||||>Can you please check speed of this implementation also:
```
# if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability
scores = torch.where(scores < 0, scores * penalty, scores / penalty)
```<|||||>@LSinev Thanks for the suggestion! This is slightly faster than my implementation and much more elegant. Also makes sense because we don't really need to look at previous tokens for modifying the score.
I'll replace my implementation with this. Credit to you!<|||||>Oh, no. I didn't mean that this should be the only code used. Of course, only input ids should be penalized, not everything. And of course, one should ensure (and probably add tests) that this solution works per row for input batches (batches as input and batches after the expansion due to num_return_sequences > 1). This was just a hypothesis that such code may work faster.<|||||>Hmm, that makes sense. I hadn't considered that earlier. All - greedy and beam search and sampling may produce incorrect tokens because only the `torch.where` approach will alter scores for all other tokens.
Having accounted for only the input_ids, the `torch.where` approach is still marginally faster than performing the score modification in 2 steps.
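For reference, a sketch of how the two ideas could be combined - gather only the scores of the previously generated tokens, apply the `torch.where` penalty to that slice, then scatter it back (illustrative, not necessarily the final merged code):
```python
import torch

def apply_repetition_penalty(input_ids: torch.LongTensor, scores: torch.FloatTensor, penalty: float):
    # Only the tokens already present in each row of input_ids are penalized.
    score = torch.gather(scores, 1, input_ids)
    # Negative scores are multiplied, positive ones divided, to lower their probability.
    score = torch.where(score < 0, score * penalty, score / penalty)
    scores.scatter_(1, input_ids, score)
    return scores
```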
I'll update the code accordingly. Thanks for your help! |
transformers | 8,595 | closed | Fix model templates | # What does this PR do?
This PR fixes the model templates that were broken by the recent reorganization (in truth they never worked, it fixes that too :-p). | 11-17-2020 15:31:38 | 11-17-2020 15:31:38 | |
transformers | 8,594 | closed | PEGASUS do not have mask token | @mfuntowicz @patrickvonplaten
Hi,
I am using PEGASUS google/pegasus-large
I would like to fill the masked sentence of a document, i.e. try the pretraining task, but I can't find the mask_token.
Steps to reproduce the behavior:
1. from transformers import PegasusTokenizer
2. tok = PegasusTokenizer.from_pretrained("google/pegasus-large")
3. tok.mask_token
output is "Using mask_token, but it is not set yet."
| 11-17-2020 14:28:24 | 11-17-2020 14:28:24 | Hey @ShichaoSun - thanks for the issue!
I agree with you that Pegasus should have some mask tokens defined and I'd set `tokenizer.mask_token` to Pegasus' MLM mask token => `[MASK_2]` and add an additional `mask_token_sent` ...
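Until that is settled, a possible stop-gap on the user side could look like the sketch below; note that the exact token strings are placeholders taken from the paper's notation, not confirmed entries of the released SentencePiece vocab:
```python
from transformers import PegasusTokenizer

tok = PegasusTokenizer.from_pretrained("google/pegasus-large")
# Register the MLM mask token (and the sentence mask as an extra special token).
tok.add_special_tokens({
    "mask_token": "[MASK_2]",                   # token-level (MLM) mask, placeholder string
    "additional_special_tokens": ["[MASK_1]"],  # sentence-level (GSG) mask, placeholder string
})
print(tok.mask_token, tok.mask_token_id)
# If these strings are not already in the vocab, new ids are appended and the model
# would need model.resize_token_embeddings(len(tok)) before use.
```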
I'm still waiting for some more insight on Pegasus of the Pegasus expert @sshleifer -> https://github.com/huggingface/transformers/issues/8689 .
I'll hope to get an answer there to be sure that adding the `[MASK_1]` and `[MASK_2]` tokens is the correct thing to do here!<|||||>Hi @patrickvonplaten ,
Really thanks for your reply and great job !<|||||>Is it possible to MASK several tokens using Pegasus? |
transformers | 8,593 | closed | Fix missing space in unavailable PyTorch/TensorFlow warning | # What does this PR do?
Fixes missing space in unavailable PyTorch/TensorFlow warning
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-17-2020 14:27:48 | 11-17-2020 14:27:48 | |
transformers | 8,592 | closed | Improving performance results for BERT | I'm using the bert-base-german-cased model to perform token classification with custom NER labels on a dataset of German court documents. I have 11 labels in total (including the O label), which are however not tagged in BIO form. I'm letting the model train and evaluate on an NVidia GeForce GTX Titan X.
But despite the good ressources and the model, which was actually pretrained on German judicial documents, the results are rather lacking.
```
precision recall f1-score support
Date 0.87 0.99 0.93 407
Schadensbetrag 0.77 0.58 0.66 112
Delikt 0.59 0.50 0.54 44
Gestaendnis_ja 0.60 0.71 0.65 21
Vorstrafe_nein 0.00 0.00 0.00 6
Strafe_Gesamtfreiheitsstrafe_Dauer 0.76 0.91 0.83 35
Strafe_Gesamtsatz_Betrag 0.42 0.52 0.46 25
Strafe_Gesamtsatz_Dauer 0.52 0.82 0.64 28
Strafe_Tatbestand 0.30 0.29 0.30 283
micro avg 0.65 0.68 0.66 961
macro avg 0.54 0.59 0.56 961
weighted avg 0.64 0.68 0.66 961
```
What could be some steps to improve these results?
Perhaps it's the low data count for some of the labels, or that the labels often are not single tokens but text spans of multiple tokens?
I would be glad for every hint of some more experienced users. I can also share data or other files, if they are relevant.
This is my config file:
```
{
"data_dir": "./Data",
"labels": "./Data/labels.txt",
"model_name_or_path": "bert-base-german-cased",
"output_dir": "./Data/Models",
"task_type": "NER",
"max_seq_length": 180,
"num_train_epochs": 6,
"per_device_train_batch_size": 48,
"seed": 7,
"fp16": true,
"do_train": true,
"do_predict": true,
"do_eval": true
}
``` | 11-17-2020 13:35:43 | 11-17-2020 13:35:43 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 8,591 | closed | Fix init for MT5 | # What does this PR do?
Fix the init, the config should be imported outside of tests. | 11-17-2020 13:02:10 | 11-17-2020 13:02:10 | |
transformers | 8,590 | closed | Cannot train model from scratch using `run_mlm.py`. | Looks like the trainer does not like it when it gets a `None`, so when we train from scratch, there is a `None` in this `if` and it crashes:
https://github.com/huggingface/transformers/blob/a6cf9ca00b74a8b2244421a6101b83d8cf43cd6b/examples/language-modeling/run_mlm.py#L357
I solved it by deleting that line, but I guess it could affect other use cases.
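For reference, if that line is the usual `trainer.train(model_path=...)` call (an assumption about this version of the script), the crash comes from passing `None` to `os.path.isdir`, and a guard along these lines keeps the from-scratch path working (sketch only):
```python
import os

# model_args and trainer are the objects already defined in run_mlm.py.
model_path = (
    model_args.model_name_or_path
    if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
    else None
)
trainer.train(model_path=model_path)
```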
To reproduce, call `run_mlm` this way (I guess it is easier to reproduce, but this might be enough):
```
python run_mlm.py \
--model_type bert \
--train_file ./data/oscar_1000.txt \
--validation_file ./data/oscar_1000_valid.txt \
--output_dir testing_model \
--tokenizer_name bert-base-spanish-wwm-cased \
--overwrite_output_dir \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--max_steps 500 \
--save_steps 2000 \
--save_total_limit 15 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--logging_steps 1000 \
```
The dataset I'm using I guess that isn't relevant so any corpus will do.
@sgugger | 11-17-2020 12:52:48 | 11-17-2020 12:52:48 | Mmm, that is weird as `None` is the default for that argument. Will investigate this when I'm finished with v4 stuff, thanks for flagging! |
transformers | 8,589 | closed | [MT5] More docs | # What does this PR do?
^^I knew that I forgot something with the docs...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-17-2020 11:39:40 | 11-17-2020 11:39:40 | cc @sgugger for notification. |
transformers | 8,588 | closed | Hosting and online deployment of a transformer chatbot (built with huggingface library) | I'm building a chatbot using BERT for a startup company. At some point it will be deployed online. It turns out, most chatbot hosting services actually want to sell you a chatbot rather than host the one you developed, which is obviously not an option for us, especially since the solution is open source (PyTorch + Hugging Face).
I would like to know of a hosting service that 1) accepts a custom-built chatbot solution, 2) can accommodate up to 500 online users at any given time, and 3) does not charge exorbitant prices. We can't buy a solution, or even augment an existing one, because it is a specific requirement of the project funding (research rather than development).
| 11-17-2020 11:32:14 | 11-17-2020 11:32:14 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 8,587 | closed | The Albert tokenizer file cannot download automatically and the official Albert tokenizer file is wrong, I cannot use it. | Has anybody met the same problem? | 11-17-2020 11:24:47 | 11-17-2020 11:24:47 | Do you mind following and filling in the issue template?
Thanks<|||||>I cannot load the offline vocab either:
RuntimeError: Internal: C:\projects\sentencepiece\src\sentencepiece_processor.cc(824) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
but it works for me when downloading the vocab automatically.
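That `ParseFromArray` error usually means the local file is not a valid SentencePiece model (e.g. it is corrupted, truncated, or actually a `vocab.txt`). A quick check, assuming the `sentencepiece` package is installed and the path below is replaced with your local ALBERT vocab file:
```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("path/to/spiece.model")   # raises the same error if the file is broken
print(sp.GetPieceSize())          # prints the vocab size when the file is fine
```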
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,586 | closed | Tokenizers: ability to load from model subfolder | Should fix #8447 | 11-17-2020 10:59:36 | 11-17-2020 10:59:36 | |
transformers | 8,585 | closed | Fix rag finetuning + add finetuning test | Following #7715 we need more test coverage of the RAG example scripts.
In this PR I'm adding a test for the finetuning script.
The test includes a single gpu test and a multi gpu test. Both are passing.
As mentioned in #7816 and #8345 there were some errors in the script that I had to fix.
Moreover since @amogkam has been working on the finetuning script as well to integrate Ray, I made sure to reduce the possible conflicts with his PR #8583 . More precisely I'm reusing the CustomAccel class that will allow to init either the pytorch distributed retrieval or the ray distributed retrieval.
Also fixed a bug in RAG forward pass (see #8665 )
Fix #7816
Fix #8345 | 11-17-2020 10:42:22 | 11-17-2020 10:42:22 | @lhoestq
Hi, I tried to execute finetune.py on two GPUs. It mainly fails with the following error. But when I run with the single GPU it works. I have also attached a screen shot.
**RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [130.216.209.142]:55728**


<|||||>What command did you run exactly ?<|||||>> What command did you run exactly ?
`python examples/rag/finetune.py --data_dir ./examples/rag/test_data/dummy_seq2seq --output_dir ./examples/rag/outputs --model_name_or_path facebook/rag-token-base --model_type rag_sequence --do_train --do_predict --n_val -1 --val_check_interval 0.25 --train_batch_size 1 --eval_batch_size 1 --max_source_length 128 --max_target_length 25 --val_max_target_length 25 --test_max_target_length 25 --label_smoothing 0.1 --dropout 0.1 --attention_dropout 0.1 --weight_decay 0.001 --adam_epsilon 1e-08 --max_grad_norm 0.1 --lr_scheduler polynomial --learning_rate 3e-05 --num_train_epochs 100 --warmup_steps 500 --gradient_accumulation_steps 1 --index_name custom --passages_path ./examples/rag/data/my_knowledge_dataset --index_path ./examples/rag/data/my_knowledge_dataset_hnsw_index.faiss --gpus 2
`<|||||>Does changing the port with `--distributed_port 8888` help in your case ?<|||||>It says,
`finetune.py: error: unrecognized arguments: --distributed-port 8888`
<|||||>I tried with `--distributed-port 8888` still gives the same error.
btw my torch version is **Version: 1.7.0+cu110**
<|||||>What's your pytorch lightning version ?
(also sorry I misspelled distributed-port)<|||||>> pytorch lightning
**Version: 1.0.4**
<|||||>@lhoestq
Hi, just wanted to know... did you manage to run the finetune.sh script
without any errors?
<|||||>Yes I have no issue on my side. The finetuning test works fine too.
Could you try to update pytorch lightning and see if it fixes your issue ?
Let me know if you manage to fix it<|||||>Did you try with custom index ?
<|||||>Let me test right now<|||||>Can you also send me your pytorch and transformers versions.
<|||||>Awesome I managed to reproduce your issue using the custom index :)
I will investigate
And I'm using pytorch 1.7.0 cu11 and transformers with the latest changes from master and this PR<|||||>Perfect. I feel it is some tensor issue that happens in the validation
sanity check.
<|||||>Indeed it was an issue with the precision of the tensor. I'm fixing it<|||||>Amazing. If you can, once you've fixed this can you please add finetuning
commands for a custom dataset in the readme? The current one does not include
all the commands. I really think this RAG framework will be a game changer if
we can apply it to other tasks :)
<|||||>Ok I fixed the tensor issue and updated the readme
I also had to rename some of the example files of RAG to avoid collisions with the files of the seq2seq examples. The name collision broke the CI tests with failed imports.
I did:
```
examples/rag/utils.py -> examples/rag/utils_rag.py
examples/rag/callbacks.py -> examples/rag/callbacks_rag.py
examples/rag/finetune.py -> examples/rag/finetune_rag.py
examples/rag/finetune.sh -> examples/rag/finetune_rag.sh
```
All tests are green now :)<|||||>Thanks a lot for your quick response.
<|||||>I took your comment into account @patrickvonplaten
The only thing I didn't change is the return_dict=True - I kept it to avoid playing with tuple indices.<|||||>@lhoestq hello, thank you for this amazing feature.
When I try to create my custom dataset I am receiving this error:
```
2020-12-16 00:48:44.645715: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
INFO:__main__:Step 1 - Create the dataset
Using custom data configuration default
Reusing dataset csv (/root/.cache/huggingface/datasets/csv/default-d44cf86c96b535d8/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-d44cf86c96b535d8/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-ad363af188e673b0.arrow
100% 1/1 [00:00<00:00, 10.92ba/s]
INFO:__main__:Step 2 - Index the dataset
Traceback (most recent call last):
File "examples/rag/use_own_knowledge_dataset.py", line 200, in <module>
main(rag_example_args, processing_args, index_hnsw_args)
File "examples/rag/use_own_knowledge_dataset.py", line 102, in main
index = faiss.IndexHNSWFlat(index_hnsw_args.d, index_hnsw_args.m, faiss.METRIC_INNER_PRODUCT)
File "/usr/local/lib/python3.6/dist-packages/faiss/swigfaiss.py", line 3746, in __init__
this = _swigfaiss.new_IndexHNSWFlat(*args)
NotImplementedError: Wrong number or type of arguments for overloaded function 'new_IndexHNSWFlat'.
Possible C/C++ prototypes are:
faiss::IndexHNSWFlat::IndexHNSWFlat()
faiss::IndexHNSWFlat::IndexHNSWFlat(int,int)
```
I'm using Google Colab to test this - https://colab.research.google.com/drive/1Cjj18rYmeS0Bueis_KPB5Wbybl-JNDLL?usp=sharing
<|||||>Well, I didn't install the specific dependencies you defined, my apologies.
Solved by running `!pip install -r /transformers/examples/rag/requirements.txt`.
At least it is noted here in case someone has the same problem, haha. |
transformers | 8,584 | closed | Add output control for TFGPT2LMHeadModel | # What does this PR do?
Hi, guys. This pull request adds a parameter to `GPT2Config` and `TFGPT2LMHeadModel` to control whether or not to return multi-layer logits when training and fine-tuning.
Related issue #8503
Before this change, we need to assign the 'special' loss and metric:
```python
import tensorflow as tf

from transformers import TFGPT2LMHeadModel


class MyMetrice(tf.keras.metrics.SparseCategoricalAccuracy):
    def update_state(self, y_true, y_pred, sample_weight=None):
        # The model returns the outputs of every layer by default, so skip any
        # prediction tensor that is not the (batch, sequence, vocab) logits.
        if len(y_pred.shape) > 3:
            return 0
        return super(MyMetrice, self).update_state(y_true, y_pred, sample_weight)


model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)  # any optimizer works, Adam is only an example
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = MyMetrice("accuracy")
model.compile(
    optimizer=optimizer,
    # one loss per model output; it is hard to guess this pattern if there is no example
    loss=[loss, *[None] * model.config.n_layer],
    metrics=[metric],
)
```
After this change, we can train or fine-tune the model easily.
```python
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", return_dict=False)
model.compile(
optimizer=optimizer,
loss=model.compute_loss,
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
```
GPT2: @LysandreJik, @patrickvonplaten
tensorflow: @jplu
Hope this helps. | 11-17-2020 10:35:05 | 11-17-2020 10:35:05 | Hello!
Apparently you are rewriting the `output_attentions` and `output_hidden_states` parameters.<|||||>Thanks for your advice. Yes, so is it a bad idea to only return `last_hidden_state` like this?<|||||>I mean, why are you not using the `output_attentions` or `output_hidden_states` parameters, which basically do what you are proposing?<|||||>I'm sorry for the confusion; maybe my grasp of GPT-2 is not deep enough. Could you please tell me where `output_attentions` and `output_hidden_states` (except the last layer) are actually used during training and text generation? I can't find the answer in the source code directly.
I mean, if `output_attentions` or `output_hidden_states` is basically not used, wouldn't it be better to hide it by default, and call something like `return_multilayer=True` to return those outputs when we need them?
Thanks in advance. :)<|||||>No worries! Everything is nicely explained in the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel):
```
output_attentions (bool, optional) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
```<|||||>Thanks @jplu, this PR was indeed just rewriting the `output_attentions` and `output_hidden_states` parameters of `GPT2Config`, so I'll close it.
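For anyone landing here later, a minimal sketch (not part of this PR) of how the two documented flags behave at call time; the checkpoint name is just an example:
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")

inputs = tokenizer("hello world", return_tensors="tf")
outputs = model(inputs, output_hidden_states=True, return_dict=True)

print(outputs.logits.shape)        # logits of the last layer only
print(len(outputs.hidden_states))  # one tensor per layer, plus the embedding output
```
By default both flags are `False`, so only the final-layer logits come back.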
<|||||>You're welcome, happy to help :) |
transformers | 8,583 | closed | [RAG] Add Ray implementation for distributed retrieval | # What does this PR do?
This PR adds a new distributed retriever implementation for RAG built on Ray, as an alternative to the current retriever implementation that uses torch.distributed. With Ray it's possible to load the index on multiple processes instead of just the rank 0 training worker, allowing fine tuning to scale out better to multiple GPUs, and also allowing the index to potentially be fit in GPU memory. This also removes a core dependency on Pytorch, allowing a Tensorflow implementation of `finetune.py`.
This PR also makes changes to support finetune.py with Pytorch Lightning >v1.0.
A benchmark of PyTorch distributed retrieval vs. Ray distributed retrieval

## Implementation Details
In the current PyTorch retrieval implementation, the index is loaded once on just the rank 0 training worker. Training worker 0 gathers the inputs from all other workers, performs the index lookup, and scatters the results back to the other workers.

With the Ray implementation, the index is loaded on *separate* processes, which are referred to as Ray actors. Each training worker randomly selects a retrieval actor to query for documents and Ray handles all the communication between the processes. Because the index can be loaded in *multiple* processes, training can scale up since no synchronization needs to happen for the index lookup.

Note that PyTorch Lightning is still handling distributed *training*, but Ray manages distributed *retrieval*. Because PTL calls the entire training script under the hood multiple times, we have to use Ray's named actors feature (https://docs.ray.io/en/master/actors.html?highlight=named%20actors#named-actors), allowing the retrieval actors to be referenced by all training processes. The use of named actors is necessitated by how PTL handles distributed training, and a simpler approach could probably be used for a TensorFlow implementation.
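To make the named-actor idea above concrete, here is a rough, self-contained sketch of the pattern (the class and actor names are made up for illustration and are not the code in this PR):
```python
import ray

ray.init()


@ray.remote
class RetrievalWorker:
    """Stands in for a process that has loaded the index."""

    def retrieve(self, question_encodings):
        # the real worker would query the loaded index here
        return question_encodings


# create the actor once, under a well-known name
worker_handle = RetrievalWorker.options(name="retrieval_worker_0").remote()

# any other process attached to the same Ray cluster can now find it by name
worker = ray.get_actor("retrieval_worker_0")
result = ray.get(worker.retrieve.remote([0.1, 0.2, 0.3]))
```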
## Testing Strategy
Unit tests were added to `test_distributed_retriever.py`. Note that the local Ray cluster for the tests had to be started with `local_mode=True` because the test file modifies `sys.path` and these changes are not propagated to remote processes. See https://stackoverflow.com/questions/54338013/parallel-import-a-python-file-from-sibling-folder for more info.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 11-17-2020 08:01:51 | 11-17-2020 08:01:51 | Hi ! This looks awesome :)
I was about to create a PR that fixes the init_ddp_connection in finetune.py and that adds a test script to make sure the finetuning script works as expected. With minimal changes on my side I can easily reduce conflicts between our two changes to finetune.py (I guess I'll just reuse the CustomAccelerator). Does that sound good to you ?<|||||>@lhoestq yes that sounds great!<|||||>@amogkam
Hi seems like finetune.sh is not working in multi gpu training.<|||||>@shamanez Hmm that's odd, I was able to get this working on a single node with 4 GPUs. Do you have a stack trace?<|||||>> @shamanez Hmm that's odd, I was able to get this working on a single node with 4 GPUs. Do you have a stack trace?
I tried to run without Ray, but with PyTorch DDP. Here is the error I got.


This is the command-line argument I used,
`python examples/rag/finetune.py --data_dir ./examples/rag/test_data/dummy_seq2seq --output_dir ./examples/rag/outputs --model_name_or_path facebook/rag-token-base --model_type rag_sequence --do_train --do_predict --n_val -1 --val_check_interval 0.25 --train_batch_size 1 --eval_batch_size 1 --max_source_length 128 --max_target_length 25 --val_max_target_length 25 --test_max_target_length 25 --label_smoothing 0.1 --dropout 0.1 --attention_dropout 0.1 --weight_decay 0.001 --adam_epsilon 1e-08 --max_grad_norm 0.1 --lr_scheduler polynomial --learning_rate 3e-05 --num_train_epochs 100 --warmup_steps 500 --gradient_accumulation_steps 1 --index_name custom --passages_path ./examples/rag/data/my_knowledge_dataset --index_path ./examples/rag/data/my_knowledge_dataset_hnsw_index.faiss --gpus 2
`
<|||||>I think this has to do with using a custom index which I didn't try out. Can you try with just the wiki_dpr index to confirm? It seems like the training workers are expecting a tensor of type float, but a tensor of type double is being sent instead. I think the fix might just be to set an explicit target_type in line 137 of distributed_pytorch_retriever.py- @lhoestq does this seem right?<|||||>Ok I will also give it a try
<|||||>@lhoestq now that https://github.com/huggingface/transformers/pull/8585 is merged, should I mark this PR as ready for review?<|||||>Yes indeed ! Feel free to set this PR to ready for review
Also it looks like the CI fails because of a failed import of `ray`.
To fix that you need to move the import of ray into the test functions decorated with `require_distributed_retrieval `.
You should also add `ray` to the test dependencies, or the test will simply be ignored<|||||>@lhoestq CI is passing now!<|||||>@lhoestq any ETA on when this PR can get reviewed? Thanks<|||||>Hi ! I've already started to look at the changes and it looks pretty good so far :) I'll finish my review soon, probably tomorrow<|||||>Awesome thanks!<|||||>@sgugger it would be cool if you could review as this changes some things in the trainer/integrations.<|||||>Hi @lhoestq @sgugger I addressed the feedback you guys gave. Do you think you can take another look? Thanks<|||||>Hi there, sorry for the delay. Could you close and reopen your PR? Because of a bad force-push on our side, the diff has become unreadable. Also, the examples folder has slightly changed structure, so you might need to move the folder.
Ping me, @patrickvonplaten and @LysandreJik on the PR you reopen and we'll look at it quickly.<|||||>Opened a new one here: https://github.com/huggingface/transformers/pull/9197! |
transformers | 8,582 | closed | [examples tests] tests that are fine on multi-gpu | This PR removes `require_torch_non_multi_gpu_but_fix_me` for those tests I know should work. Well, they used to work before some recent PRs https://github.com/huggingface/transformers/pull/8073#issuecomment-728677627 - my suspicion is that the problem wasn't detected before it was merged because these tests were skipped as the dev was probably on a multi-gpu machine. So we need to sort this issue out sooner than later.
Currently a bunch of `examples` tests get skipped for devs with multi-gpus - it's probably a good idea for each dev with more than 1 gpu to take a sub-folder under `examples` and test which tests can be run on multi-gpu and which can't.
1. ensure you're on a multi-gpu machine - don't take this assignment if you don't have 2+ gpus
2. pick a test file and remove `@require_torch_non_multi_gpu_but_fix_me` from its tests if any
3. don't forget RUN_SLOW=1 - probably just always add it so tests aren't missed.
4. run tests
5. if any tests fail either fix those or restore `@require_torch_non_multi_gpu_but_fix_me` for the failing tests. if the failure is not multi-gpu related please file an issue.
6. go to 2 until no more test files are left.
7. send a PR with changes for this sub-folder you chose.
To remind, the initial skip-all-examples-tests-on-multi-gpu tests was added so that we could start multi-gpu tests on github runner CI.
If meanwhile you have a problem with important examples tests skipping, please force a single gpu mode with:
```
CUDA_VISIBLE_DEVICES=0 pytest
```
I trust @LysandreJik or someone else can coordinate this effort?
| 11-17-2020 04:47:09 | 11-17-2020 04:47:09 | CI failure is unrelated.<|||||>Hi @stas00! We're currently strongly focusing on the v4.0.0 release. Your proposal here is definitely interesting, and we can take a look at doing this when we do a large test dive, since we have a few things to fix:
- The multi-gpu tests you mention
- The tests for torch v1.3.0+
- The current slow tests are not all passing
- The model templates testing framework desperately needs an improvement.
I'll come back to you next week regarding this once everything has settled down. Thanks for your patience! <|||||>It doesn't sound like my comment drove the point across - the problem is that now most examples tests are skipped if the developer has 2+ cards, resulting in commits that break master https://github.com/huggingface/transformers/pull/8073 - this is a problem for the pending release obviously.
I originally suggested to explicitly enable tests that have been ported to be run on multi-gpu CI, but since it was decided to run them all and instead to disable them on masse and then re-enable them in groups as they get ported, but nothing has been done about it, we now have the situation where a huge part of the test suite is practically disabled.
Please let me know whether you still think this is secondary...<|||||>If I understand correctly, you're saying that now examples tests are skipped if the environment has 2+ cards. So the CI still runs the example tests on single-gpu, correct?
And I believe we never had a multi-gpu ci that worked for examples, so we're essentially at the same point we've always been: examples are not tested in a multi-gpu setting. Is that correct? If it is, how is that a problem for the pending release, as examples are independent of releases? <|||||>You are correct with regards to CIs, yes. But who is now impacted is the developers. If @thomwolf run the tests before committing https://github.com/huggingface/transformers/pull/8073 he would have noticed the failure. My fantasy is that he has 2+ cards, did run the tests, which they got skipped and hence he happily committed the change unware that it had issues. Now seq2seq is broken.
I was surprised that this happened, scrambled to run the tests and promptly discovered that they were skipped since I have 2 cards. Once I forced 1-gpu with CUDA_VISIBLE_DEVICES the tests failed. That's why I urgently made this PR and encouraged we bring that incomplete effort to completion.
It's possible that my fantasy was incorrect, and this is just the situation were we rely on CIs to catch the errors. But since CIs don't run gpu, merges look good.
(and nothing personal against @thomwolf, on the contrary I feel that I'm to blame that I disabled the tests on multi-gpu and didn't see it through for them to be re-enabled) |
transformers | 8,581 | closed | Add early stopping callback to pytorch trainer | # Summary
Address PyTorch half of https://github.com/huggingface/transformers/issues/4894 by adding early stopping patience and a minimum threshold metrics must improve to prevent early stopping. I piggybacked heavily off of https://github.com/huggingface/transformers/pull/7431/ since the two functions are very similar.
Since https://github.com/huggingface/transformers/pull/4186 seems to be abandoned and behind master, I figured I'd take a crack at this.
## Who can review?
Anyone! But @julien-c and @sgugger seem the most appropriate. | 11-17-2020 04:43:53 | 11-17-2020 04:43:53 | Hi there. Thanks for your PR! When I was designing the callbacks, the idea was for them to be small independent pieces of code. I would prefer if early stopping had its own callback that the user would then choose to add or not. Do you think you could amend your PR in that direction?<|||||>Hello, thank you for your feedback! I will amend the PR in that direction.
Could you clarify which pieces of early stopping should be in `TrainerState` and which should be in the callback? I'm grappling with the similarities between `best_model_checkpoint` and early stopping attributes.
```python
class EarlyStoppingCallback(TrainerCallback):
best_metric: Optional[float] = None # maybe not this
best_model_checkpoint: Optional[str] = None # maybe not this either
early_stopping_patience: int = None
early_stopping_patience_counter: int = None
def on_evaluate(self, args, state, control, **kwargs):
# Keep track of patience
# End training via early stopping
if (
self.early_stopping_patience is not None
            and self.early_stopping_patience_counter >= self.early_stopping_patience
):
control.should_training_stop = True
```<|||||>Or do you mean I just move the if statement I added to its own callback and keep `TrainerState` as is?<|||||>The `TrainerState` shouldn't change, so the callback you are writing above sounds fine, without the arguments marked with `# maybe not this`, which should already be in the `TrainerState`, I think.
Does that sound right to you?<|||||>That makes sense. I think [this](https://github.com/huggingface/transformers/blob/e812753736f475b62849ef0e72149306408c1395/src/transformers/trainer.py#L910) block of code (to line 933) could be a callback because it's all about the best metric. Then users could customize the best model calculations. Is that desirable?
If you think that's out of scope I'll keep the early stopping callback simple and separate from the best metric calculation.<|||||>I had put it in `Trainer` because I thought multiple callbacks could need it and it's used by `load_best_model_at_end` which is kind of a core feature.<|||||>Sounds good, you know best! I keep `load_best_model_at_end` in the `Trainer` and push up an early stopping callback sometime this week.<|||||>Thanks for your thorough and affable review! |
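For reference, a trimmed sketch of the callback shape discussed above, dropping the two attributes that already live in `TrainerState` (this is only an illustration, not the implementation that was eventually merged):
```python
from dataclasses import dataclass

from transformers import TrainerCallback


@dataclass
class EarlyStoppingCallback(TrainerCallback):
    early_stopping_patience: int = 1
    early_stopping_patience_counter: int = 0

    def on_evaluate(self, args, state, control, **kwargs):
        # the counter would be incremented elsewhere whenever state.best_metric fails to improve
        if self.early_stopping_patience_counter >= self.early_stopping_patience:
            control.should_training_stop = True
```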
transformers | 8,580 | closed | Reorganize repo | # What does this PR do?
This PR reorganizes the structure of the repository by putting all model-related files (modeling, configuration, tokenization, conversion) in subfolders.
**Breaking change**
This breaks any import for model/config/tokenizer objects that is not done at the top level:
```
from transformers import BertModel
```
works but not
```
from transformers.modeling_bert import BertModel
```
It needs to be updated to
```
from transformers.models.bert.modeling_bert import BertModel
```
Internally, after this PR is merged the following will need fixing:
- the check_repo script does not properly find the models now (so it does not check whether they are properly tested/documented/in auto classes)
- the new model template needs to be updated
The other internal scripts work as usual. | 11-17-2020 02:33:56 | 11-17-2020 02:33:56 | Got approval offline from @LysandreJik so merging. If anything needs fixing, we'll do so tomorrow morning!<|||||>The GPU tests throw this error:
```
2020-11-17 11:24:06.538757: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/__init__.py", line 34, in <module>
from .data import (
File "/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/__init__.py", line 6, in <module>
from .processors import (
File "/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/processors/__init__.py", line 6, in <module>
from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features
File "/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 10, in <module>
from ...models.bert.tokenization_bert import whitespace_tokenize
ModuleNotFoundError: No module named 'transformers.models'
```
https://github.com/huggingface/transformers/runs/1411825179
Not 100% sure what's going on there<|||||>A normal import throws this error as well:
```
Successfully installed sacremoses-0.0.43 tokenizers-0.9.4 transformers-4.0.0.dev0
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-11bca228d6cd> in <module>()
1 get_ipython().system(' pip install ./transformers/')
----> 2 import transformers
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()
32
33 # Data
---> 34 from .data import (
35 DataProcessor,
36 InputExample,
/usr/local/lib/python3.6/dist-packages/transformers/data/__init__.py in <module>()
4
5 from .metrics import glue_compute_metrics, xnli_compute_metrics
----> 6 from .processors import (
7 DataProcessor,
8 InputExample,
/usr/local/lib/python3.6/dist-packages/transformers/data/processors/__init__.py in <module>()
4
5 from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels
----> 6 from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features
7 from .utils import DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor
8 from .xnli import xnli_output_modes, xnli_processors, xnli_tasks_num_labels
/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py in <module>()
8
9 from ...file_utils import is_tf_available, is_torch_available
---> 10 from ...models.bert.tokenization_bert import whitespace_tokenize
11 from ...tokenization_utils_base import BatchEncoding, PreTrainedTokenizerBase, TruncationStrategy
12 from ...utils import logging
ModuleNotFoundError: No module named 'transformers.models'
```
<|||||>I think maybe the `models` folder needs a `__init__.py`<|||||>I'm always confused by why Python lets it works in some cases and sometimes not. Will push an init directly on master. |
transformers | 8,579 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adding model card for `indolem/indobert-base-uncased`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| 11-17-2020 00:43:22 | 11-17-2020 00:43:22 | Thanks for sharing, looks really cool! |
transformers | 8,578 | closed | Error: Asking to return token_type_ids while setting add_special_tokens to False | In the code below, while using `batch_encode_plus`, I get an error saying that I asked ***“to return token_type_ids while setting add_special_tokens to False”***, when `return_token_type_ids` is `False`. I'm not sure if I am comprehending the error message correctly. For this specific case, I also found that the behaviour between `BertTokenizer` and `BertTokenizerFast` is different.
```python
from transformers import AutoTokenizer
t = AutoTokenizer.from_pretrained("bert-base-uncased", add_special_tokens=False)
txt = ["huggingface", "transformers"]
t.batch_encode_plus(
txt,
add_special_tokens=False,
return_attention_mask=False,
return_token_type_ids=False,
)
```
Error message
```
Traceback (most recent call last):
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-30-226d8ccb1c88>", line 5, in <module>
return_token_type_ids=False,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2399, in batch_encode_plus
**kwargs,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 567, in _batch_encode_plus
verbose=verbose,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 617, in _batch_prepare_for_model
verbose=verbose,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2681, in prepare_for_model
"Asking to return token_type_ids while setting add_special_tokens to False "
ValueError: Asking to return token_type_ids while setting add_special_tokens to False results in an undefined behavior. Please set add_special_tokens to True or set return_token_type_ids to None.
```
However, it works when `BertTokenizerFast` is used:
```python
tfast = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True, add_special_tokens=False)
tfast.batch_encode_plus(
txt,
add_special_tokens=False,
return_attention_mask=False,
return_token_type_ids=False,
)
# {'input_ids': [[17662, 12172], [19081]]}
``` | 11-17-2020 00:29:14 | 11-17-2020 00:29:14 | Hi, indeed this seems to be an error. Fixing this in #8854 if it is so. |
transformers | 8,577 | closed | [examples/seq2seq] fix PL deprecation warning | This PR fixes a PL deprecation warning, since we require PL-1.0.4 this is a safe switch.
Reference: https://pytorch-lightning.readthedocs.io/en/latest/generated/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint.params.filepath
@patrickvonplaten | 11-16-2020 20:23:03 | 11-16-2020 20:23:03 | |
transformers | 8,576 | closed | run_pl_glue.py token_type_id error on fresh install | If you try to run the run_glue.py example with e.g. roberta from a fresh install of the library, it errors out with the following error:
```
Traceback (most recent call last):
File "examples/text-classification/run_pl_glue.py", line 228, in <module>
main()
File "examples/text-classification/run_pl_glue.py", line 218, in main
trainer = generic_train(model, args)
File "/home/ejp416/complexity/examples/lightning_base.py", line 400, in generic_train
trainer.fit(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1072, in fit
model = self.accelerator_backend.setup(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 34, in setup
self.trainer.call_setup_hook(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1444, in call_setup_hook
model.setup(stage_name)
File "/home/ejp416/complexity/examples/lightning_base.py", line 175, in setup
self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True)
File "examples/text-classification/run_pl_glue.py", line 98, in get_dataloader
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
To reproduce, run e.g.
```
python examples/text-classification/run_pl_glue.py --model_name_or_path roberta-base --output_dir ./blah --task mnli --do_train --data_dir ./glue_data/MNLI --max_seq_length 512 --max_grad_norm inf --adam_epsilon 1e-6 --weight_decay 0.1 --num_train_epochs 2 --train_batch_size 2 --eval_batch_size 4 --learning_rate 1e-5 --seed 12 --gradient_accumulation_steps 8 --gpus 1
```
The reason is that roberta does not have segment ids so token_type_ids is set to null in the data loader, causing torch.tensor to freak out. There's probably a more elegant long-term solution for this, but it's easy to fix by just setting it to 0 instead of null for those models. This issue has come up before in other scripts:
- https://github.com/huggingface/transformers/pull/3801
- https://github.com/huggingface/transformers/issues/3810 | 11-16-2020 19:07:59 | 11-16-2020 19:07:59 | Ah, indeed. Out of curiosity, have you tried using `run_glue.py` instead of `run_pl_glue.py`? Does the error still happen?<|||||>Closed by mistake.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
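A minimal sketch of the "set it to 0 instead of null" fix described in the issue above (the helper name and the `features` objects are illustrative, not the script's actual code):
```python
import torch


def token_type_ids_tensor(features):
    """Fall back to all-zero segment ids for models like RoBERTa that do not provide them."""
    return torch.tensor(
        [f.token_type_ids if f.token_type_ids is not None else [0] * len(f.input_ids) for f in features],
        dtype=torch.long,
    )
```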
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,575 | closed | REALM checkpoints to pytorch checkpoints | Will it be possible to convert checkpoints in https://console.cloud.google.com/storage/browser/realm-data/cc_news_pretrained to pytorch implementations ?
I am not very familiar with Tensorflow....
I tried using convert_bert_original_tf_checkpoint_to_pytorch.py but I am not sure I am invoking it correctly.
Here is how I am invoking the python script...
--tf_checkpoint_path="./cc_news_pretrained/embedder/encoded/" \
--bert_config_file="./bert_config.json" \
--pytorch_dump_path="./pytorch"
I am using tensorflow 2.3.0.
The checkpoint file has the following entries which are probably internal developer files(?):
model_checkpoint_path: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
all_model_checkpoint_paths: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
1)When I set tf_checkpoint_path to the directory containing the checkpoint, I get the error :
tensorflow.python.framework.errors_impl.NotFoundError: /cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded; No such file or directory
2)When I set tf_checkpoint_path to the checkpoint file encode.ckpt.data-00000-of-00001, I get the error:
env/lib/python3.6/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./cc_news_pretrained/embedder/encoded/encode.ckpt.data-00000-of-00001
Thanks! | 11-16-2020 18:58:02 | 11-16-2020 18:58:02 | |
transformers | 8,574 | closed | [Improvements] Enable `git push` without requiring login when uploading model | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Is there a way to allow `git push` a new model version without requiring login?
`transformers-cli` will auto generate a key for the user. Is there a way to leverage this key?
## Motivation
GitHub allows you to push in VSCode without explicitly login in each time.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
| 11-16-2020 18:57:54 | 11-16-2020 18:57:54 | Yes. Two ways:
- you can add your credentials when cloning, e.g. `git clone https://username:[email protected]/user/model_id` (you can also use your token instead of your password if you know it).
- but the cleaner way is to have a credential store set-up in git (https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage) which means git will only ask you once. GitHub's doc is good as well (and not specific to GitHub): https://docs.github.com/en/free-pro-team@latest/github/using-git/caching-your-github-credentials-in-git
Let me know if this helps<|||||>Thanks! I'll try this out and reopen this issue if any problem arises!<|||||>Is there any documentation for "How to set the credential helper for Hugging Face URLs without changing the helper used for all other repos?"<|||||>you can set the helper for a specific repo clone, but I'm not sure if you can set the helper system-wide for specific hosts |
transformers | 8,573 | closed | Bert that receives text triplet as an input | I would like to train bert on triplets of texts as inputs (for example, something like (context, question, answer)). encode_plus (https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode_plus) receives either a single text, or a text_pair. Is there a way to use it with triplets? | 11-16-2020 18:27:43 | 11-16-2020 18:27:43 | Unfortunately we do not have such a method, as it would imply having an opinionated approach on how to do it. The `encode_plus` method follows what was done during each model's training, since our aim is to replicate as closely as possible the original approach.
I would recommend encoding your sequences by placing your own special tokens, and by specifying `add_special_tokens=False` so that the encoding method does not add these tokens automatically. Let me know if you want a code sample showing this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
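For readers looking for that code sample, here is one possible sketch of the manual-special-tokens approach described above (the separator layout is a choice for illustration, not an official recipe):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

context = "The sky is blue."
question = "What color is the sky?"
answer = "blue"

# place the special tokens yourself, then disable automatic insertion
text = f"[CLS] {context} [SEP] {question} [SEP] {answer} [SEP]"
encoding = tokenizer(text, add_special_tokens=False, return_tensors="pt")
# note: token_type_ids will all be 0 here, since the special tokens come from the string itself
```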
<|||||>@LysandreJik If you can share a code sample that would be great!
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I wonder why `text_pair` gets a special treatment. I guess datasets which have two strings as "input" (like glue's mnli with premise and hypothesis) are more common, but aren't there datasets with three or more input strings? |
transformers | 8,572 | closed | Fix mixed precision issue for GPT2 | # What does this PR do?
This PR makes GPT2 to be trained and run in any mixed precision.
Fixes # (issue)
#8559
| 11-16-2020 18:17:21 | 11-16-2020 18:17:21 | |
transformers | 8,571 | closed | [WIP] Move BERT and ALBERT | # What does this PR do?
This is a PoC for the reorg of the model on just two files. I didn't update very reference everywhere, just enough to give a sense of what it will render. | 11-16-2020 16:58:57 | 11-16-2020 16:58:57 | |
transformers | 8,570 | closed | [T5] Add open / closed book answering models | # π New model addition
## Model description
Check here: https://github.com/google-research/google-research/tree/master/t5_closed_book_qa
<!-- Important information -->
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 11-16-2020 16:35:31 | 11-16-2020 16:35:31 | Done: https://huggingface.co/models?search=ssm |
transformers | 8,569 | closed | After 2nd iteration: always same result when training in a loop | I train a BERT model on a binary classification task. I do the training 4 times in a row. With same train and validation data and with the exact same hyperparameters. I use the default seed in TrainerArguments and set no other seeds.
The results of the 2nd, 3rd and 4th iteration are 100% the same. The result of the 1st run is unique. This behavior is 100% reproducible.
It is not clear why this is the case, since I set a seed and work with the same data.
PIP Libs I use (no conda libs):
- sentencepiece-0.1.91
- tokenizers-0.9.3
- transformers-3.5.1
- torch-1.7.0+cu101
Screenshot:

Colab to reproduce:
https://colab.research.google.com/drive/1HjQ7p5AlDY9kcWo7uSXzteh7x1ustEdN?usp=sharing
I already did some fancy stuff after each iteration:
```python
del trainer
del training_args
del model
del config
del train_result
del tokenizer
del labeled_dataset_test
del labeled_dataset_train
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
```
But that does not help. Can someone please help me here and clarify what's up?
PS: I do not think that a seed can be the reason. If a seed would be the reason the 1st and 2nd run would also be the same and AFAIK a GPU training is not 100% deterministic. So a small difference would still remain.
| 11-16-2020 15:51:11 | 11-16-2020 15:51:11 | Here the same code as a .py script with transformers 3.4.0 (instead of 3.5.1), CUDA 11.0 and torch 1.7.0

<|||||>Same problem also happening with torch 1.6.0+cu101 - see here: https://colab.research.google.com/drive/1-9jcAsFGf79kpiCSQa4QaBdXvZQstE_n?usp=sharing

<|||||>Same Bug also with torch==1.5.1+cu101 and transformers==3.3.1
see here: https://colab.research.google.com/drive/1HqMOQ_UzGI4z_OWOd0qFsfdVpKfLbHaM?usp=sharing<|||||>Ok I think I found the reason why this happens.
When `TrainingArguments` is called with default params a seed is not just used to init the network but also set everywhere else.
This is done by calling this: https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/trainer_utils.py#L29
After that point everything that should be randomized in the next iteration, like shuffling the training data, is not random anymore but dependent on the seed. Since this seed is set again and again to the same value, everything becomes deterministic. The reason why the 1st iteration has a different value is that the seed is set relatively late, after the data is loaded.
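For anyone hitting the same behaviour, a small sketch of how to vary the seed explicitly between runs instead of relying on the fixed default:
```python
from transformers import TrainingArguments, set_seed

for run_id, seed in enumerate([42, 43, 44, 45]):
    set_seed(seed)
    training_args = TrainingArguments(output_dir=f"./results_run{run_id}", seed=seed)
    # build the model and Trainer here as usual; each run now uses a different seed
```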
Closing this...
<|||||>Well - I was thinking about this:
There is a default param "hidden" in a class like `TrainingArguments`. This default value sets all seeds of all libs like python default, numpy, torch to a fixed value. This can be the source of some nasty problems because it removes randomness where nobody would have ever suspected it.
Best positive example are the `sklearn.model_selection` classes (functions). Most of them accept a seed but they just use the seed internaly and do not set it in every module you could think of.
I am not sure if I would call this a bug but at least it is an issue. That's why I reopen.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,568 | closed | Update version to v4.0.0-dev | # What does this PR do?
This PR puts the right version in the setup and `__init__` and adds one last step to the release guide in the setup.
Fixes #8566 | 11-16-2020 15:15:16 | 11-16-2020 15:15:16 | |
transformers | 8,567 | closed | [XLNet] Fix mems behavior | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7584
XLNet is arguably not the cleanest implementation, with a lot of config parameters flying around that all interact with each other in an overly complicated way: `use_cache`, `mem_len`, and `reuse_len`. Due to backward compatibility we cannot really get rid of those. I'd love to just completely get rid of `use_cache` and `reuse_len`, but this would require updating all configs, which is not possible...
At first, this PR removes the `use_cache` and replaces it with `use_mems_eval` => `use_mems_eval` decides whether the mems should be used in evaluation mode, which defaults to `True` so that the arguably most important model `XLNetLMHeadModel` keeps full backward compatibility at inference. `use_cache` is a confusing name IMO because it does not correspond to the `use_cache` we know from GPT2 (we had a longer discussion on this internally on Slack).
The issue #7584 shows that if there is one training batch that is smaller than the other batches in the train data, the training breaks. Also, as can be read in the linked issue, the authors don't use the memory mechanism for fine-tuning => therefore we add another param `use_mems_train` which defaults to `False` so that training works as a default.
If for some special reason the user wants to use the memory mechanism during fine-tuning, he/she has to make sure that the batch_size of all training batches is the same.
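To illustrate the new flags named above, a small sketch of how a user would set them (parameter names as introduced by this PR; the checkpoint is just an example):
```python
from transformers import XLNetConfig, XLNetLMHeadModel

config = XLNetConfig.from_pretrained("xlnet-base-cased", use_mems_eval=True, use_mems_train=False)
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", config=config)
```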
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| 11-16-2020 15:11:10 | 11-16-2020 15:11:10 | > The new arguments are perfectly acceptable, but I think it would be nice to keep backwards compatibility. It should be easy enough by checking if there's a `use_cache` value in the configuration and setting it to the value of `use_mems`.
Yes, true! The problem is that `use_cache` is always in `configuration_utils.py` so we can't really add something that checks whether `use_cache` is in the config (It's always there) and sets `use_mems` to the same value (It'd always do it)...
Thinking more about it `use_cache` should maybe not even be in `configuration_utils.py` but just in all "LMHead" configs?! I could move `use_cache` to all individual config files. I think we could do this without breaking changes -> Wdyt? @LysandreJik @sgugger ?<|||||>I have mixed feelings about this, on one hand it makes sense to remove that argument which is very specific to a few models from the main `configuration_utils`, but on the other I fear the associated breaking changes.<|||||>I think your last proposal would be great @patrickvonplaten, moving `use_cache` to the appropriate configuration files. This way we could have this PR with no breaking changes, and a more robust `use_cache`!
If it is not possible to do the move without breaking change, then let's forget this and fix the mems behavior with the small breaking change for version 4. Would still like to prevent this, if possible.<|||||>## Update
@LysandreJik @sgugger - `use_cache` is only used in the following modeling files (TF & PT):
```
- modeling_t5.py
- modeling_bart.py (+ inherited)
- modeling_fsmt.py
- modeling_prophetnet.py
- modeling_gpt2.py
- modeling_openai.py
- modeling_xlnet.py
- modeling_ctrl.py
```
Therefore, we can move `use_cache` to the respective configuration files and delete it from the general `configuration_utils.py` file.
I cannot really think of a use case where this would lead to breaking changes. If *e.g.* a saved BERT config includes a `use_cache`, this parameter will still be set to the config: https://github.com/huggingface/transformers/blob/18c8cf000bed04ec03470270ec2fbd9d49cce5c4/src/transformers/configuration_utils.py#L235 and still remain unused as before => I don't think this is a problem. A lot of old configs have unused config parameters that are then just ignored... For all models that make use of `use_cache`, the `use_cache` param is now "caught" and saved directly in the model's config class.
=> this allows us to have 0 breaking changes for this PR and is also cleaner IMO. <|||||>## Update
To have a bit more clarity for this PR.
This PR mainly solved the issue that XLNet cannot be trained at the moment. It does so by depreciating `use_cache` and replacing it by `use_mems_eval` and `use_mems_train`.
The PR keeps 100% backward compatibility for the PyTorch model.
The TF XLNet model was previously not kept up-to-date with the PT model. This PR notably forgot to update the TF model: https://github.com/huggingface/transformers/pull/5770 when the behavior of the PT model was changed.
This PR brings the TF model up to date with the PT model, but does not keep 100% backward compatibility by completely removing the `use_cache` parameter from the models forward functions. I decided to not add depreciate the `use_cache` parameter because a) Before this PR `use_cache` was used incorrectly (as was corrected in #5770 but only for PT), thus b) bringing the TF model up to date with PT is breaking anyways and c) there are breaking changes for boolean inputs in the moment for TF. So in conclusion:
- for PT: No breaking changes -> `use_cache` is depreciated both in the config and in the model forward's
- for TF: One breaking change for TF that `use_cache` cannot be forwarded anymore to the models and has to be replaced by `use_mems`
At the same time this PR removes `use_cache` from `configuration_utils.py` which has no breaking changes except for models that don't use `use_cache` don't add an unused `use_cache` that defaults to `True` to their config anymore. But since none of those models is using `use_cache` this is hardly breaking.
@LysandreJik - is that ok for you?<|||||>Thank you for clarifying, I think that's fine. We should add that to the v4.0.0 as well.
Thanks for taking care of it @patrickvonplaten |
transformers | 8,566 | closed | "setup.py" does not seem to have been updated for v3.5.1 | ## Environment info
I try
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers/
!pip install -e .
```
on Colaboratory.
- `transformers` version: 3.5.0 <- this seems strange.
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
documentation: @sgugger
## Information
"setup.py" does not seem to have been updated for v3.5.1.
When I install transformers by `pip install -e .`, the version of transformers is shown as v3.5.0.
## To reproduce
Steps to reproduce the behavior:
I try
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers/
!pip install -e .
```
on Colaboratory after v3.5.1 release.
Then,
```
import transformers
transformers.__version__
```
returns
```
'3.5.0'
```
## Expected behavior
The return of `transformers.__version__` is expected to be '3.5.1' now, if my understanding is not wrong.
Maybe, in https://github.com/huggingface/transformers/blob/afb50c663a5d5623906ead1e87481926467d59fa/setup.py#L120
'3.5.0' should be changed to '3.5.1'.
Is my understanding correct? Sorry if I misunderstand your intention.
| 11-16-2020 14:57:52 | 11-16-2020 14:57:52 | I found https://github.com/huggingface/transformers/commit/d5b3e56de5376aa85ef46e7f0325139d9e299a41 and it seems the related files are updated there.
If there is a reason (or rules for releases) not to merge the change into the master branch, I'm sorry for opening this issue that I haven't fully considered.<|||||>This was a hotfix on a branch which is why we didn't update master (mostly because we forgot).
To do things right, we'll actually put v4.0.0-dev since that's what master is right now :-)<|||||>@sgugger
Thank you for your quick and detailed response!
Now I understood what is the reason.
I appreciate your creating a new PR for this issue. |
transformers | 8,565 | closed | replace performance table with markdown | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-16-2020 14:46:13 | 11-16-2020 14:46:13 | Thanks! you can also add a sample input for the widget if you'd like (https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs) |
transformers | 8,564 | closed | Make BART more ONNX friendly | **Major concerns:**
1. ONNX complains about Python raw integer usage.
2. ONNX doesn't support boolean indexing with anything other than a vector; the code was using 2D tensor indices (batch, token).
**PR workarounds:**
1. Remove the call to `len(..)` and prefer the use of `.size(-1)`
2. Attempt to index the output tensors to retrieve only the **last** EOS token over the sequence axis through `.index_select(..)` | 11-16-2020 14:46:11 | 11-16-2020 14:46:11 | I believe @mfuntowicz ran the slow tests on this one and there were no failures, but this PR is so cryptic I don't understand what's happening.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
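As an illustration of the second workaround described in this PR (not the actual diff; the function and variable names below are made up), one ONNX-friendlier way to pick the last EOS hidden state per sequence without 2D boolean indexing is roughly:

```python
import torch

def select_last_eos(hidden_states, input_ids, eos_token_id):
    # hidden_states: (batch, seq_len, hidden); input_ids: (batch, seq_len)
    batch_size, seq_len = input_ids.shape
    eos_mask = (input_ids == eos_token_id).long()
    positions = torch.arange(seq_len, device=input_ids.device).unsqueeze(0)
    last_eos = (positions * eos_mask).max(dim=-1).values   # assumes each row contains an EOS
    # Flatten and gather via index_select instead of hidden_states[eos_mask, :]
    flat = hidden_states.reshape(batch_size * seq_len, -1)
    flat_index = torch.arange(batch_size, device=input_ids.device) * seq_len + last_eos
    return flat.index_select(0, flat_index)                # (batch, hidden)
```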
transformers | 8,563 | closed | Wrong model_max_length for BERTOverflow tokenizer | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: N
- Using distributed or parallel set-up in script?: N
## Information
Hi. I used [BERTOverflow](https://huggingface.co/jeniya/BERTOverflow) and found strange behavior for the tokenizer property `model_max_length`. It is equal to 1000000000000000019884624838656, although it should be 512.
## To reproduce
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('jeniya/BERTOverflow')
print(tokenizer.model_max_length)
``` | 11-16-2020 12:40:24 | 11-16-2020 12:40:24 | Ah, this is an error, indeed. The uploader should have uploaded a `tokenizer_config.json` containing the model maximum length. I see it only contains the `do_lower_case` argument right now.
@julien-c what should we do here? Should we update that configuration ourselves?
Also @thomwolf @julien-c, I feel like we should have sensible defaults for each model. This is a `BertModel` which currently only has access to absolute positional embeddings. It doesn't make sense to have an unlimited `model_max_length`. I think `BertTokenizer` and all its relatives should have a default of `512`.<|||||>@LysandreJik Yes, our policy here is that we fix the config ourselves and notify the model author (via email/GitHub mention/anything). Feel free to do it and link the resulting commit from here.
(use your hf.co email/author name for your commit to be linked to your hf.co user profile)<|||||>Updated with [212cd3e](https://huggingface.co/jeniya/BERTOverflow/commit/212cd3ef9615e7292da65a49162e0beb0cd3d604) and sent an email.<|||||>see [`huggingface.co:212cd3e`](https://huggingface.co/jeniya/BERTOverflow/commit/212cd3ef9615e7292da65a49162e0beb0cd3d604)<|||||>I happen to get the same error for all of:
```
dmis-lab/biobert-base-cased-v1.1
dmis-lab/biobert-large-cased-v1.1
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
emilyalsentzer/Bio_ClinicalBERT
```
I understand that in user code I should just fall back to a sensible default if I get that large value, but that seems like a problem not related to a single model.<|||||>This issue has been stale for 1 month. |
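As an aside, a minimal sketch of the user-code fallback mentioned in the last comment (the threshold and the 512 default are assumptions, not library behavior):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
max_length = tokenizer.model_max_length
if max_length > 100_000:   # the huge sentinel value reported above
    max_length = 512       # assumed default for BERT-style absolute-position models
```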
transformers | 8,562 | closed | Clearer Model Versioning Example in Model Card | Clearer model card example | 11-16-2020 11:14:51 | 11-16-2020 11:14:51 | |
transformers | 8,561 | closed | Reset loss to zero on logging in Trainer to avoid bfloat16 issues | # What does this PR do?
I have had a weird issue with the logged loss going to zero after some training steps training with `bfloat16` on v3 TPUs:

while it works correctly using 32bit precision:

After some investigation I found that the `tr_loss` variable in `Trainer.train` seems to overflow after reaching 1024 (?).
I did not track this down more closely because it is easily fixed by making `tr_loss` a regular Python float instead of a tensor. It doesn't actually need to be a tensor as it is only ever accessed by `.item()`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
| 11-16-2020 11:14:16 | 11-16-2020 11:14:16 | No, we don't want to do `loss.item()` at each step since it slows down a lot the training on TPUs. We can cast this scalar to Float if it overflows. Not super familiar with `bfloat16` on TPUs so we may have overlooked something there (on mixed precision for GPUs, the loss is always a float).<|||||>Oh ok, I hadn't considered that. I'll take a closer look then to check what exactly causes the overflow.
And just curious, do you have any metrics regarding slowdown from `loss.item()` on TPUs? I'm currently using the code from this PR and see good TPU utilization and training time.<|||||>Alright, this might be a problem with XLA? I'm not at all familiar with how TPUs and XLA work internally but here is a minimal example of the problem:
`bfloat_demo.py`
```py
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
loss_tensor = torch.tensor(0.0).to(device)
loss_float = 0.0
print(f"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}")
to_add = torch.tensor(10.0).to(device)
for i in range(10):
for _ in range(100):
loss_tensor += to_add
loss_float += to_add.item()
print(loss_tensor, loss_float)
```
Running this regularly:
```
(torch-xla-1.7) bminixhofer@gerpt2:~$ python bfloat_demo.py
0.0 is on device xla:1 with dtype torch.float32
tensor(1000., device='xla:1') 1000.0
tensor(2000., device='xla:1') 2000.0
tensor(3000., device='xla:1') 3000.0
tensor(4000., device='xla:1') 4000.0
tensor(5000., device='xla:1') 5000.0
tensor(6000., device='xla:1') 6000.0
tensor(7000., device='xla:1') 7000.0
tensor(8000., device='xla:1') 8000.0
tensor(9000., device='xla:1') 9000.0
tensor(10000., device='xla:1') 10000.0
```
and with `bfloat16`:
```
(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py
2020-11-16 14:52:12.488382: I 1663 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values
0.0 is on device xla:1 with dtype torch.float32
tensor(904., device='xla:1') 1000.0
tensor(1704., device='xla:1') 2000.0
tensor(2960., device='xla:1') 3000.0
tensor(4096., device='xla:1') 4000.0
tensor(4096., device='xla:1') 5000.0
tensor(4096., device='xla:1') 6000.0
tensor(4096., device='xla:1') 7000.0
tensor(4096., device='xla:1') 8000.0
tensor(4096., device='xla:1') 9000.0
tensor(4096., device='xla:1') 10000.0
```
but notably the issue doesn't seem to be the magnitude at all but rather how often a value is added:
(setting `to_add = torch.tensor(0.1).to(device)`)
```
(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py
2020-11-16 14:57:57.844438: I 1860 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values
loss_tensor is on device xla:1 with dtype torch.float32
tensor(10.0625, device='xla:1') 10.009765625
tensor(22.5000, device='xla:1') 20.01953125
tensor(32., device='xla:1') 30.029296875
tensor(32., device='xla:1') 40.0390625
tensor(32., device='xla:1') 50.048828125
tensor(32., device='xla:1') 60.05859375
tensor(32., device='xla:1') 70.068359375
tensor(32., device='xla:1') 80.078125
tensor(32., device='xla:1') 90.087890625
tensor(32., device='xla:1') 100.09765625
```
So the dtype does not seem to be the problem, the problem seems to be something along the lines of not enough operations being tracked by XLA, but as I said I really don't know the internals at all so I don't want to go off on speculation here :)
Any ideas how to proceed?<|||||>And if you do:
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
loss_tensor = torch.tensor(0.0).to(device)
loss_float = 0.0
print(f"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}")
to_add = torch.tensor(10.0).to(device)
for i in range(10):
for _ in range(100):
loss_tensor += to_add.float()
loss_float += to_add.item()
print(loss_tensor, loss_float)
```
does this solve the issue?
Trying to avoid the `.item` as it triggers a synchronization of TPUs :-)<|||||>No, still the same :(
`bfloat_demo.py`
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
loss_tensor = torch.tensor(0.0).to(device)
loss_float = 0.0
print(f"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}")
to_add = torch.tensor(10.0).to(device)
for i in range(10):
for _ in range(100):
loss_tensor += to_add.float()
loss_float += to_add.item()
print(loss_tensor, loss_float)
```
```
(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py
2020-11-16 16:00:35.131065: I 1197 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values
loss_tensor is on device xla:1 with dtype torch.float32
tensor(904., device='xla:1') 1000.0
tensor(1704., device='xla:1') 2000.0
tensor(2960., device='xla:1') 3000.0
tensor(4096., device='xla:1') 4000.0
tensor(4096., device='xla:1') 5000.0
tensor(4096., device='xla:1') 6000.0
tensor(4096., device='xla:1') 7000.0
tensor(4096., device='xla:1') 8000.0
tensor(4096., device='xla:1') 9000.0
tensor(4096., device='xla:1') 10000.0
```<|||||>Ok investigated a bit more by asking some TPU experts and there is no way around this. So the proper fix will be to reset the loss to 0 each time we log it (instead of summing everything from the beginning). I can work on that later this week after the changes needed for v4, or you can work on it if it interests you.<|||||>Ok thanks for the quick followup!
> So the proper fix will be to reset the loss to 0 each time we log it (instead of summing everything from the beginning)
Wouldn't that still have the same issue if `logging_steps` is sufficiently large?<|||||>I think something like this could be a solution:
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
loss = 0.0
loss_agg_steps = 52
loss_agg = torch.tensor(0.0).to(device)
zero = torch.tensor(0.0).to(device)
to_add = torch.tensor(10.0).to(device)
for i in range(10):
for j in range(100):
loss_agg += to_add
if (j + 1) % loss_agg_steps == 0:
loss += loss_agg.item()
loss_agg.copy_(zero)
loss += loss_agg.item()
loss_agg.copy_(zero)
print(loss)
```
updating the loss tensor for `n` steps and syncing with a Python float loss afterwards (and always syncing before logging and after the last batch in an epoch). `n = 52` is the highest that worked in this demo but maybe a more informed decision could be taken about that value.<|||||>The `agg_step` would be the `logging_step` we have as training argument, the user can then tune it to their need.<|||||>Ok, I thought it would be good to have a second parameter as the logging step is tied to other things as well but in that case I can give implementing it the way you described earlier a shot.<|||||>The latest commit should do the trick. Just not sure if `tr_loss -= tr_loss` is the best way to reset `tr_loss` to zero.<|||||>I think you also need to do something for the final reported training loss.<|||||>Oh right, thanks. I introduced a variable `_total_loss_scalar` which handles that (comparable to `loss` in the demo code). |
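For reference, a rough CPU-only sketch of the pattern this PR converged on — accumulate on-device, sync once per logging interval, then reset — with variable names mirroring the discussion rather than the merged code:

```python
import torch

logging_steps = 50
tr_loss = torch.tensor(0.0)      # would live on the XLA device inside Trainer
_total_loss_scalar = 0.0         # plain Python float for the final report

for step in range(1, 501):
    loss = torch.tensor(10.0)    # stand-in for the per-step training loss
    tr_loss += loss.detach()
    if step % logging_steps == 0:
        _total_loss_scalar += tr_loss.item()                      # one sync per logging interval
        print(f"step {step}: avg loss {tr_loss.item() / logging_steps:.2f}")
        tr_loss -= tr_loss                                        # reset the accumulator in place

print(f"total training loss: {_total_loss_scalar + tr_loss.item():.2f}")
```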
transformers | 8,560 | closed | Prophetnet - predicted n-future tokens | Hi,
How can we get the predicted n-future tokens **as a string** data from the model output. I couldn't find it in the API doc and sample code. Could you please guide / provide the code snippet for ProphetNet and XLM-Prophetnet ? Thanks in advance. | 11-16-2020 10:49:36 | 11-16-2020 10:49:36 | Hey @nsankar - that's a good question for the forum https://discuss.huggingface.co/ I think. Currently we don't really support sampling from the "n-future" tokens, but it should not be too difficult - just sample from each of the 1...n logit vectors and then you can use the tokenizer to decode the sampled tokens. So in short, you should do this operation: https://github.com/huggingface/transformers/blob/42111f1d56947797d9dfb0908908f42a22ca9823/src/transformers/generation_utils.py#L843 for all n-future logit vectors and then pass it to the tokenizer. |
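A rough sketch of the decoding idea described in the reply, assuming the model output exposes the extra prediction streams as `logits_ngram` (check the model's output class before relying on this; the prompt texts are only examples):

```python
import torch
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
decoder_input_ids = tokenizer("Hello", return_tensors="pt").input_ids
out = model(**inputs, decoder_input_ids=decoder_input_ids, return_dict=True)

# Greedy pick from the main stream and from each of the n-1 "future" streams.
main_ids = out.logits.argmax(dim=-1)          # (batch, seq_len)
future_ids = out.logits_ngram.argmax(dim=-1)  # (batch, ngram - 1, seq_len), if exposed this way
print(tokenizer.batch_decode(main_ids, skip_special_tokens=True))
for stream in future_ids[0]:
    print(tokenizer.decode(stream, skip_special_tokens=True))
```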
transformers | 8,559 | closed | TFGPT2LMHeadModel fp16 support | ## Environment info
- `transformers` version:
- Platform: ubuntu 18.04
- Python version: python3.8
- Tensorflow version (GPU?): tf-nightly==2.5
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Text Generation: @patrickvonplaten @TevenLeScao
tensorflow: @jplu
## Information
Hi there. If I want to use the mixed-precision setting with the Keras APIs when training `TFGPT2LMHeadModel`, [like this](https://www.tensorflow.org/guide/mixed_precision):
```python
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
```
Then I get this error:
```
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/transformers/modeling_tf_gpt2.py", line 154, in call
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions, training=training)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/transformers/modeling_tf_gpt2.py", line 101, in _attn
w = w / tf.math.sqrt(dk)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1181, in binary_op_wrapper
raise e
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1165, in binary_op_wrapper
return func(x, y, name=name)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1337, in truediv
return _truediv_python3(x, y, name)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1267, in _truediv_python3
raise TypeError("x and y must have the same dtype, got %r != %r" %
TypeError: x and y must have the same dtype, got tf.float16 != tf.float32
```
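The mismatch comes from dividing the float16 attention scores by a float32 scalar; a minimal sketch of the kind of cast that avoids it (an illustration only, not the fix that later landed in the library):

```python
import tensorflow as tf

def scaled_attention_scores(w, k):
    # w: attention scores (float16 under mixed precision), k: key tensor
    dk = tf.cast(tf.shape(k)[-1], w.dtype)   # match the scores' dtype before dividing
    return w / tf.math.sqrt(dk)
```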
Here's an [example](https://gist.github.com/mymusise/7192b7c252ff67ff84496cd8b27a91ff) to reproduce this.
Please help me guys. | 11-16-2020 10:39:08 | 11-16-2020 10:39:08 | Hello !
It is a known issue that will be fixed in a future release. Sorry.<|||||>Thanks, I'm looking forward to the future release.<|||||>Thank @jplu's work! :+1: It works now with the mixed-precision policy when training.
But I think it still has some problem with `TextGenerationPipeline`, for example:
```python
from transformers import TextGenerationPipeline
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
text_generator = TextGenerationPipeline(model, tokenizer)
text_generator("Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow")
```
Then it raises an exception:
```
File "./env/lib/python3.8/site-packages/transformers-4.0.0.dev0-py3.8.egg/transformers/generation_tf_utils.py", line 386, in generate
output = self._generate_no_beam_search(
File "./env/lib/python3.8/site-packages/transformers-4.0.0.dev0-py3.8.egg/transformers/generation_tf_utils.py", line 457, in _generate_no_beam_search
next_token_logits = tf.math.multiply(next_token_logits, next_token_logits_penalties)
File "./env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "./env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 519, in multiply
return gen_math_ops.mul(x, y, name)
File "./env/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6068, in mul
_ops.raise_from_not_ok_status(e, name)
File "./env/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 6867, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:Mul]
```<|||||>Sorry for this, the generation part is not yet compliant with mixed precision. It is in the pipeline to add this but we don't know when yet.<|||||>Okay, thank you!<|||||>Hello~ @jplu.
Recently I tried to train my `TFGPT2LMHeadModel` model with mixed_precision again; it performs badly after many epochs and seems not to have learned anything.
If I train without mixed_precision, it performs well after the same number of epochs.
I think precision may be lost when computing the loss from `logits` and `labels` in `fp16` here: https://github.com/mymusise/transformers/blob/master/src/transformers/models/gpt2/modeling_tf_gpt2.py#L742.<|||||>Here I made some changes to `TFGPT2LMHeadModel` in https://github.com/huggingface/transformers/pull/10689
I'm not sure it's the right way to do it, correct me if it's wrong.
And I made a small test in Colab; I hope it can help to reproduce the problem.
https://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/mixed_precision_test.ipynb<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 8,558 | closed | Readme for Wiki Summary [Persian] bert2bert | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-16-2020 09:05:56 | 11-16-2020 09:05:56 | |
transformers | 8,557 | closed | Readme for News Headline Generation (bert2bert) | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-16-2020 07:01:09 | 11-16-2020 07:01:09 | Really cool! |
transformers | 8,556 | closed | tokenization_bart.py: return_tensors default should be "pt" | return_tensors default should be "pt" in bart's `prepare_seq2seq_batch`. | 11-15-2020 23:31:53 | 11-15-2020 23:31:53 | Tokenizers are supposed to be framework (PyTorch/TensorFlow/FLAX) agnostic so we probably don't want to do in that direction.<|||||>Gotcha. Is this going to be the case for all tokenizers in the future? Because currently they default to PyTorch except for Bart's.
Also, I think the docstring for Bart tokenizer's `return_tensors` needs to be updated then since it says: `optional`, defaults to "pt"<|||||>The fact that the BART-like tokenizers have `return_tensors="pt"` is a mistake. The tokenizers should be framework-agnostic.<|||||>We will have to update this, which will be a breaking change, so we'll try to put it in the v4.0.0. Do you want to open a PR to fix the issue?<|||||>Sure. I'll close this then and make a new PR for that.
<|||||>Hi @Mehrad0711! We're rushing to `[email protected]`, and we don't want that in this release. I've taken the liberty of fixing it in https://github.com/huggingface/transformers/pull/8599. Sorry about that, I hope you had not started your development.
If you have, you can push your fixes and open a PR and I'll incorporate those changes in my PR and mark you as co-author.<|||||>No problem @LysandreJik . Thanks for fixing it! |
transformers | 8,555 | closed | Allow the user to input positional embeddings | # 🚀 Feature request
Hi,
I think that allowing the user to input positional embeddings in the same way that `inputs_embeds` can be fed directly to the Transformer would be greatly appreciated. (If this is already possible, please let me know :) ).
## Motivation
This is important for multimodal approaches, such as **VisualBERT** (https://arxiv.org/pdf/1908.03557.pdf), where each modality requires a separate positional embedding. The user could then use a `torch.nn.Embedding` per modality and concatenate the positional embeddings and feed this to the Transformer along with the concatenated input embeddings of the modalities.
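A rough sketch of that pattern, with illustrative names and sizes (this is what the request would enable, not existing library behavior):

```python
import torch
import torch.nn as nn

hidden_size, text_len, image_len = 768, 16, 8
text_pos = nn.Embedding(512, hidden_size)   # one positional table per modality
image_pos = nn.Embedding(64, hidden_size)

text_embeds = torch.randn(1, text_len, hidden_size)    # stand-ins for real modality embeddings
image_embeds = torch.randn(1, image_len, hidden_size)

positions = torch.cat(
    [text_pos(torch.arange(text_len)).unsqueeze(0),
     image_pos(torch.arange(image_len)).unsqueeze(0)],
    dim=1,
)
inputs_embeds = torch.cat([text_embeds, image_embeds], dim=1) + positions
# `inputs_embeds` can already be passed to a model; what this issue asks for is a way to
# also hand over `positions` so the model skips its own position embeddings.
```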
This would also resolve other related issues:
https://github.com/huggingface/transformers/issues/5095
https://github.com/huggingface/transformers/issues/3395
## Your contribution
I can help in any form.
| 11-15-2020 22:54:02 | 11-15-2020 22:54:02 | Hey @anicolson,
I'm really not sure whether such a functionality is general enough to be added to the lib. It would also be very different from our current design of the library in that we would allow `torch.nn.Embedding` types as input, so I'd rather not add it. @LysandreJik what do you think? <|||||>I think this is specifically where tweaking the library so that it supports your use-case is the way to go. It should be easy enough to modify the files directly in order to do this, but it would add unnecessary complexity to several model files.
Let's keep this issue open, and if other users are interested in this, we'll have a deeper look.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,554 | closed | `disable_ngram_loss` fix for prophetnet | # What does this PR do?
This PR fixes `disable_ngram_loss` behaviour for ProphetNetForConditionalGeneration and is related to #8553
Fixes #8553
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I guess @patrickvonplaten was using this model (I saw models on hub), sorry if I am wrong, but there is no one to tag for ProphetNet
| 11-15-2020 21:55:06 | 11-15-2020 21:55:06 | Hey @Zhylkaaa,
Thanks a lot for your PR! @qiweizhen - could you maybe take a look and give your opinion on this PR? I don't have much experience with training ProphetNet<|||||>Hi @patrickvonplaten, thanks for informing me. It seems it's still related to the padding tokens (default -100 or **padding_idx**), which should not have loss calculated on them.
In the old version of the code, **expend_targets** is filled with **self.padding_idx**; however, **padding_idx** is not fed into the loss function, which results in calculating the wrong loss.
Here @Zhylkaaa makes them consistent. I suggest that 1) the outside data-preprocessing padding function, 2) **expend_targets** here, and 3) the loss function be kept consistent.
If Hugging Face Transformers defaults to -100 for padding in all the NLG models, then this code can be merged. If it defaults to self.padding_idx for all the NLG models, then this code should not be merged; instead, **padding_idx** should be fed into the loss function.<|||||>Thanks @qiweizhen for reviewing,
to my knowledge Hugging Face Transformers uses -100 to indicate ignored tokens during loss calculation. I also wanted to ask whether the reduction strategy is important (Transformers uses reduction=`'mean'` to my knowledge, but here it is set to `'sum'`), because for me it's not?
I also noticed that I haven't changed this behaviour in another ProphetNet model, I will examine if it's necessary and commit changes, also I will write some tests to check for this behaviour in nearest future.<|||||>> Thanks @qiweizhen for reviewing,
> to my knowledge Hugging Face Transformers uses -100 to indicate ignored tokens during loss calculation. I also wanted to ask whether the reduction strategy is important (Transformers uses reduction=`'mean'` to my knowledge, but here it is set to `'sum'`), because for me it's not?
> I also noticed that I haven't changed this behaviour in another ProphetNet model, I will examine if it's necessary and commit changes, also I will write some tests to check for this behaviour in nearest future.
Thank you for pointing out this "mean" or "sum" problem! This line of code is converted from the Fairseq version of ProphetNet, which uses the loss sum here, to be consistent with the [Fairseq Transformer](https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/criterions/label_smoothed_cross_entropy.py#L26-L27). The reason is that in the Fairseq training pipeline, the ["mean" operation is done in the trainer](https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/trainer.py#L429). So we return the sum loss and sample_size for Fairseq to calculate sum loss / sample_size (mean).
So I agree here we should use "mean" as you suggested. Thank you @Zhylkaaa!<|||||>Hi @qiweizhen, I want to verify that I should average the label-smoothing loss instead of summing it, to be consistent with the change of reduction strategy, and also whether I should change `non_pad_mask` to a mask that excludes -100? (You can see these changes in the last commit, but I just want to be sure :) )
Also @patrickvonplaten, I messed up the rebase so I needed to do a hard reset. Is that ok, or should I close this PR and open one that doesn't change the commit history when I finish?<|||||>Great PR @Zhylkaaa! I did a small refactor and fixed the test. Thanks for your help @qiweizhen |
transformers | 8,553 | closed | `disable_ngram_loss` doesn't work correctly in ProphetNetForConditionalGeneration | When I am using ProphetNet with `disable_ngram_loss=True` I am getting loss that is greater than with `disable_ngram_loss=False`. It seems to me that this is the problem of setting `fill_(self.padding_idx)` in `_compute_loss` instead of -100 so that ngram part is omitted in loss calculation
Also I think that the reduction in `loss = F.nll_loss(lprobs, expend_targets.view(-1), reduction="sum")` should be set to `mean` so that the model loss is comparable between models working on the same task (like `mbart`). Can somebody tell me if this is a good point, or should I leave it as it is? I am planning to open a PR with these changes.
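A simplified illustration of the proposed change (not the actual `_compute_loss` signature): keep only the main stream, mark ignored positions with -100, and use mean reduction:

```python
import torch.nn.functional as F

def main_stream_loss(logits, labels, ignore_index=-100):
    # Assumes `logits` is the main-stream output of shape (batch, seq_len, vocab);
    # positions marked with -100 are skipped, and `mean` reduction makes the value
    # comparable to other seq2seq models such as mBART.
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="mean",
    )
```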
## Environment info
- `transformers` version: 3.5.0 from source
- Platform: macOS Catalina
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0
### Who can help
I can't figure out whom to tag.
## Information
Model I am using (Bert, XLNet ...): ProphetNetForConditionalGeneration
## To reproduce
```
from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration
tokenizer = XLMProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased')
model = XLMProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased')
inputs = tokenizer('Hi my name is', return_tensors='pt').input_ids
targets = tokenizer('Hi my name is John', return_tensors='pt').input_ids
model_loss = model(input_ids=inputs, labels=targets, return_dict=True).loss
model.disable_ngram_loss = True
model_disable_loss = model(input_ids=inputs, labels=targets, return_dict=True).loss
from torch.nn import CrossEntropyLoss
loss_fct = CrossEntropyLoss(reduction='sum')
logits = model(input_ids=inputs, labels=targets, return_dict=True).logits
loss_cross_entropy = loss_fct(logits.view(-1, model.config.vocab_size), targets.view(-1))
```
the problem is `model_loss < model_disable_loss` and `model_disable_loss != loss_cross_entropy` which it should be I think.
Note:
`CrossEntropyLoss(reduction='sum')` is used to match implementation in `_compute_loss` (`loss = F.nll_loss(lprobs, expend_targets.view(-1), reduction="sum")`) but other models use default reduction which makes outputs incomparable (at least directly)
## Expected behavior
when `model.disable_ngram_loss=True` `CrossEntropyLoss` should be equal to `model(input_ids=inputs, labels=targets, return_dict=True).loss` | 11-15-2020 21:21:07 | 11-15-2020 21:21:07 | |
transformers | 8,552 | closed | T5 & mT5 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds T5v1.1 and MT5.
I was really unsure whether I should make a new model file `modeling_5_v1_1.py` or not. I finally decided (also after discussions in https://github.com/huggingface/transformers/issues/6285) that in this case, it is better to add the few new T5 features to the existing `modeling_t5.py` file.
The context is the following:
The T5 team released weights for a **T5v1.1** model: https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md. These model architecture of T5v1.1 is equivalent to T5 besides two changes:
1) input and output word embedding of decoder is not shared anymore
2) different feed forward layer is used
In addition, the **mT5** model checkpoints were also released and are fully based on T5v1_1: https://github.com/google-research/multilingual-t5#released-model-checkpoints .
Now the philosophy of the library is to create a new model class if the architecture slightly differs from a previously integrated model, but I'd argue that in this case it is better to add a new config param called "feed_forward_proj" which defines whether a "relu" (T5) or a "gated-gelu" (T5v1.1) should be used. The arguments for not creating a new model class are the following:
1) Both T5 and T5v1.1 are "officially" the same model and both belong to the same code base: https://github.com/google-research/text-to-text-transfer-transformer
2) T5v1.1 has no official paper and it's quite difficult to find a good name as discussed in https://github.com/huggingface/transformers/issues/6285 => `T5v11` is a very cryptic name IMO and `T5v2` is not ideal either.
3) One could argue that it makes sense to create a mixture of T5 and T5v1v1 by sharing input and output word embedding of the decoder but using the "gated-gelu" feed-forward layer. This design would allow a user to define a model whereas creating a new T5v11 model file would make this impossible
4) `"feed-forward-proj"` is less of a model-specific architecture configuration than `"do_blenderbot_90_layernorm"` IMO
The obvious disadvantage is that I am adding new architecture code to an existing model to make a new model (or new model version) work, which is kinda what we did not want to do.
It's a tough decision here I think, so I'd be very happy for your feedback on this @LysandreJik @sgugger @thomwolf . I'd also understand if you guys think we should make a new model file here.
I already uploaded `google/t5-v1_1-small` and `google/mt5-small`. If you guys are ok with this PR, I'll add the TF code, add the new model to README.md and we're good for merge. | 11-15-2020 21:03:41 | 11-15-2020 21:03:41 | I'm in favor of this solution and happy you selected it :)
Our philosophy of simple to use and modify standalone code is made not to be a hard requirement but a guide for user-experience and in this case I also think the best is to adapt T5 as you do.<|||||>Big thanks for integrating mT5! I checked out your PR and there seems to be a problem with extra_ids. I guess, sentencepiece vocab already has <extra_id> tokens but T5Tokenizer goes on to add it's own. E.g., real id of <external_id_0> is 250099 but T5Tokenizer translates it into 250199<|||||>> Big thanks for integrating mT5! I checked out your PR and there seems to be a problem with extra_ids. I guess, sentencepiece vocab already has <extra_id> tokens but T5Tokenizer goes on to add it's own. E.g., real id of <external_id_0> is 250099 but T5Tokenizer translates it into 250199
You're 100% correct - thanks a lot for letting me know! I corrected the behavior - should be good now: https://huggingface.co/google/mt5-small/tree/main<|||||>Ran the slow tests & added multiple new slow tests & checked the docs => good to merge. |
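For context, a minimal sketch of the "gated-gelu" feed-forward that distinguishes T5v1.1/mT5 from the original T5's ReLU block (the layer names here are illustrative, not necessarily the ones used in the implementation):

```python
import torch.nn as nn

class GatedGeluFeedForward(nn.Module):
    """Sketch of the 'gated-gelu' block used by T5v1.1 / mT5 (vs. plain ReLU in T5)."""

    def __init__(self, d_model, d_ff):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # linear projection
        self.wo = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        return self.wo(self.act(self.wi_0(hidden_states)) * self.wi_1(hidden_states))
```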
transformers | 8,551 | closed | "special token {} has to be either str or AddedToken but got: | Hello, two weeks ago I fine-tune the Roberta language model on my data using hugging face (codes given below) and save output at the google drive. Now when using it for semantic search with sentence transformer. I am getting the error that
**"TypeError: special token bos_token has to be either str or AddedToken but got: <class 'dict'>"**
```
from sentence_transformers import SentenceTransformer
from sentence_transformers import models, losses
import scipy.spatial
import pickle as pkl
word_embedding_model = models.RoBERTa("/content/drive/My Drive/Ottawa_citit")
# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```

Previously fine-tuned RoBERTa:

```
`!python "/content/transformers/examples/contrib/legacy/run_language_modeling.py" \
--output_dir "/content/drive/My Drive/Ottawa_citit" \
--model_name_or_path roberta-base \
--do_train \
--per_gpu_train_batch_size 8 \
--seed 42 \
--train_data_file "/content/input_text.txt" \
--block_size 256 \
--line_by_line \
--learning_rate 6e-4 \
--num_train_epochs 3 \
--save_total_limit 2 \
--save_steps 200 \
--weight_decay 0.01 \
--mlm`
``` | 11-15-2020 20:55:26 | 11-15-2020 20:55:26 | I had the same issue while using SentenceTransformers. When I checked with my backed up pretrained model, the tokenizer_config.json file had been altered for some reaon. I changed it back to the original and it was working fine. Probably some issue with SentenceTransformers calling Huggingface functions on the model.<|||||>If one of you (or @nreimers) want share a colab exhibiting the behavior, happy to investigate further and fix if needed!<|||||>Thanks a lot!!
I fine-tuned the Roberta language model again with hugging face and using it with a sentence transformer for semantic search. And no longer getting the error as mentioned above. But I don't know the reason for the error.
<|||||>I also met this issue before. By uninstalling `sentence-transformers`(I think it may be also ok if you fix the version conflict issue), this bug disapeared.<|||||>Hi @Shafi2016
Could you share you model, so that I can have a look?<|||||>Has there been any movement on this? I'm having the same issue with BartTokenizer using sschelifer/cnn-12-6<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>> Hi @Shafi2016 Could you share you model, so that I can have a look?
Hello @nreimers , I am trying to use paraphrase-multilingual-MiniLM-L12-v2 pretrained model and getting the similar error -
TypeError: special token mask_token has to be either str or AddedToken but got: <class 'dict'>
Would it be possible for you to share what change is needed in tokenizer_config.json? I will then change my tokenizer_config.json accordingly.
This is the current version of tokenizer_config.json's mask_token dict -
"mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}
One of the reasons I would like to edit the tokenizer_config.json is to keep my transformers and sentence_transformers versions as they are -
sentence-transformers 0.4.1.2
transformers 3.3.1<|||||>update transformers from 3.2.0 to 4.1.1 helps me solve the problem when call AutoTokenizer.from_pretrained get 'TypeError: special token bos_token has to be either str or AddedToken but got: <class 'dict'>'. |
transformers | 8,550 | closed | Create README.md for Chinese RoBERTa Miniatures | Create model card for Chinese RoBERTa Miniatures. | 11-15-2020 18:14:17 | 11-15-2020 18:14:17 | Create model card for Chinese RoBERTa Miniatures. |
transformers | 8,545 | closed | Pretrain BERT with user defined vocabulary | # π Feature request
I'm wondering if there is a way to pretrain BERT with user-defined vocabulary, number of layers/heads, and use a WordLevel Tokenizer instead of a WordPiece Tokenizer current BERT uses. Thank you!
## Motivation
This could help the BERT mode to adapt to different tasks.
| 11-15-2020 05:59:48 | 11-15-2020 05:59:48 | Hi! Yes, you can definitely do so (not sure about the word level).
You can define a custom model configuration:
```py
from transformers import BertConfig
config = BertConfig(vocab_size=xxx, num_attention_heads=xxx, ...)
```
You can then use that configuration with the `run_mlm.py` script to train your model.
Regarding the tokenizer, you could use a word-level, but you would need to manually modify the files for that. There exists a word-level tokenizer in the `tokenization_bert.py` file that you could leverage for this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> tokenization_bert.py
excuse me does this class have the functions that help in word-level, not sub-word level , please ? |
transformers | 8,544 | closed | Update README.md | Modified Model in Action section. The class `AutoModelWithLMHead` is deprecated so changed it to `AutoModelForSeq2SeqLM` for encoder-decoder models. Removed duplicate eos token.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| 11-15-2020 05:11:21 | 11-15-2020 05:11:21 | What do you think @mrm8488?<|||||>Great! The community rocks! So I think I have to update many T5 model cards |
transformers | 8,543 | closed | Upload models using Git fails | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Model Cards: @julien-c
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
T5
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
git clone https://huggingface.co/Rostlab/prot_t5_xl_bfd
Cloning into 'prot_t5_xl_bfd'...
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 31 (delta 13), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (31/31), done.
```
`cp config.json pytorch_model.bin prot_t5_xl_bf`
`git add --all`
```
git status
On branch main
Your branch is up to date with 'origin/main'.
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: config.json
modified: pytorch_model.bin
```
`git commit -m "[T5] Fix load weights function #8528"`
```
git push
Username for 'https://huggingface.co': xxxxxx
Password for 'https://[email protected]':
Counting objects: 4, done.
Delta compression using up to 80 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 4.92 GiB | 23.09 MiB/s, done.
Total 4 (delta 2), reused 1 (delta 0)
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
OR
```
GIT_CURL_VERBOSE=1 git push
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Trying 192.99.39.165...
* TCP_NODELAY set
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* found 140 certificates in /etc/ssl/certs/ca-certificates.crt
* found 421 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: huggingface.co (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=huggingface.co
* start date: Tue, 10 Nov 2020 08:05:46 GMT
* expire date: Mon, 08 Feb 2021 08:05:46 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET /Rostlab/prot_t5_xl_bfd/info/refs?service=git-receive-pack HTTP/1.1
Host: huggingface.co
User-Agent: git/2.17.1
Accept: */*
Accept-Encoding: gzip
Accept-Language: C, *;q=0.9
Pragma: no-cache
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:43:52 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 12
< Connection: keep-alive
< X-Powered-By: huggingface-moon
< WWW-Authenticate: Basic realm="Authentication required", charset="UTF-8"
< ETag: W/"c-dAuDFQrdjS3hezqxDTNgW7AOlYk"
<
* Connection #0 to host huggingface.co left intact
Username for 'https://huggingface.co': agemagician
Password for 'https://[email protected]':
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'agemagician'
> GET /Rostlab/prot_t5_xl_bfd/info/refs?service=git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic YWdlbWFnaWNpYW46VWRjXzEyMDA=
User-Agent: git/2.17.1
Accept: */*
Accept-Encoding: gzip
Accept-Language: C, *;q=0.9
Pragma: no-cache
< HTTP/1.1 200 OK
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:43:59 GMT
< Content-Type: application/x-git-receive-pack-advertisement
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: huggingface-moon
<
* Connection #0 to host huggingface.co left intact
Counting objects: 4, done.
Delta compression using up to 80 threads.
Compressing objects: 100% (3/3), done.
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'agemagician'
> POST /Rostlab/prot_t5_xl_bfd/git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic YWdlbWFnaWNpYW46VWRjXzEyMDA=
User-Agent: git/2.17.1
Content-Type: application/x-git-receive-pack-request
Accept: application/x-git-receive-pack-result
Content-Length: 4
* upload completely sent off: 4 out of 4 bytes
< HTTP/1.1 200 OK
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:44:02 GMT
< Content-Type: application/x-git-receive-pack-result
< Content-Length: 0
< Connection: keep-alive
< X-Powered-By: huggingface-moon
<
* Connection #0 to host huggingface.co left intact
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'xxxxxx'
> POST /Rostlab/prot_t5_xl_bfd/git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic YWdlbWFnaWNpYW46VWRjXzEyMDA=
User-Agent: git/2.17.1
Accept-Encoding: gzip
Content-Type: application/x-git-receive-pack-request
Accept: application/x-git-receive-pack-result
Transfer-Encoding: chunked
Writing objects: 100% (4/4), 4.92 GiB | 23.17 MiB/s, done.
Total 4 (delta 2), reused 1 (delta 0)
* Signaling end of chunked upload via terminating chunk.
* Signaling end of chunked upload via terminating chunk.
* The requested URL returned error: 504 Gateway Time-out
* stopped the pause stream!
* Closing connection 0
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
## Expected behavior
I had an issue before when uploading big models like T5 (#7480).
It was marked as solved after moving to model versioning:
https://github.com/huggingface/transformers/pull/8324
However, @patrickvonplaten fixed some problems in T5 weights :
https://github.com/huggingface/transformers/pull/8528
I tried to update the model but it still doesn't work, as shown above.
I tried several tricks like:
https://stackoverflow.com/questions/54061758/error-rpc-failed-http-504-curl-22-the-requested-url-returned-error-504-gatewa
But could not solve it.
Any ideas ? | 11-14-2020 23:54:59 | 11-14-2020 23:54:59 | Hi @agemagician ,
I think this is heavily related to #8480 and can only be fixed by HF team at the moment.<|||||>yep , @stefan-it . It seems to be a similar issue.
In this case @julien-c or @patrickvonplaten can someone update ProtT5-XL-BFD Model:
https://huggingface.co/Rostlab/prot_t5_xl_bfd
Configuration:
https://www.dropbox.com/s/pchr2vbvckanfu5/config.json?dl=1
Pytorch Model:
https://www.dropbox.com/s/2brwq1cvxo116c7/pytorch_model.bin?dl=1
Thanks in advance for your help<|||||>Yes @agemagician, for clarity, I will re-open your initial issue which is indeed not yet closed (although we now know how to support it)<|||||>I will be grateful if someone could update the model from the above links until you fix this issue.<|||||>Should be good, I updated tf and pt - lemme know if something didn't work :-) <|||||>Perfect, thanks a lot Patrick. We will test it right away.
Hopefully, the issue will be fixed soon, so I don't have to waste your time again with the 11B model soon.<|||||>> Perfect, thanks a lot Patrick. We will test it right away.
>
> Hopefully, the issue will be fixed soon, so I don't have to waste your time again with the 11B model soon.
No worries at all! Feel free to open a new issue -> doesn't take me long at all to upload it :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This can now be closed. Thanks for reporting!<|||||>```
/content/russianpoetrymany# git push
Counting objects: 11, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (11/11), 6.26 GiB | 91.43 MiB/s, done.
Total 11 (delta 0), reused 2 (delta 0)
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
@julien-c @patrickvonplaten uploading a mbart model from google colab and got it . any pointers ??<|||||>```Enumerating objects: 13, done.
Counting objects: 100% (13/13), done.
Delta compression using up to 16 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (12/12), 2.55 GiB | 1.03 MiB/s, done.
Total 12 (delta 1), reused 1 (delta 0), pack-reused 0
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
Everything up-to-date
```
Me too<|||||>```
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 1.80 GiB | 37.19 MiB/s, done.
Total 3 (delta 0), reused 1 (delta 0)
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
Same here with `!git push` in `colab`. `!GIT_CURL_VERBOSE=1 git push` does not help.<|||||>Those are usually transient errors, can you try again with flags `GIT_CURL_VERBOSE=1 GIT_TRACE=1` and paste the output to a gist if it occurs again?<|||||>This happened to me too:
```
β― git push origin main
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 75% (3/4), 2.00 GiB | 2.75 MiB/s
Writing objects: 100% (4/4), 2.22 GiB | 2.86 MiB/s, done.
Total 4 (delta 0), reused 1 (delta 0), pack-reused 0
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
Everything up-to-date
```<|||||>Having the same issue when uploading a ~6GB GPT-J model via command line:
```
git push
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 10 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 5.11 GiB | 624.00 KiB/s, done.
Total 3 (delta 1), reused 1 (delta 0), pack-reused 0
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly
```
It always seems to fail at around 5.11GB (the total file size is 6.32GB). When uploading the same file via the HF website interface I get an error 400. I tried it from various locations (wifis), always with the same behavior.
Any helpful advice on how to proceed?
<|||||>I was able to upload it after encountering the error, after restarting the run-time and using an earlier version of HuggingFace<|||||>Running into the same issue here trying to upload a 1.4Gb dataset
```
Enumerating objects: 23985, done.
Counting objects: 100% (23985/23985), done.
Delta compression using up to 4 threads
Compressing objects: 100% (23948/23948), done.
error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408
send-pack: unexpected disconnect while reading sideband packet
Writing objects: 100% (23949/23949), 1.43 GiB | 4.74 MiB/s, done.
Total 23949 (delta 2), reused 23948 (delta 1), pack-reused 0
fatal: the remote end hung up unexpectedly
Everything up-to-date
```<|||||>How did you all solve this?<|||||>@sgugger @thomwolf can you reopen this issue? |
transformers | 8,542 | closed | Failed in predict function after converting xlnet model to onnx format | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:ubuntu 18.04
- Python version:3.7.9
- PyTorch version (1.7.0):
- Using GPU in script:No
- Using distributed or parallel set-up in script?:No
- onnx 1.8.0
- onnxruntime 1.5.2
- transformers 3.5.1
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
TransfoXL/XLNet: @TevenLeScao
-->
TransfoXL/XLNet:
@TevenLeScao
## Information
Model I am using (XLNet):
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. train a PyTorch model with XLNetForSequenceClassification successfully
2. convert it to ONNX format with convert_graph_to_onnx.py, which emits some warnings:
python ../convert_graph_to_onnx.py --pipeline sentiment-analysis --framework pt --opset 12 ....
====== Converting model to ONNX ======
ONNX opset version set to: 12
Loading pipeline (model: /home/yhq/onnx/icd_model/9-4, tokenizer: /home/yhq/onnx/icd_model/9-4)
Using framework PyTorch: 1.7.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch'}
Found output output_1 with shape: {1: 'batch', 0: 'sequence'}
Found output output_2 with shape: {1: 'batch', 0: 'sequence'}
Found output output_3 with shape: {1: 'batch', 0: 'sequence'}
Found output output_4 with shape: {1: 'batch', 0: 'sequence'}
Found output output_5 with shape: {1: 'batch', 0: 'sequence'}
Found output output_6 with shape: {1: 'batch', 0: 'sequence'}
Found output output_7 with shape: {1: 'batch', 0: 'sequence'}
Found output output_8 with shape: {1: 'batch', 0: 'sequence'}
Found output output_9 with shape: {1: 'batch', 0: 'sequence'}
Found output output_10 with shape: {1: 'batch', 0: 'sequence'}
Found output output_11 with shape: {1: 'batch', 0: 'sequence'}
Found output output_12 with shape: {1: 'batch', 0: 'sequence'}
Ensuring inputs are in correct order
mems is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/torch/onnx/utils.py:1109: UserWarning: Provided key token_type_ids for dynamic axes is not a valid input/output name
warnings.warn("Provided key {} for dynamic axes is not a valid input/output name".format(key))
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/transformers/modeling_xlnet.py:1160: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
non_tgt_mask = -torch.eye(qlen).to(attn_mask)
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/transformers/modeling_utils.py:1673: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape == tensor_shape for input_tensor in input_tensors
3. the model I trained works well when predicting with the PyTorch checkpoint, however it fails when loading the model in ONNX format, with the error below:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_26' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:475 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 6 by 48
My code:
import onnxruntime, os, torch
from tqdm.auto import tqdm
from torch.utils.data import TensorDataset, SequentialSampler, DataLoader
from transformers import XLNetTokenizer

sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.intra_op_num_threads = 1
session = onnxruntime.InferenceSession(os.path.join(onnx_path, 'onnx_model.onnx'), sess_options)
tokenizer = XLNetTokenizer.from_pretrained(onnx_path)

input_ids = []
attention_masks = []
for name in name_list:
    encoded_dict = tokenizer.encode_plus(
        name,
        add_special_tokens=True,
        max_length=max_len,
        pad_to_max_length=True,
        return_attention_mask=True,
        return_tensors='pt',
    )
    input_ids.append(encoded_dict['input_ids'])
    attention_masks.append(encoded_dict['attention_mask'])

input_ids = torch.cat(input_ids, dim=0)
attention_mask = torch.cat(attention_masks, dim=0)
prediction_data = TensorDataset(input_ids, attention_mask)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=1)

for step in tqdm(prediction_dataloader, desc="Predicting"):
    step = tuple(t.detach().cpu().numpy() for t in step)
    ort_inputs = {'input_ids': step[0],
                  'attention_mask': step[1]}
    logits = session.run(None, ort_inputs)  # failed
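For debugging, the input and output shapes the exported graph actually expects can be printed like this (a small sketch reusing `onnx_path` from the snippet above), which may help pin down the Add-node broadcast error:

```python
import os
import onnxruntime

# assumes `onnx_path` from the snippet above; lists the names/shapes declared in the exported graph
session = onnxruntime.InferenceSession(os.path.join(onnx_path, "onnx_model.onnx"))
for inp in session.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```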
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 11-14-2020 20:49:35 | 11-14-2020 20:49:35 | I am facing a similar issue. Oddly, it seems to work when input length is 6. Could anyone help out here please? @EricYangCn, were you able to resolve this? Thank you.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,541 | closed | Specify label name | I'm trying to train a classification network on T5 encoder from this training code:
https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb
As I have a special output for label, I need to make trainer associate field from batch with field from my net output dictionary, as they represent output class label. How to do that? | 11-14-2020 20:40:34 | 11-14-2020 20:40:34 | |
transformers | 8,540 | closed | Add model_max_length property on fast tokenizers | Fast tokenizers don't have a model_max_length property like their "slow" counterparts.
```python
>>> t.model_max_len
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'BertTokenizerFast' object has no attribute 'model_max_len'
```
Signed-off-by: Morgan Funtowicz <[email protected]> | 11-14-2020 19:40:20 | 11-14-2020 19:40:20 | I think this is handled in the base tokenizer class, Morgan.<|||||>I think I've got an issue in my dev env, something fucked up... |
transformers | 8,539 | closed | TFLongformer Error: TypeError: __init__() missing 1 required positional argument: 'last_hidden_state' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.0
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFLongformer
The problem arises when using:
* [x] my own modified scripts: Scripts below.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: TriviaQA converted to SQuAD format
Hi, I'm getting an error when trying to fine-tune TFLongformer on TriviaQA using a simple architecture that is discussed [here](https://keras.io/examples/nlp/text_extraction_with_bert/) (I want to use TFLongformer instead of BERT and formatted-TriviaQA instead of SQuAD).
The error I'm getting is:
```
Traceback (most recent call last): [29/1207]
File "model_for_issues.py", line 102, in <module>
model = build_model()
File "model_for_issues.py", line 65, in build_model
embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state
File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base$
layer.py", line 926, in __call__
input_list)
File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base$
layer.py", line 1117, in _functional_construction_call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api
.py", line 258, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
path/to/project/env/lib/python3.7/site-packages/transformers/modeling_tf_longformer.py:1
760 call *
outputs = self.longformer(inputs, **kwargs)
path/to/project/env/lib/python3.7/site-packages/transformers/modeling_tf_longformer.py:1
509 call *
encoder_outputs = self.encoder(
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_laye
r.py:926 __call__ **
input_list)
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_laye
r.py:1147 _functional_construction_call
outputs)
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_laye
r.py:2568 _set_connectivity_metadata
outputs = nest.pack_sequence_as(outputs, outputs_copy)
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:570 pack_
sequence_as
return _pack_sequence_as(structure, flat_sequence, expand_composites)
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:533 _pack
_sequence_as
return sequence_fn(structure, packed)
path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:152 _sequ
ence_like
d = instance_type()
TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
```
_Remark: I changed the path to the working directory to "path/to/project" for my privacy_
## To reproduce
Steps to reproduce the behavior:
1. Convert TriviaQA data to SQuAD format using this script Longformerβs github https://github.com/allenai/longformer/blob/master/scripts/cheatsheet.txt#L29
2. Use this code to re-format the data to the format I work with:
```python
""" Usage:
TriviaQA_formatting.py [--in=INPUT_FILE] [--out=OUTPUT_FILE] [--debug]
"""
# External imports
import logging
import pdb
from pprint import pprint
from pprint import pformat
from docopt import docopt
from pathlib import Path
from tqdm import tqdm
import json
# Local imports
import numpy as np
#----
if __name__ == "__main__":
# Parse command line arguments
args = docopt(__doc__)
inp_fn = Path(args["--in"]) if args["--in"] else None
out_fn = Path(args["--out"]) if args["--out"] else Path("./tmp.out")
# Determine logging level
debug = args["--debug"]
if debug:
logging.basicConfig(level = logging.DEBUG)
else:
logging.basicConfig(level = logging.INFO)
# Start computation
data = []
with open(inp_fn, 'r') as f:
print("started loading")
data = json.load(f)["data"]
print("ended loading")
data_dict = {}
contexts = [entry["paragraphs"][0]["context"] for entry in tqdm(data)]
questions = [entry["paragraphs"][0]["qas"][0]["question"] for entry in tqdm(data)]
answer_texts = [entry["paragraphs"][0]["qas"][0]["aliases"] for entry in tqdm(data)]
start_indices = [None] * len(data)
end_indices = [None] * len(data)
for index, entry in enumerate(data):
# taking the first answer
answers = entry["paragraphs"][0]["qas"][0]["answers"]
start_indices[index] = answers[0]["answer_start"] if answers else -1
end_indices[index] = (answers[0]["answer_start"] + len(answers[0]["text"])) if answers else -1
data_dict = {
"context": contexts,
"question": questions,
"start_index": start_indices,
"end_index": end_indices,
"answer_texts": answer_texts,
}
with open(out_fn, 'w+') as f:
json.dump(data_dict, f)
# End
logging.info("DONE")
```
Scripts used to run this formatting code:
```
python TriviaQA_formatting.py --in=squad-wikipedia-train-4096.json --out=formatted_wiki_train_4096.json
```
3. Run this script for tokenization and training model:
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from transformers import TFLongformerModel,LongformerConfig, LongformerTokenizer, LongformerTokenizerFast
from longformer_encoding import *
import numpy as np
from tqdm import tqdm
import json
def encode_sentence(s, tokenizer):
s = s + tokenizer.sep_token
return list(tokenizer.encode(s))
def pad_to_max_length(t, max_len, padding_value):
t = t[:, :max_len]
paddings = [[0, 0], [0, max_len-tf.shape(t)[1]]]
return tf.pad(t, paddings, 'CONSTANT', constant_values=padding_value)
def longformer_encode(texts1, texts2, answer_end, tokenizer, max_len=4096):
num_examples = len(texts1)
sentence1 = tf.ragged.constant([encode_sentence(tokenizer.cls_token+s, tokenizer)
for s in tqdm(texts1)])
sentence2 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in tqdm(texts2)])
input_word_ids = tf.concat([sentence1, sentence2], axis=-1)
attention_mask = tf.ones_like(input_word_ids).to_tensor()
type_s1 = tf.ones_like(sentence1)
type_s2 = tf.zeros_like(sentence2)
global_attention_mask = tf.concat(
[type_s1, type_s2], axis=-1).to_tensor()
sentence2_start_index = [len(s1) for s1 in sentence1]
# get indices of examples to ignore:
valid_sample_indices = []
for i in tqdm(range(num_examples)):
if sentence2_start_index[i] + answer_end[i] <= max_len:
valid_sample_indices.append(i)
inputs = {
'input_ids': pad_to_max_length(input_word_ids.to_tensor(), max_len, tokenizer.pad_token_id),
'attention_mask': pad_to_max_length(attention_mask, max_len, 0),
'global_attention_mask': pad_to_max_length(global_attention_mask, max_len, 0),
}
return inputs, valid_sample_indices
def read_data_file(fn):
with open(fn, 'r') as f:
data = json.load(f)
answer_exists_slice = np.ndarray.flatten(np.argwhere(np.array(data["end_index"]) != -1))
return {key:np.ndarray.tolist(np.array(value)[answer_exists_slice]) for (key,value) in data.items()}
def build_model( max_len=4096):
config = LongformerConfig()
LongformerModel = TFLongformerModel(config=config)
encoder = LongformerModel.from_pretrained('allenai/longformer-base-4096',return_dict=True)
input_ids = layers.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")
global_attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32, name="global_attention_mask")
embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation(keras.activations.softmax)(start_logits)
end_probs = layers.Activation(keras.activations.softmax)(end_logits)
model = keras.Model(
inputs=[input_ids, attention_mask],
outputs=[start_probs, end_probs],
)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)
optimizer = keras.optimizers.Adam(lr=5e-5)
model.compile(optimizer=optimizer, loss=[loss, loss])
model.summary()
return model
# Preprocessing
train_fn = "data/formatted_wiki_train_4096.json"
data = read_data_file(train_fn)
print("One data example:\n Question: {}\ncontext: {}\nstart_index: {}".format(data['question'][0], data['context'][0][:200], data["start_index"][0]))
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
inputs, valid_sample_indices = longformer_encode(data["question"], data["context"], data["end_index"], tokenizer, 4096)
valid_samples_slice = np.array(valid_sample_indices)
clean_inputs = {key:np.array(value)[valid_samples_slice] for (key,value) in inputs.items()}
clean_data = {key:np.array(value)[valid_samples_slice] for (key,value) in data.items()}
clean_data["encoded_input"] = clean_inputs
x_train = clean_data["encoded_input"]
y_train = [clean_data["start_index"], clean_data["end_index"]]
model = build_model()
model.fit(
x_train,
y_train,
epochs=3,
verbose=2,
batch_size=1,
)
```
###### Training sample example:
```
One data example:
Question: Ownership of which worldwide publishing concern gave Conrad Black control of the Daily Telegraph?
context: Conrad Moffat Black , Baron Black of Crossharbour , ( born 25 August 1944 ) is a Canadian-born British former newspaper publisher and author . He is a non-affiliated life peer .
Black controlled H
start_index: 199
```
## Expected behavior
The model should start training.
| 11-14-2020 17:33:33 | 11-14-2020 17:33:33 | Hey @SapirWeissbuch - it's very hard for us to debug such long and user-specific examples that also require external files.
I'd recommend that you either:
1) Narrow down the error much more and give us a 10-liner to reproduce the error (also making sure that it is in fact an error and not a wrong pre-processing step); a sketch of such a stripped-down reproduction is included after this list
OR
2) Ask your question on the forum https://discuss.huggingface.co/
OR
3) Make a google colab (also keeping it to an absolute minimum of lines) to reproduce your error which makes it easier for us to quickly debug the problem.
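For option 1), here is a stripped-down sketch (distilled only from the script posted above, nothing new added) that should either trigger the same `TypeError` or rule out the pre-processing:

```python
import tensorflow as tf
from transformers import TFLongformerModel

encoder = TFLongformerModel.from_pretrained("allenai/longformer-base-4096", return_dict=True)

input_ids = tf.keras.layers.Input(shape=(4096,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(4096,), dtype=tf.int32, name="attention_mask")

# the reported traceback points at this call, i.e. tracing the encoder inside the Keras functional API
embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=[embedding])
model.summary()
```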
We are only 4 people maintaining the library and sadly don't find the time to debug such longer issues. Hope you understand and looking forward for a pin-pointed error description :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@SapirWeissbuch , I'm also facing this issue, were you able to resolve this? I get this error when running training for "allenai/longformer-base-4096" model with distributed training |
transformers | 8,538 | closed | TFBertModel not working at all with any type of model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: macOS
- Python version: 3.7
- PyTorch version (GPU?): NO
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert "dbmdz/bert-base-italian-xxl-cased"
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
<---------------------------------------------->
Hi @LysandreJik , @stefan-it , @sgugger , I'm trying to use `dbmdz/bert-base-italian-xxl-cased` for creating a Keras model for a classification task.
I've followed the documentation but I continue to receive the following error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[5,0] = 102 is not in [0, 2)
[[node functional_1/bert/embeddings/token_type_embeddings/embedding_lookup (defined at /anaconda3/envs/profanity-detector/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:186) ]] [Op:__inference_train_function_29179]
```
This is the model:
```
from transformers import TFBertModel, BertTokenizer
bert_model = TFBertModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
input_ids = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)
token_type_ids = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)
attention_mask = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)
seq_output, _ = bert_model({
"input_ids": input_ids,
"token_type_ids": token_type_ids,
"attention_mask": attention_mask
})
pooling = tf.keras.layers.GlobalAveragePooling1D()(seq_output)
dropout = tf.keras.layers.Dropout(0.2)(pooling)
output = tf.keras.layers.Dense(constants.CLASSES, activation="softmax")(dropout)
model = tf.keras.Model(
inputs=[input_ids, token_type_ids, attention_mask],
outputs=[output]
)
model.compile(optimizer=tf.optimizers.Adam(lr=0.00001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```
My dataset is tokenized by this method:
```
def map_to_dict(self, input_ids, attention_masks, token_type_ids, labels):
return {
"input_ids": input_ids,
"token_type_ids": token_type_ids,
"attention_mask": attention_masks,
}, labels
def tokenize_sequences(self, tokenizer, max_sequence_length, data, labels):
try:
token_ids = []
token_type_ids = []
attention_mask = []
for sentence in data:
bert_input = tokenizer.encode_plus(
sentence,
add_special_tokens=True, # add [CLS], [SEP]
max_length=max_sequence_length, # max length of the text that can go to BERT
truncation=True,
pad_to_max_length=True, # add [PAD] tokens
return_attention_mask=True # add attention mask to not focus on pad tokens
)
token_ids.append(bert_input["input_ids"])
token_type_ids.append(bert_input["token_type_ids"])
attention_mask.append(bert_input["attention_mask"])
return tf.data.Dataset.from_tensor_slices((token_ids, token_type_ids, attention_mask, labels)).map(self.map_to_dict)
except Exception as e:
stacktrace = traceback.format_exc()
logger.error("{}".format(stacktrace))
raise e
ds_train_encoded = tokenize_sequences(tokenizer, 512, X_train, y_train).shuffle(10000).batch(6)
```
X_train examples:
```
["Questo video Γ¨ davvero bellissimo", "La qualitΓ del video non Γ¨ proprio il massimo"......]
```
y_train examples:
```
[[1], [0]...]
```
I continue to receive the error described before.
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[5,0] = 102 is not in [0, 2)
[[node functional_1/bert/embeddings/token_type_embeddings/embedding_lookup (defined at /anaconda3/envs/profanity-detector/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:186) ]] [Op:__inference_train_function_29179]
```
If I try to use TFBertForSequenceClassification everything works fine (for this reason I'm excluding tokenization problems).
Can you please provide a solution or a well documented guide for using TFBertModel class with Keras model (I cannot find it)?
Thank you
| 11-14-2020 15:22:13 | 11-14-2020 15:22:13 | Hi! Thanks for opening such a detailed issue!
Let me ping @jplu, the TensorFlow master so that he may help you.<|||||>Hello!
This is because you are not updating your embedding size for `token_type_embeddings`. If you check the config for the `dbmdz/bert-base-italian-xxl-cased` model carefully, you will see:
```
BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 32102
}
```
And `type_vocab_size` equals `2`, while in your example you are passing a different value; that's why the error tells you that it cannot find any lookup in the range `[0, 2)`.
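For example, a quick sanity check on your own pre-processing (just a sketch with the same checkpoint; every token type id has to stay below `type_vocab_size`):

```python
from transformers import BertConfig, BertTokenizer

config = BertConfig.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")

enc = tokenizer.encode_plus(
    "Questo video è davvero bellissimo",
    add_special_tokens=True,
    max_length=32,
    truncation=True,
    pad_to_max_length=True,
)
# every value fed to the token type embedding must be in [0, config.type_vocab_size)
assert max(enc["token_type_ids"]) < config.type_vocab_size
```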
<|||||>Hi @jplu , thank you for your answer. Now it works!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I am trying this exact thing and I get an error on this line:
net = tf.keras.layers.Dense(512, activation='relu')(seq_output)
Inputs to a layer should be tensors. Got: last_hidden_state
Any ideas? |
transformers | 8,537 | closed | Add a new model ConvBert | # 🌟 New model addition
Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost.
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (https://github.com/yitu-opensource/ConvBert)
* [x] the model weights are available: (https://drive.google.com/drive/folders/1pSsPcQrGXyt1FB45clALUQf-WTNAbUQa)
* [x] who are the authors: (@zihangJiang @zhoudaquan)
| 11-14-2020 09:29:35 | 11-14-2020 09:29:35 | There is one implementation: https://github.com/JunnYu/ConvBert_huggingface<|||||>I have implemented this model in [https://github.com/gitabtion/ConvBert-PyTorch](https://github.com/gitabtion/ConvBert-PyTorch), and it passes the unit tests right now. :heavy_check_mark:<|||||>> I have implemented this model in https://github.com/gitabtion/ConvBert-PyTorch, and it passes the unit tests right now. heavy_check_mark
how about a conversion script? |
transformers | 8,536 | closed | Pretrain PEGASUS from scratch | I want to pre-train the PEGASUS model from scratch on a language other than English. Is there any way to do this using the Hugging Face APIs? The source code released by the authors is complicated to use for pre-training, and little documentation is available on how to do this. | 11-14-2020 08:40:32 | 11-14-2020 08:40:32 | @patil-suraj or @patrickvonplaten can chime in if I'm wrong, but I believe we currently only have fine-tuning & distillation schemes for the BART-family models, no pre-training.<|||||>Hey @EKebriaei - yeah we sadly don't have any pre-training notebooks for pegasus yet. Are you looking for the summary specific pre-training of pegasus or just the BART-like denoising pre-training? <|||||>> Hey @EKebriaei - yeah we sadly don't have any pre-training notebooks for pegasus yet. Are you looking for the summary specific pre-training of pegasus or just the BART-like denoising pre-training?
I want to pre-train pegasus on a language other than English. <|||||>Yeah, we don't have a script or good documentation for this yet.
cc https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819<|||||>> Yeah, we don't have a script or good documentation for this yet.
>
> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)
I have some dependency problems when compiling this: https://github.com/google-research/pegasus/blob/master/pegasus/ops/pretrain_parsing_ops.cc
Do you have any comments that help?<|||||>This PR will enable a pretraining script: #8731<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>> Yeah, we don't have a script or good documentation for this yet.
>
> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)
Could we follow the same approach you (@patrickvonplaten) provided [here](https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271) for pretraining BART in order to pretrain PEGASUS? PEGASUS also has a GSG training objective on top of the BART-like denoising, as detailed in the original [paper](https://arxiv.org/pdf/1912.08777.pdf).
GSG works by masking the most important sentences according to ROUGE; the targets are then the missing sentences.
So my attempt, adapting your code, would be:
```
from transformers import PegasusTokenizer, PegasusForConditionalGeneration, PegasusConfig
tok = PegasusTokenizer.from_pretrained("google/pegasus")
model = PegasusForConditionalGeneration(PegasusConfig())
input_string = ["Pegasus is <mask_2> . <mask_1> it <mask_2> the model ."
decoder_input_string = "<s> It is pure white ."
labels_string = "It is pure white . <eos>"
input_ids = tok(input_string, add_special_tokens=False, return_tensors="pt").input_ids
decoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors="pt").input_ids
labels = tok(labels_string, add_special_tokens=False, return_tensors="pt").input_ids
loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]
```
Does this look reasonable (the selection strategy of masked sentences will naturally need to be implemented)? @patrickvonplaten
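For completeness, a rough sketch of what I have in mind for that selection strategy (my own simplification using the `rouge-score` package; the official PEGASUS code uses more elaborate Ind/Seq and Orig/Uniq variants, so treat this only as an illustration):
```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def select_gap_sentences(sentences, gap_ratio=0.3):
    # score each sentence by ROUGE1-F against the rest of the document
    # and keep the indices of the top-scoring ones
    scored = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        scored.append((scorer.score(rest, sent)["rouge1"].fmeasure, i))
    n_gap = max(1, int(len(sentences) * gap_ratio))
    return sorted(i for _, i in sorted(scored, reverse=True)[:n_gap])

def build_gsg_example(sentences, mask_token="<mask_1>"):
    # replace the selected sentences with the mask token; the removed
    # sentences (kept in document order) become the target sequence
    gap_idx = set(select_gap_sentences(sentences))
    source = " ".join(mask_token if i in gap_idx else s for i, s in enumerate(sentences))
    target = " ".join(s for i, s in enumerate(sentences) if i in gap_idx)
    return source, target

src, tgt = build_gsg_example(
    ["Pegasus is mythical .", "It is pure white .", "It names the model ."]
)
```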
<|||||>@Skylixia - yes this looks reasonable to me! I guess in the original PEGASUS paper another masking loss was added on top of the encoder to predict the <mask_2> tokens, which would be difficult here (but should also be feasible). But this looks like the right approach to me!<|||||>Hi. I've been struggling with a pretty simple issue trying to get the above code to work.
Essentially, the Pegasus tokenizer's eos is `</s>` (not `<eos>` as mentioned above) and it does not seem to have a bos symbol. So no matter what combination I try, I keep getting a ValueError as the lengths of the label and decoder inputs don't match.
I tried to follow what happens in [BART](https://github.com/huggingface/transformers/issues/5096), but the following does not work:
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'google/pegasus-xsum'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
input_string = ["Pegasus is mythical . <mask_1> it names the model ."]
decoder_input_string = ["<s>It is pure white . "]
labels_string = ["It is pure white .</s>"]
input_ids = tokenizer(input_string, add_special_tokens=False, return_tensors="pt").input_ids
decoder_input_ids = tokenizer(decoder_input_string, add_special_tokens=False, return_tensors="pt").input_ids
labels = tokenizer(labels_string, add_special_tokens=False, return_tensors="pt").input_ids
loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]
```
If I try to run this, I get `Expected input batch_size (10) to match target batch_size (7).` Complete stack trace:
```
---> 15 loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]
16 # for _ in range(1_000):
17 # loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1285 if labels is not None:
1286 loss_fct = CrossEntropyLoss()
-> 1287 masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
1288
1289 if not return_dict:
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
959
960 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 961 return F.cross_entropy(input, target, weight=self.weight,
962 ignore_index=self.ignore_index, reduction=self.reduction)
963
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2466 if size_average is not None or reduce is not None:
2467 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2469
2470
/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2259
2260 if input.size(0) != target.size(0):
-> 2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
ValueError: Expected input batch_size (10) to match target batch_size (7).
```<|||||>I have opened a new issue with complete detail (and a corrected example) here: https://github.com/huggingface/transformers/issues/11541<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Yeah, we don't have a script or good documentation for this yet.
>
> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)
@patrickvonplaten Any update on this? I am planning on researching abstractive summarization in a non-English language and the PEGASUS model seems to be a worthwhile model to pursue. It would be great if you could either direct me to any resources or suggest another model to pursue in my project. Thanks! |
transformers | 8,535 | closed | [doc] typo fix | s/e.g./i.e./ as what follows is not an example, but a "that is" statement + fix language
@sgugger
| 11-14-2020 06:10:50 | 11-14-2020 06:10:50 | Apparently, there is no agreement on the comma: https://www.dailywritingtips.com/comma-after-i-e-and-e-g/, especially after i.e. https://english.stackexchange.com/questions/16172/should-i-always-use-a-comma-after-e-g-or-i-e.
Therefore it's probably best to stick to whatever manual of style this project prefers. And, perhaps, even start a brief guide for the `transformers` manual of style, where you also include all those stylistic recommendations, such as not documenting `None` as a default for optional objects, the item in question, etc.
My recommendation is that every time a style change is introduced, all docs are changed at once, so that developers will most likely copy/observe how the text is written and follow suit when they create new docs. It's not hard to do:
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e "s|e\.g\. |e.g., |g; s|i\.e\. |i.e., |g;" {} \;
```
this one fixes a missing comma.<|||||>Thanks @stas00 !<|||||>I had missed your comment. We can do the perl magic thing (though check me with privately before applying it as we are scheduling a few big changes today and tomorrow in preparation for the v4.0 so you don't want to do this in the middle of one ;-) ).
As for the documentation, we can add it to our [documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification) (which already contains the not defaults to `None`).<|||||>> I had missed your comment. We can do the perl magic thing (though check me with privately before applying it as we are scheduling a few big changes today and tomorrow in preparation for the v4.0 so you don't want to do this in the middle of one ;-) ).
You can run it any time you know it's a good time. All credits go to perl ;)
> As for the documentation, we can add it to our [documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification) (which already contains the not defaults to `None`).
That works. Probably start a "grammar style" section.
|
transformers | 8,534 | closed | mBart prefix and suffix for language id | ## Environment info
- `transformers` version: 3.5.1
### Who can help
mBART: @patrickvonplaten
documentation: @sgugger
## Information
It seems that there is an error or inconsistency between the [mbart tokenizer](https://github.com/huggingface/transformers/blob/55e8d0cea25be18b044523b30f4bef58fec63289/src/transformers/tokenization_mbart.py) and the inline comments/docs in the code.
The comments in the code explain:
```
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An MBART sequence has the following format, where ``X`` represents the sequence:
- ``input_ids`` (for encoder) ``X [eos, src_lang_code]``
- ``decoder_input_ids``: (for decoder) ``[tgt_lang_code] X [eos]``
```
but `set_tgt_lang_special_tokens` considers that `decoder_input_ids` is **X [eos][tgt_lang_code]** instead of **[tgt_lang_code] X [eos]**
```
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
self.cur_lang_code = self.lang_code_to_id[lang]
self.prefix_tokens = []
self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]
```
Shouldn't it be:
```
def set_tgt_lang_special_tokens(self, lang: str) -> None:
"""Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
self.cur_lang_code = self.lang_code_to_id[lang]
self.prefix_tokens = [self.cur_lang_code]
self.suffix_tokens = [self.eos_token_id]
```
| 11-14-2020 00:13:40 | 11-14-2020 00:13:40 | Double-checking the paper, the language id is indeed expected as a suffix, so there is not a bug in the code but an error in the inline documentation.<|||||>Hey @RQuispeC - thanks a lot for checking! Do you feel like opening a PR to fix the doc? That would be super helpful :-)<|||||>Just added the PR @patrickvonplaten :) <|||||>Hi, I've run into the same issue - but it seems that the documentation was correct, the target language tag should be the bos (and maybe also eos?).
eos is set to be the target language id here: https://github.com/pytorch/fairseq/blob/c8a0659be5cdc15caa102d5bbf72b872567c4859/fairseq/tasks/translation_from_pretrained_bart.py#L116
but then for inference, it is also used as bos by default, as far as I can tell:
https://github.com/pytorch/fairseq/blob/c8a0659be5cdc15caa102d5bbf72b872567c4859/fairseq/sequence_generator.py#L173
The paper states "A language id symbol <LID> is used as the initial token to predict the sentence. "
I get very strange predictions with the pretrained mBART model unless I set "decoder_start_token_id" to the target language id in model.generate().<|||||>Hi, I didn't check FAIR's code but if you check the figure 1 of the paper ("Multilingual Denoising Pre-Training (mBART)") you can see that the language id is at:
* eos of input (source lang text)
* bos of decoder (I guess this is what author's mean with "A language id symbol is used as the initial token to predict the sentence. ")
* eos of output (target lang text)
The documentation refers to input and output, but not the decoder . `decoder_start_token_id` sets the decoder input symbol so it's expected that not using it gives weird results.
That's what I understand, Am I missing something?<|||||>The tokenizer produces `X [eos, tgt_lang_code]` for the target text, but to be consistent with the pretrained model, it should be `[tgt_lang_code] X [tgt_lang_code] ` (refering to the situation where you have a target side text)? `decoder_start_token_id` is only for inference, not for training, right?
At least that's what I understand from the paper/fairseq code.<|||||>`decoder_start_token_id` is used during training and testing, it's the first token of the decoder, I found this easier to understand with the figure 1 of the paper.
Where did you get the input used for the pretrained model?<|||||>I don't have acces to the input used for pretraining (that would be so much easier :) ), I'm just trying to understand the format from the paper and code.
Maybe a stupid question, but how can I set `decoder_start_token_id` for training (in the forward pass of the module)? I could only find it as an argument of generate() (I'm using version 3.1.0).
Also I've noticed that the model seems to have problems in inference if I replace `eos` with the language token on the target side (it randomly stops generating mid sentence), so I guess `eos` has to be present for generate() to function properly (I did set `eos_token_id` to the language tag)?
<|||||>Not a stupid question :)
You can find an example of how to set the source and target languages here:
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py#L222
```
model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
```
Not sure if it's available in version 3.1, I was using version 3.5
The caveat is that (at least in up to version 3.5) the hugging face implementation only allows 1 language for all your data in source and 1 language for all your data in the target, hence `decoder_start_token_id`, `tgt_lang_code` and `src_lang_code` are just one string.
Depending in your problem this may be fine. For instance, if you are working on a translation then you are good to go with hugging face implementation because for translation you usually aim to translate one language (e.g. english) to another (e.g. spanish).
In my case I needed something more generic. I was working on a project that needed to train multiple languages at the same time, so my training data looked something like:
* source
| text | lang |
|:--------- | ----------: |
| my source text 1 | en_XX |
| my source text 2 | es_XX |
| my source text 3 | fr_FR |
* target
| text | lang |
|:--------- | ----------: |
| my target text 1 | en_XX|
| my target text 2 | es_XX|
| my target text 3 | fr_FR|
In this case `decoder_start_token_id`, `tgt_lang_code` and `src_lang_code` should be `['en_XX', 'es_XX', 'fr_FR']` but this is not supported by hugging face. I implemented a custom version of `MBartForConditionalGeneration` and `MBartTokenizer`.
About the `eos` I have just used it as in the seq2seq example so I'm not sure about the behavior you mention.
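To give an idea of the direction (a very reduced sketch of the approach, not my actual code): essentially the batch is built example by example, looking up each example's own language code for the suffix:

```python
from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")

def encode_batch(texts, langs, max_length=128):
    # tokenize each example with its own language code as suffix,
    # following the X [eos, lang_code] convention (padding/attention masks omitted here)
    batch = []
    for text, lang in zip(texts, langs):
        ids = tokenizer(text, add_special_tokens=False, truncation=True,
                        max_length=max_length - 2).input_ids
        batch.append(ids + [tokenizer.eos_token_id, tokenizer.lang_code_to_id[lang]])
    return batch

source_ids = encode_batch(["my source text 1", "my source text 2"], ["en_XX", "es_XX"])
```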
<|||||>Thank you for the explanations! There are major differences to the (old) version I'm using, so I think the best way for me is to update my code to work with the newest huggingface version. Hopefully, that will solve the problem :) But just in case, where would I find your custom versions of `MBartForConditionalGeneration` and `MBartTokenizer` ?<|||||>sorry for answering so late, I was really busy at work.
I'm not sure if I can open source it at this time, but the main idea was to check the flow of the parameters `tgt_lang_code` and `src_lang_code` and then update the function calls to support lists of strings instead of only single strings.<|||||>I find this problem too
```
class MyMBartTokenizer(MBartTokenizer):
    def set_tgt_lang_special_tokens(self, lang: str) -> None:
        """Reset the special tokens to the target language setting. Prefix=[tgt_lang_code], suffix=[eos]."""
        self.cur_lang_code = self.lang_code_to_id[lang]
        self.prefix_tokens = [self.cur_lang_code]
        self.suffix_tokens = [self.eos_token_id]
``` |
transformers | 8,533 | closed | Bertabs example: index_select(): Expected dtype int64 for index | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.8.0
- PyTorch version (GPU?): torch==1.7.0
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj
## Information
Following the example in the seq2seq/bertabs readme.md.
I am getting this error:
```
File "/code/tools/transformers/examples/seq2seq/bertabs/modeling_bertabs.py", line 919, in _fast_translate_batch
alive_seq = torch.cat([alive_seq.index_select(0, select_indices), topk_ids.view(-1, 1)], -1)
RuntimeError: index_select(): Expected dtype int64 for index
```
In a debugger, I see that the 'select_indices' parameter is a tensor of floats.
I don't understand the beam mechanism, so I don't know where to start troubleshooting this.
Any help would be great!
-Tim | 11-13-2020 23:42:47 | 11-13-2020 23:42:47 | Hello! The `bert_abs` example is not maintained anymore, and should be moved to `examples/contrib/legacy`.
The recommended way of training sequence-to-sequence models is described in the `examples/seq2seq/README.md` file. What are you trying to do with `bertabs`, so that we may help you find what you need?<|||||>Hi!
Thanks for your response.
I'm just starting to experiment with abstractive text summarization.
Is this something I should look for in the Hugging Face tools and samples?
Thanks again,
Tim<|||||>I believe abstractive text summarization is implemented in the `seq2seq` examples, as the XSUM models were trained to do abstractive text summarization.
Have you taken a look at the summarization examples in https://github.com/huggingface/transformers/tree/master/examples/seq2seq?
@patil-suraj may also be of help.<|||||>Thanks again!
I'll take a look at the seq2seq examples.
-Tim<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,532 | closed | converting tensorflow checkpoint to pytorch | @LysandreJik , I am trying to convert REALM checkpoints in google-research/language/ to a pytorch checkpoint.
One of the arguments to convert_bert_original_tf_checkpoint_to_pytorch.py is the BERT config.json file. I don't see this file in the model directory. I just wanted to confirm that I can use any bert_config.json?
From the code, I see:
" help="The config json file corresponding to the pre-trained BERT model. \n"
"This specifies the model architecture.",
| 11-13-2020 23:32:29 | 11-13-2020 23:32:29 | |
transformers | 8,531 | closed | [models website: files section] various issues/suggestions for a better UI | Models website - files section: suggestions/issues:
1. [ ] `git lfs install` not working
I tried to download a model as instructed here:
https://huggingface.co/facebook/bart-large/tree/main#how-to-use
```
$ git lfs install
git: 'lfs' is not a git command. See 'git --help'.
The most similar commands are
last
ls
```
I had to do:
```
apt install git-lfs
```
and then it worked:
```
$ git lfs install
Updated git hooks.
Git LFS initialized.
```
Perhaps it needs a link to:
https://docs.github.com/en/free-pro-team@latest/github/managing-large-files/installing-git-large-file-storage
if apt doesn't have it or a user is on a different setup.
2. [ ] would it be possible to rename "Use in transformers" to "Use/Download" - it wasn't obvious to look there for download instructions. I was trying to click on the files.
3. [x] the fact that the files in the [listing](https://huggingface.co/facebook/bart-large/tree/main#) are clickable and some of them are containing the final data and others contain just a reference and no way to get to the final data is kind of inconsistent and confusing. at the very least the ones with a reference instead of data should have a link to how to get to the data.
4. [ ] ` .gitattributes` is irrelevant to the user in that listing, it's a circumstantial file and not part of the model files, IMHO ;)
5. [ ] <title> is missing (uses generic title) - not optimal for bookmarking/email forwarding
@julien-c | 11-13-2020 20:34:59 | 11-13-2020 20:34:59 | Our assumption was that git-lfs was pretty widespread, but it's an assumption we'd like to check in the coming weeks, and if not, explain/document better. I like the official website: https://git-lfs.github.com/
Let's keep this issue open to find out if many users have issues/questions?
On your second bullet point, we just pushed an update where we display file sizes and download links: https://huggingface.co/facebook/bart-large/tree/main#
let me know if this helps.
<|||||>> Our assumption was that git-lfs was pretty widespread, but it's an assumption we'd like to check in the coming weeks, and if not, explain/document better. I like the official website: https://git-lfs.github.com/
I'm not attached to what you link to ;) I have never used it before.
Probably the most ideal solution is for `git` to be made more smart and inform users about this website, rather than fail with `'lfs' is not a git command`
> Let's keep this issue open to find out if many users have issues/questions?
Of course. I renamed it to something more generic then. Turned the bullets into completion boxes.
> On your second bullet point, we just pushed an update where we display file sizes and download links: https://huggingface.co/facebook/bart-large/tree/main#
> let me know if this helps.
This is awesome! Love it!
Can we add the download link from the file's page too in case the user missed the download button on the index page?
i.e from https://huggingface.co/facebook/bart-large/blob/main/pytorch_model.bin
- added an additional nice-to-have item in OP wrt `<title>`,
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,530 | closed | Switch `return_dict` to `True` by default. | # What does this PR do?
As part of the main changes we want to bring to Transformers v4, this PR switches the `return_dict` argument default from `False` to `True`. Most of the examples and documentation were already using this value (the PR removes `return_dict=True` in all those instances since this is now the default).
**New model outputs**
The new model outputs are dictionaries (instead of tuples) with a bit of added functionality: you can access elements by their keys, as attributes or even by index (to keep most of the backward compatibility). Here is an example:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
The `outputs` object contains loss and logits and you can access them as keys:
```python
loss = outputs["loss"]
logits = outputs["logits"]
```
or as attributes.
```python
loss = outputs.loss
logits = outputs.logits
```
In an IDE you can thus use autocomplete to help.
You can still index with integers or a slice:
```python
loss = outputs[0]
logits = outputs[1]
loss, logits = outputs[:2]
```
but you can't unpack the tuple directly as was done before:
```python
loss, logits = outputs
```
will return "loss" and "logits" in `loss` and `logits` (like dictionaries).
**Known caveats**
Torchscript is incompatible with dict outputs (until 1.7) so doesn't work with those new outputs. The flag `return_dict` will default to `False` if you use `torchscript=True` when building your model.
TF Saved models deal with dictionaries but not their subclasses, so if you save and load your model using `tf.keras.models.load_model`, it will output regular dictionaries. This means that you won't be able to index in the outputs with integers or attributes, only their string keys.
```python
tf.saved_model.save(model, tmpdirname)
model = tf.keras.models.load_model(tmpdirname)
outputs = model(inputs)
loss = outputs["loss"]
logits = outputs["logits"]
```
Apex used in optimization mode "O2" will also lose the output types of the model and return regular dictionaries. This means that you won't be able to index in the outputs with integers or attributes, only their string keys.
**Breaking change**
All code of the form
```python
loss, output = model(**inputs)
```
needs to be switched to
```python
loss, output = model(**inputs).to_tuple()
```
or use the key/attributes of the model output returned.
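A compact sketch of that migration, reusing the example from earlier in this description:
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)
outputs = model(**inputs, labels=labels)

loss, logits = outputs.to_tuple()[:2]          # explicit conversion back to a tuple
loss, logits = outputs.loss, outputs.logits    # or use the named fields directly
```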
Alternatively, you can pass `return_dict=False` when creating your model to get regular tuples as outputs. | 11-13-2020 20:22:32 | 11-13-2020 20:22:32 | > You can still index with integers or a slice:
> loss = outputs[0]
Doesn't seem to be the case if I use fp16/apex (possibly any fp16):
`finetune.py` currently fails with:
```
File "finetune.py", line 170, in training_step
loss_tensors = self._step(batch)
File "finetune.py", line 151, in _step
lm_logits = outputs[0]
KeyError: 0
```
Oddly enough all finetune tests pass, and it works with fp32, but if I pass apex (which tests don't test) it fails. Any ideas how fp16 could make a difference? I can't test native amp due to the leak.
This may have something to do with PL too, since the call stack goes through PL:
Full backtrace follows:
```
Traceback (most recent call last):
File "finetune.py", line 444, in <module>
main(args)
File "finetune.py", line 411, in main
trainer: pl.Trainer = generic_train(
File "/mnt/nvme1/code/huggingface/transformers-finetune-fixes/examples/lightning_base.py", line 398, in generic_train
trainer.fit(model)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 469, in fit
results = self.accelerator_backend.train()
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py", line 64, in train
results = self.train_or_test()
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 521, in train
self.train_loop.run_training_epoch()
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 539, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 691, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 477, in optimizer_step
self.trainer.accelerator_backend.optimizer_step(
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py", line 114, in optimizer_step
model_ref.optimizer_step(
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/core/lightning.py", line 1406, in optimizer_step
optimizer_closure()
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 681, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 770, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 324, in training_step
training_step_output = self.trainer.accelerator_backend.training_step(args)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py", line 72, in training_step
output = self.__training_step(args)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py", line 80, in __training_step
output = self.trainer.model.training_step(*args)
File "finetune.py", line 172, in training_step
loss_tensors = self._step(batch)
File "finetune.py", line 153, in _step
lm_logits = outputs[0]
KeyError: 0
```
I will fix the program, but thought I'd share this strange case.
I really should get away from finetune for awhile, it seems to be the doorway to some incessant work...
<|||||>Fixed with:
```
- lm_logits = outputs[0]
+ lm_logits = outputs["logits"]
```
can't use attributes either:
`outputs.logits` fails:
`AttributeError: 'dict' object has no attribute 'logits'`
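For reference, a minimal sketch of the workaround: under apex "O2" (and TF SavedModel reloads) the outputs are plain dictionaries, so only string-key access is guaranteed to work.
```python
# `outputs` is the model output from the _step above; with apex "O2" it is a
# plain dict, so attribute access and integer indexing are unavailable.
lm_logits = outputs["logits"]
loss = outputs.get("loss")  # works for both plain dicts and the new ModelOutput objects
```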
|
transformers | 8,529 | closed | Adding PrefixConstrainedLogitsProcessor | # What does this PR do?
This pull request adds a new decoding strategy that constrains the next token to generate based on a callable function. It is a new PR that fixes https://github.com/huggingface/transformers/pull/7784, since the generate function went through refactoring. @patrickvonplaten | 11-13-2020 19:48:10 | 11-13-2020 19:48:10 | @patrickvonplaten I think we are ready to merge now<|||||>Hey @nicola-decao,
Thanks a lot for re-implementing your PR! A couple of things I think we should improve.
1) I'm still not 100% sure how the API of `prefix_allowed_tokens_fn` would look like. Does it take one or two arguments? Also it would be very useful to correctly define the type hints in this case as I mentioned in the comment above
2) Can we add a test for this logits processor? It should be relatively straight-forward in /home/patrick/hugging_face/transformers/tests/test_generation_logits_process.py. The test should only test the `prefix_allowed_tokens_fn` not the whole generate function.
3) It would be awesome if the docstring could be a bit more explicit (*e.g.* including your paper and maybe even a tiny example)
4) Can we delete the `src/.DS_Store ` file?
5) (Not required for merge) I think we could speed up this function by just using torch.Tensor operations (see comment above), but I'm happy to keep this for a future PR.
Also, let me know if you need help! :-) <|||||>> Great looks good to me now! @yjernite - do you want to take a final look as well?
Yup! Will take a look by EOD<|||||>LGTM, thanks for implementing this functionality!<|||||>Great work @nicola-decao <|||||>> Great work @nicola-decao
Thanks :) |
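For reference, a hedged sketch of how the processor added in this PR is meant to be used through `generate` (the `prefix_allowed_tokens_fn` keyword and the two-argument callable follow the API discussed above; the checkpoint and the toy constraint are placeholders):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(["translate English to German: How old are you?"], return_tensors="pt")
# Toy constraint: only allow tokens that already appear in the source sentence.
allowed = [inputs.input_ids[0].tolist()]

def prefix_allowed_tokens_fn(batch_id: int, sent: torch.Tensor):
    # Called at every decoding step with the tokens generated so far;
    # returns the list of token ids allowed for the next position.
    return allowed[batch_id]

out = model.generate(
    **inputs,
    num_beams=4,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```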
transformers | 8,528 | closed | [T5] Fix load weights function | # What does this PR do?
Fixes #7791
The problem in 7791 was that the code that was used to convert mtf t5 to hf t5 was outdated and had a couple of bugs:
1) weight embedding were not tied -> random weight matrices were used instead
2) the tokenizer didn't add EOS which meant that the input_ids were wrong.
@agemagician I will merge this into master now so that you don't have to work on the hacky branch I did a while back. Your models should be convertible with the new code added here and then everything should work as expected.
The following code now produces the correct results:
```python
#!/usr/bin/env python3
from transformers import T5Tokenizer # noqa: E402
from transformers.convert_t5_original_tf_checkpoint_to_pytorch import ( # noqa: E402
convert_tf_checkpoint_to_pytorch,
)
from transformers.modeling_t5 import T5Config, T5ForConditionalGeneration # noqa: E402
import torch
path_to_tf_checkpoint = "t5_mesh_checkpoints"
config = T5Config.from_pretrained("t5-base")
config.d_ff = 2048
config.d_kv = 64
config.d_model = 512
config.num_decoder_layers = 6
config.num_layers = 6
config.num_heads = 8
config.vocab_size = 32128
config.tie_word_embeddings = True
config.save_pretrained(path_to_tf_checkpoint)
convert_tf_checkpoint_to_pytorch(path_to_tf_checkpoint, path_to_tf_checkpoint + "/config.json", path_to_tf_checkpoint)
input_txt = ["javascript documentation generation: function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"]
tok = T5Tokenizer.from_pretrained(path_to_tf_checkpoint)
model = T5ForConditionalGeneration.from_pretrained(path_to_tf_checkpoint, return_dict=True)
model.to("cuda")
input_ids = tok(input_txt, return_tensors="pt").input_ids
outputs = model.generate(input_ids.to("cuda"), num_beams=4)
print(tok.decode(outputs[0]))
```
=> gives "<pad> Returns true if the browser is a native element.</s> "
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
I checked that the new `convert_weights_function` works with previous mtf (t5==0.7.1) models. | 11-13-2020 19:14:11 | 11-13-2020 19:14:11 | Failing tests are unrelated<|||||>Thanks again @patrickvonplaten , you made my weekend π
|
transformers | 8,527 | closed | Add bart-large-mnli model card | # What does this PR do?
Adds a model card for facebook/bart-large-mnli. Since this model is currently the default for the zero-shot pipeline/widget, it adds an introduction to zero-shot text classification with references & example snippets. | 11-13-2020 18:19:03 | 11-13-2020 18:19:03 | looks good! |
transformers | 8,526 | closed | Problem while pretraining MLM from scratch using Transformers | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes(Two GPUs, Nvidia P100)
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
## Information
Model I am using (RoBERTa):
The problem arises when using:
* [* ] my own modified scripts: (give details below)
```python
config = RobertaConfig(
    vocab_size=30_000, max_position_embeddings=inputlen, num_attention_heads=12,
    num_hidden_layers=6, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1,
    initializer_range=0.2, intermediate_size=3072, type_vocab_size=1)

training_args = TrainingArguments(
    output_dir="/LAB_SHARED/Projects/011-Language_Model/Data/LanguageModel/BadgerBERT",
    overwrite_output_dir=True,
    num_train_epochs=10, do_train=True, do_eval=True, evaluate_during_training=True,
    per_gpu_train_batch_size=64, learning_rate=0.0004,
    gradient_accumulation_steps=32,
    logging_steps=128,
    warmup_steps=30000,
    weight_decay=0.01,
    eval_steps=128,
    save_steps=128,
    save_total_limit=2, prediction_loss_only=True)

trainer = Trainer(model=model, args=training_args, data_collator=data_collator,
                  train_dataset=dataset, eval_dataset=dataset1, prediction_loss_only=True)
```
The tasks I am working on is:
* [ *] my own task or dataset: (give details below)
Training masked language model
## To reproduce
Steps to reproduce the behavior:
1. Please use the following configuration
2. I used my own training dataset, but I guess there should be no difference. My input text file size is around 40GB
Traceback (most recent call last): (tqdm progress bar output interleaved: 279691/279692 [5:22:01<00:00, 15.48it/s])
File "CreateLanguageModel.py", line 84, in <module>
trainer.train()
File "**/lib64/python3.6/site-packages/transformers/trainer.py", line 785, in train
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
File "**/lib64/python3.6/site-packages/torch/serialization.py", line 361, in save
with _open_file_like(f, 'wb') as opened_file:
File "**/lib64/python3.6/site-packages/torch/serialization.py", line 229, in _open_file_like
return _open_file(name_or_buffer, mode)
File "**/lib64/python3.6/site-packages/torch/serialization.py", line 210, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '**/checkpoint-128/optimizer.pt'
## Expected behavior
Instead of continuing training, it stops. It seems that the problem is related to the checkpoints. I don't know why it tries to open (save) checkpoint-128 when it has already saved checkpoint-512!
Besides that, it already saved checkpoint-160, although I did not set any parameter to 160, so I don't know where the 160 came from.
I expected the code to just report the evaluation loss periodically, save checkpoints, and finish training.
| 11-13-2020 17:51:07 | 11-13-2020 17:51:07 | It looks like you're using Transformers v3.1.0. There have been quite q few improvements/bug fixes on Trainer since then. Could you check you still have the issue with the latest version?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,525 | closed | `TypeError: unhashable type: 'list'` when using DataCollatorForWholeWordMask | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-5.9.1-arch1-1-x86_64-with-arch
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
This is the code I am running
```python
from transformers import BertTokenizer, BertForMaskedLM, AdamW, BertConfig, get_linear_schedule_with_warmup, pipeline, DataCollatorForWholeWordMask, DataCollatorForLanguageModeling
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=True)
sent = "The capital of France is Paris."
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
encoded = tokenizer(
sent,
truncation = True,
add_special_tokens = True,
max_length = 64,
return_tensors = 'pt',
return_special_tokens_mask = True,
)
masked = data_collator([encoded])
print(masked)
```
It gives me the following error
```python
Traceback (most recent call last):
File "sadness.py", line 19, in <module>
masked = data_collator([encoded])
File "/home/csikfeng/.local/lib/python3.7/site-packages/transformers/data/data_collator.py", line 328, in __call__
token = self.tokenizer._convert_id_to_token(id)
File "/home/csikfeng/.local/lib/python3.7/site-packages/transformers/tokenization_bert.py", line 241, in _convert_id_to_token
return self.ids_to_tokens.get(index, self.unk_token)
TypeError: unhashable type: 'list'
```
But if instead I use `data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
` like this
```python
from transformers import BertTokenizer, BertForMaskedLM, AdamW, BertConfig, get_linear_schedule_with_warmup, pipeline, DataCollatorForWholeWordMask, DataCollatorForLanguageModeling
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=True)
sent = "The capital of France is Paris."
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
encoded = tokenizer(
sent,
truncation = True,
add_special_tokens = True,
max_length = 64,
return_tensors = 'pt',
return_special_tokens_mask = True,
)
masked = data_collator([encoded])
print(masked)
```
I do not get any errors
```
{'input_ids': tensor([[[ 101, 10105, 12185, 10108, 63184, 10112, 10124, 24289, 10107, 119,
102]]]), 'token_type_ids': tensor([[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]]), 'attention_mask': tensor([[1]]), 'labels': tensor([[[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]]])}
```
## Expected behavior
It should just perform whole word masking and not have errors. | 11-13-2020 16:37:42 | 11-13-2020 16:37:42 | Can reproduce, working on a fix right now.<|||||>So, `DataCollatorForWholeWordMask` has a few design flaws (it only works for BERT, for instance) and fixing it is not directly doable (basically, what it tries to do should be done at the tokenization level). I will adapt the `run_mlm_wwm` example to stop using it and we will probably deprecate it afterward.
For your specific problem, however, there is a fix, which is to remove the `return_tensors='pt'` from the tokenizer call.<|||||>This solves my problem, thanks!
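For reference, the corrected call from this thread, reusing the names from the snippet above (a minimal sketch of the suggested fix):
```python
# Drop return_tensors="pt" so the collator receives plain python lists
# instead of already-batched tensors.
encoded = tokenizer(
    sent,
    truncation=True,
    add_special_tokens=True,
    max_length=64,
    return_special_tokens_mask=True,
)
masked = data_collator([encoded])
```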
transformers | 8,524 | closed | LayoutLM Token Classification not learning | ## Environment info
- `transformers` version: 3.4.0
- Platform: in docker based on image: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
- Python version: 3.7.9
- PyTorch version (GPU?): 1.5.1+cu92 (True)
- Tensorflow version (GPU?): 2.2.0-rc0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using (Bert, XLNet ...): LayoutLMForTokenClassification
The problem arises when using: my own scripts
The tasks I am working on is: my own task
NER task. I've reproduced the implementation of Dataset, compute metrics (and other helper functions) as in the original repo [microsoft/layoutlm repo](https://github.com/microsoft/unilm/blob/master/layoutlm/examples/seq_labeling/run_seq_labeling.py)
When initially trying with the original repo and training script the model managed to learn and provided reasonable results after very few epochs. After implementing with Huggingface the model doesn't learn at all even after a much higher number of epochs.
## To reproduce
Model loading and trainer configuration:
```
config = LayoutLMConfig.from_pretrained(
<path_layoutlm_base_uncased>,
num_labels=<num_labels>,
cache_dir=None
)
model = LayoutLMForTokenClassification.from_pretrained(
<path_layoutlm_base_uncased>,
from_tf=bool(".ckpt" in <path_layoutlm_base_uncased>),
config=config,
cache_dir=None,
)
device = torch.device("cuda")
model.train().to(device)
training_args = TrainingArguments(
output_dir=<pytorch_model_dir>, # output directory
do_train=True,
do_eval=True,
do_predict=False,
evaluation_strategy=EvaluationStrategy.EPOCH,
num_train_epochs=<epochs>, # total # of training epochs
per_device_train_batch_size=<batch_size>, # batch size per device during training
per_device_eval_batch_size=<batch_size>, # batch size for evaluation
weight_decay=<weight_decay>, # strength of weight decay
learning_rate=<learning_rate>,
adam_epsilon=<adam_epsilon>,
logging_dir=<profile_logs>, # Tensorboard log directory
logging_steps=0, # it logs when running evaluation so no need to log on step interval
save_steps=0,
seed=seed,
overwrite_output_dir=True,
disable_tqdm=False,
load_best_model_at_end=True,
save_total_limit=10,
fp16=True,
)
trainer = MetaMazeTrainer(
model=model, # the instantiated π€ Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=test_dataset, # evaluation dataset
compute_metrics=compute_metrics,
)
```
## Expected behavior
Similar results to the original repo, given the same parameters to the trainer and the same Dataset after processing the data.
Is this due to the ongoing integration of this model? Is the setup wrong? | 11-13-2020 16:04:14 | 11-13-2020 16:04:14 | Is there any update on this issue?<|||||>Hi there!
I have been investigating the model by making [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318), and turns out it outputs the same tensors as the original repository on the same input data, so there are no issues (tested this both for the base model - `LayoutLMModel` as well as the models with heads on top - `LayoutLMForTokenClassification` and `LayoutLMForSequenceClassification`).
However, the model is poorly documented in my opinion, I needed to first look at the original repository to understand everything. I made a demo notebook that showcases how to fine-tune HuggingFace's `LayoutLMForTokenClassification` on the FUNSD dataset (a sequence labeling task): https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb
Let me know if this helps you!<|||||>I have experienced the same issue, I realized that model files from [here](https://huggingface.co/microsoft/layoutlm-base-uncased) are different than the weights in the original repo. I was using weights from the original repo and the model couldn't load them at the start of the training. So, I was starting from a random model instead of a pre-trained one. That's why it is not learning much in a down-stream task.
I solved the issue by using model files from [huggingface](https://huggingface.co/microsoft/layoutlm-base-uncased)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,523 | closed | Reformer model crashes during causal LM evaluation | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-47-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: -
### Who can help
I tried to dig into the code but could not find out why this is happening, so I am tagging @sgugger since this might be a `Trainer` related issue as well as @patrickvonplaten as I am using `ReformerWithLMHead`.
## Information
I am using `ReformerWithLMHead` with a custom dataset. I had already set up the masked language modeling task, so I moved on to causal LM, but something odd happened. My setup is based on the official notebook from @patrickvonplaten and it works fine for masked LM.
```python
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False
)
def compute_metrics(pred):
"""
pred.label_ids = (prediction_set_size, sequence_length)
pred.predictions = (prediction_set_size, sequence_length, vocab_size)
prob. dist. along vocab size
Since we do masked language modelling, most of the sequence is MASKED with -100
and only the non masked should be checked. :)
"""
non_masked_indices = (pred.label_ids != -100)
predictions = np.argmax(pred.predictions, axis=-1)
labels = pred.label_ids[non_masked_indices]
predictions = predictions[non_masked_indices]
return {"accuracy": np.mean(np.asarray(predictions == labels), dtype=np.float)}
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
data_collator=data_collator,
train_dataset=dataset,
eval_dataset=eval_dataset,
prediction_loss_only=False)
trainer.train()
```
I set up the collator for the non-MLM task but left the custom metric (also based on the official notebook) to calculate accuracy, since it should be the same as before (IMO). The tricky part is that if I explicitly set `prediction_loss_only=False`, I get an error indicating that the `logits` could not be nested-detached:
```bash
File "src/lm/reformer_casual_lm.py", line 146, in <module>
trainer.train()
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 786, in train
self._maybe_log_save_evalute(tr_loss, model, trial, epoch)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 843, in _maybe_log_save_evalute
metrics = self.evaluate()
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1251, in evaluate
output = self.prediction_loop(eval_dataloader, description="Evaluation")
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1348, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1452, in prediction_step
logits = nested_detach(logits)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 67, in nested_detach
return tensors.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
```
If I just delete the `prediction_loss_only=False` line the training runs but my custom metric is not evaluated since in the training class, the gathered labels and predictions are only not `None` when this value is set to `False`:
```python
eval_loss = eval_losses_gatherer.finalize()
preds = preds_gatherer.finalize() if not prediction_loss_only else None
label_ids = labels_gatherer.finalize() if not prediction_loss_only else None
if self.compute_metrics is not None and preds is not None and label_ids is not None:
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
```
## Expected behavior
I expect my custom metric to be evaluated and the training not to crash.
Thanks in advance.
| 11-13-2020 14:50:44 | 11-13-2020 14:50:44 | Mmm, looks like the reformer model is outputting some `None`s, which it shouldn't do. I can make a fix for that in `Trainer`, but the model itself should not do that. Looks like there is work for both of us @patrickvonplaten :-) |
transformers | 8,522 | closed | Update deepset/roberta-base-squad2 model card | Update model card since our v1 and v2 of the model are in this repo.
Note that accessing models doesn't seem to be working when referencing tag name #8521 | 11-13-2020 14:40:27 | 11-13-2020 14:40:27 | Really cool! |
transformers | 8,521 | closed | Tagged versions of model in new model hub don't work | Our model, deepset/roberta-base-squad2 was originally uploaded under the old style model hub.
I have committed a new version of the deepset/roberta-base-squad2 model onto the model hub using the new git based system introduced in transformers 3.5.0. I have 2 tags (v1.0 and v2.0) that I have also pushed to the repo. The tags show up in the model hub drop down but when I click on either of the tags, it says "Not Found: Error: Invalid rev id".
It seems I cannot load the models when I specify `revision=v1.0` or `revision=v2.0`. If I don't specify a revision, it seems to load a model though I'm not sure which. This is the code I used:
```
tokenizer = AutoTokenizer.from_pretrained(
"deepset/roberta-base-squad2",
revision="v2.0" # tag name, or branch name, or commit hash
)
```
What steps can I take so that I can access both versions through the model hub website, and by specifying name and revision?
Thanks,
Branden
@julien-c | 11-13-2020 14:17:51 | 11-13-2020 14:17:51 | Looking into it right now (cc @Pierrci)<|||||>looks like we were supporting lightweight tags but not annotated ones. <|||||>Should be fixed: https://huggingface.co/deepset/roberta-base-squad2/tree/v2.0<|||||>That fixed the problem! Thanks very much<|||||>Thanks, and please keep the feedback coming! <3 |
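For reference, the git distinction mentioned above, with the tag names from this issue (a hedged sketch):
```bash
git tag v1.0                                   # lightweight tag: already resolved fine on the hub
git tag -a v2.0 -m "roberta-base-squad2 v2"    # annotated tag: the case that returned "Invalid rev id"
git push origin v1.0 v2.0
```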
transformers | 8,520 | closed | Model sharing doc: more tweaks | cc @Pierrci | 11-13-2020 13:53:08 | 11-13-2020 13:53:08 | feel free to merge this when it's a good time! |
transformers | 8,519 | closed | MLflowCallback to log run_name argument | # 🚀 Feature request
When using the MLflowCallback (set as default for Trainer), I would like to log the `run_name` argument passed to TrainingArguments as the Run Name on the MLflow dashboard. Currently runs are being logged as nameless.
E.g. see below.

## Motivation
Trainer makes training π€ models so easy and MLflow is great for organising experiments/caching artifacts. I would like to make it easier to organise experimental runs and make research easier, particularly for larger teams. This feature would be a very simple patch on the original PR #8016
example usage
```
training_args = TrainingArguments(
label_names=['labels_t1', 'labels_t2'],
output_dir='./runs',  # output directory
run_name='multitask_clf_<run_name>',
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
)
```
## Your contribution
I can submit a PR
| 11-13-2020 12:07:03 | 11-13-2020 12:07:03 | Please don't hesitate to suggest a PR, `run_name` is there just for this reason!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>It will be useful to have this, and it will organize all the experiments better.<|||||>Is there any update on this issue and #12841? I have a very simple one-line solution of passing `args.run_name` (which currently serves for `wandb`) to `mlflow.start_run` that can fix this. I can submit a PR in case of need.<|||||>@HenryMaguire can you send link to your notebook or colab? |
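For reference, a hedged sketch of the one-line change proposed above (the exact integration point inside the MLflow callback is an assumption):
```python
import mlflow
from transformers import TrainingArguments

args = TrainingArguments(output_dir="./runs", run_name="multitask_clf_experiment")

# Forward TrainingArguments.run_name when the MLflow run is started,
# so runs no longer show up as nameless in the dashboard.
mlflow.start_run(run_name=args.run_name)
```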
transformers | 8,518 | closed | [T5] Bug correction & Refactor | # What does this PR do?
**!!!BUG DETECTION!!!**
While integrating T5v1.1/mT5, a bug in T5 was detected. T5 actually never uses a `relative_position_bias` in the `EncDecSelfAttention` layer. Previously we used a bi-directional `relative_position_bias` in the `EncDecSelfAttention`, which is wrong IMO (see https://github.com/huggingface/transformers/issues/6285#issuecomment-702371111 for reference). An integration test against original T5 model was added to make sure removing `relative_posiiton_bias` is the correct behavior (in case @craffel reads this - maybe you could confirm :-)).
Luckily, the bug did not significantly influence the results as can be seen by the very minor changes in the slow tests. This is also why it wasn't noticed earlier. => Oo all pre-trained & fine-tuned T5Models still work!
In addition this PR:
- Refactor: clean the code, and remove unnecessarily complicated code
- Remove `n_positions` / `max_position_embeddings` from the config, since T5 is not limited by a fixed learned position embedding matrix, see: https://github.com/huggingface/transformers/issues/8047
Fixes #8047
Also cc @agemagician for information (I highly doubt though that this will fix the problem we have in your case)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 11-13-2020 11:40:04 | 11-13-2020 11:40:04 | I have tested both "refactor_t5" and "major_t5_refactor" branches using "T5ForConditionalGeneration".
I didn't re-convert the Tensorflow to Pytorch, since you already told me the conversion process is not the problem.
It doesn't seem to have solved our issue yet, but it gives different rubbish output. Maybe it is a good step in the right direction.
Thanks Patrick.
<|||||>Hi @patrickvonplaten, does this PR affect the checkpoint one gets when calling `AutoModel.from_pretrained("t5-3b")`?
I am investigating why my results with transformers 3.3.1 and T5 changed and encountered this; according to the date it seems like it was merged in version 3.5.0. (edit: I see now it was merged in 4.0.0 https://github.com/huggingface/transformers/releases/tag/v4.0.0)
I wonder if this changes the T5 weights/checkpoint I get with my version? |
transformers | 8,517 | closed | XLM-RoBERTa tokenizer changes characters during tokenization | I'm using `transformers` 3.5.0.
Whenever the XLM-RoBERTa tokenizer encounters some characters such as '²' (superscript 2, `ord('²')` is 178), it converts them to other characters (in this example, to the plain '2', with `ord('2')` being 50).
That is, with `tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')`, `tokenizer.tokenize('²')` or, alternatively, `tokenizer.tokenize(chr(178))` returns `['▁2']`.
| 11-13-2020 09:53:07 | 11-13-2020 09:53:07 | Hi, this is unfortunately a limit of this tokenizer. We try to stay as close as possible to the original implementation, so we will not be able to change this behavior.<|||||>I see. Well, it's not a big deal, as the characters it replaces are quite rare and not really important, at least not for my application. So, I just replace them upfront in the text to make sure that the output of the tokenizer still matches the input.
Thanks for the awesome work you are doing by providing this and other libraries!<|||||>Glad you like it :) |
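For reference, a hedged sketch of the upfront replacement mentioned above, assuming the folding the tokenizer applies is NFKC-style compatibility normalization:
```python
import unicodedata

text = "10\u00b2 meters"            # contains the superscript two (ord 178)
normalized = unicodedata.normalize("NFKC", text)
print(normalized)                   # "102 meters", matching what the tokenizer would emit
```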
transformers | 8,516 | closed | SWA | # What does this PR do?
| 11-13-2020 09:44:08 | 11-13-2020 09:44:08 | |
transformers | 8,515 | closed | Adding the prepare_seq2seq_batch function to ProphetNet | # What does this PR do?
I tried to use ProphetNet with Seq2SeqTrainer, but it failed.
The error message showed that this is because the collator uses `prepare_seq2seq_batch()` in `_encode()`, but `prepare_seq2seq_batch()` is not implemented in the ProphetNet tokenizer.
I received kind advice in the HuggingFace forum and implemented the function.
https://discuss.huggingface.co/t/the-reason-prepare-seq2seq-batch-for-prophetnet-is-not-existed/1758
The modifications are as below:
- Add `prepare_seq2seq_batch()` in `/src/transformers/tokenization_prophetnet.py`.
- To allow `.view` in the loss computation in Seq2SeqTrainer, I added a check in `/src/transformers/modeling_prophetnet.py` that makes sure the logits tensor is contiguous.
I've checked it works on CPU and GPU as below:
```
!python finetune_trainer.py \
--learning_rate=3e-5 \
--do_train --do_eval --evaluate_during_training \
--max_source_length 511 \
--per_device_train_batch_size 2 \
--predict_with_generate \
--n_train 300 \
--n_val 100 \
--model_name_or_path microsoft/prophetnet-large-uncased \
--data_dir $XSUM_DIR \
--output_dir tmp_gpu \
--overwrite_output_dir
```
Although `PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES` is 512, a CUDA error occurs if `--max_source_length` is set to 512.
I'm sorry, but I have not been able to identify the cause of this.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
https://discuss.huggingface.co/t/the-reason-prepare-seq2seq-batch-for-prophetnet-is-not-existed/1758
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
https://github.com/forest1988/colaboratory/blob/main/prophetnet_seq2seqtrainer.ipynb
I'm sorry, I misunderstood what is being asked here. Now I understand that test code under `./tests/` is needed.
~~I'm working on this, but I'm getting errors in formatting etc. and using `black` won't fix it.~~
I added related content to the `test_tokenization_prophetnet.py`. I changed the environment and made it work again according to the instructions, and it seems that formatting with `black` works appropriately.
## Who can review?
@patrickvonplaten
@sshleifer
Thank you for kindly answering my questions in the forum! | 11-13-2020 09:24:27 | 11-13-2020 09:24:27 | To fix the check_code_quality failure, I ran `make style` in the repository, but it tries to re-format so many files.
Should I check some settings for `black`?<|||||>I changed the environment and made it work again according to the instructions, and it seems that formatting with black works appropriately. Sorry for bothering you.<|||||>@patrickvonplaten
Thank you for reviewing and merging the PR! I'm so happy to read your comment.
I haven't tried using the full dataset for fine-tuning ProphetNet with the seq2seq trainer, but I think I can try it by adding some modifications to my test code used during my implementation process.
I will try it and would like to post the results on the URL if I can get something interesting!
<|||||>@patrickvonplaten
I've just posted my fine-tuning experiment result on https://discuss.huggingface.co/t/how-can-i-do-text-summarization-using-prophetnet/1661/2.
I'm sorry it is not the case of using the full dataset.
Considering the time limitations of the execution environment, I used only about one-tenth of the dataset for now, but I think we could get better results if we used the entire dataset.<|||||>That's already of great help - thank you so much!<|||||>It's my pleasure! |
transformers | 8,514 | closed | How to pretrain the model (like Roberta) again? | I don't want to pretrain the model from scratch. I have some dataset related to my task. I want to pretrain the model from transformers the second time. Could someone give me some advice on how to do it or which document to read? Thank you! | 11-13-2020 08:37:58 | 11-13-2020 08:37:58 | The authors of the Transformers library wrote a script for this. You can find it under the examples folder -> language modeling [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling). <|||||>> The authors of the Transformers library wrote a script for this. You can find it under the examples folder -> language modeling [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling).
Thank you! I will check it out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
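For reference, a hedged sketch of invoking the language-modeling example referenced above for continued pretraining (flag names are assumptions based on that README and should be checked against the script's `--help`):
```bash
python run_mlm.py \
  --model_name_or_path roberta-base \
  --train_file path/to/domain_corpus.txt \
  --do_train \
  --output_dir ./roberta-domain-adapted
```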
transformers | 8,513 | closed | Using Pretrained BERT model to add additional words that are not recognized by the model | Hello,
I would like some help with adding additional words to an existing BERT model. I have two queries; kindly guide me:
I am working on an NER task for a specific domain.
There are a few words (I am not sure of the exact number) that BERT tokenizes as [UNK], but the model needs to recognize those entities. Fine-tuning the pretrained "bert-base-cased" model on my labeled data reaches up to 80% accuracy, but intuitively the model should learn better if it recognized all the entities.
1. Do I need to add those unknown entities to vocab.txt and train the model again?
2. Do I need to train the BERT model on my data from scratch?
Thanks...
| 11-13-2020 06:49:24 | 11-13-2020 06:49:24 | You don't need to do either of those things! You should add tokens to your tokenizer by leveraging the `add_tokens()` method, and then resize your model's embedding matrix.
Then, you should train your model on a dataset that has those entities so that it understands the meaning of these entities. It seems to be what you're doing here, so just make sure to add the tokens to your tokenizer first. You can see the doc about it [here](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens).
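A minimal sketch of the two steps described above (the entity strings and label count are placeholders):
```python
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

# Domain-specific strings that currently end up as [UNK]
num_added = tokenizer.add_tokens(["entity_one", "entity_two"])

# Grow the embedding matrix so the new token ids get (randomly initialized) vectors
model.resize_token_embeddings(len(tokenizer))
```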
Also, we try to keep the GitHub issues only for bugs and feature requests. Please ask questions/discussions on the [forum](https://discuss.huggingface.co) instead. Thanks! |
transformers | 8,512 | closed | Issue while model sharing and uploading on huggingface |
The model I am using (Bert, XLNet ...): Roberta for questionanswering
I am trying to follow the tutorial given in https://huggingface.co/transformers/model_sharing.html and while I am able to load my model in the local repository, I am unable to save my model and tokenizer using "model.save_pretrained("https://huggingface.co/saburbutt/testing")
tokenizer.save_pretrained("https://huggingface.co/saburbutt/testing")"
If I try to open the link it says "Cannot GET /saburbutt/testing/tokenizer_config.json"
When I try to run "echo "hello" >> README.md" or use git commands, it gives me the error "fatal: not a git repository (or any of the parent directories): .git". The task I am working on is SQuAD.
I expect the model to be saved in the huggingface/saburbutt/testing repository | 11-13-2020 05:21:25 | 11-13-2020 05:21:25 | You cannot save directly remotely like this (though it could be nice to be able to do this in the future, cc @madlag).
You need to first create a repo with `transformers-cli repo create`, or directly on the website.
Then clone it locally and add your files, then push.
Hopefully #8520 makes it clearer?
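For reference, a hedged sketch of that workflow (names taken from this thread; commands as described in the model sharing doc of that release):
```bash
transformers-cli login
transformers-cli repo create testing           # or create the repo on the website
git lfs install
git clone https://huggingface.co/saburbutt/testing
cd testing
# copy config.json, pytorch_model.bin, tokenizer files, ... into the clone, then:
git add .
git commit -m "Add model"
git push
```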
<|||||>yes. Thankyou. :)<|||||>Hello, I have the same issue with different outputs. I followed all steps in https://huggingface.co/transformers/model_sharing.html
in order: entered my account with transformers-cli login, created the repo, installed lfs, cloned the repo, added the BERT model via "git add BERTMODEL", committed and pushed, but I always got the same error.
remote:
remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains files larger than 10M.
remote: Please use https://git-lfs.github.com/ to store larger files.
remote: -------------------------------------------------------------------------
remote:
remote: Offending files:
remote: - BERTMODEL (ref: refs/heads/main)
To https://huggingface.co/Serdar/your-model-name
! [remote rejected] main -> main (pre-receive hook declined)
OS: Ubuntu 20.04
I already used "git lfs install" but I could not figure out this problem. I hope someone can help<|||||>@serdarakyol Please make sure you read the Getting started guide at https://git-lfs.github.com/. In your case I think you didn't lfs-track your actual model file<|||||>@julien-c Thank you so much. Fixed the problem
transformers | 8,511 | closed | Adding Confusion matrix support in Trainer | I want to add confusion matrix support in Trainer. It would be a useful addition. The only dependency would be `sklearn`, which this library already uses for metrics. It would allow users to better understand the predictions coming from the model. @sgugger Let me know if this is something you want to add.
| 11-13-2020 02:26:34 | 11-13-2020 02:26:34 | Hi there! We love contributions, but metrics should now be implemented directly in the datasets library, not inside Transformers. So you should check there if it does not already exist, and if not, suggest a Pr on that repository :-)<|||||>Okay then, I will close it. |
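In the meantime, a confusion matrix can already be produced locally by passing a custom `compute_metrics` function to the `Trainer`. A rough sketch (this is my own example, not an official feature of the library):
```python
import numpy as np
from sklearn.metrics import confusion_matrix

def compute_metrics(eval_pred):
    # eval_pred.predictions holds the raw logits, eval_pred.label_ids the gold labels
    preds = np.argmax(eval_pred.predictions, axis=-1)
    cm = confusion_matrix(eval_pred.label_ids, preds)
    return {"confusion_matrix": cm.tolist()}

# trainer = Trainer(model=model, args=args, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
```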
transformers | 8,510 | closed | Finetune TFBertForMaskedLM model.fit() ValueError | ## The Problem
I have been trying to train a TFBertForMaskedLM model with TensorFlow, but when I use model.fit() I always encounter an error. I hope someone can help and propose a solution.
## Reference Paper and sample output
The paper is "Conditional BERT for Contextual Augmentation". In short, the token_type_ids are replaced with label ids. For example, if the label of a sentence is 5, its length is 10, and max_sequence_length = 16, the processed output looks as follows:
```
input_ids = [101, 523, 791, 3189, 677, 5221, 524, 1920, 686, 102, 0, 0, 0, 0, 0, 0]
attention_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
token_type_ids = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 0, 0, 0, 0, 0, 0]
labels = [-100, -100, 791, -100, 677, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]
```
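For readers unfamiliar with the format: `labels` keeps the original token id only at the masked positions and uses -100 everywhere else, so the loss ignores the unmasked tokens. Below is a small sketch of how such labels can be built; the 15% masking rate and the tokenizer choice are my own assumptions for illustration:
```python
import numpy as np
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

def mask_inputs(input_ids, mask_prob=0.15):
    input_ids = np.array(input_ids)
    labels = np.full_like(input_ids, -100)           # -100 = ignored by the loss
    candidates = np.where(input_ids > 0)[0][1:-1]    # skip padding, [CLS] and [SEP]
    picked = np.random.choice(candidates,
                              max(1, int(len(candidates) * mask_prob)),
                              replace=False)
    labels[picked] = input_ids[picked]               # remember the original tokens
    input_ids[picked] = tokenizer.mask_token_id      # replace them with [MASK]
    return input_ids, labels
```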
## Environment
- tensorflow == 2.2.0
- huggingface == 3.5.0
- datasets == 1.1.2
- the dataset has 5 labels in total (1~5)
- GPU : GCP P100 * 1
## Dataset output (max_sequence_length=128, batch_size=1)
```python
{'attention_mask': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([ 101, 523, 791, 3189, 677, 5221, 524, 1920, 686,
4518, 6240, 103, 2466, 2204, 2695, 100, 519, 5064,
1918, 736, 2336, 520, 103, 2695, 1564, 4923, 8013,
678, 6734, 8038, 8532, 131, 120, 120, 8373, 119,
103, 9989, 103, 8450, 120, 103, 120, 12990, 8921,
8165, 102, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0], dtype=int32)>,
'labels': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
4634, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
4158, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, 8429, -100, 119, -100, -100, 100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100], dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>}
```
## Model code
```python
from transformers import AdamWeightDecay, TFBertForMaskedLM, BertConfig
def create_model():
configuration = BertConfig.from_pretrained('bert-base-chinese')
model = TFBertForMaskedLM.from_pretrained('bert-base-chinese',
config=configuration)
model.bert.embeddings.token_type_embeddings = tf.keras.layers.Embedding(5, 768,
embeddings_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02))
return model
model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.Mean(), tf.keras.metrics.SparseCategoricalAccuracy('accuracy')]
model.compile(optimizer = optimizer,
loss = loss,
metrics = metrics)
model.fit(tf_sms_dataset,
epochs=1,
verbose=1)
```
## Warning message when using TFBertForMaskedLM
```
Some layers from the model checkpoint at bert-base-chinese were not used when initializing TFBertForMaskedLM: ['nsp___cls']
- This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-chinese.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForMaskedLM for predictions without further training.
```
## Error Message
```
ValueError Traceback (most recent call last)
<ipython-input-42-99b78906fef7> in <module>()
5 model.fit(tf_sms_dataset,
6 epochs=1,
----> 7 verbose=1)
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
846 batch_size=batch_size):
847 callbacks.on_train_batch_begin(step)
--> 848 tmp_logs = train_function(iterator)
849 # Catch OutOfRangeError for Datasets of unknown size.
850 # This blocks until the batch has finished executing.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
625 # This is the first call of __call__, so we have to initialize.
626 initializers = []
--> 627 self._initialize(args, kwds, add_initializers_to=initializers)
628 finally:
629 # At this point we know that the initialization is complete (or less
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
504 self._concrete_stateful_fn = (
505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 506 *args, **kwds))
507
508 def invalid_creator_scope(*unused_args, **unused_kwds):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2444 args, kwargs = None, None
2445 with self._lock:
-> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2447 return graph_function
2448
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2775
2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
2778 self._function_cache.primary[cache_key] = graph_function
2779 return graph_function, args, kwargs
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2665 arg_names=arg_names,
2666 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667 capture_by_value=self._capture_by_value),
2668 self._function_attributes,
2669 # Tell the ConcreteFunction to clean up its graph once it goes out of
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
979 _, original_func = tf_decorator.unwrap(python_func)
980
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
982
983 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
439 # __wrapped__ allows AutoGraph to swap in a converted function. We give
440 # the function a weak reference to itself to avoid a reference cycle.
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds)
442 weak_wrapped_fn = weakref.ref(wrapped_fn)
443
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step **
self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize
trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_2/bert/embeddings/word_embeddings/weight:0', 'tf_bert_for_masked_lm_2/bert/embeddings/position_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/beta:0', 'tf_bert_for_masked_lm_2/bert/embeddings/embedding_1/embeddings:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/query/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/query/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/key/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/key/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/value/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/self/value/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/dense/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/attention/output/LayerNorm/beta:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/intermediate/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/intermediate/dense/bias:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/output/dense/kernel:0', 'tf_bert_for_masked_lm_2/bert/encoder/layer_._0/output/dense/bias:0', 'tf_bert_f...
```
I hope someone can help. Thanks a lot.
## Other Test
I used an English sentence to test. Example as follows:
```python
from transformers import TFBertForMaskedLM, BertConfig
def create_model():
configuration = BertConfig.from_pretrained('bert-base-uncased')
model = TFBertForMaskedLM.from_pretrained('bert-base-uncased',
config=configuration)
return model
model = create_model()
eng_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
token_info = eng_tokenizer(text="We are very happy to show you the π€ Transformers library.", padding='max_length', max_length=20)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.Mean(), tf.keras.metrics.SparseCategoricalAccuracy("acc")]
dataset = tf.data.Dataset.from_tensor_slices(dict(token_info))
dataset = dataset.batch(1).prefetch(tf.data.experimental.AUTOTUNE)
model.compile(optimizer = optimizer,
loss = model.compute_loss,
metrics = metrics)
model.fit(dataset)
```
token_info output dataset
```
{
'input_ids': [101, 2057, 2024, 2200, 103, 2000, 2265, 2017, 103, 100, 19081, 3075, 1012, 102, 0, 0, 0, 0, 0, 0]
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
'labels': [-100, -100, -100, -100, 3407, -100, -100, -100, 1996, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]
}
```
I get the same error:
```
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step **
self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize
trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['tf_bert_for_masked_lm_2/bert/embeddings/word_embeddings/weight:0', 'tf_bert_for_masked_lm_2/bert/embeddings/position_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/token_type_embeddings/embeddings:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/gamma:0', 'tf_bert_for_masked_lm_2/bert/embeddings/LayerNorm/beta:0',
```
I'm not sure if there is a problem with the integration of fit() into the model? | 11-13-2020 00:58:37 | 11-13-2020 00:58:37 | Maybe @jplu has an idea!<|||||>Hello @MarsSu0618
The bad news is that it is currently not possible to train an LM from scratch or fine-tune it with `.fit()`. The good news is that we are working hard on it and it should be feasible soon.
Sorry for the inconvenience.<|||||>@jplu
So I cannot fine-tune a BERT MLM model with fit(), right?
I switched to a PyTorch training loop instead, and that works.
In addition, I guessed that maybe the features and labels should be separated, so I changed the tensors as follows:
```python
({'attention_mask': <tf.Tensor: shape=(256,), dtype=int32, numpy=
array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(256,), dtype=int32, numpy=
array([ 101, 1962, 102, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0], dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(256,), dtype=int32, numpy=
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>},
<tf.Tensor: shape=(256,), dtype=int32, numpy=
array([-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100], dtype=int32)>)
```
But the error message changes:
```
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_masked_lm_6/bert/pooler/dense/kernel:0', 'tf_bert_for_masked_lm_6/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_masked_lm_6/bert/pooler/dense/kernel:0', 'tf_bert_for_masked_lm_6/bert/pooler/dense/bias:0'] when minimizing the loss.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices.py:432: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-110-48f551163bd4> in <module>()
4
5
----> 6 model.fit(batched_tfdataset, epochs=1, verbose=1)
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
TypeError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:759 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:409 update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated
update_op = update_state_fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/metrics.py:176 update_state_fn
return ag_update_state(*args, **kwargs)
TypeError: update_state() got multiple values for argument 'sample_weight'
```
How can I solve this problem? Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
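For readers who hit this before `.fit()` support landed, a custom training loop with `tf.GradientTape` was a common workaround. A rough sketch, assuming the dataset yields the encoding dict and the `labels` tensor as a pair (as in the second attempt above) and that the model returns the loss first when `labels` is passed:
```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

for epoch in range(3):
    for features, labels in batched_tfdataset:        # the dataset built in the report above
        with tf.GradientTape() as tape:
            outputs = model(features, labels=labels, training=True)
            loss = tf.reduce_mean(outputs[0])          # per-example losses -> scalar
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
```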
transformers | 8,509 | closed | Model templates encoder only | Only merge the encoder part, not the encoder-decoder part of #7636.
Will work on a decoder in the future.
Applied your comments @sgugger @patrickvonplaten, but opened a new PR on a new branch so that we can keep the old one for reference when integrating the encoder-decoder model. | 11-12-2020 20:51:50 | 11-12-2020 20:51:50 | Thank you both for your reviews! |
transformers | 8,508 | closed | TPU issue: possible memory leak in eval loop | I am running into a HBM OOM during the eval loop of xlnet (`--model_name_or_path xlnet-large-cased`) when running on TPUs. No matter which batch size I use, the behavior is the same:
1. training loop succeeds
2. eval loop starts, makes it about halfway, then the TPU runs out of HBM memory and eval loop dies
All the other models that we test are OK. The `xlnet-large-cased` test last passed on 2020-09-14.
Since this is unrelated to batch size, I thought maybe there is a memory leak on the TPU. I think the eval loop is the more likely culprit than the training loop since the only OOM happens during eval.
Here are the last few lines of output before oom:
```
E 2020-11-12T04:51:27.984001264Z Saving model checkpoint to MNLI
E 2020-11-12T04:51:27.989368910Z Configuration saved in MNLI/config.json
E 2020-11-12T04:51:40.438957029Z Model weights saved in MNLI/pytorch_model.bin
E 2020-11-12T04:51:40.535782031Z 11/12/2020 04:51:40 - INFO - run_glue - *** Evaluate ***
E 2020-11-12T04:51:40.536480018Z The following columns in the evaluation set don't have a corresponding argument in `XLNetForSequenceClassification.forward` and have been ignored: idx, hypothesis, premise.
E 2020-11-12T04:51:40.540513400Z ***** Running Evaluation *****
E 2020-11-12T04:51:40.540566285Z Num examples = 9815
E 2020-11-12T04:51:40.540575559Z Batch size = 8
E 2020-11-12T05:11:26.995136217Z
0%| | 0/154 [00:00<?, ?it/s]
1%|1 | 2/154 [00:11<14:01, 5.53s/it]
2%|1 | 3/154 [00:22<18:15, 7.25s/it]
...
49%|####9 | 76/154 [14:34<15:49, 12.17s/it]
50%|##### | 77/154 [14:48<16:25, 12.80s/it]2020-11-12 05:11:26.994477: E 511 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0
```
I'm not sure what the issue could be. It seems like both the [training loop](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L743) and the [eval loop](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1409) are using `ParallelLoader`, which should call `xm.mark_step` for [every call to `next`](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/parallel_loader.py#L37).
Does anyone else have any ideas what could be happening?
## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+d0df29a (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
- Using TPU in script?: Yes
### Who can help
@sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. git clone https://github.com/huggingface/transformers.git
2. cd transformers && pip install .
3. pip install datasets
4. Training command:
```
python examples/xla_spawn.py \
--num_cores 8 \
examples/text-classification/run_glue.py \
--logging_dir=./tensorboard-metrics \
--task_name MNLI \
--cache_dir ./cache_dir \
--do_train \
--do_eval \
--num_train_epochs 3 \
--max_seq_length 128 \
--learning_rate 3e-5 \
--output_dir MNLI \
--overwrite_output_dir \
--logging_steps 100 \
--save_steps 3000 \
--overwrite_cache \
--tpu_metrics_debug \
--model_name_or_path xlnet-large-cased \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 8
```
## Expected behavior
Eval loop finishes without TPU OOM.
| 11-12-2020 20:45:23 | 11-12-2020 20:45:23 | The problem is that you are aggregating all your predictions on the TPU host, with a big evaluation set. You should use the `eval_accumulation_steps` argument to pass the predictions back to the CPU every, let's say 20 evaluation steps for instance to avoid the OOM.<|||||>Thanks for the response!
I started a version of the workload that uses that flag and I'll update here once it finishes the training loop<|||||>With that flag, I don't get the same OOM error. Instead I see:
```
E 2020-11-13T05:36:28.200219317Z 11/13/2020 05:36:28 - INFO - run_glue - *** Evaluate ***
E 2020-11-13T05:36:28.201262406Z [INFO|trainer.py:388] 2020-11-13 05:36:28,200 >> The following columns in the evaluation set don't have a corresponding argument in `XLNetForSequenceClassification.forward` and have been ignored: premise, hypothesis, idx.
E 2020-11-13T05:36:28.205409874Z [INFO|trainer.py:1387] 2020-11-13 05:36:28,204 >> ***** Running Evaluation *****
E 2020-11-13T05:36:28.205583892Z [INFO|trainer.py:1388] 2020-11-13 05:36:28,205 >> Num examples = 9815
E 2020-11-13T05:36:28.205718259Z [INFO|trainer.py:1389] 2020-11-13 05:36:28,205 >> Batch size = 32
E 2020-11-13T05:43:14.914374736Z
0%| | 0/39 [00:00<?, ?it/s]
5%|5 | 2/39 [00:10<03:14, 5.26s/it]
8%|7 | 3/39 [00:21<04:09, 6.92s/it]
10%|# | 4/39 [00:31<04:41, 8.04s/it]
13%|#2 | 5/39 [00:42<05:02, 8.89s/it]
15%|#5 | 6/39 [00:53<05:13, 9.51s/it]
18%|#7 | 7/39 [01:05<05:21, 10.04s/it]
21%|## | 8/39 [01:16<05:20, 10.34s/it]
23%|##3 | 9/39 [01:27<05:19, 10.64s/it]
26%|##5 | 10/39 [01:38<05:10, 10.71s/it]Exception in device=TPU:0: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Sent message larger than max (1342183400 vs. 1073741824) (8)
E 2020-11-13T05:43:14.914454025Z Traceback (most recent call last):
E 2020-11-13T05:43:14.914462893Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
E 2020-11-13T05:43:14.914469141Z _start_fn(index, pf_cfg, fn, args)
E 2020-11-13T05:43:14.914474634Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
E 2020-11-13T05:43:14.914481036Z fn(gindex, *args)
E 2020-11-13T05:43:14.914486906Z File "/transformers/examples/text-classification/run_glue.py", line 414, in _mp_fn
E 2020-11-13T05:43:14.914495083Z main()
E 2020-11-13T05:43:14.914623679Z File "/transformers/examples/text-classification/run_glue.py", line 370, in main
E 2020-11-13T05:43:14.914648860Z eval_result = trainer.evaluate(eval_dataset=eval_dataset)
E 2020-11-13T05:43:14.914657065Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1313, in evaluate
E 2020-11-13T05:43:14.914667922Z prediction_loss_only=True if self.compute_metrics is None else None,
E 2020-11-13T05:43:14.914675010Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1431, in prediction_loop
E 2020-11-13T05:43:14.914681724Z preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
E 2020-11-13T05:43:14.914712087Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1474, in _gather_and_numpify
E 2020-11-13T05:43:14.914718679Z tensors = nested_xla_mesh_reduce(tensors, name)
E 2020-11-13T05:43:14.914724791Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_xla_mesh_reduce
E 2020-11-13T05:43:14.914731470Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-13T05:43:14.914737871Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in <genexpr>
E 2020-11-13T05:43:14.914744687Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-13T05:43:14.914751282Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_xla_mesh_reduce
E 2020-11-13T05:43:14.914761474Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-13T05:43:14.914768115Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in <genexpr>
E 2020-11-13T05:43:14.914774306Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-13T05:43:14.914780896Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 113, in nested_xla_mesh_reduce
E 2020-11-13T05:43:14.914788363Z return xm.mesh_reduce(name, tensors, torch.cat)
E 2020-11-13T05:43:14.914794375Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 909, in mesh_reduce
E 2020-11-13T05:43:14.914801139Z xdata = rendezvous(tag, bio.getvalue())
E 2020-11-13T05:43:14.914806782Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 861, in rendezvous
E 2020-11-13T05:43:14.914813625Z return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)
E 2020-11-13T05:43:14.914819959Z RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Sent message larger than max (1342183400 vs. 1073741824) (8)
E 2020-11-13T05:43:15.468075089Z
26%|##5 | 10/39 [02:11<06:20, 13.12s/it]
```
I'll try some things on my side. It looks like the accumulation was fine for "eval_losses" but then failed on "eval_preds". I will just try a more frequent eval accumulation and a smaller batch size and see if that results in a smaller message being sent between TPU/CPU<|||||>It still looks like a problem of memory (from the `Sent message larger than max` in the stack trace). Maybe try a lower `eval_accumulation_step`?
Maybe we should move those tensors to the CPU before doing the mesh reduce to save a bit of host memory (right now they are reduced on all hosts *then* moved).<|||||>I have a version running now with half the accumulation size and half the eval batch size.
Memory saving on device is probably always good but in this case it seems to be complaining about the size of the transfer payload. If you don't reduce before moving, probably the size of the transfer would be even bigger<|||||>I tried with `--eval_accumulation_steps 5` instead of 10 and `--per_device_eval_batch_size 16` instead of 32 and ran into:
`Exception in device=TPU:4: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Received message larger than max (335550440 vs. 4194304) (8)`
The 335550440 number is much less than the previous error message's larger number 1342183400. I will try `--eval_accumulation_steps 1` just in case but I'm wondering if this error means something else than what I was assuming<|||||>`eval_accumulation_steps 1` resulted in the same error:
```
E 2020-11-17T00:27:24.619933766Z main()
E 2020-11-17T00:27:24.619937169Z File "/transformers/examples/text-classification/run_glue.py", line 370, in main
E 2020-11-17T00:27:24.619940804Z eval_result = trainer.evaluate(eval_dataset=eval_dataset)
E 2020-11-17T00:27:24.619944189Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1313, in evaluate
E 2020-11-17T00:27:24.619947752Z prediction_loss_only=True if self.compute_metrics is None else None,
E 2020-11-17T00:27:24.619951181Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1431, in prediction_loop
E 2020-11-17T00:27:24.619954905Z preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
E 2020-11-17T00:27:24.619958638Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 1474, in _gather_and_numpify
E 2020-11-17T00:27:24.619962458Z tensors = nested_xla_mesh_reduce(tensors, name)
E 2020-11-17T00:27:24.619965855Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_xla_mesh_reduce
E 2020-11-17T00:27:24.619976695Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-17T00:27:24.619980624Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in <genexpr>
E 2020-11-17T00:27:24.619984750Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-17T00:27:24.619988344Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_xla_mesh_reduce
E 2020-11-17T00:27:24.619992533Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-17T00:27:24.619996086Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 112, in <genexpr>
E 2020-11-17T00:27:24.619999738Z return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
E 2020-11-17T00:27:24.620003216Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 113, in nested_xla_mesh_reduce
E 2020-11-17T00:27:24.620006752Z return xm.mesh_reduce(name, tensors, torch.cat)
E 2020-11-17T00:27:24.620010015Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 909, in mesh_reduce
E 2020-11-17T00:27:24.620013568Z xdata = rendezvous(tag, bio.getvalue())
E 2020-11-17T00:27:24.620016833Z File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 861, in rendezvous
E 2020-11-17T00:27:24.620020510Z return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)
E 2020-11-17T00:27:24.620024011Z RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'eval_preds_1_0': Received message larger than max (67114984 vs. 4194304) (8)
```<|||||>It may be linked to the issue of XLNet outputing its memories on top of the logits (there is a PR under review to fix that).<|||||>That sounds plausible since this issue is only affecting xlnet and none of our other tests.
Is this the right PR: https://github.com/huggingface/transformers/pull/8567 ?<|||||>Yes this PR will fix that, but current v4 release candidate should have another fix on the `Trainer` side (which basically ignores some of the keys in the model outputs).<|||||>Looks like #8567 was submitted and now our xlnet test started passing. Thank you!<|||||>Glad to hear it's fixed your issue :-) |
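For reference, the `eval_accumulation_steps` workaround used above can also be set programmatically rather than on the command line. A small sketch with placeholder values:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="MNLI",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=20,   # move accumulated predictions to the CPU every 20 steps
)
# trainer = Trainer(model=model, args=args, eval_dataset=eval_dataset, ...)
# metrics = trainer.evaluate()
```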
transformers | 8,507 | closed | Fill-mask pipeline removes space after token prediction when loading pre-training model based on roberta-base | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-4.15.0-122-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): **roberta-base**
The problem arises when using:
* [x] the official example scripts:
-> continued **pre-training roberta-base** using the v3.5.0 examples/language-modeling/**run_mlm_wwm.py**
* [x] my own modified scripts:
To test the model during training I simply instantiate a pipeline class with target pretraining checkpoint folder and input masked strings to check the probabilities:
```
unmasker = pipeline('fill-mask', model=model_checkpoint_path)
results = unmasker(masked_text)
print(json.dumps(results, indent=4))
```
## Expected behavior
The expected behavior for input string "The goal of MASK is happiness." if loading model "roberta-base" would be:
[
{
"sequence": "The goal of life is happiness.",
"score": 0.07787031680345535,
"token": 301,
"token_str": "\u0120life"
},
{
"sequence": "The goal of meditation is happiness.",
"score": 0.040741581469774246,
"token": 20183,
"token_str": "\u0120meditation"
}
]
## Observed behavior
For the same input string I obtain no space following the predicted token when loading the further pre-trained model from a checkpoint folder, example result:
[
{
"sequence": "The goal of Kiwis happiness.",
"score": 0.11430764198303223,
"token": 21472,
"token_str": "\u0120Kiw"
},
{
"sequence": "The goal of anis happiness.",
"score": 0.04334629327058792,
"token": 41,
"token_str": "\u0120an"
},
{
"sequence": "The goal of buis happiness.",
"score": 0.03720756620168686,
"token": 10306,
"token_str": "\u0120bu"
}
]
As I thought it might be a tokenizer issue from the checkpoint, I tried to specify one from the roberta-base model used to continue the pre-training from. It solves the issue, so it seems like the pre-training steps have corrupted the tokenizer as loaded from the checkpoint. The results I get after 200,000 steps of additional pretraining from roberta-base:
[
{
"sequence": "The goal of this is happiness.",
"score": 0.28572556376457214,
"token": 42,
"token_str": "\u0120this"
},
{
"sequence": "The goal of it is happiness.",
"score": 0.10664933174848557,
"token": 24,
"token_str": "\u0120it"
},
{
"sequence": "The goal of all is happiness.",
"score": 0.07055338472127914,
"token": 70,
"token_str": "\u0120all"
},
{
"sequence": "The goal of life is happiness.",
"score": 0.056005414575338364,
"token": 301,
"token_str": "\u0120life"
}
]
-> So regardless of the quality of the resulting token prediction, loading the original roberta-base tokenizer solves the issue. | 11-12-2020 20:10:06 | 11-12-2020 20:10:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
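For anyone hitting the same behaviour, the workaround described above amounts to passing the original tokenizer explicitly when building the pipeline (the checkpoint path below is a placeholder):
```python
from transformers import pipeline

# point the pipeline at the further pre-trained checkpoint but keep the original tokenizer
unmasker = pipeline(
    "fill-mask",
    model="path/to/pretraining/checkpoint",   # placeholder path
    tokenizer="roberta-base",
)
print(unmasker("The goal of <mask> is happiness."))
```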
<|||||>Could this be reopened. I'm facing the same issue.<|||||>@sk- Same. Still getting this with `distilroberta-base` |
transformers | 8,506 | closed | DPR model: FileNotFoundError: Couldn't find file | I am using DPR model:
```python
from transformers import DPRQuestionEncoderTokenizer, DPRQuestionEncoder
from datasets import load_dataset
question_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
question_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train")
def get_top(question, topk=5):
question_emb = question_encoder(**question_tokenizer(question, return_tensors="pt"))[0].detach().numpy()
passages_scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=topk)
all_passgae = ""
for score, title, text in zip(passages_scores, passages['title'], passages['text']):
if len(all_passgae.split(" ")) < 450:
all_passgae += f" ({title}) {text}"
return all_passgae
get_top("who was the first US president?")
```
This was working until last week. However, now when running it I am getting the following error:
```
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 232k/232k [00:00<00:00, 1.65MB/s]
Downloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 493/493 [00:00<00:00, 443kB/s]
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 438M/438M [00:05<00:00, 73.8MB/s]
Downloading: 7.91kB [00:00, 7.04MB/s]
Downloading: 21.9kB [00:00, 19.5MB/s]
Using custom data configuration psgs_w100.no_embeddings.compressed
Downloading and preparing dataset wiki_dpr/psgs_w100.no_embeddings.compressed (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/danielk/.cache/huggingface/datasets/wiki_dpr/psgs_w100.no_embeddings.compressed/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.54k/1.54k [00:00<00:00, 1.66MB/s]
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 13.8G/13.8G [03:41<00:00, 62.3MB/s]
Traceback (most recent call last):
File "2.create_tasks.py", line 9, in <module>
wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train")
File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/builder.py", line 468, in download_and_prepare
self._download_prepared_from_hf_gcs()
File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/builder.py", line 507, in _download_prepared_from_hf_gcs
resource_path = utils.cached_path(remote_cache_dir + "/" + resource_file_name)
File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/danielk/qoogle-experiments/env37/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 474, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wiki_dpr/psgs_w100.no_embeddings.compressed/0.0.0/psgs_w100.nq.IVFPQ4096_HNSW32_PQ64-IP-train.faiss
```
I wonder if the changes made to the model-hub has anything to do with this.
@LysandreJik @lhoestq @julien-c
Here is my environment, for completeness:
```
Python 3.7.5 (default, Nov 7 2019, 10:50:52)
[GCC 8.3.0] on linux
```
and
```
tensorboard 2.4.0
tensorboard-plugin-wit 1.7.0
tensorboardX 2.1
tensorflow 2.3.1
tensorflow-datasets 4.1.0
tensorflow-estimator 2.3.0
tensorflow-metadata 0.25.0
tensorflow-text 2.3.0
termcolor 1.1.0
tfds-nightly 4.1.0.dev202011120108
threadpoolctl 2.1.0
tokenizers 0.9.3
torch 1.6.0
tqdm 4.49.0
transformers 3.5.0
typing-extensions 3.7.4.3
urllib3 1.26.1
```
| 11-12-2020 19:09:49 | 11-12-2020 19:09:49 | Thanks for reporting !
Indeed the index was renamed recently. I fixed it, it should be good now<|||||>Got it. Upon re-trying the code, it works fine. |
transformers | 8,505 | closed | Unexpected behavior when using PubMedBERT with AutoModelForMaskedLM | ## Information
I'm getting some strange behavior when using `AutoModelForMaskedLM` with PubMedBERT to impute masked tokens. The screenshot below shows a simple example where I would expect PubMedBERT to give reasonable values, but the suggested tokens are really strange. As shown just below this, `bert-base-uncased` seems to behave reasonably.
<img width="1013" alt="Screen Shot 2020-11-12 at 10 28 13 AM" src="https://user-images.githubusercontent.com/3958904/98984623-4e96b400-24d7-11eb-9a5f-faac70b1b399.png">
Same code as above, in text:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).to('cuda')
text = f'Heart disease is {tokenizer.mask_token} leading cause of death in the United States.'
tokenized = tokenizer(text, return_tensors='pt').to('cuda')
print(tokenizer.convert_ids_to_tokens(tokenized.input_ids.squeeze()))
output = model(**tokenized, return_dict=True)
output.logits.size()
print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, 4, :], 10).indices))
model_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).to('cuda')
tokenized = tokenizer(text, return_tensors='pt').to('cuda')
print(tokenizer.convert_ids_to_tokens(tokenized.input_ids.squeeze()))
output = model(**tokenized, return_dict=True)
print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, 4, :], 10).indices))
```
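As an aside, rather than hard-coding the mask position (index 4 above), the mask index can be looked up from the tokenizer. A small sketch reusing the variables from the snippet above and assuming a single mask token in the input:
```python
# find where the mask token landed instead of assuming position 4
mask_idx = (tokenized.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
top_ids = torch.topk(output.logits[0, mask_idx, :], 10).indices
print(tokenizer.convert_ids_to_tokens(top_ids))
```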
## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | 11-12-2020 19:09:27 | 11-12-2020 19:09:27 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I know it's an old issue but I just came across this page.
The original PubMedBERT checkpoint didn't have the mask prediction heads, but we updated the checkpoint ~10 months ago
https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
Also, we have a new biomed+clinical domain-specific model if you're interested: https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general
@rahuln |
transformers | 8,504 | closed | Failed to push model repo | Hi, when I upload my model to the hub as the [new documentation](https://huggingface.co/transformers/model_sharing.html) says, I get this error:
```
Delta compression using up to 16 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 1.07 GiB | 3.08 MiB/s, done.
Total 4 (delta 0), reused 1 (delta 0)
remote:
remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains files larger than 10M.
remote: Please use https://git-lfs.github.com/ to store larger files.
remote: -------------------------------------------------------------------------
remote:
remote: Offending files:
remote: - tf_model.h5 (ref: refs/heads/main)
To https://huggingface.co/mymusise/gpt2-medium-chinese
! [remote rejected] main -> main (pre-receive hook declined)
error: failed to push some refs to 'https://huggingface.co/mymusise/gpt2-medium-chinese'
```
## Environment info
- `transformers` version: 3.5.0
- Platform: Ubuntu 18.04
| 11-12-2020 18:31:57 | 11-12-2020 18:31:57 | And `git-lfs` is already installed in my local.
```bash
$ git lfs install
Updated git hooks.
Git LFS initialized.
```
What should I do?<|||||>Well, it's strange. It works if I clone it again, but this time the result is different from the first clone.
- The first time, the model file seems not complete.
```bash
/data2/wiki_zh β 1:44:35
$ git clone https://huggingface.co/mymusise/gpt2-medium-chinese
Cloning into 'gpt2-medium-chinese'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 15 (delta 3), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (15/15), done.
/data2/wiki_zh β 1:44:50
$ cd gpt2-medium-chinese
/data2/wiki_zh/gpt2-medium-chinese on ξ main β 1:44:56
$ ls
config.json tf_model.h5 vocab.txt
/data2/wiki_zh/gpt2-medium-chinese on ξ main β 1:44:56
$ ls -lh
total 44K
-rw-rw-r-- 1 mymusise mymusise 849 11月 13 01:44 config.json
-rw-rw-r-- 1 mymusise mymusise 135 11月 13 01:44 tf_model.h5
-rw-rw-r-- 1 mymusise mymusise 35K 11月 13 01:44 vocab.txt
```
- When I clone model repo again, the model file seems complete.
```bash
/data2/wiki_zh β 2:40:12
$ rm -rf gpt2-medium-chinese
/data2/wiki_zh β 2:40:16
$ git clone https://huggingface.co/mymusise/gpt2-medium-chinese
Cloning into 'gpt2-medium-chinese'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 15 (delta 3), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (15/15), done.
/data2/wiki_zh/gpt2-medium-chinese on ξ main β 2:43:08
$ ls -lh
total 1.2G
-rw-rw-r-- 1 mymusise mymusise 849 11月 13 02:40 config.json
-rw-rw-r-- 1 mymusise mymusise 1.2G 11月 13 02:42 tf_model.h5
-rw-rw-r-- 1 mymusise mymusise 35K 11月 13 02:40 vocab.txt
```
Then I can push the model successfully when I update the model file.<|||||>I think the reason my push fails may be that I hadn't set up git-lfs before I ran `git add` on the model file the first time. `git lfs install` may not take effect for a big model file that was already added.<|||||>Yes, you need to run `git lfs install` before adding files. I'll make that clearer in the documentation<|||||>Alternatively you can use ``` git lfs migrate import --everything ``` even if you already added files without lfs. This will reindex the files and let you push them using git lfs<|||||>will add this to my upcoming video about `git-lfs` @jqueguiner ❤️<|||||>I have to admit I'm lazy sometimes

<|||||>Maybe this one could help (for future searchers ;) )
```
huggingface-cli lfs-enable-largefiles
```
I had the same problem and was curious why it works from the Trainer (which uses huggingface_hub),
and found that you need to run "huggingface-cli lfs-enable-largefiles":
https://github.com/huggingface/huggingface_hub/blob/2e81cf3ec04b0dd5ce2acc92d25f8261a8484f3e/src/huggingface_hub/commands/lfs.py#L45
```
This should be executed once for each model repo that contains a model file >5GB. It's documented in the error
message you get if you just try to git push a 5GB file without having enabled it before.
``` |
transformers | 8,503 | closed | Training the TFGPT2LMHeadModel with model.fit produces error | MWE:
```python
from transformers.modeling_tf_gpt2 import TFGPT2LMHeadModel
from transformers.configuration_gpt2 import GPT2Config
from tensorflow.data import Dataset
import tensorflow as tf
data = tf.random.uniform(shape=[10000], dtype = tf.int32, maxval = 100)
src = tf.constant(data)
def split_input_target(chunk):
return chunk[:-1], chunk[1:]
ds = Dataset.from_tensor_slices(src) \
.batch(256 + 1, drop_remainder = True) \
.map(split_input_target)
model = TFGPT2LMHeadModel(GPT2Config())
loss = ['sparse_categorical_crossentropy'] + [None] * 12
model.compile(loss = loss, metrics = ['sparse_categorical_accuracy'])
model.fit(ds)
```
This generates the error `ValueError: Dimensions must be equal, but are 256 and 12 for '{{node Equal_1}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_6, Cast_7)' with input shapes: [256,1], [2,256,12,1].` Is there any explanation somewhere on how to train like this?
| 11-12-2020 17:27:11 | 11-12-2020 17:27:11 | Hello!
Your way of computing the loss is wrong. I suggest you look at how we compute it [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_gpt2.py#L650) and [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L125). You should also rewrite your metric the same way.<|||||>I know, but changing the loss metric won't fix the ValueError.<|||||>It is. Just look at how we do it. Including in the tests.<|||||>I met the same error before; it works for me after removing the `metrics` param from `model.compile`. But I think that's a poor workaround.<|||||>Hi there. I found it won't raise this error if `batch_size` == `n_head`. For example, LysandreJik's [gist](https://gist.github.com/LysandreJik/c958925768eb6a9a72609ea99561d1cb) only works with `BATCH_SIZE = 12`<|||||>~~Hey, guys. If the output of other layers was used rarely, can we add a controller to select whether return multi-layer logits or not? (correct me if it's necessary to return multi-layer logits.)~~
~~If we can do this, this [pull request](https://github.com/huggingface/transformers/pull/8584) may help.~~<|||||>@bjourne Hey, I think adding `output_attentions=False` and `output_hidden_states=False` may help:
```
model = TFGPT2LMHeadModel(GPT2Config(output_attentions=False, output_hidden_states=False, use_cache=False))
```<|||||>I haven't had the chance to try that. Maybe someone else can? Will the trained network with `output_hidden_states=True` though? You need to set it to True when generating text.<|||||>Hello,
I got the same error.
When I tried to set `output_attentions = False`, it didn't seem to be in GPT2Config.
Looking at the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2config), GPT2Config doesn't have that parameter, where should I set it?<|||||>> Hello,
>
> I got the same error.
> When I tried to set `output_attentions = False`, it didn't seem to be in GPT2Config.
> Looking at the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2config), GPT2Config doesn't have that parameter, where should I set it?
`output_attentions` is a Parameter of `PretrainedConfig` which is the SuperClass of `GPT2Config`, [see](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig) <|||||>Thank you for your reply.
> PretrainedConfig which is the SuperClass of GPT2Config
I see, I understand.
I noticed that the version of the transformers library I'm using was out of date :cry:
> GPT2Config(output_attentions=False, output_hidden_states=False, use_cache=False)
I used the latest version and it worked without any errors!
Your advice was very helpful, thank you!<|||||>This issue has been stale for 1 month. |
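For readers landing on this thread later: pulling the suggestions above together, a minimal compile/fit setup that avoids the extra model outputs looks roughly like this. It is my own sketch based on the comments, not a verified official recipe:
```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Config

# disable the extra outputs (cached key/values, attentions, hidden states) so that
# Keras only sees the language-modelling logits
config = GPT2Config(output_attentions=False, output_hidden_states=False, use_cache=False)
model = TFGPT2LMHeadModel(config)

model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    # metrics on the raw logits were also reported to cause issues, so they are omitted here
)
# ds yields (input_ids, shifted_target_ids) pairs as in the MWE at the top of the issue
model.fit(ds)
```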