repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 7,702 | closed | Trainer callback breaks old code | https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer_callback.py#L438-L447
Currently it depends on the fact that evaluate() will first call `self.prediction_loop` https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer.py#L1181
which will then call `self.callback_handler.on_prediction_step`
https://github.com/huggingface/transformers/blob/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0/src/transformers/trainer.py#L1270
But in my old code (3.1.0), I subclass Trainer and override evaluate() without calling self.prediction_loop,
which results in this error:
```
self.prediction_bar.close()
AttributeError: 'NoneType' object has no attribute 'close'
```
I propose we add `on_predict_begin` and `on_predict_end`. | 10-11-2020 02:39:46 | 10-11-2020 02:39:46 | Or add this check in `on_evaluate`
```python
if self.prediction_bar is not None:
self.prediction_bar.close()
```
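For context, here is a hedged sketch of how that guard could sit inside the progress callback (class, method, and attribute names are assumed from `trainer_callback.py` around the linked commit, not copied from it):
```python
from transformers import TrainerCallback

class GuardedProgressCallback(TrainerCallback):
    """Sketch only: a progress callback whose on_evaluate tolerates a missing bar."""

    def __init__(self):
        self.training_bar = None
        self.prediction_bar = None

    def on_evaluate(self, args, state, control, **kwargs):
        if state.is_local_process_zero:
            # the bar is only created in on_prediction_step, which a custom
            # evaluate() that skips prediction_loop never triggers
            if self.prediction_bar is not None:
                self.prediction_bar.close()
            self.prediction_bar = None
```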
<|||||>I think the second solution is better (since it should be there in any case). We can add the events you suggest in the first one if we find other use cases that need them.
Do you want to tackle this in a PR?<|||||>I was thinking we can move the opening and closing of `prediction_bar` into separate events, so we don't need the `if` statements.
But it can be hard to debug if someone misses the opening, so it's probably unnecessary.
I will create a PR with the second solution. |
transformers | 7,701 | closed | Strange error while using the `LongformerForMultipleChoice` | Hello,
I am trying to use `LongformerForMultipleChoice` model, and the code I am using is the following:
```python
import torch
from transformers import LongformerTokenizer, LongformerForMultipleChoice

# import the pre-trained HuggingFace Longformer tokenizer.
longformer_tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
# get the pre-trained HuggingFace Longformer
best_model_longformer = LongformerForMultipleChoice.from_pretrained('allenai/longformer-base-4096',
output_hidden_states = True)
# my multiple choice question has 4 options.
question_list = [main_question, main_question, main_question, main_question]
options_list = [option1, option2, option3, option4]
mc_labels = torch.tensor([my_answer])
encoded_dict = longformer_tokenizer(question_list, options_list,
return_tensors = 'pt',
add_prefix_space = True,
padding = True)
input_hidden_state = best_model_longformer(
**{k: v.unsqueeze(0) for k,v in encoded_dict.items()},
labels = mc_labels)[2][0][:,:,:].detach()
```
and I am getting the error below:
```
/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py:71: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
sep_token_indices = (input_ids == sep_token_id).nonzero()
Traceback (most recent call last):
File "SEED_125_V20_15_LONGFORMER.py", line 427, in <module>
main_function('/home/ec2-user/G1G2.txt','/home/ec2-user/G1G2_answer_num.txt', num_iter)
File "SEED_125_V20_15_LONGFORMER.py", line 389, in main_function
best_model_longformer)
File "SEED_125_V20_15_LONGFORMER.py", line 198, in fill_MC_loss_accuracy_tensor
input_hidden_state = best_model_longformer(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels)[2][0][:,:,:].detach()
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 1808, in forward
loss = loss_fct(reshaped_logits, labels)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 948, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2422, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2218, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 1 is out of bounds.
```
How can I fix this error?
I also tried solving this issue with:
```python
# group question_list and options_list as a single list rather than specifying them seperately
encoded_dict = longformer_tokenizer([question_list, options_list],
return_tensors = 'pt',
add_prefix_space = True,
padding = True)
```
But this generates a different error, saying:
```
ValueError: 2 expected but found 1
```
PS: I don't think my Longformer model is correctly getting that my multiple-choice questions have 4 options... is there any way to make Longformer handle multiple-choice questions with 4 options (instead of 2)?
Thank you.
PS: I am more interested in extracting the hidden embeddings than in the loss or the logits themselves | 10-11-2020 01:59:49 | 10-11-2020 01:59:49 | The way `xxxForMultipleChoice` models work is actually a bit tricky. It works as follows (based on the [original explanation by the author of BERT](https://github.com/google-research/bert/issues/38)):
Given a question, and several options, the question + options are processed by the model independently. So they will look as follows: `[CLS] question [SEP] option 1 [SEP]` first, then `[CLS] question [SEP] option 2 [SEP]` second, and so on.
So when you're using the tokenizer to encode the input, it should be used as follows:
```
# my multiple choice question has 4 options.
question = "this is a question"
option1 = "option 1"
option2 = "option 2"
option3 = "option 3"
option4 = "option 4"
encoded_input = longformer_tokenizer([question, question, question, question],
[option1, option2, option3, option4],
return_tensors='pt',
padding='max_length')
```
we need to `unsqueeze` the values of that dictionary to make sure they are all of shape (batch_size, num_choices, seq_len), i.e. (1, 4, 4096). Also, the answer should be a tensor of shape (batch_size,), so in our case this is just a tensor containing a single element: the index of the correct option. Suppose the correct option is 3, then `answer` will be `torch.tensor([2])` (since indexing starts at zero). Next, we can run the forward pass as follows:
```
mc_labels = torch.tensor([2])
# `model` here is a LongformerForMultipleChoice instance, as loaded in the snippet above
outputs = model(**{k: v.unsqueeze(0) for k, v in encoded_input.items()}, labels=mc_labels,
                return_dict=True)  # batch size is 1
```
The `outputs` will be a `MultipleChoiceModelOutput` containing the loss and the logits. The logits are of shape (batch_size, number of choices), so (1, 4).
If you want to train the model, simply get the loss using `outputs.loss` and perform `loss.backward()`. If you want to get the predictions of the model, convert the logits into predictions by typing `outputs.logits.argmax(-1)`.
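To make that concrete, here is a minimal hedged sketch of one training/prediction step, continuing from the snippets above (the optimizer and learning rate are illustrative choices, not part of the original setup):
```python
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=5e-5)  # illustrative optimizer / learning rate

outputs = model(**{k: v.unsqueeze(0) for k, v in encoded_input.items()},
                labels=mc_labels, return_dict=True)
outputs.loss.backward()   # backprop through the multiple-choice cross-entropy loss
optimizer.step()
optimizer.zero_grad()

predicted_choice = outputs.logits.argmax(-1)  # e.g. tensor([2]) if option 3 scores highest
```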
UPDATE: fixed the fact that the answer shouldn't be unsqueezed, since it's just a tensor of shape (batch_size,).
UPDATE: fix indexing of answer.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,700 | closed | GPT2DoubleHeadsModel documentation example question (error in documentation)? | Hello,
I was reading the documentation for the GPT2DoubleHeadsModel, and I have a question.
In the documentation, the example shown is:
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2', return_dict=True)
# Add a [CLS] to the vocabulary (we should train it also!)
num_added_tokens = tokenizer.add_special_tokens({'cls_token': '[CLS]'})
embedding_layer = model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2
mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.lm_logits
mc_logits = outputs.mc_logits
```
Now, I don't see any main question statement in this example, although I see two multiple-choice options. The way I used to use `GPT2DoubleHeadsModel` was to first call `tokenizer(question_statement, option_statement)` and then use `encoded_dict['input_ids']` to extract the `input_ids`, and similarly `encoded_dict['token_type_ids']` to extract the `token_type_ids`. Has this changed? I am getting the impression that the example is wrong (maybe the example could apply to BERT, but not to GPT2DoubleHeadsModel). Is this an error in the documentation? I thought that, since GPT-2 does causal language modeling, the question statement and option statement have to be encoded together, with the `[CLS]` token at the end (usually), so that GPT-2 can apply the causal language modeling process to solve the multiple-choice problem.
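To illustrate what I mean, here is a hedged sketch of how I would encode a question together with its options for `GPT2DoubleHeadsModel` (the question/option strings and the padding choice are just examples, and I am not sure this is the officially intended usage):
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2')
tokenizer.add_special_tokens({'cls_token': '[CLS]'})   # [CLS] would still need to be trained
model.resize_token_embeddings(len(tokenizer))

question = "Where do you buy bread?"
options = ["At the bakery.", "At the swimming pool."]

# question and option encoded together, [CLS] appended, one row per choice
encoded = [tokenizer.encode(f"{question} {opt} [CLS]") for opt in options]
cls_positions = [len(ids) - 1 for ids in encoded]      # [CLS] is the last real token
max_len = max(len(ids) for ids in encoded)
# GPT-2 has no pad token, so pad with eos ids past the [CLS] position
padded = [ids + [tokenizer.eos_token_id] * (max_len - len(ids)) for ids in encoded]

input_ids = torch.tensor(padded).unsqueeze(0)          # (1, num_choices, seq_len)
mc_token_ids = torch.tensor([cls_positions])           # (1, num_choices)
outputs = model(input_ids, mc_token_ids=mc_token_ids, return_dict=True)
mc_logits = outputs.mc_logits                          # one score per (question, option) pair
```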
Reading this again, I think the examples for BERTForMultipleChoice and GPT2DoubleHeadsModel are flipped.
Thanks, | 10-10-2020 22:04:30 | 10-10-2020 22:04:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,699 | closed | Fix check for xla in PreTrainedModel.save_pretrained() |
# What does this PR do?
Added is_torch_tpu_available() to the condition for saving a model as an xla model when calling `PreTrainedModel.save_pretrained()`.
The `xla_device` property of `config` can also be `True` on a non-xla device, when loading a checkpoint that was previously trained and saved on xla.
Loading a model that was trained on xla was previously fixed with #5636; this PR fixes the problem of saving such a model again.
Fixes #7695
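For illustration, a hedged sketch of the kind of guarded branch this PR describes (heavily simplified; the function and variable names are assumptions, not the actual diff):
```python
import os
import torch
from transformers.file_utils import is_torch_tpu_available

def save_weights(model, save_directory):
    output_model_file = os.path.join(save_directory, "pytorch_model.bin")
    if getattr(model.config, "xla_device", False) and is_torch_tpu_available():
        import torch_xla.core.xla_model as xm  # only importable on an actual TPU host
        xm.save(model.state_dict(), output_model_file)
    else:
        # config.xla_device may still be True from an earlier TPU run, but without
        # torch_xla installed we must fall back to a plain torch.save
        torch.save(model.state_dict(), output_model_file)
```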
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-10-2020 19:42:56 | 10-10-2020 19:42:56 | |
transformers | 7,698 | closed | MLflow Trainer Callback | # 🚀 Feature request
A callback to log hyperparameters, metrics and configs/weights to MLflow, like the existing wandb and Tensorboard callbacks.
## Motivation
I use MLflow as my primary experiment tracking tool. It is convenient to run on a remote server and log the results from any of your training machines, and it also facilitates collaboration.
Trainer is an amazing tool; it makes it very simple to train models. However, the only way to modify the training loop to include custom logging seems to be adding a callback.
## Your contribution
I can contribute a PR.
| 10-10-2020 19:22:36 | 10-10-2020 19:22:36 | Happy to get a PR on this!<|||||>@noise-field is remote tracking with a remote server URI and authentication also enabled as part of this feature request?<|||||>@RahulKulhari well, this is not part of the feature request, but you can certainly do remote tracking with the mlflow callback. However, you will need to set environment variables (MLFLOW_TRACKING_URI, MLFLOW_TRACKING_USERNAME, MLFLOW_TRACKING_PASSWORD) in advance to configure your connection to the remote server. |
transformers | 7,697 | closed | tokenizers dependency warning: `transformers 3.3.1 has requirement tokenizers==0.8.1.rc2, but you'll have tokenizers 0.9.0` | ## Environment info
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## Information
Hey all, it'll probably be fixed soon, but when updating to `tokenizers` 0.9.0 with `pip install tokenizers --upgrade` I get:
`ERROR: transformers 3.3.1 has requirement tokenizers==0.8.1.rc2, but you'll have tokenizers 0.9.0 which is incompatible.`
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers --upgrade
2. pip install tokenizers --upgrade
## Expected behavior
Expected compatibility between transformers 3.3.1 and tokenizers 0.9.0
| 10-10-2020 16:26:13 | 10-10-2020 16:26:13 | Also stumbled upon this package. I was surprised we were forced to add the RC version of tokenizers. I would expect this to be pinned to "0.8.1" or "0.9.1"<|||||>https://github.com/huggingface/transformers/pull/7794 to update to release version 0.9.1<|||||>Hi, we have a strict requirement on `tokenizers==0.8.1rc2`. We're updating it in https://github.com/huggingface/transformers/pull/7659 but the current `transformers` `master` branch will stay pinned until that PR is merged.
Both libraries evolve quickly and generally evolve together, so having a strict `==` dependency is necessary until tokenizers version 1.0.0 is released. |
transformers | 7,696 | closed | Minor spelling corrections in docstrings. "information" is uncountable in English and has no plural. | Minor spelling corrections in docstrings. "information" is uncountable in English and has no plural. | 10-10-2020 12:31:31 | 10-10-2020 12:31:31 | |
transformers | 7,695 | closed | save_pretrained() does not check if xla is available | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. load any model trained on TPU with `BertModel.from_pretrained(tpu_checkpoint_path)`
2. run/train the model - works fine
3. save the model with `model.save_pretrained(save_path)`
```
line 720, in save_pretrained
import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'
```
## Expected behavior
I am pretraining a LM on TPU, and for the downstream task fine-tuning I load the saved checkpoints on a non-TPU device.
Loading works fine now (#5636), but saving again does not.
`save_pretrained` should check whether the device is still xla - the original config attribute that is used in `save_pretrained` to check for the device persists when loading the xla model on another device:
```
getattr(config, 'xla_device')
True
```
It is easy to work around by changing the config attribute with `setattr(config, 'xla_device', False)` in the script, but I would still consider it a bug.
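For completeness, a hedged sketch of the workaround I use until this is fixed (the paths are the same placeholders as in the steps above):
```python
from transformers import BertModel

model = BertModel.from_pretrained(tpu_checkpoint_path)  # checkpoint that was trained/saved on TPU
model.config.xla_device = False                         # we are on CPU/GPU now, not on xla
model.save_pretrained(save_path)                        # no torch_xla import is attempted anymore
```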
| 10-10-2020 12:29:20 | 10-10-2020 12:29:20 | |
transformers | 7,694 | closed | Fix docstring in AutoModel class | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes doc string for the class AutoModel
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-10-2020 12:26:47 | 10-10-2020 12:26:47 | Thanks! |
transformers | 7,693 | closed | How to get the word embedding after pre-training? For example, an embedding matrix | I am excited about this great model, and I want to get the word embeddings. Where should I find the file in the output, or should I change the code to do this? | 10-10-2020 09:51:42 | 10-10-2020 09:51:42 | It depends on what you understand as "embedding", as it can be ambiguous with transformer models.
Embeddings can be the embedding matrix, which gives context-less embeddings of tokens; you can obtain it with `model.get_input_embeddings()`.
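For example, a small hedged sketch of getting that matrix (the model name is just an example):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
embedding_matrix = model.get_input_embeddings().weight   # shape: (vocab_size, hidden_size)
word_vector = embedding_matrix[100]                      # one row = context-less embedding of token id 100
```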
Embeddings can also be understood as the features generated by the base model, which are token embeddings with context (depends on the tokens surrounding the token you're studying). You can simply do a forward pass through the base models (e.g., `BertModel`, `GPT2Model`, etc.) to get these embeddings.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,692 | closed | Fail to run text classification example with run_tf_text_classification | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Text classification with own dataset
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I use my own datasets ({train/dev/test}.csv) and run `run_tf_text_classification.py`. The training seems OK, but an error occurs during evaluation, as below:
> 2020-10-10 07:33:15.368292: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at resource_variable_ops.cc:537 : Not found: Resource localhost/_AnonymousVar110/N10tensorflow3VarE does not exist.
Traceback (most recent call last):
File "run_tf_text_classification.py", line 292, in <module>
main()
File "run_tf_text_classification.py", line 267, in main
trainer.train()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 592, in train
self.evaluate()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 438, in evaluate
output = self.prediction_loop(eval_ds, steps, num_examples, description="Evaluation")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 327, in prediction_loop
logits = self.distributed_prediction_steps(batch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 814, in _call
results = self._stateful_fn(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2829, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
cancellation_manager=cancellation_manager)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 550, in call
ctx=ctx)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/_AnonymousVar110/N10tensorflow3VarE does not exist.
[[node AssignAddVariableOp (defined at /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:457) ]] [Op:__inference_distributed_prediction_steps_11885]
Function call stack:
distributed_prediction_steps
| 10-10-2020 07:39:33 | 10-10-2020 07:39:33 | Might be of interest to @jplu <|||||>Hello!
Can you give more detail on how to reproduce your issue, otherwise we cannot help you.<|||||>> Hello!
>
> Can you give more detail on how to reproduce your issue, otherwise we cannot help you.
Thanks for your reply. I followed the instructions [here](https://github.com/huggingface/transformers/blob/master/examples/text-classification/README.md) with my own dataset (I cannot provide it due to confidentiality). My script is below:
> python3 run_tf_text_classification.py \
--train_file $data_dir/train.csv \
--dev_file $data_dir/dev.csv \
--test_file $data_dir/test.csv \
--label_column_id 0 \
--model_name_or_path distilbert-base-uncased \
--cache_dir $cache_dir \
--output_dir $output_dir \
--num_train_epochs 4 \
--per_device_train_batch_size 12 \
--per_device_eval_batch_size 12 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 10 \
--evaluate_during_training \
--save_steps 10 \
--overwrite_output_dir \
--max_seq_length 128
I can see the training started and run for a while, as there were checkpoints saved in the `output_dir`.<|||||>Sorry, I tried with one of my dataset with the exact same command line on 4/2/1 GPUs and CPU and cannot reproduce your error. The only one thing I can tell you to do is to be sure to use the master version of the script. Otherwise without more information I cannot really help you more sorry :(<|||||>> Sorry, I tried with one of my dataset with the exact same command line on 4/2/1 GPUs and CPU and cannot reproduce your error. The only one thing I can tell you to do is to be sure to use the master version of the script. Otherwise without more information I cannot really help you more sorry :(
Thanks for your help. It could be related to tensorflow or transformers versions. I will try a few of them and see how it would solve my problem.<|||||>@lkluo I have the exact same problem. Did you solve this?<|||||>> @lkluo I have the exact same problem. Did you solve this?
I have tried many ways without any luck, so I gave up.
You may open a new issue and seek help from @jplu.<|||||>@lkluo Thanks for your response. I think I got it fixed. I was saving the model after each epoch with `tf.saved_model.save(self.model, self.args.output_dir)`. However, when using the model for evaluation after saving it once with this method, I got the error you described. I changed it to using `self.model.ckpt_manager.save()` which is a bit inconvenient since I want .pb files, but at least the code runs fine now. If your error is also related to storing the model, this might help you.
<|||||>> @lkluo Thanks for your response. I think I got it fixed. I was saving the model after each epoch with `tf.saved_model.save(self.model, self.args.output_dir)`. However, when using the model for evaluation after saving it once with this method, I got the error you described. I changed it to using `self.model.ckpt_manager.save()` which is a bit inconvenient since I want .pb files, but at least the code runs fine now. If your error is also related to storing the model, this might help you.
Good to know, thanks for letting me know. I will definitely give a try with your method.<|||||>## Environment info
Platform: Jupyter notebook on Ubuntu 2004
TF version: 2.3.1
Transformers version: 3.5.0
Python version: 3.6.9
Single GPU: RTX2080TI
## Issue
I am encountering the same error during evaluation using TFTrainer.train(). It is not reproducible, and it seems to happen randomly. I installed the latest docker image from the Tensorflow website (docker pull tensorflow/tensorflow:latest-gpu-jupyter). It seems everyone is downgrading Tensorflow to avoid this issue? What is the lowest possible version of Tensorflow for using Transformers 3.5.0?<|||||>I met the same problem. It caused by TFTrainer.train() when excute to line 573 'self.distributed_training_steps(batch)' in trainer_tf.py. And it throws
```
2020-11-21 19:34:31.165454: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 9713 of 16392
2020-11-21 19:34:38.101580: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
```
I tried with colab gpu, it is not work. And I searched the same issue "Shuffle buffer filled." in tensorflow, it is still not solved.<|||||>I reopen this issue because many others encountered the same problem.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,691 | closed | Seq2Seq Example with Bart not Saving Best Model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am using a slightly modified version of the examples/seq2seq/finetune_bart_tiny.sh script, where I just add the `--val_check_interval 0.1 --do_predict` flags to the finetune.py call:
```
python finetune.py \
--data_dir=cnn_tiny/ \
--model_name_or_path=sshleifer/bart-tiny-random \
--learning_rate=3e-5 \
--train_batch_size=2 \
--eval_batch_size=2 \
--output_dir=$OUTPUT_DIR \
--num_train_epochs=1 \
--gpus=0 \
--val_check_interval 0.1 \
--do_train --do_predict "$@"
```
Which is supposed to save the best-performing model based on the val_check_interval and then evaluate the model, as is done in the regular `finetune.sh` script (though the error is present in that one as well; I am using the tiny version so that it is easier to see the issue).
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: tiny-cnn
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Go through this google colab: https://colab.research.google.com/drive/1xtyvXI6gNAJpSkqYi_0ieWkMFRw3OSm2?usp=sharing
```
._cnn_tiny
cnn_tiny/
cnn_tiny/._train.target
cnn_tiny/train.target
cnn_tiny/._train.source
cnn_tiny/train.source
cnn_tiny/._val.source
cnn_tiny/val.source
cnn_tiny/._val.target
cnn_tiny/val.target
cnn_tiny/._test.source
cnn_tiny/test.source
cnn_tiny/._test.target
cnn_tiny/test.target
Epoch 0: 17%|█▋ | 1/6 [00:00<00:02, 2.20it/s, loss=10.839, v_num=1]
Validating: 0it [00:00, ?it/s]
Epoch 0: 33%|███▎ | 2/6 [00:00<00:01, 2.02it/s, loss=10.839, v_num=1]
Epoch 0: 50%|█████ | 3/6 [00:01<00:01, 2.07it/s, loss=10.839, v_num=1]
Epoch 0: 67%|██████▋ | 4/6 [00:01<00:00, 2.33it/s, loss=10.837, v_num=1]
Validating: 0it [00:00, ?it/s]
Epoch 0: 83%|████████▎ | 5/6 [00:02<00:00, 2.24it/s, loss=10.837, v_num=1]
Epoch 0: 100%|██████████| 6/6 [00:02<00:00, 2.28it/s, loss=10.837, v_num=1]
Epoch 0: 100%|██████████| 6/6 [00:02<00:00, 2.28it/s, loss=10.837, v_num=1]
--2020-10-10 02:28:52-- https://cdn-datasets.huggingface.co/summarization/cnn_tiny.tgz
Resolving cdn-datasets.huggingface.co (cdn-datasets.huggingface.co)... 13.227.209.120, 13.227.209.109, 13.227.209.124, ...
Connecting to cdn-datasets.huggingface.co (cdn-datasets.huggingface.co)|13.227.209.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23131 (23K) [application/x-tar]
Saving to: ‘cnn_tiny.tgz’
0K .......... .......... .. 100% 44.4M=0s
2020-10-10 02:28:52 (44.4 MB/s) - ‘cnn_tiny.tgz’ saved [23131/23131]
2020-10-10 02:28:54.290821: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The {log:dict keyword} was deprecated in 0.9.1 and will be removed in 1.0.0
Please use self.log(...) inside the lightningModule instead.
# log on a step or aggregate epoch metric to the logger and/or progress bar
# (inside LightningModule)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "finetune.py", line 440, in <module>
main(args)
File "finetune.py", line 429, in main
trainer.test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 728, in test
results = self.__test_using_best_weights(ckpt_path, test_dataloaders)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 740, in __test_using_best_weights
'ckpt_path is "best", but ModelCheckpoint is not configured to save the best model.'
pytorch_lightning.utilities.exceptions.MisconfigurationException: ckpt_path is "best", but ModelCheckpoint is not configured to save the best model
```
## Expected behavior
The script should save the model with the best performing validation loss and should then use this saved model for evaluation against a test set. This is the same case for the regular `finetune.sh` script. This was working as of Oct 4/5th, but stopped sometime after.
Any help with this issue would be greatly appreciated! | 10-10-2020 02:35:33 | 10-10-2020 02:35:33 | Using tiny was very smart.
We upgraded to pytorch_lightning 0.9.0 (`pip install -r examples/requirements.txt`), does that fix your issue?<|||||>It worked! But.... I had to update pyarrow from 0.14.1 to 0.17.1 because I was getting the following error:
`AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'`
Which I am guessing is due to y'all epic datasets library requiring pyarrow=>0.17.1:
`ERROR: datasets 1.1.2 has requirement pyarrow>=0.17.1, but you'll have pyarrow 0.14.1 which is incompatible.`
I opened a PR to add this dependency on pyarrow 0.17.1 to the `examples/requirements.txt`: https://github.com/huggingface/transformers/pull/7750#issue-501958657
If the PR can be accepted, I'd say this issue can be fully closed.
Thanks for your help with this! |
transformers | 7,690 | closed | RAG Tokenizer erroring out | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-48-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
@ola13 @mfuntowicz
## Information
Hi- I am trying to get the RAG running, however I am getting the error when I follow the instructions here: <https://huggingface.co/facebook/rag-token-nq>
Particularly, the error message is as follows:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-35cd6a2213c0> in <module>
1 from transformers import AutoTokenizer, AutoModelWithLMHead
2
----> 3 tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
~/src/transformers/src/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
258 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
259 else:
--> 260 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
261
262 raise ValueError(
~/src/transformers/src/transformers/tokenization_rag.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
61 print(config.generator)
62 print("***")
---> 63 generator = AutoTokenizer.from_pretrained(generator_path, config=config.generator)
64 return cls(question_encoder=question_encoder, generator=generator)
65
~/src/transformers/src/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
258 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
259 else:
--> 260 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
261
262 raise ValueError(
~/src/transformers/src/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1557
1558 return cls._from_pretrained(
-> 1559 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1560 )
1561
~/src/transformers/src/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1648
1649 # Add supplementary tokens.
-> 1650 special_tokens = tokenizer.all_special_tokens
1651 if added_tokens_file is not None:
1652 with open(added_tokens_file, encoding="utf-8") as added_tokens_handle:
~/src/transformers/src/transformers/tokenization_utils_base.py in all_special_tokens(self)
1026 Convert tokens of :obj:`tokenizers.AddedToken` type to string.
1027 """
-> 1028 all_toks = [str(s) for s in self.all_special_tokens_extended]
1029 return all_toks
1030
~/src/transformers/src/transformers/tokenization_utils_base.py in all_special_tokens_extended(self)
1046 logger.info(all_toks)
1047 print(all_toks)
-> 1048 all_toks = list(OrderedDict.fromkeys(all_toks))
1049 return all_toks
1050
TypeError: unhashable type: 'dict'
```
The `all_toks` variable looks as follows. Obviously, its entries are dictionaries, and `OrderedDict.fromkeys` doesn't like that.
```
[{'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<unk>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '</s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<pad>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<s>', 'single_word': False, 'lstrip': False, 'rstrip': False, 'normalized': True}, {'content': '<mask>', 'single_word': False, 'lstrip': True, 'rstrip': False, 'normalized': True}]
```
I will be digging deeper, hoping that I am just making an obvious mistake.
## To reproduce
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
```
## Expected behavior
It should load the tokenizer!
Thank you.
| 10-09-2020 23:23:17 | 10-09-2020 23:23:17 | Just to follow up on this, it looks like special tokens are loaded for the RAG generator [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1635), but they are not converted to `AddedToken` objects [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1592) and hence are not compatible with downstream operations. <|||||>When I run the examples from:
https://huggingface.co/transformers/model_doc/rag.html
I get exactly the same error:

<|||||>Hey @dzorlu - thanks for your error, I will take a look tomorrow!<|||||>> Hey @dzorlu - thanks for your error, I will take a look tomorrow!
thanks @patrickvonplaten . Appreciate all the hard work :+1: <|||||>Should be solved now - let me know if you still experience problems @dzorlu <|||||>Thank you! |
transformers | 7,689 | closed | Fix flaky test in test_trainer | # What does this PR do?
As investigated with @LysandreJik today, the corresponding test was flaky because `logging_dir` has a default that depends on the time (so the test fails if the two Trainers are instantiated just before and just after a new minute). | 10-09-2020 23:20:51 | 10-09-2020 23:20:51 | |
transformers | 7,688 | closed | Adds license information for default and distilbert models | # What does this PR do?
Adds license information for default and `distilbert*` models. A follow up to #7668.
- Apache 2.0 for `distilbert*` based on https://github.com/huggingface/transformers/issues/3357#issuecomment-614856396
- MIT for `facebook/bart-large-mnli` based on https://github.com/huggingface/transformers/issues/7668#issuecomment-706064737
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case. #7668
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Model Cards: @julien-c | 10-09-2020 23:15:36 | 10-09-2020 23:15:36 | Thanks!<|||||>Thanks @julien-c! |
transformers | 7,687 | closed | Fix title level in Blenderbot doc | # What does this PR do?
The navigation bar is a bit crazy because the documentation of BlenderBot puts the title and sections at the same level. This PR fixes that.
@LysandreJik merging as soon as it's green because it's a small fix, tagging you so you're aware. | 10-09-2020 23:09:56 | 10-09-2020 23:09:56 | |
transformers | 7,686 | closed | When downloading RAG dpr indexes, there is a pickle file loading error | When I try to finetune RAG with following code:
```
self.config = RagConfig.from_pretrained("facebook/rag-token-base", n_docs=2, use_dummy_dataset=False)
self.tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
self.tokenizer.pad_token_id = AutoTokenizer.from_pretrained("facebook/bart-large").pad_token_id
retriever = RagRetriever.from_pretrained("facebook/rag-token-base", use_dummy_dataset=False)
self.model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever, config=self.config)
```
An error occurs; it seems that a downloaded pickle file cannot be loaded. I put the error message below.
I assigned 200GB of memory, so it should not be a memory issue. I'm not sure whether this is a trivial error due to my own implementation bugs or whether it is more common. Thank you very much!!
--------------
File "anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/lib/npyio.py", line 447, in load return pickle.load(fid, **pickle_kwargs)
_pickle.UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 553, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 841, in _prepare_split
generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
File "anaconda3/envs/transformers/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File ".cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py", line 132, in _generate_examples
vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
File "anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/lib/npyio.py", line 450, in load
"Failed to interpret file %s as a pickle" % repr(file))
OSError: Failed to interpret file <_io.BufferedReader name='.cache/huggingface/datasets/downloads/cd4183aaa482e0e3724cb8b2efafc6c762914aabed38c16a41f922ff7d5e90f9'> as a pickle
Traceback (most recent call last):
File "src/finetune.py", line 432, in <module>
main(args)
File "src/finetune.py", line 371, in main
model: SummarizationModule = SummarizationModule(args)
File "src/finetune.py", line 73, in __init__
super().__init__(hparams, num_labels=None, mode=self.mode, **kwargs)
File "src/lightning_base.py", line 130, in __init__
retriever = RagRetriever.from_pretrained("facebook/rag-token-base", use_dummy_dataset=False)
File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 310, in from_pretrained
config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 301, in __init__
self.init_retrieval()
File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 327, in init_retrieval
self.index.init_index()
File "anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 241, in init_index
dummy=self.use_dummy_dataset,
File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 555, in _download_and_prepare
raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
OSError: Cannot find data file.
srun: error: node006: task 0: Exited with exit code 1 | 10-09-2020 23:02:28 | 10-09-2020 23:02:28 | It was probably because I repeatedly downloaded the indexes and the previous incomplete files still existed. Issue closed<|||||>I met the same bug. |
transformers | 7,685 | closed | Using PaddingStrategy and TruncationStrategy throws an UnboundLocalError in tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@mfuntowicz I am almost sure I am using it right. But after looking at the code I found that there are two variables that are being accessed before assignment. Thanks in advance.
## Information
This is exactly what I am doing:
I am trying to load a tokenizer using `AutoTokenizer` and encode a single string. I am using a pretrained tokenizer `distilbert-base-uncased`
## To reproduce
Steps to reproduce the behavior:
This is exactly what I am running:
```
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import PaddingStrategy
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)
encoded_sentence = tokenizer.encode_plus(
"some input text",
return_attention_mask=True,
padding=PaddingStrategy.MAX_LENGTH,
add_special_tokens=True,
max_length=20,
return_token_type_ids=True
)
print(encoded_sentence)
```
This throws the following error:
```
Traceback (most recent call last):
File "......", line 22, in <module>
return_token_type_ids=True
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2029, in encode_plus
**kwargs,
File "..../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1837, in _get_padding_truncation_strategies
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
UnboundLocalError: local variable 'padding_strategy' referenced before assignment
```
I have masked parts of the paths to hide my local directories.
The same thing happens for truncation too. E.g.,
```
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import TruncationStrategy
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)
encoded_sentence = tokenizer.encode_plus(
"some input text",
return_attention_mask=True,
add_special_tokens=True,
max_length=20,
return_token_type_ids=True,
truncation=TruncationStrategy.LONGEST_FIRST
)
print(encoded_sentence)
```
This raises the following error:
```
Traceback (most recent call last):
File ".../scratch.py", line 22, in <module>
truncation=TruncationStrategy.LONGEST_FIRST
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2029, in encode_plus
**kwargs,
File ".../.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1846, in _get_padding_truncation_strategies
truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE
UnboundLocalError: local variable 'truncation_strategy' referenced before assignment
```
## Expected behavior
This should not throw an exception like this. I looked at the code as well, and I think I know what the actual issue is.
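In the meantime, a hedged workaround sketch that seems to avoid the unassigned-variable branch on 3.3.1, continuing from the snippet above: pass plain strings (or the enum's `.value`) instead of the enum objects:
```python
encoded_sentence = tokenizer.encode_plus(
    "some input text",
    padding=PaddingStrategy.MAX_LENGTH.value,   # == "max_length", a plain string
    truncation="longest_first",
    add_special_tokens=True,
    max_length=20,
    return_attention_mask=True,
    return_token_type_ids=True,
)
```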
| 10-09-2020 18:50:40 | 10-09-2020 18:50:40 | Hi! This should have been solved in `master`. Can you install from source and let us know if you're facing the same issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,684 | closed | Error with running run_language_modeling.py on GCP TPU | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: GCP
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes/NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ run_language_modeling.py ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [fine tuning BERT on Wikitext103 ] my own task or dataset: (give details below)
## To reproduce
I am trying to fine-tune BERT on the Wikitext-103 dataset by running the example code run_language_modeling.py on a Google Cloud TPU v3-8 using the xla_spawn.py launcher. I tried both num_cores = 1 and num_cores > 1, but neither worked properly.
Steps to reproduce the behavior:
1. python xla_spawn.py --num_cores 8 \
run_language_modeling.py \
--model_name_or_path=bert-base-uncased \
--do_train \
--train_data_file\
--do_eval \
--eval_data_file\
--mlm\
--per_device_train_batch_size=4
The full output is long and you can find it [here](https://gofile.io/d/tjovN1). Here is the beginning of the error message:
Iteration: 0%| | 1/28026 [00:00<3:41:44, 2.11it/s][A2020-10-09 16:05:45.623409: W
2449 tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:160] RPC failed with status = "Unavailable: Socket closed" and grpc_error_string = "{"created":"@1602259545.623281190","description":"Error received from peer ipv4:10.48.142.250:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
2020-10-09 16:06:09.642115: E 2449 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0
2020-10-09 16:06:09.642213: E 2449 tensorflow/compiler/xla/xla_client/xla_util.cc:76] HloModule SyncTensorsGraph.23679, input_output_alias={ {0}: (39, {}, may-alias), {1}: (37, {}, may-alias),
......
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Running without error!
<!-- A clear and concise description of what you would expect to happen. -->
Thanks for any help. | 10-09-2020 16:37:06 | 10-09-2020 16:37:06 | This doesn't seem to be an issue with `transformers` but with your TPU and its communication with your VM. You would probably have more help if you asked over at https://github.com/pytorch/xla<|||||>@LysandreJik Thanks for your comment. The problem rooted in the TPU software version. Setting it to `PyTorch-1.6` resolved the issue.
|
transformers | 7,683 | closed | Gpt1 for sequence classification | # What does this PR do?
Adds sequence classification architecture for GPT-1,
Strongly based on modifications made in #7501
Fixes #7623 (issue) (Partially)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik Here is the new PR without that merge problem, let me know if there is anything that should be changed =)
| 10-09-2020 15:57:05 | 10-09-2020 15:57:05 | This is great, thanks for working on it! There's something that I had forgotten in the initial GPT-2 implementation, which was to add it to the auto-models. I did it in this PR: https://github.com/huggingface/transformers/pull/7630.
Could you apply the fix to GPT-1 as well, before we merge?<|||||>> This is great, thanks for working on it! There's something that I had forgotten in the initial GPT-2 implementation, which was to add it to the auto-models. I did it in this PR: #7630.
>
> Could you apply the fix to GPT-1 as well, before we merge?
Done! 😊<|||||>Thanks @fmcurti! |
transformers | 7,682 | closed | Fine-tuning | I read a lot of articles on pre-training vs. fine-tuning, but I am still not able to get the meaning in the context of transformer models. I understand that pre-training means training a model on a dataset from scratch, while (point of confusion) fine-tuning means taking the pre-trained model and training it further on top of our own custom dataset?
Then come the downstream tasks. Is there any fine line of distinction among these three terms: pre-training, fine-tuning, and downstream tasks?
please clarify | 10-09-2020 15:36:47 | 10-09-2020 15:36:47 | Both pre-training and fine-tuning involve _training_ a model (i.e., updating the weights of the model using backpropagation).
* In a first step, a model is (pre-)trained on a very large dataset (such as all English Wikipedia articles). BERT for example, is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). Masked language modeling is, for example, given the sentence "I went to the [MASK] to buy a bread", the model must predict the word "bakery" in this case. For sentence prediction, given 2 sentences A and B, the model must predict whether sentence B follows sentence A in the dataset, or is just a random sentence. Note that both MLM and NSP are self-supervised learning (we don't need to manually annotate the dataset, because we can just use Wikipedia and mask out some words or randomize the sentences).
The reason they call it pre-training is because it's training a model before you train it on another, second, dataset.
* In a second step, a model can be fine-tuned on a task of interest, usually called the downstream task. This can be text classification for example, or named-entity recognition, question-answering, summarization,... Fine-tuning is also training a model, but we start with the model that was already (pre-)trained on the large dataset in the first step. The reason this is done is because the model has already learned a lot about language in general (just by predicting masked words and interpreting the order of sentences). So the weights of these model are already quite good, they contain some "knowledge". We can now just use this model, and train it further on our own (usually way smaller) dataset. Note that this is supervised learning (we need to collect a labelled dataset). In case the downstream task is text classification (for example determining whether a movie review is positive or negative), then we need to collect a dataset of movie reviews and label each individual review with either "positive" or "negative".
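To make the second step concrete, here is a minimal sketch of fine-tuning a pre-trained checkpoint on a tiny, made-up movie-review dataset with the `Trainer` API (the texts, labels and output directory are placeholders, not a real benchmark):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

# Step 1 already happened: "bert-base-uncased" was pre-trained on MLM + NSP by its authors.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Step 2: fine-tune on a small labelled dataset (here just two toy reviews).
texts = ["A wonderful, heart-warming film.", "A total waste of two hours."]
labels = [1, 0]  # 1 = positive, 0 = negative
encodings = tokenizer(texts, truncation=True, padding=True)

class ReviewDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

training_args = TrainingArguments(output_dir="./results", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=ReviewDataset(encodings, labels))
trainer.train()
```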
This picture by [Jay Allamar](http://jalammar.github.io/illustrated-bert/) illustrates this very well (note that in the figure below, the downstream task is also text classification):

BTW, please post any questions which are not bugs/new features you would like to see added on the [forum](https://discuss.huggingface.co/) rather than here.
<|||||>> Both pre-training and fine-tuning involve _training_ a model (i.e., updating the weights of the model using backpropagation).
>
> * In a first step, a model is (pre-)trained on a very large dataset (such as all English Wikipedia articles). BERT for example, is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). Masked language modeling is, for example, given the sentence "I went to the [MASK] to buy a bread", the model must predict the word "bakery" in this case. For sentence prediction, given 2 sentences A and B, the model must predict whether sentence B follows sentence A in the dataset, or is just a random sentence. Note that both MLM and NSP are self-supervised learning (we don't need to manually annotate the dataset, because we can just use Wikipedia and mask out some words or randomize the sentences).
>
> The reason they call it pre-training is because it's training a model before you train it on another, second, dataset.
>
> * In a second step, a model can be fine-tuned on a task of interest, usually called the downstream task. This can be text classification for example, or named-entity recognition, question-answering, summarization,... Fine-tuning is also training a model, but we start with the model that was already (pre-)trained on the large dataset in the first step. The reason this is done is because the model has already learned a lot about language in general (just by predicting masked words and interpreting the order of sentences). So the weights of these model are already quite good, they contain some "knowledge". We can now just use this model, and train it further on our own (usually way smaller) dataset. Note that this is supervised learning (we need to collect a labelled dataset). In case the downstream task is text classification (for example determining whether a movie review is positive or negative), then we need to collect a dataset of movie reviews and label each individual review with either "positive" or "negative".
>
> This picture by [Jay Allamar](http://jalammar.github.io/illustrated-bert/) illustrates this very well (note that in the figure below, the downstream task is also text classification):
>
> 
>
> BTW, please post any questions which are not bugs/new features you would like to see added on the [forum](https://discuss.huggingface.co/) rather than here.
thanks @NielsRogge for the splendid information<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,681 | closed | Delete extra test file in repo root | 10-09-2020 15:13:05 | 10-09-2020 15:13:05 | ||
transformers | 7,680 | closed | Better links for models in README and doc index | # What does this PR do?
This PR fixes the links to the docs for unreleased models and makes the automatic copy to the index.rst a little bit better (by using relative links so there is no jump in version).
It adds instructions in the setup.py to clean the master in the links for unreleased models at the time of a release; we will just need to remember to have the right links in the README in a PR that adds a new model.
Fixes #7657 | 10-09-2020 15:01:42 | 10-09-2020 15:01:42 | |
transformers | 7,679 | closed | TFEncoderDecoder | # 🚀 Feature request
A TensorFlow version of the `EncoderDecoder` class for sequence-to-sequence text generation.
## Motivation
I am replicating [this](https://arxiv.org/pdf/1907.12461.pdf) paper, which studies several combinations of encoders and decoders initialized with pretrained checkpoints. Most of it is implemented very nicely in the current API, but it is not available for TensorFlow. The closest thing I found was `TFT5ForConditionalGeneration`, which works nicely. Are there any plans to extend this to other pretrained models such as BERT, RoBERTa, and GPT(2)?
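For reference, this is roughly what the existing PyTorch API looks like; the request is essentially for a TF equivalent of this (model names and the input sentence below are just placeholders):

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Warm-start a seq2seq model from two pretrained BERT checkpoints (encoder + decoder).
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

inputs = tokenizer("A long input document that should be summarized.", return_tensors="pt")
# The warm-started model still needs fine-tuning before the generations are useful.
generated = model.generate(inputs["input_ids"], decoder_start_token_id=tokenizer.cls_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```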
## Your contribution
Happy to work on a PR if someone can provide some pointers on where to start and best practices to include new models in the existing API.
| 10-09-2020 14:53:19 | 10-09-2020 14:53:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,678 | closed | Fix dataset cardinality | # What does this PR do?
This PR fixes an issue with the generic text classification example where the sizes of the datasets were not properly set.
Fixes #7637 | 10-09-2020 13:45:05 | 10-09-2020 13:45:05 | |
transformers | 7,677 | closed | Batch and smart batch support for pipelines. | # 🚀 Feature request
## Motivation
I want to use `TextClassificationPipeline` to classify a large number of texts.
The naive approach is:
```python
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
pipeline = TextClassificationPipeline(
model=model,
tokenizer=tokenizer,
framework="pt",
device=0,
)
results = pipeline(unlabeled_text_list)
```
But this gives me a CUDA OOM error when `unlabeled_text_list` is long.
What about adding batch support that lets you specify the batch size and maybe also support for multiprocessing tokenization?
When possible smart batching would be nice. See this: https://github.com/UKPLab/sentence-transformers/issues/454#issuecomment-699496454
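In the meantime, a manual workaround is to chunk the input list so each pipeline call only materializes one batch on the GPU; a rough sketch (the chunk size of 32 is arbitrary):

```python
def classify_in_batches(pipe, texts, batch_size=32):
    results = []
    for start in range(0, len(texts), batch_size):
        # Each call only moves `batch_size` examples through the model at once.
        results.extend(pipe(texts[start:start + batch_size]))
    return results

results = classify_in_batches(pipeline, unlabeled_text_list)
```

This avoids the OOM but still leaves tokenization single-threaded and the batch size hand-tuned, which is why built-in support would be nicer.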
What do you think? | 10-09-2020 13:16:50 | 10-09-2020 13:16:50 | I see the following requirements:
1. automatic batching
2. maybe smart batching
3. multi GPU support
4. Tokenization with multiple processes in parallel to the prediction
5. `max_length` and `truncation` support
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,676 | closed | TFTrainer doesn't work | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
tensorflow: @jplu
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Protein Sequence dataset
## To reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/1v0FMM_iuRSixvDaoHaiP77pel7qkoYL8?usp=sharing
```
WARNING:tensorflow:TPU system grpc://10.12.199.226:8470 has already been initialized. Reinitializing the TPU can cause previously created variables on TPU to be lost.
WARNING:tensorflow:TPU system grpc://10.12.199.226:8470 has already been initialized. Reinitializing the TPU can cause previously created variables on TPU to be lost.
INFO:tensorflow:Initializing the TPU system: grpc://10.12.199.226:8470
INFO:tensorflow:Initializing the TPU system: grpc://10.12.199.226:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Finished initializing TPU system.
WARNING:absl:`tf.distribute.experimental.TPUStrategy` is deprecated, please use the non experimental symbol `tf.distribute.TPUStrategy` instead.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cbf60>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb710>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb7b8>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73128>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b730b8>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73048>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb9b0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb5c0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cbef0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb518>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec453cb4e0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b734e0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73470>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73400>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73390>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73320>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b732b0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b73240>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
Exception ignored in: <bound method EagerResourceDeleter.__del__ of <tensorflow.python.ops.resource_variable_ops.EagerResourceDeleter object at 0x7fec44b731d0>>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 293, in __del__
self._handle, ignore_lookup_error=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 252, in destroy_resource_op
_ops.raise_from_not_ok_status(e, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: stream is uninitialized or in an error state [Op:DestroyResourceOp]
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-18-06617a21566a> in <module>()
11
12 with training_args.strategy.scope():
---> 13 model = TFAutoModelForSequenceClassification.from_pretrained(model_name, from_pt=True)
14
15 trainer = TFTrainer(
26 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in shape(self)
1165 # `_tensor_shape` is declared and defined in the definition of
1166 # `EagerTensor`, in C.
-> 1167 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
1168 except core._NotOkStatusException as e:
1169 six.raise_from(core._status_to_exception(e.code, e.message), None)
InternalError: RET_CHECK failure (platforms/xla/service/jellyfish/bounds_check.cc:427) allocation_size_words <= std::numeric_limits<int32>::max()
```
## Expected behavior
I am following the example for fine-tuning on a custom dataset:
https://huggingface.co/transformers/custom_datasets.html
It works with Pytorch, but with tensorflow it doesn't work. Using TPU it gives the above error, and using GPU it just doesn't start.
Any idea what I did wrong ?
| 10-09-2020 12:17:44 | 10-09-2020 12:17:44 | Hello!
It looks to be an XLA error due to a precision issue. Did you train your model with mixed precision?<|||||>No, it was trained with the official BERT script on TPU without mixed precision.<|||||>By the official BERT script, do you mean this one? https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_pretraining.py
To be sure it is coming from the trainer, can you try with `bert-base-cased` without `from_pt=True` and let us know if the training finally starts.<|||||>By the official BERT script, I mean:
https://github.com/google-research/bert
With the `bert-base-cased` model without the `from_pt=True`, I get another error:
```
INFO:tensorflow:Initializing the TPU system: grpc://10.112.10.242:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Finished initializing TPU system.
WARNING:absl:`tf.distribute.experimental.TPUStrategy` is deprecated, please use the non experimental symbol `tf.distribute.TPUStrategy` instead.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
Downloading: 100%
527M/527M [00:07<00:00, 74.8MB/s]
Some weights of the model checkpoint at bert-base-cased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['dropout_37', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-15-e4258565c051> in <module>()
21 )
22
---> 23 trainer.train()
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in train(self)
472 Train method to train the model.
473 """
--> 474 train_ds = self.get_train_tfdataset()
475
476 if self.args.debug:
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in get_train_tfdataset(self)
135
136 self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
--> 137 self.num_train_examples = tf.data.experimental.cardinality(self.train_dataset).numpy()
138
139 if self.num_train_examples < 0:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
1061 """
1062 # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1063 maybe_arr = self._numpy() # pylint: disable=protected-access
1064 return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
1065
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
1029 return self._numpy_internal()
1030 except core._NotOkStatusException as e: # pylint: disable=protected-access
-> 1031 six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
1032
1033 @property
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Unable to parse tensor proto
```
It works fine with PyTorch but not with TensorFlow for some reason.<|||||>Ok, then the error is normal. The TF part of transformers doesn't take into account a model that comes straight from the official BERT script. You have to:
1. Convert your checkpoint with this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) if you are using the TF1 checkpoint or this [one](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf2_checkpoint_to_pytorch.py) if you are using the TF2 checkpoint.
2. Once you get the checkpoints in PyTorch use this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py) to get your model in a proper transformers format and use it in your pipeline.
With `bert-base-cased` the error you get is because you have to apply a cardinality to your dataset with:
```
my_dataset = my_dataset.apply(tf.data.experimental.assert_cardinality(number_of_examples_in_my_dataset))
```<|||||>Thanks a lot @jplu , that did solve my issue.
The problem was in the second step, which is converting the PyTorch checkpoint to a TF2 checkpoint.
By the way, there is a bug in the conversion script:
```
2020-10-09 21:45:28.508061: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
====================================================================================================
Converting model type 1/1: bert
====================================================================================================
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 437, in <module>
only_convert_finetuned_models=args.only_convert_finetuned_models,
File "/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 322, in convert_all_pt_checkpoints_to_tf
config_class, model_class, pt_model_class, aws_model_maps, aws_config_map = MODEL_CLASSES[model_type]
ValueError: not enough values to unpack (expected 5, got 4)
```
I had to call the "convert_pt_checkpoint_to_tf" function inside the file directly, because "MODEL_CLASSES[model_type]" has only 4 objects while "convert_all_pt_checkpoints_to_tf" tries to extract 5 objects.<|||||>Happy it worked.
Can you open a new issue about the error you got with the conversion script? Thanks. |
transformers | 7,675 | open | Add FAVOR+ / Performer attention | # 🌟 FAVOR+ / Performer attention addition
Are there any plans to add this new attention approximation block to the Transformers library?
## Model description
The new attention mechanism with linear time and space complexity was introduced in
_Rethinking Attention with Performers_ [[https://arxiv.org/abs/2009.14794](https://arxiv.org/abs/2009.14794)].
The authors of the paper claim that the new attention mechanism is backward-compatible with already existing models:
> Backwards compatibility with pretrained models is available as a benefit from softmax approximation, via small finetuning (required due to error propagation)
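For readers who want the gist, here is a minimal, non-causal sketch of the FAVOR+ idea in PyTorch. It uses plain Gaussian random features, no attention mask and no numerical stabilizer, so it is purely illustrative; see the paper and the implementations linked below for the real thing:

```python
import math
import torch

def softmax_kernel_features(x, proj):
    # phi(x) = exp(w^T x - ||x||^2 / 2) / sqrt(m) for each random row w of `proj`.
    m = proj.shape[0]
    return torch.exp(x @ proj.T - (x ** 2).sum(dim=-1, keepdim=True) / 2) / math.sqrt(m)

def performer_attention(q, k, v, proj):
    # Unbiased approximation of softmax(q k^T / sqrt(d)) v computed in O(L * m * d),
    # without ever building the L x L attention matrix.
    d = q.shape[-1]
    q_prime = softmax_kernel_features(q / d ** 0.25, proj)   # (..., L, m)
    k_prime = softmax_kernel_features(k / d ** 0.25, proj)   # (..., L, m)
    kv = k_prime.transpose(-2, -1) @ v                       # (..., m, d_v)
    normalizer = q_prime @ k_prime.sum(dim=-2, keepdim=True).transpose(-2, -1)
    return (q_prime @ kv) / normalizer

q = k = v = torch.randn(2, 12, 4096, 64)  # (batch, heads, length, head_dim)
proj = torch.randn(256, 64)               # 256 Gaussian features; the paper uses orthogonal blocks
out = performer_attention(q, k, v, proj)  # (2, 12, 4096, 64)
```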
## Open source status
* [x] the model implementation is available: it's an original Trax implementation from Google: https://github.com/google-research/google-research/tree/master/performer/fast_self_attention
* [ ] the model weights are available: probably not required, as it's a building block for models rather than a fully new architecture
* [x] who are the authors: Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller
| 10-09-2020 11:51:41 | 10-09-2020 11:51:41 | Just for reference, there is two open-source MIT implementations in pytorch.
https://github.com/lucidrains/performer-pytorch
And
https://github.com/idiap/fast-transformers<|||||>This could prove particularly important for longer sequences like protein sequences and long texts.
High level overview at https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html.<|||||>if this could be implemented it would be dope!<|||||>It would be nice to make it possible to use FAVOR+ in combination with the pretrained models that use softmax attention— at least the popular ones like BERT. Or even better, someone would just do the fine tuning for the common pretrained models and then we could make those available out of the box. I should be able to do that for DistilBERT since I plan to be using DistilBERT + FAVOR for a project soon.<|||||>Just started a fork to work on this at https://github.com/norabelrose/transformers-plus-performers. Is it okay with everyone if I implement it by creating a new file implementing FAVOR+ multihead attention (maybe one file for the PyTorch implementation and one for the TF implementation), then adding an option to BertConfig and DistilBertConfig (and maybe other model config classes) allowing the user to select FAVOR+ as the attention implementation?
It just seems sort of silly and wasteful to create multiple entirely new models for this when FAVOR+ has backwards compatibility.
Also since FAVOR+ is an unbiased estimator of full softmax attention, it should be possible to have an option that would tell the model to dynamically switch between FAVOR+ and full attention at test time depending on the sequence length. This would be desirable since FAVOR+ is slower than softmax attention when the sequence is shorter than O(d*log(d)), where d is the number of dimensions per attention head. Implementing such dynamic switching would be easier and more elegant if FAVOR+ is just a config option and not a new model class.<|||||>Any update on the implementation of this new architecture? @norabelrose<|||||>@marcoabrate The initial implementation is complete at https://github.com/norabelrose/transformers-plus-performers/blob/performers/src/transformers/modeling_performer_attention.py. Haven't been able to test it yet because getting my hands on the right datasets for finetuning DistilBERT with Performer attention, preprocessing the data, etc. has proven to be a huge ordeal. Should hopefully be able to do it today though.<|||||>UPDATE: The most recent commit on my transformers-plus-performers repo is now up and running. Right now I only changed DistilBertModel and DistilBertConfig to enable them to use Performer attention (just set attention_type='performer'), but it should be quite trivial to add the feature to other models.
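Usage on that fork would look roughly like this (a sketch; `attention_type` is the fork's option and does not exist in the released transformers package):

```python
from transformers import DistilBertConfig, DistilBertModel

config = DistilBertConfig.from_pretrained("distilbert-base-uncased")
config.attention_type = "performer"  # only understood by the fork's DistilBertConfig
model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=config)
```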
As I type this I'm fine-tuning the distilbert-base-uncased pretrained model to work with Performer attention by distilling it against bert-base-uncased. You should be able to just directly fine-tune it with MLM but I figured that distillation might get you better results. It seems to be converging rather quickly but I haven't been running it for long and I only have one GPU to work with.
I would welcome other people taking a look at my repo and submitting pull requests to it.<|||||>> FAVOR+ is slower than softmax attention when the sequence is shorter than O(d*log(d)), where d is the number of dimensions per attention head
What are those numbers for DistilBERT, BERT-base and BERT-large?
Did you compare real speed?<|||||>I haven't had a chance to compare the difference on actual models yet, but I should be able to do that in the next day or two.
I have, however, tested the speed difference between softmax attention and FAVOR+ on random Gaussian matrices. FAVOR+ really starts to get faster when the sequence length is ~18 times larger than d*ln(d), at least on my GPU. With BERT settings (d_model = 768, num_heads = 12) that means about 5000 tokens.
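(Back-of-the-envelope, taking d as the per-head dimension: d = 768 / 12 = 64, so d * ln(d) is roughly 64 * 4.16, about 266, and 18 * 266 is roughly 4,800, which is where the ~5000-token crossover comes from.)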



This is basically because you have to matrix-multiply Q and K by the random feature matrix, which you don't have to do for softmax attention. You get better results with Performer when (d_model / num_heads) is smaller:

I should mention that while FAVOR+ might be slower than softmax for some of these "medium" sequence lengths, it should still be using less _memory_ than softmax, since it isn't allocating that L x L attention matrix. So there's somewhat of a memory-time tradeoff here.
The numbers I show above are from my own implementation of FAVOR+, but I also tried it with the performer_pytorch implementation and got almost identical results. Really, FAVOR+ is an attention mechanism for long sequences. It's got this great unique property that it's an unbiased estimator of softmax attention. That means that you can easily use it with models that were pretrained on softmax attention, and you can switch between FAVOR+ and softmax at inference time. And that's why it should be part of Huggingface.<|||||>UPDATE: While I have Performer up and running with DistilBertModel, I've run into a problem that I didn't even think about when I started. DistilBERT, BERT, RoBERTa, and several other models use _learned_ positional embeddings, which impose a fixed 512-token max sequence length. In order to process sequences longer than 512 tokens, and thereby get the benefits of Performer attention, we'll need to use some other type of positional embeddings; for maximum flexibility, probably fixed sinusoidal embeddings with some large max sequence length. We could also try using relative position embeddings, although AFAIK no one has actually tried doing that with Performer attention and I would need to think about it a bit to figure out if that's actually feasible. DistilBertModel actually already comes with a sinusoidal_pos_embds option, but this option is overridden when you load the weights from a pretrained model.
It's not clear how hard it would be to finetune a pretrained model that was trained with learned positional embeddings to use fixed sinusoidal ones, or if it would even be worth it— it may be necessary to just train them from scratch, especially since we are _also_ trying to swap out the attention mechanism. I'll try finetuning soon and see what happens. But it's looking more likely that we won't be able to just plug in the already existing checkpoints like we initially hoped. If that turns out to be the case, it would be really great if someone with access to more than one GPU could do the training from scratch and upload the models :)
PS: After @djstrong's comment about FAVOR+'s performance on relatively short sequences, I wanted to get to the bottom of why FAVOR+ was so much slower until you get up to around 5000 tokens. Oddly enough, it turns out that the torch.max() operation which is used to generate the numerical stabilizer for the exp() kernel was the main culprit. When you don't use a stabilizer, Performer attention starts beating softmax attention at much shorter sequence lengths. So I added an option in PerformerAttentionConfig to turn off the stabilizer.<|||||>https://github.com/huggingface/transformers/issues/8893
Tensorflow code, not jax. Thank you.<|||||>@guotong1988 as of about half an hour ago, my fork now has a TensorFlow implementation: https://github.com/norabelrose/transformers-plus-performers/blob/performers/src/transformers/modeling_tf_performer_attention.py.
I have not had a chance to test it at all. If someone else could at least try getting it working on their own system that would be great. Pull requests are welcome.<|||||>Hey @norabelrose , I'm part of the Performer team at Google, it's great to see this getting added to huggingface! Would you be open to meeting so we can discuss how we can work together on this? If anyone else is interested in joining the meeting please comment here and I'll reach out to coordinate.<|||||>@tomweingarten Sure! Send me an email at [email protected] and we can set up a time to talk in the next couple weeks. As I mentioned above, the basic implementation in PyTorch and TensorFlow is done but we need to write unit tests and make sure everything is working properly.
Also, in my fork at transformers-plus-performers I had to make a few minor changes to other parts of HuggingFace in order to get training to run smoothly on my machine— in particular, the distillation example program, since I initially tested PerformerAttention by continuing distillation of a pretrained DistilBERT model with Performer attention against bert-base. The implementation of distillation on master loads all the training data at once into RAM, which blows up on my meager hardware. I changed it so that you can load the training data incrementally. That's probably a good thing to add to the master branch, but arguably it should be put in a separate pull request. So we'll have to change that, a long with a couple other little things.
I'd recommend you check out my fork at https://github.com/norabelrose/transformers-plus-performers/. The relevant files are /src/transformers/configuration_performer_attention.py, /src/transformers/modeling_performer_attention.py, and /src/transformers/modeling_tf_performer_attention.py. I also changed the BERT and DistilBERT model and config files so the user can use Performer attention with them. I'll accept pull requests on that repo.
PS: Also just realizing that the definition of short_sequence_behavior on PerformerAttentionConfig in the last commit is defined variously as Union[str, dict], Union[str, Callable], or Union[str, tuple]— sorry about that, I wasn't really sure how best to implement that feature. Right now the actual implementation in PerformerAttention assumes it's a str or Callable.<|||||>@tomweingarten @norabelrose I would like to participate in the meeting too, if possible. I am working with long sequences for summarization. I have not had the chance to go through the code thoroughly yet, but I am ready to help soon.
Edit: you can reach me at [email protected]<|||||>@norabelrose Is there any plan to support unidirectional attention ?<|||||>Hi guys, thanks to @kchoro and @ValeryTyumen on the Performers team, we've open-sourced the Tensorflow version of FAVOR+ here: https://github.com/google-research/google-research/tree/master/performer/fast_attention/tensorflow
BTW, we've edited the folder name and code to be `fast_attention` now rather than `fast_self_attention`.
Please let us know how well it works in your pipelines!<|||||>UPDATE: The new default branch ("clean") on my fork at https://github.com/norabelrose/transformers-plus-performers/ now has all the extraneous changes I made to the upstream removed. I also merged in all new commits from upstream.
@TwinMooon Yes, we should be able to add causal attention. I was under the impression that it would be necessary to include a custom CUDA kernel from the fast-transformers library to compute the prefix sums— since that's what the performer_pytorch implementation does, which I used as a template for my implementation— but now looking at the Google code in both Jax and TensorFlow I realize that they just compute the prefix sums in Python code and then use a custom gradient. So it looks like it's not necessary, although it's possible that using the CUDA kernel gives you a noticeable speed boost.<|||||>I'd like to set a goal of making an official pull request to add this to master by the end of the year. I haven't been able to do that yet because I've been busy with school and other projects, and I haven't gotten any help from other contributors. Key things that need to be done are:
- Add causal attention
- Translate the unit tests from the Google implementation and add them to the fork (and make sure we pass those tests, obviously)
- Clean up the short_sequence_behavior feature (or just get rid of it)
As always, any and all help with these tasks is welcome.<|||||>@TwinMooon Update: I got causal attention working by translating the Google implementation, but as I feared, it's very slow since it doesn't use a custom CUDA kernel. On my GPU, it's 19-20 times slower than noncausal attention. But there might be a way around this; I'll have to think about it.
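To show where the time goes, here is a naive sketch of the prefix-sum formulation being discussed, with shapes assumed to be [batch, heads, length, features]; the explicit Python loop over the sequence is exactly the part a fused/CUDA kernel would replace:

```python
import torch

def causal_performer_attention(q_prime, k_prime, v):
    # q_prime, k_prime: (batch, heads, L, m) feature maps; v: (batch, heads, L, d_v).
    b, h, L, m = q_prime.shape
    d_v = v.shape[-1]
    kv = q_prime.new_zeros(b, h, m, d_v)   # running sum of k'_i v_i^T
    k_sum = q_prime.new_zeros(b, h, m)     # running sum of k'_i (for the normalizer)
    outputs = []
    for i in range(L):
        kv = kv + k_prime[..., i, :].unsqueeze(-1) * v[..., i, :].unsqueeze(-2)
        k_sum = k_sum + k_prime[..., i, :]
        num = torch.einsum("bhm,bhmd->bhd", q_prime[..., i, :], kv)
        den = torch.einsum("bhm,bhm->bh", q_prime[..., i, :], k_sum).unsqueeze(-1)
        outputs.append(num / den)
    return torch.stack(outputs, dim=-2)    # (batch, heads, L, d_v)
```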
In the meantime, I think I'm going to add an optional dependency on the fast_transformers package (just wrapping the import statement in a try... except block) to get access to their custom CUDA kernel. I'll include a warning if the user doesn't have it installed that causal attention might have bad performance without the package. That's what the performer_pytorch package does.<|||||>@norabelrose For the past two days, I have implemented a version of causal attention by just translating Google's TensorFlow implementation. After reading your code, I found that our implementations are quite similar. However, the causal version runs a little faster than the non-causal version on my machine.
My PyTorch version is 1.5.0 and run it in a 2080Ti with CUDA 10.0<|||||>@TwinMooon Ok cool! If you wouldn’t mind submitting a pull request to my fork or just copy and pasting the relevant block of code here then I could check to see if your version is faster. It’s possible that I’m making some silly mistake.
I’m running it on a GeForce GTX 1080 with PyTorch 1.4.0 and CUDA 10.0.0. It was also noticeably a lot slower than noncausal attention on my CPU only laptop which has PyTorch 1.7.
PS: Is it possible that you got the tensor shapes mixed up? The Google implementation expects tensors of shape [length, batch, heads, random features/embedding dim] while everywhere else it's usually [batch, heads, length, random features/embedding dim], so you have to permute the tensor dimensions. The code will actually run if you give it tensors with the [B, H, L, D] shape though, so I got tripped up on that when I first translated the Google code and it made it look like it was faster than it actually was. If you're using a small batch size of say, 1 or 5, it'll be a lot faster to compute prefix sums over the batch dimension than doing it over the sequence length dimension of size 512 (which is what it's actually supposed to do).<|||||>@norabelrose You can review my implementation [here](https://github.com/TwinMooon/transformers-plus-performers/commit/c17d6473deb5316363f60bb2ddd1007d4364abe4). I permuted the tensor shape before stuff into the casual attention. <|||||>@TwinMooon In your code, you spell the word "causal" two different ways: "causal" and "casual". You use the "causal" spelling in the forward() method where short_sequence_behavior indicates to use softmax attention, and then you use casual everywhere else.
Is it possible that you're initializing the PerformerAttention object sort of like this:
`PerformerAttention(PerformerAttentionConfig(d_model=768, num_heads=12), causal=True)`
so that the "casual" attribute remains its default value of False, and none of the causal attention code ever actually gets called? I should probably change `__init__` so it that it always throws an error when you include a nonexistent attribute in kwargs.
In other news, I figured out a sort of clever way of making causal attention like 2x faster, and that's in my latest commit.<|||||>Mark Zakharov made a Colab where he successfully finetuned a DistilBERT model with the most recent version of my fork, which you can check out here: https://colab.research.google.com/drive/1BUYk4qxdt1b3d5mx6_t0nnX5jP9KwVAv?usp=sharing
I think the project is almost ready to be made into a formal pull request.<|||||>@norabelrose cool! I'll try it now.<|||||>This is really great work guys! We are currently running some experiments on the flax version of Performer internally and looking into how to best integrate the model into Transformers. @norabelrose a PR in PyTorch and or Tensorflow would be amazing!<|||||>Excited to see the progress here! Just wanted to give a heads-up that we fixed a [significant bug](https://github.com/google-research/google-research/commit/b09ac837cd5720bc60f1c16b472a7ab462b0ddb8) in our TF implementation of Performer fast attention.<|||||>Pull request finally submitted: #9325 <|||||>This is great! Thank you for your hard work! :) I was wondering if it would be trivial to extend this to support encoder-decoder models such as Bart or T5? Does the method `init_performer_attention' currently work for cross attention? <|||||>@benathi It should be quite simple. You'll just need to read through the implementations of BART and T5 and 1) find what name they are using for their query, key, value, and output linear layers so that `PerformerAttention` can mimic the naming convention and 2) find the immediate parent module of the attention module so you can put `@init_performer_attention()` on its `__init__` method with the appropriate parameters. Sometimes models will roll, for example, LayerNorm into the attention module which means a little bit of refactoring might be needed in order for `PerformerAttention` to be dropped in as a replacement. The inconsistency in implementation across models is the only thing that prevents this from being 100% trivial.<|||||>Hi Guys,
Happy New Year ! Thank you for your great work ! I wonder whether it would
make sense to meet soon to discuss where we are in terms of integration,
etc. :)
P.S
One quick observation on my side. In our experiments we found settings,
where Performer's approximate softmax was the best, but also applications,
where Performer-ReLU (that does not use random features)
was outperforming other Performers variants. Performers enable those
different attention variants simply via different functions for creating
kernel features (we will be actually adding some more kernel
features makers to the open-sourced version very soon).
We think about both Performer approximate softmax and Performer-ReLU (both
already open-sourced) as good defaults and which of them is to be chosen
should be probably determined by the
experiment in the particular setting under consideration. Also, ultimately
it would be exciting if one can find new kernel feature functions that
would outperform them. Performers are flexible and can be applied
even with those future variants of kernel feature makers that we are not
aware of right now :) So I think that modularizing the code so that one can
easily plug in her/his kernel feature maker (while still having available
good default variants) would be very attractive for new users and would
encourage people to further develop the codebase.
Best,
Krzysztof
<|||||>As per the suggestion by @kchoro , I just added the ability to pass in custom Callables to the `kernel_type` config parameter.<|||||>Hello,
Amazing work @norabelrose! I have been trying your performer implementation. I have copied your attention implementation
PerformerAttention, and replaced the normal self-attention in MobileBERT with it. I have tracked some metrics against other implementations. At a 512-token sequence length it consumes the same memory as the normal self-attention, and it is just as fast.
I have logged the metrics with Wandb:
https://wandb.ai/gaceladri/new_berts/reports/Memory-and-speed-comparison--Vmlldzo0NDA4MTI
Does that make sense? I have seen in the Long Range Arena paper (https://arxiv.org/abs/2011.04006) that it is 1.2x faster at 1k tokens, but I have not tried sequences that long. The point where I am confused is the memory consumption: shouldn't the attention mechanism, being linear with respect to sequence length, consume less memory even at shorter lengths?<|||||>Hi @gaceladri , can you tell us what your hyperparameters are set to for the model dimensions and number of random features? Those will both affect the scale of memory and computation. At short sequence lengths (512) you may not see any benefit in memory or speed. There's more detail on the computational complexity and how it depends on these hyperparameters in the paper.<|||||>@tomweingarten
hidden_size = 128,
layers=8,
intermediate_size=128,
embedding_size=128,
max_position_embeddings=512.
I have looked at the paper. You are right, in your paper, it is reported that in short sequences the timing should not be better. In the long range arena they start from 1000 tokens onwards and it is 1.2x faster than normal attention.
Thanks a lot for the clarification!<|||||>"It is easy to see that such a mechanism is characterized by space complexity O(Lr + Ld + rd) and time complexity O(Lrd) as opposed to O(L^2 + Ld) and O(L^2 d) of the regular attention (see also Fig. 1)."
At that size I would expect the O(Lr) term of the Performer space complexity to dominate, and is comparable to L^2 assuming your number of features is set to 256. Since your feedforward dimensionality is so small the other factors will largely drop out except for constants. So your result looks pretty normal, but let us know if you see unexpectedly large memory usage when scaling it bigger along the sequence dimension!<|||||>@norabelrose, thanks for the very nice work! There seems to be a merge conflict in __init__ of transformers now though.<|||||>> UPDATE: While I have Performer up and running with DistilBertModel, I've run into a problem that I didn't even think about when I started. DistilBERT, BERT, RoBERTa, and several other models use _learned_ positional embeddings, which impose a fixed 512-token max sequence length. In order to process sequences longer than 512 tokens, and thereby get the benefits of Performer attention, we'll need to use some other type of positional embeddings; for maximum flexibility, probably fixed sinusoidal embeddings with some large max sequence length. We could also try using relative position embeddings, although AFAIK no one has actually tried doing that with Performer attention and I would need to think about it a bit to figure out if that's actually feasible. DistilBertModel actually already comes with a sinusoidal_pos_embds option, but this option is overridden when you load the weights from a pretrained model.
>
> It's not clear how hard it would be to finetune a pretrained model that was trained with learned positional embeddings to use fixed sinusoidal ones, or if it would even be worth it— it may be necessary to just train them from scratch, especially since we are _also_ trying to swap out the attention mechanism. I'll try finetuning soon and see what happens. But it's looking more likely that we won't be able to just plug in the already existing checkpoints like we initially hoped. If that turns out to be the case, it would be really great if someone with access to more than one GPU could do the training from scratch and upload the models :)
>
> PS: After @djstrong's comment about FAVOR+'s performance on relatively short sequences, I wanted to get to the bottom of why FAVOR+ was so much slower until you get up to around 5000 tokens. Oddly enough, it turns out that the torch.max() operation which is used to generate the numerical stabilizer for the exp() kernel was the main culprit. When you don't use a stabilizer, Performer attention starts beating softmax attention at much shorter sequence lengths. So I added an option in PerformerAttentionConfig to turn off the stabilizer.
Hi @norabelrose and @tomweingarten, Just wonder based on your experiments, to use the model (bert+performer attention) for long sequence of text, do we need pre-train a bert + performer attention from scratch given the position embeddings are trainable and the # of it is only up to 512 in a pretrained bert-base? Or is there any tricks we can do to load a pre-trained bert-base and directly insert performer attention during the fine-tuning? for example, change the learned position embedding to sinusoidal ones and disgard the pretrained weights for position embeddings from bert-base.<|||||>@Neo9061 Sorry for taking a while to respond.
I never actually tried this, but based on this documentation from DeepSpeed it sounds like the best way to finetune a pretrained model with learned positional encodings on sequences longer than it was trained on is to simply duplicate the pretrained encodings N times: https://www.deepspeed.ai/tutorials/sparse-attention/. So I would try that before switching over to fixed sinusoidal embeddings or anything else.
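A rough sketch of what that duplication could look like in PyTorch (my own illustration, not DeepSpeed or transformers code):
```python
import torch

def extend_position_embeddings(old_emb: torch.nn.Embedding, new_max_len: int) -> torch.nn.Embedding:
    """Tile a learned position-embedding table so it covers longer sequences."""
    old_len, dim = old_emb.weight.shape
    repeats = -(-new_max_len // old_len)  # ceiling division
    new_emb = torch.nn.Embedding(new_max_len, dim)
    with torch.no_grad():
        new_emb.weight.copy_(old_emb.weight.repeat(repeats, 1)[:new_max_len])
    return new_emb

longer = extend_position_embeddings(torch.nn.Embedding(512, 768), 2048)
```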
That said, I recommend against using Performer attention in general and especially the implementation of it in this fork, since it isn't maintained. Imo better solutions for long sequences would be the Longformer or BigBird implementations already merged into master in this library, which can go up to 4096 tokens, or using the DeepSpeed library & utilities to retrofit Sparse Attention onto pretrained models from `transformers`.<|||||>@Neo9061 The DeepSpeed approach sounds reasonable to me, though I haven't tried it myself. If you're able, I'd recommend doing a small pre-training round whenever you "uptrain" from one model to another -- in this case that could allow you to re-learn the position encoding and also adjust the attention weights to move from softmax to the Performer softmax approximation.<|||||>Hi Guys,
Regarding relative positional encoding with Performers, this can be done in
several different ways now and there are lots of papers published recently
demonstrating this, for example:
https://arxiv.org/abs/2105.08399
It is a very simple trick that in practice works very well. If you want to
finetune with the Performer variant a model pretrained with learned
positional encodings, another option would be to freeze your pretrained
positional encoding in finetuning stage and concatenate with other
features. This can be done for instance by doing SVD of the learned
positional embedding mask:
QK^T + M = QK^T + AB^T = [Q|A][K|B]^T (and you apply favor to the last
expression)
Best,
Krzysztof
<|||||>I'm really excited by this potential addition! What is the timeline on integration into HF?<|||||>A working checkpoint with Performer would really help ;-)<|||||>Thanks! I thought the idea behind the Performer was that it's more about a methodology / attention technique, than it is about something pre-trained right (or at least, that's what I gathered from the paper). |
transformers | 7,674 | closed | Correctly tokenize sentence pairs | Hey,
I saw different ways to tokenize sentence pairs and the intuitive one is not shown here:
https://huggingface.co/transformers/preprocessing.html#preprocessing-pairs-of-sentences
So, I am asking here if I do right.
I encode pairs of sentences using a list of lists. Instead of handing over two separate lists, one per sentence, I hand over a list of lists, where each element is a list containing a single sentence pair. So:
pairs=[[sen1, sen2],[sen1,sen2],....]
Is this hopefully right too? | 10-09-2020 09:31:14 | 10-09-2020 09:31:14 | This would be still interesting.
For tokenizing a list of pairs I get
```
input_ids = tokenizer(pairs, max_length=50, padding="max_length",truncation=True, return_tensors="tf")
'token_type_ids': <tf.Tensor: shape=(1, 50), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
....
```
So I wonder whether I am doing this right, as it seems the padded positions are connected to the first sentence (token type id 0).
So token type IDs are 0 for the padded places, is that right?<|||||>Hey @datistiquo ,
As one can see in the following script:
```python
#!/usr/bin/env python3
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
input_ids = tokenizer([["hey hello there", "what is going on"], ["peter is in the", "what is going on"]], max_length=20, padding="max_length", truncation=True, return_tensors="tf")
print("List of pairs", input_ids)
input_ids = tokenizer("hey hello there", "what is going on", max_length=20, padding="max_length", truncation=True, return_tensors="tf")
print("Pair", input_ids)
```
tokenizing a list of pairs should be done exactly as proposed by you. Regarding the token_type_ids it is also correct that padded places should have a value of 0. In general if a model does not make use of `token_type_ids`, we return a 0 for such a model, see: https://github.com/huggingface/transformers/blob/6b034309ca4ca2ec6e5c3cacda92a448fa10b921/src/transformers/models/roberta/tokenization_roberta.py#L233 . So for padded tokens that should be discarded in the model, 0 seems like the most sensible choice to me.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,673 | closed | squad data preprocessor error (list index out of range) while finetuning bert on squad 1.1 | run_squad.py throws this error on squad v1.1 dataset
```
Traceback (most recent call last):
  File "run_squad.py", line 820, in <module>
    main()
  File "run_squad.py", line 762, in main
    train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
  File "run_squad.py", line 446, in load_and_cache_examples
    examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
  File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 602, in get_train_examples
    return self._create_examples(input_data, "train")
  File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 656, in _create_examples
    answers=answers,
  File "/home/din/question_answering_deepQA/venv_indic_deepQA/lib/python3.6/site-packages/transformers/data/processors/squad.py", line 729, in __init__
    self.start_position = char_to_word_offset[start_position_character]
IndexError: list index out of range
```
| 10-09-2020 06:55:15 | 10-09-2020 06:55:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@dineshggaonkar Did you fix it?
|
transformers | 7,672 | closed | [pegasus] Faster tokenizer tests | This PR implements #7354
* The suggested `fixtures/test_sentencepiece.model` couldn't be used since it has wrong special token ids: we need
1. no bos
2. eos_id is 1
3. unk_id is 2
added a script that builds a custom tokenizer model, `test_sentencepiece_no_bos.model` (a sketch of the training call is shown after this list). Had to figure out how to match the `"google/pegasus-large"` spm file; see the build script for the nuances.
* switched pegasus common tests to use the newly added `test_sentencepiece_no_bos.model` - 2 custom tests still use the large tokenizer - remained untouched
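For reference, the build script mentioned above essentially boils down to a SentencePiece training call along these lines (the input path and vocab size are placeholders, not the real settings):
```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="sample_text.txt",              # placeholder corpus
    model_prefix="test_sentencepiece_no_bos",
    vocab_size=1000,                      # placeholder size
    bos_id=-1,                            # 1. no bos token
    eos_id=1,                             # 2. eos id is 1
    unk_id=2,                             # 3. unk id is 2
)
```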
And a few extra tweaks I made while sorting this PR out:
* removed `get_vocab` in `tokenization_pegasus.py` as it's identical to superclass's one
* a few minor prose edits in related files
* expanded `testing_utils.py`'s `get_tests_dir` to accept an optional `append_path` arg to remove clutter from tests. Will probably rename it in the future to something else; works for now.
Fixes #7354
@sshleifer, @LysandreJik | 10-09-2020 05:05:04 | 10-09-2020 05:05:04 | |
transformers | 7,671 | closed | fix nn.DataParallel compatibility with PyTorch 1.5 | The same type of errors as in https://github.com/huggingface/transformers/pull/4300
# What does this PR do?
DataParallel replicate has a known issue in PyTorch 1.5: https://github.com/pytorch/pytorch/issues/40457
A similar PR proposes a workaround by removing the `next(self.parameters()).dtype` call: https://github.com/huggingface/transformers/pull/4300/files/7eef4f5a7575e05e822f8ef45d7f473a102671aa
I did the same in LXMERT
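For context, the pattern of the fix looks roughly like this (a toy sketch, not the actual LXMERT diff):
```python
import torch
from torch import nn

class MaskedLayer(nn.Module):
    """Toy layer showing the workaround pattern only."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        self.dense = nn.Linear(hidden, hidden)

    def forward(self, hidden_states, attention_mask):
        # Breaks when the module is replicated by nn.DataParallel on PyTorch 1.5:
        #     mask_dtype = next(self.parameters()).dtype
        # Reading the dtype off a tensor that is already in the graph avoids it:
        mask_dtype = hidden_states.dtype
        extended_mask = (1.0 - attention_mask.to(dtype=mask_dtype))[:, :, None] * -10000.0
        return self.dense(hidden_states) + extended_mask

out = MaskedLayer()(torch.rand(2, 5, 8), torch.ones(2, 5))
```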
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@julien-c | 10-09-2020 04:13:55 | 10-09-2020 04:13:55 | (tagging @eltoto1219 for information) |
transformers | 7,670 | closed | [s2s] Switch README urls to cdn | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-09-2020 00:55:53 | 10-09-2020 00:55:53 | |
transformers | 7,669 | closed | Update XLM-RoBERTa pretrained model details | 10-08-2020 20:01:12 | 10-08-2020 20:01:12 | ||
transformers | 7,668 | closed | Default Model Licenses | Hi, thanks for the great library!
I've been trying to compile a list of licenses for the default models and wanted to share in case others were wondering about it. Here's what I have so far:
*Note: table has been updated based on this discussion.*
Task | Model | License | Model Card w/ License
--- | --- | --- | ---
feature-extraction | distilbert-base-cased | Apache-2.0 | ✓ (added)
sentiment-analysis | distilbert-base-uncased-finetuned-sst-2-english | Apache-2.0 | ✓ (added)
ner | dbmdz/bert-large-cased-finetuned-conll03-english | MIT* (added) |
question-answering | distilbert-base-cased-distilled-squad | Apache-2.0 | ✓ (added)
fill-mask | distilroberta-base | Apache-2.0 | ✓
text-generation | gpt2 | MIT* | ✓
summarization | sshleifer/distilbart-cnn-12-6 | Apache-2.0 | ✓
translation, text2text-generation | t5-base | Apache-2.0 | ✓
zero-shot-classification (PyTorch) | facebook/bart-large-mnli | MIT (added) | ✓ (added)
zero-shot-classification (TensorFlow) | roberta-large-mnli | MIT | ✓
conversational | microsoft/DialoGPT-medium | MIT | ✓
Notes:
- `distil` models without a model card are listed as Apache 2.0 based on this comment: https://github.com/huggingface/transformers/issues/3357#issuecomment-614856396
- `gpt2` was changed from MIT to a custom license earlier this year: [history](https://github.com/openai/gpt-2/commits/master/LICENSE)
- Other `dbmdz` models use MIT (https://github.com/huggingface/transformers/pull/3492), but didn't find info on `dbmdz/bert-large-cased-finetuned-conll03-english`. If the model was fine-tuned from a pretrained BERT model, I imagine it would need to retain Apache 2.0 in addition to how the final model is licensed.
It'd be nice to get clarification on the two models that are missing licenses (and ideally ensure all default models have a clear license going forward). | 10-08-2020 19:22:23 | 10-08-2020 19:22:23 | `facebook/bart-large-mnli` is `mit` like other pretrained models initially released in [fairseq](https://github.com/pytorch/fairseq#license)
For `dbmdz` I'll let @stefan-it chime in, with the caveat that lineage/inheritance of licenses in fine-tuned ML models is (AFAIK) uncharted territory so if you have more info on that subject, please feel free to share it.
Finally, for models where the license isn't indicated in the model card, please feel free to open a PR to add it.<|||||>Hi @julien-c @ankane,
I am also very interested in clarifying the licenses of default models. In particular, I'd like to know the license of `dbmdz/bert-large-cased-finetuned-conll03-english`.
Cheers,
Alex Combessie<|||||>Hey @julien-c, thanks for the quick response and `facebook/bart-large-mnli` info. PR submitted.
Re fine-tuning licensing: Seems like it may fit the definition of "Derivative Works" in the Apache 2.0 license, but I don't have any special knowledge here, so will defer further discussion to someone that does.<|||||>Hi guys,
sorry for the late reply! I have no strong opinion on that topic, so I would just say that license of our `dbmdz` models will be MIT, because we're usually use this kind of license for both software and our pre-trained LMs :) <|||||>Great, thanks @stefan-it! That makes it clear that the model is open source :tada:
It'd be good to add a model card with the license. I personally think the most accurate summary of the model license is MIT + Apache-2.0 (unless it wasn't derived from Apache-2.0 work), but will leave it to you and the Transformers team to decide how you want to represent it.
<|||||>On the technical side, just took a look at the code and our YAML parser would support an array of licenses, so feel free to open a PR with
```
license:
- mit
- apache-2.0
```
On the legal side, 🤷♂️<|||||>Thanks @julien-c, good to know 👍
Will wait to hear thoughts from @stefan-it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,667 | closed | Add multi-class processor to apply categorical classification | This PR adds multi-class processor to glue.py to support categorical classification. | 10-08-2020 18:36:44 | 10-08-2020 18:36:44 | |
transformers | 7,666 | closed | Clear up confusing translation pipeline task naming | # 🚀 Feature request
Hello!
I am using the translation pipeline, and I noticed that even though I have to specify the language when I create the pipeline, the passed model overwrites that. So pipeline created as
`nlp = pipeline('translation_en_to_de', 'Helsinki-NLP/opus-mt-en-jap')`
would translate English to Japanese, contrary to the task name. Is this the intended way of translating other languages, and will it change in the future?
Would it be possible to just add a single 'translation' task for pipelines, which would then resolve the languages based on the model (which it seems to do anyway now) ?
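To make this concrete, a quick sketch (the second call is hypothetical and does not exist today):
```python
from transformers import pipeline

# Today: the task string says EN->DE, but the model actually translates EN->JA.
nlp = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-jap")

# Proposed: one generic task name, with the language pair resolved from the model.
# nlp = pipeline("translation", model="Helsinki-NLP/opus-mt-en-jap")
```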
## Motivation
It would clear up the current confusion, and make the `pipeline` function signature less prone to change.
It could also possibly reduce code duplication in https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## My contribution
I'd love to help with a PR, though I'm confused: The `SUPPORTED_TASKS` dictionary in pipelines.py contains exactly the same entries for each translation pipeline, even the default model is the same, yet the specific pipelines actually translate to different languages 🤔 | 10-08-2020 18:28:17 | 10-08-2020 18:28:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,665 | closed | tokenizer_bert.py not call _clean_text? | for transformers/src/transformers/tokenization_bert.py, there is a function called _clean_text.
But it seems this function is not called at all?
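For context, that cleaning step does roughly the following (a paraphrase, not the exact implementation):
```python
import unicodedata

def _clean_text(text: str) -> str:
    # Normalize whitespace to single spaces; drop NUL, replacement and control chars.
    output = []
    for char in text:
        cp = ord(char)
        if char in (" ", "\t", "\n", "\r") or unicodedata.category(char) == "Zs":
            output.append(" ")
        elif cp == 0 or cp == 0xFFFD or unicodedata.category(char).startswith("C"):
            continue
        else:
            output.append(char)
    return "".join(output)

print(_clean_text("hello\x00\tworld"))  # "hello world"
```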
In the original Google BERT repo (https://github.com/google-research/bert/blob/master/tokenization.py) the same function exists, and it is called at the beginning of tokenization. | 10-08-2020 18:24:20 | 10-08-2020 18:24:20 |
transformers | 7,664 | closed | TF Slow test CI | I Don't think tf slow tests are run by circleci OR github actions.
Should they be @LysandreJik ? | 10-08-2020 18:09:22 | 10-08-2020 18:09:22 | You're correct, it is not currently running as there were some issues setting up both the PT/TF test suites. Will look into it this afternoon.<|||||>The slow tests in TF take an absurdly long time. I had to stop them from running after ~3.5 hours as it was holding the whole test suite back. Will investigate more on a separate machine and try to skin it down.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,663 | closed | 2 slow TF T5 common tests failing on master | These should probably be run on CI somewhere.
Didn't know whether to assign @patrickvonplaten or @jplu.
These fail in both tf 2.2 and tf 2.3.
#### Command
```bash
RUN_SLOW=1 pytest tests/test_modeling_tf_t5.py -k saved
```
#### Traceback
```
================================================================================= FAILURES =================================================================================
__________________________________________________________ TFT5ModelTest.test_saved_model_with_attentions_output ___________________________________________________________
tests/test_modeling_tf_common.py:223: in test_saved_model_with_attentions_output
self.assertEqual(len(outputs), num_out)
E AssertionError: 5 != 4
_________________________________________________________ TFT5ModelTest.test_saved_model_with_hidden_states_output _________________________________________________________
tests/test_modeling_tf_common.py:185: in test_saved_model_with_hidden_states_output
self.assertEqual(len(outputs), num_out)
E AssertionError: 5 != 4
---------------------------------
``` | 10-08-2020 18:07:56 | 10-08-2020 18:07:56 | Yeah, that's a know failure and I didn't manage to make it work yet with the `cast_bool_to_primite(...)` function<|||||>This should be fixed in the next big TF rework.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,662 | closed | loss.backward() being called twice in Trainer._training_step() | **Setup**
pytorch: 1.5.1
huggingface transformers: 3.0.2
python: 3.7.6
OS: Pop!_OS 20.04 on VM
**Sample Code**
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer
import torch
from torch.utils.data import Dataset
import sys
import pandas as pd
ZERO = sys.float_info.min
ZERO_PT = torch.tensor(ZERO)
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
self.tokenizer.pad_token = self.tokenizer.eos_token
def eval_sentence(self, sent: str):
vec = torch.tensor(sentence_vec(sent), dtype=torch.float, requires_grad=True) # remove punct, lower case, split on space, prepend "<s>", postpend "</s>" start and stop tokens. Returns tensor of ints of vocab.
last_idx = min(max_ngram, len(vec)) #max_ngram is an int
probs = [max(ZERO_PT, pkatz(vec[0:i])) for i in range(2, last_idx + 1)] #pkatz is katz backoff probability and returns a tensor with grad function set.
for i in range(1, len(vec) - last_idx + 1):
j = i + last_idx
probs.append(max(ZERO_PT, pkatz(vec[i:j])))
probs = torch.stack(probs)
log_probs = torch.log(probs)
log_prob = torch.sum(log_probs)
len_tensor = torch.tensor(len(vec), dtype=float, requires_grad=True)
final_prob = torch.true_divide(-log_prob, len_tensor)
return final_prob
def sentence_loss(self, sent: str):
p, l = self.eval_sentence(sent)
return -p
def generate_text_while_finetuning(self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None, ):
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
outputs = (lm_logits,) + transformer_outputs[1:]
return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions)
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
max_length = input_ids.shape[1] + 50
full_generated_gpt2_ids = self.generate(input_ids=input_ids,
max_length=max_length,
is_finetuning_current_model=True,
attention_mask=attention_mask,
pad_token_id=50256,
do_sample=True,
top_k=50,
top_p=0.95)
decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True)
tmp_losses = [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples]
losses = torch.stack(tmp_losses)
loss = losses.mean()
return (loss,)
##The code below is the run script.
class MyDataset(Dataset):
def __init__(self, csv_file: str):
self.df = pd.read_csv(csv_file, encoding='ISO-8859-1')
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
text = self.df.iloc[idx, 1]
return text
def my_data_collator(dataset_samples_list):
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
tokenizer.pad_token = tokenizer.eos_token
encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)
batch = {}
batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])
batch['past'] = None
batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])
batch['position_ids'] = None
batch['head_mask'] = None
batch['inputs_embeds'] = None
batch['labels'] = None
batch['use_cache'] = True
return batch
dataset_train = MyDataset('/path/to/train_dataset.csv')
training_args = TrainingArguments(
output_dir='/path/to/out',
do_train=True,
per_device_train_batch_size=64,
logging_dir='/path/to/dir',
max_steps=300000
)
model = GPT2FinetunedWithNgrams.from_pretrained('gpt2')
trainer = Trainer(
model=model,
args=training_args,
data_collator=my_data_collator,
train_dataset=dataset_train
)
trainer.train()
trainer.save_model('/path/to/model_save_dir')
```
**Issue**
The above code will produce the following error for some training examples:
```python
Traceback (most recent call last):
File "/home/aclifton/ric-2020/textgen/run_finetune_gpt2.py", line 221, in <module>
testfinetune()
File "/home/aclifton/ric-2020/textgen/run_finetune_gpt2.py", line 215, in testfinetune
trainer.train()
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 637, in _training_step
loss.backward()
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/aclifton/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
```
What I'm finding is that in `Trainer._training_step()`, some examples are causing `loss.backward()` to be called twice. So the error makes sense as the first call will compute the graph then clear it and the second call is what throws the error. I'm not sure what would cause this to happen and was wondering if others might have an idea?
| 10-08-2020 17:31:07 | 10-08-2020 17:31:07 | I'm not sure how we can expect this to work with the encoding and the usage of the `generate` method directly in the forward method.
@patrickvonplaten can chime in if I'm wrong, but I believe the `generate` method can not be back-propagated through as it is right now.<|||||>@LysandreJik I came across one issue (#6105) with `generate` being used in `forward`. My temporary workaround was to introduce `is_finetuning_current_model` into `generate` that will call `generate_text_while_finetuning` instead of `forward` again to avoid the recursion.
I'm still learning pytorch so I might be wrong on this, and correct me if I am. I checked the `grad_fn` for `input_ids`, `full_generated_gpt2_ids`, and each of them were set to `None`. `tmp_losses`, `losses`, and `loss` all had their `grad_fn` set. My naive assumption is that backpropagation will run up to `tmp_losses`, skip over the `generate` part, and then continue on through the gpt2 model.
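A tiny standalone example (not from my model code) of where the graph gets cut when logits are turned into token ids:
```python
import torch

logits = torch.randn(1, 5, requires_grad=True)
probs = torch.softmax(logits, dim=-1)     # still differentiable, has a grad_fn
token_ids = torch.multinomial(probs, 1)   # sampling returns integer ids
print(probs.grad_fn)       # a SoftmaxBackward node
print(token_ids.grad_fn)   # None -> losses built from the ids can't reach the model weights
```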
Another interesting point is that I get the error on different training examples. I set the batch size to 1 and it would produce the error on, say, example 5. Removing example 5 from the training set and rerunning would cause the error on example 3, etc.<|||||>yes, the `generate()` cannot be used for backpropagation at the moment. <|||||>@patrickvonplaten Would that explain why I'm encountering the above error? Do you also mind elaborating on why `generate()` cannot be used for backpropagation? I'm interested to hear the details for the sake of my own knowledge.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,661 | closed | [pseudo] Switch URLS to CDN | Switch s3 urls -> CDN urls.
cc @julien-c | 10-08-2020 17:09:03 | 10-08-2020 17:09:03 | |
transformers | 7,660 | closed | [broken] tf generate: use model_kwargs | @patrickvonplaten , I started trying to get tf generation/cache to be consistent with pytorch, but got stuck trying to get T5 working. I figured I would share in case you see an easy fix/want to take over. Otherwise, feel free to ignore :) | 10-08-2020 17:04:10 | 10-08-2020 17:04:10 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,659 | closed | [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies | # What does this PR do?
Both the [SentencePiece](https://github.com/google/sentencepiece) and [Tokenizers](https://github.com/huggingface/tokenizers) libraries can limit the users:
- `sentencepiece` is not available on Conda on every plateform and one of the reason `transformers` is not on Conda
- `tokenizers` cannot be used inside some labs which need to build all from source and don't have a Rust tooling.
This PR aims at making both optional, leveraging the addition of SentencePiece algorithms in Tokenizers.
Note: at least one of `sentencepiece` and `tokenizers` will be required to use the SentencePiece tokenizers. `tokenizers` is also required to use the Fast tokenizers.
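Under the hood this relies on the usual optional-import guard, roughly (illustrative, not the exact helper names used in the PR):
```python
try:
    import sentencepiece  # noqa: F401

    _sentencepiece_available = True
except ImportError:
    _sentencepiece_available = False

def is_sentencepiece_available() -> bool:
    return _sentencepiece_available
```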
Main changes in the library organization:
- fast tokenizers are now separated in `tokenization_XXX_fast.py` files
- a `convert_slow_tokenizer.py` file host conversion methods for a slow to a fast tokenizer but a direct path from a `tokenizers` serialization file is favored when such a file is available.
- the test suite for slow and fast tokenizers are now gathered in a single test suite.
Main new requirements for the tokenizers to pass the new test suite:
- at least one default vocabulary checkpoint (and max length) should be provided, it is used for the deep tests
- the fast tokenizer should have an explicit `tokenizer_file` keyword argument with a default to `None` (we check that to be sure all the fast tokenizer can accept the new serialization format.
To-add:
- when the documentation for `tokenizers` is ready: add a lot of link on how to build and add a fast tokenizer
- add a detailed explanation on how to add a fast tokenizer in the library
This PR also:
- add a `__repr__` for the tokenizers (finally...)
- add a `name_or_path` attribute to the models and tokenizers giving the shortcut name or the path of the pretrained checkpoint used for instantiation
- update the fast tokenizer to use (when possible) the new serialization format of the `tokenizers` library, falling back on the old diverse set of saving format if not available.
- clean up the tests for the fast tokenizers to bring them in the common tokenizer tests
Fixes #7402 #5100 (and maybe others)
## Before submitting
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 10-08-2020 14:58:01 | 10-08-2020 14:58:01 | Ok ready for review @LysandreJik @sgugger.
It's pretty big sorry.
For now sentencepiece is still in the requirements as removing it has some effect on the pipeline tests and I think it's probably good to study this in a separate future PR.
There is no breaking change apart from the fact that importing the `**Fast` tokenizer directly from the `transformers.tokenization_xxx` is not possible anymore, they should be imported from `transformers` (the best and most robust choice) or from their new respective location at `transformers.tokenization_xxx_fast`.<|||||>Ok the `examples/seq2seq/test_seq2seq_examples.py::test_finetune[stas/tiny-wmt19-en-de]` is working now.
I'll address the other comments and we can merge on Monday.<|||||>@thomwolf, could you please assign defaults that are different from "stas/tiny-wmt19-en-de" entry and its contents? Otherwise it defeats the purpose of testing with this model, since defaults are used instead.
Alternatively, I will need to create a new tiny model with different config and change tests to use that instead.
Once this is done let's add this test that I tried to add here: https://github.com/huggingface/transformers/pull/7860 - I expanded it below a bit to do better testing:
```
diff --git a/tests/test_tokenization_fsmt.py b/tests/test_tokenization_fsmt.py
index c3e08d56..833b1742 100644
--- a/tests/test_tokenization_fsmt.py
+++ b/tests/test_tokenization_fsmt.py
@@ -24,6 +24,7 @@ from transformers.tokenization_fsmt import VOCAB_FILES_NAMES, FSMTTokenizer
from .test_tokenization_common import TokenizerTesterMixin
+FSMT_TINY = "stas/tiny-wmt19-en-de"
class FSMTTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
tokenizer_class = FSMTTokenizer
@@ -86,6 +87,13 @@ class FSMTTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
def tokenizer_en_ru(self):
return FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")
+ def test_online_tokenizer_config(self):
+ """this just tests that the online tokenizer files get correctly fetched and
+ loaded via its tokenizer_config.json and it's not slow so it's run by normal CI
+ """
+ tokenizer = FSMTTokenizer.from_pretrained(FSMT_TINY)
+ self.assertListEqual([tokenizer.src_lang, tokenizer.tgt_lang], ["en", "de"])
+
def test_full_tokenizer(self):
""" Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt """
tokenizer = FSMTTokenizer(self.langs, self.src_vocab_file, self.tgt_vocab_file, self.merges_file)
```
Thanks.
<|||||>Yes feel free to create another model for fsmt @stas00.
Ok this big PR is ready for merge as soon as possible (with regards to other PR merges not absolute time) so it doesn't drift too much.<|||||>There are some `~transformers.tokenization_utils_base.PreTrainedTokenizer` left (and same with fast) but that's an easy pattern to search for a subsequent PR.<|||||>Ok then I'm merging this PR and continuing in another one to:
- add CI tests for the package without sentencepiece and tokenizer
- remove sentencepiece as a required dependency
- switch to fast tokenizers by default
- fix the remaining doc patterns that you mentioned
On the topic of `from_pretrained` logic, we could (should probably be another PR):
- add a test that the config of the tokenizers is used as mentioned by @stas00
- we could probably remove the hard-coded configs at the same time
- switch to the cloud-front links like the models for faster dowloads<|||||>**edited**: thanks to @sshleifer - I needed to `pip install -e ".[dev]"` to update the new dependencies. that fixed the issues.
----------
I'm getting a massive amount of identical failures after this merge, primarily:
```
_____________________________________________ XLNetTokenizationTest.test_num_special_tokens_to_add_equal _____________________________________________
[gw1] linux -- Python 3.8.5 /home/stas/anaconda3/envs/main-38/bin/python
self = <tests.test_tokenization_xlnet.XLNetTokenizationTest testMethod=test_num_special_tokens_to_add_equal>
def test_num_special_tokens_to_add_equal(self):
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)):
> tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)
tests/test_tokenization_common.py:1896:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/tokenization_utils_base.py:1588: in from_pretrained
return cls._from_pretrained(
src/transformers/tokenization_utils_base.py:1661: in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
src/transformers/tokenization_xlnet_fast.py:142: in __init__
super().__init__(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'XLNetTokenizerFast' object has no attribute 'name_or_path'") raised in repr()] XLNetTokenizerFast object at 0x7f2ccdaaee80>
args = (), kwargs = {'additional_special_tokens': ['<eop>', '<eod>'], 'bos_token': '<s>', 'cls_token': '<cls>', 'do_lower_case': False, ...}
slow_tokenizer = None
fast_tokenizer_file = '/home/stas/.cache/torch/transformers/d152c146766f0a31888c4c9c0dcf82e42e42d09bf818bb74e126f2420cbd36c4.ecf1d38c0b94010f431264b9ded85217342f84c7bdae79b0472f7cd20b94052d'
def __init__(self, *args, **kwargs):
slow_tokenizer = kwargs.pop("__slow_tokenizer", None)
fast_tokenizer_file = kwargs.pop("tokenizer_file", None)
if fast_tokenizer_file is not None:
# We have a serialization from tokenizers which let us directly build the backend
> fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
E Exception: data did not match any variant of untagged enum PyNormalizerTypeWrapper at line 1 column 318041
```
do I need to remove cache or something? I won't test this until you tell me to in case you need someone with the old cache to test that it can recover from this.
A total of 90 failed tests with this error.<|||||>You should update `tokenizers` to the main PyPi version @stas00
```
pip install --upgrade tokenizers
``` |
transformers | 7,658 | closed | Green tests: update torch-hub test dependencies (add protobuf and pin tokenizer 0.9.0-RC2) | # What does this PR do?
Update the torch-hub CI test dependencies to add protobuf and pin tokenizer on 0.9.0-rc2 until final release.
## Who can review?
@sgugger @n1t0 | 10-08-2020 10:58:10 | 10-08-2020 10:58:10 | Thanks a lot! |
transformers | 7,657 | closed | SqueezBert link gives a 404 error | The main Readme.md file (https://github.com/huggingface/transformers/blob/master/README.md), the SqueezeBert link (https://huggingface.co/transformers/model_doc/squeezebert.html) gives a "404 - Not Found" Error. | 10-08-2020 10:26:33 | 10-08-2020 10:26:33 | Yes, unfortunately that link will only be live at the next release (for now squeezeBERT is only in master, so only in the master documentation).
@LysandreJik not sure if there is a way to properly fix this unless we add "Check all the links in the README to remove the master" in our release check list.<|||||>You're right, I don't think there's any other way without over-engineering a feature. |
transformers | 7,656 | closed | T5 Beam search num_beans always equals 1 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Debian 10.6
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): N.A.
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. -->
TextGeneration: @TevenLeScao
T5: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
1. load T5 model and tokenizer
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
# initialize the model architecture and weights
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
```
2. prepare input for summarization. i guess the error persists for any given generation task but did not try.
```
article = """article etc etc"""
inputs = tokenizer.encode("summarize: " + article,
return_tensors = "pt",
max_length = 512, truncation = True)
```
3. attempt beam search
```
model.config.update({"num_beans": 4})
print(model.config.num_beans)
# output is 4 as expected
outputs = model.generate(inputs,
max_length = 200,
min_length = 100,
length_penalty = 5,
num_return_sequences = 2,
early_stopping = True)
```
or
```
outputs = model.generate(inputs,
max_length = 200,
min_length = 100,
length_penalty = 5,
num_beams = 4,
num_return_sequences = 2,
early_stopping = True)
```
error:
> AssertionError: Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1
as if num_beans == 1, but we set num_beans to 4.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
the generate function should execute beam search with 4 beams without errors
| 10-08-2020 09:17:38 | 10-08-2020 09:17:38 | Hey @marcoabrate,
please make sure that you use the correct parameter name `num_beams` instead of `num_beans`.
When using `num_beams`, I cannot reproduce your error.<|||||>of course it was that!
thank you |
transformers | 7,655 | closed | Eval_loss in prediction is very high : transformers/examples/token-classification/run_ner.py | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
examples/token-classification: @stefan-it
## Information
I am using NER "Emerging and Rare Entities task: WNUT’17 (English NER) dataset"
I am executing the steps as prescribed in https://github.com/huggingface/transformers/tree/08ba4b4902df5a18f5ad41d9490c50fe0a4c970f/examples/token-classification
The problem arises when using prediction:
* [ ] the official example script: wnut_17.json
```json
{
  "data_dir": "/home/priya/data_wnut_17",
  "labels": "/home/priya/data_wnut_17/labels.txt",
  "model_name_or_path": "bert-large-cased",
  "output_dir": "wnut-17-model-1",
  "max_seq_length": 128,
  "num_train_epochs": 3,
  "per_device_train_batch_size": 16,
  "save_steps": 425,
  "seed": 1,
  "do_train": true,
  "do_eval": true,
  "do_predict": true,
  "fp16": false
}
```
* [ ] my own modified scripts: wnut_17_mod.json (identical to the config above, except that `do_train` and `do_eval` are set to `false` and `overwrite_output_dir` is added)
```json
{
  "data_dir": "/home/priya/data_wnut_17",
  "labels": "/home/priya/data_wnut_17/labels.txt",
  "model_name_or_path": "bert-large-cased",
  "output_dir": "wnut-17-model-1",
  "max_seq_length": 128,
  "num_train_epochs": 3,
  "per_device_train_batch_size": 16,
  "save_steps": 425,
  "seed": 1,
  "do_train": false,
  "do_eval": false,
  "do_predict": true,
  "fp16": false,
  "overwrite_output_dir": false
}
```
The tasks I am working on is:
* [ ] re-run WNUT’17 dataset.
My end-goal is to identify abbreviation and explanation from sentences (labels B-abbr, I-abbr, B-expl, I-expl and O). For the example sentence
> Here GAAP stands for Generally accepted accounting principles
we should get token classified as
> Here O
> GAAP B-abbr
> stands O
> for O
> Generally B-expl
> accepted I-expl
> accounting I-expl
> principles I-expl
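(For that label set, the `labels.txt` file passed via the `labels` argument would presumably contain one tag per line, e.g.:)
```
O
B-abbr
I-abbr
B-expl
I-expl
```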
## To reproduce
Steps to reproduce the behavior:
1.python run_ner.py wnut_17.json
Prediction: 100%|████████████████████████████████████████████████████████| 162/162 [01:08<00:00, 2.38it/s]
10/06/2020 07:21:15 - INFO - __main__ - eval_loss = 0.2851179020827574
10/06/2020 07:21:15 - INFO - __main__ - eval_accuracy_score = 0.9511413182867402
10/06/2020 07:21:15 - INFO - __main__ - eval_precision = 0.5997392438070405
10/06/2020 07:21:15 - INFO - __main__ - eval_recall = 0.4263206672845227
10/06/2020 07:21:15 - INFO - __main__ - eval_f1 = 0.49837486457204777
2.python run_ner.py wnut_17_mod.json
Prediction: 100%|████████████████████████████████████████████████████████| 162/162 [01:08<00:00, 2.38it/s]
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1175: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead.
warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning)
10/06/2020 08:30:41 - INFO - __main__ - eval_loss = 2.827890293463151
10/06/2020 08:30:41 - INFO - __main__ - eval_accuracy_score = 0.016072497221509788
10/06/2020 08:30:41 - INFO - __main__ - eval_precision = 0.0065180614986565825
10/06/2020 08:30:41 - INFO - __main__ - eval_recall = 0.12140871177015755
10/06/2020 08:30:41 - INFO - __main__ - eval_f1 = 0.012371912924399112
## Expected behavior
I am seeing a 10-fold increase in eval-loss from 0.28 to 2.8.
Other than the changes in `wnut_17_mod.json`, I have made no other changes. Please advise how to achieve the published eval_loss and performance.
Thanks,
| 10-08-2020 07:10:25 | 10-08-2020 07:10:25 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any updates? During training the loss is very low and during evaluation it's insanely high. I tried to check whether the hyperparameters were the problem, but I didn't find anything.<|||||>No. The eval_loss is low. No updates.<|||||>I was reading more about the metrics, and it looks like for a specific task they carry a specific meaning; e.g., I'm doing multi-label classification, so these are the [metrics](https://simpletransformers.ai/docs/classification-models/#evaluating-a-classification-model):
**LRAP**
Label ranking average precision.
Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score.
The obtained score is always strictly greater than 0 and the best value is 1.
**Evaluation Loss**
Binary Cross Entropy Loss.
It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every NN output vector component is not affected by other component values.
Cross-entropy loss awards lower loss to predictions which are closer to the class label. The accuracy, on the other hand, is a binary true/false for a particular sample. That is, Loss here is a continuous variable i.e. it's best when predictions are close to 1 (for true labels) and close to 0 (for false ones).
Theoretically, the output of the model is not wrong, but the interpretation is.
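A minimal illustration of that per-label independence (a generic PyTorch sketch, not tied to the model in this issue):
```python
import torch

loss_fn = torch.nn.BCEWithLogitsLoss()

logits = torch.tensor([[2.0, -1.0, 0.5]])   # raw scores for 3 independent labels
targets = torch.tensor([[1.0, 0.0, 1.0]])   # multi-label ground truth

# sigmoid + binary cross-entropy, computed element-wise and averaged:
# one label's score never changes another label's contribution to the loss
print(loss_fn(logits, targets))
```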
- References
[Understanding Categorical Cross-Entropy Loss, Binary Cross-Entropy Loss, Softmax Loss, Logistic Loss, Focal Loss and all those confusing names](https://gombru.github.io/2018/05/23/cross_entropy_loss/#:~:text=is%20available%20here-,Binary%20Cross%2DEntropy%20Loss,affected%20by%20other%20component%20values.)
[Loss vs Accuracy](https://kharshit.github.io/blog/2018/12/07/loss-vs-accuracy#:~:text=Cross%2Dentropy%20loss%20awards%20lower,0%20(for%20false%20ones).)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,654 | closed | output probabilities of generated sequences in generate function | # 🚀 Feature request
output probabilities of generated sequences in generate function (generation utils)
thank you so much! :) | 10-08-2020 05:57:12 | 10-08-2020 05:57:12 | Duplicate of https://github.com/huggingface/transformers/issues/3891<|||||>Actually, those issues are different and we should probably provide this functionality!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,653 | closed | [pseudolabels] cleanup markdown table | 10-08-2020 02:22:47 | 10-08-2020 02:22:47 | ||
transformers | 7,652 | closed | Fix 3 failing slow bart/blender tests | 3 of these were simple fixes.
+ 1 typo in blenderbot
+ 2 BART failures changed by new `assert_tensors_close` helper fn checking shapes more aggressively. Output shapes have not changed.
The fourth failure is a bit harder to verify
Blenderbot 3b was OOMing
before fix: 11.4 GB
After: 6.4 GB
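(A minimal sketch of the ordering change behind the saving; the checkpoint name below is an assumption for illustration, not copied from the test:)
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/blenderbot-3B")

# old order: the fp32 weights land on the GPU first, then get cast down
# model = model.to("cuda").half()   # peak ~11.4 GB for the 3B checkpoint

# new order: cast on CPU first, so only fp16 weights are ever moved
model = model.half().to("cuda")     # peak ~6.4 GB
```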
Why: going to fp16 before going to cuda | 10-08-2020 01:38:15 | 10-08-2020 01:38:15 | I don't think the change can possibly harm, so I will merge without review. cc @sgugger @LysandreJik |
transformers | 7,651 | closed | Fix Failing Slow tests | ```
FAILED tests/test_modeling_bart.py::BartHeadTests::test_tokenization - Assert...
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_mnli_inference
FAILED tests/test_modeling_blenderbot.py::Blenderbot3BIntegrationTests::test_generation_from_short_input_same_as_parlai_3B
FAILED tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_long_input
``` | 10-08-2020 01:21:14 | 10-08-2020 01:21:14 | |
transformers | 7,650 | closed | Import integration libraries first | # What does this PR do?
This PR restores the order of importing 3rd-party integrations before other ML frameworks, and before any other transformer modules.
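A minimal sketch of the intended ordering (using comet_ml as the example integration):
```python
# experiment-tracking integrations need to be imported before any ML framework,
# otherwise their auto-logging hooks may not be able to patch the framework modules
import comet_ml  # noqa: F401  -- must come first

import torch  # noqa: F401
from transformers import Trainer, TrainingArguments  # noqa: F401
```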
## Before PR:
* importing comet_ml later causes an error
## After PR:
* using comet_ml functionality is restored | 10-07-2020 23:16:13 | 10-07-2020 23:16:13 | |
transformers | 7,649 | closed | setup of Trainer class for distributed trainning | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I am running the sample code and got confused about how to set up distributed training; below is the code I used:
```python
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer
from tokenizers.implementations import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
tokenizer = ByteLevelBPETokenizer(
"./EsperBERTo/vocab.json",
"./EsperBERTo/merges.txt",
)
tokenizer.enable_truncation(max_length=512)
import torch
torch.cuda.is_available()
from transformers import RobertaConfig
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=6,
type_vocab_size=1,
)
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM(config=config)
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="./oscar.eo.txt",
block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./EsperBERTo",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
fp16=True,
local_rank=3,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./EsperBERTo")
```
I want to know how to set the `local_rank` parameter in the Trainer class and what command I should use.
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=3 --master_addr="192.168.1.1" --master_port=1234 starttrans2.py
Is the above a correct way to run this script if I want to run on a single machine with 4 GPUs?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-07-2020 22:34:37 | 10-07-2020 22:34:37 | We prefer to use the [forum](https://discuss.huggingface.co/) for questions like this. The class `HFArgumentParser` is there to help parse the arguments received by your script and pass them along to `Trainer`. Look at the [run_glue script](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) for an example of use. You should then be able to use your script with `torch.distributed.launch`.<|||||>I also have same problem. Are you slove this problem? Can you tell the right way to train the model on multi-gpu, just one machine. Thanks.<|||||>> I also have same problem. Are you slove this problem? Can you tell the right way to train the model on multi-gpu, just one machine. Thanks.
are u using K80 gpu? I found K80 likely have communication problem which does not have a easy fix.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,648 | closed | does tokenizer support emoji? | Hi, I have the code below and it always encodes emoji as "unk". Can someone tell me what I should do? Thanks
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
s = " 😃 hello how are you"
tokenizer.tokenize(s)
```
['[UNK]', 'hello', 'how', 'are', 'you'] | 10-07-2020 20:00:56 | 10-07-2020 20:00:56 | Hi! The tokenizer you're using (`bert-base-uncased`) was not trained with emojis, therefore it cannot tokenize them correctly. You should add this token to the tokenizer vocabulary:
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
s =" 😃 hello how are you"
tokenizer.add_tokens("😃")
print(tokenizer.tokenize(s))
# ['😃', 'hello', 'how', 'are', 'you']
```
Please be aware that the model you're using should have its embedding matrix updated to include the embedding for the new token added. You can see the [documentation here](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_token#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens), here's how you should update your model embedding matrix:
```py
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```<|||||>Thanks, @LysandreJik ! I have another question. When I train using tweets, since there is a lot of noise, a tweet like 'This is soooo good' would be a problem for BERT tokenizer cuz "soooo" is not in the vocabulary. Is there a method to add all of them? Right now I am thinking about a kinda ugly way, just use nltk tweettokenizer to process all tweets and add to vocab with words, emoji, etc that appear frequently<|||||>Hi @steveguang, sentences like `This is soooo good` actually won't be a problem for the BERT tokenizer, as it can decompose the word `soooo` in multiple tokens:
```py
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> tokenizer.tokenize("This is soooo good")
['This', 'is', 'so', '##oo', '##o', 'good']
```
However, when working with a dataset that seems to have a lot of unknown tokens, it is generally a good idea to identify the tokens that come up relatively often and to add them to your tokenizer. A good example would be the emojis mentioned above, as these are an important attribute to the meaning of the sentence. |
transformers | 7,647 | closed | Project: Gather summarization datasets and try to replicate pegasus results on them | Dear @stas00 and whoever else is willing to help!
So far I have only checked pegasus' rouge scores on 2/12 datasets for which we have checkpoints.
For the other 10 datasets I either haven't tried or have tried briefly and gotten stuck.
The full scope of the project is that:
for each dataset:
1) There is an automated way to download the data, either from S3 or source. (To the extent possible, much of the logic in this script should eventually live in the `datasets` package).
2) we know our pegasus implementation's rouge score
2b) if our score is very different than the authors', we know whether that difference is due to data preprocessing, and if it is, we can preprocess the dataset similarly to the pegasus authors.
3) Our rouge score is within 0.3 Rouge2 of the reported. (Authors) column below.
### Steps
#### Getting Data
By far the most difficult part of each project is getting the dataset. And giving up quickly if you can't and writing a github issue somewhere.
I tried 1 approach to getting data: [this script](https://gist.github.com/sshleifer/c4aed7bf4418b50caee731e94be05d9f)
It worked for gigaword, I just haven't done the evaluation, but it failed for `aeslc` and then I gave up.
Another complementary approach would be to try to directly use the [pegasus dataset code](https://github.com/google-research/pegasus/blob/master/pegasus/data/public_supervised_datasets.py)
This will likely push preprocessing issues towards the back of the project. (when we try to send PRs to the datasets repo), but might be better than using my script.
#### After you get data
When you have gotten a dataset you can sanity check
```bash
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py \
    --model_name google/pegasus-large \
    --save_dir xsum_generations \
    --data_dir xsum \
    --prefix test \
    --n_obs 100  # see note 1 for --model_name
```
Note 1: you can just keep running pegasus-large and expect a high single-digit or better rouge2 score, to avoid downloading all the checkpoints, or you can change this to the relevant checkpoint.
Note 2: I am happy to run all the evals on newer hardware, very easy for me.
Note 3: We can do data sharing by getting you aws creds, or some other solution. Key is that I can download from command line, e.g. Google Drive + gdown.
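For example, a tarball shared on Google Drive can be pulled non-interactively along these lines (the file id and file names are placeholders):
```python
import gdown
import tarfile

# placeholder id -- replace with the real Drive file id
gdown.download("https://drive.google.com/uc?id=<FILE_ID>", "dataset.tar.gz", quiet=False)
tarfile.open("dataset.tar.gz").extractall("dataset")
```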
### Misc thoughts:
+ arxiv and pubmed are listed under `scientific_papers` in the datasets package (see the sketch after this list).
+ This is really 10 projects (1 each dataset, 2 of which I've started). If I were you I would ignore the started 2 and start on a few other ones.
+ If a dataset only has train/test or train/val or some other splits, see how the pegasus authors did the split.
+ Partial credit is valuable!
+ this could easily have been an issue for the datasets project rather than the transformers project.
+ There is no reason to merge PRs quickly for this project, but eventually we want a (much better) download_summ_dataset.py script or instructions for using other libs to accomplish the same outcome.
+ Will be good for both of us to learn the datasets internals.
+ Raw Billsum has multiple line articles, which breaks everything :( , (we could try to support raw nlp datasets in our `DataLoader`)
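Regarding the `scientific_papers` note in the list above, loading those two configs through the `datasets` package presumably looks like this (a sketch, not verified here):
```python
from datasets import load_dataset

arxiv = load_dataset("scientific_papers", "arxiv")
pubmed = load_dataset("scientific_papers", "pubmed")
print(arxiv["train"][0].keys())  # expect 'article', 'abstract', 'section_names'
```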
Here is a copy of the table we are trying to fill out in #6844 : (I made a new issue to avoid spamming that one)
| dataset | Authors| This Repo|
| ---- | ----|----|
| xsum | 47.60/24.83/39.64| 46.87/24.46/39.15|
| cnn_dailymail | 44.16/21.56/41.30| see 1|
| newsroom | 45.07/33.39/41.28 | have `.tar` file|
| multi_news | 47.65/18.75/24.95| |
| gigaword | 39.65/20.47/36.76| 39.79/20.56/36.80|
| wikihow | 46.39/22.12/38.41 *| Asked Authors |
| reddit_tifu | 27.99/9.81/22.94|32.75/11.68/24.97|
| big_patent |52.29/33.08/41.66 *| |
| arxiv | 44.21/16.95/25.67| |
| pubmed | 45.97/20.15/28.25| |
| aeslc | 37.68/21.25/36.51|37.1/21.4/35.94|
| billsum | 59.67/41.58/47.59|54.99/37.43/43.07|
Originally from mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update)
This was really long, and probably disorganized, so feel free to ask clarifying questions here or on slack!
cc @stas00
1) I got similar scores on cnn-dailymail by finetuning the authors' model on our dataset for a bit.
2) reddit_tifu: added `--min_length 32` | 10-07-2020 19:39:16 | 10-07-2020 19:39:16 | yes, please<|||||>I could work on getting the datsets, replicating will be hard (compute!!!). I have shared wikihow and arxiv on forum<|||||>I will start working on this over the next few days, so let's not duplicate the efforts and claim here which ones we are working on.<|||||>@stas00
The following remaining datasets are available in the `datasets` lib:
```
- multi_news
- reddit_tifu
- billsum
- aeslc
```
could write a script to download and process these<|||||>Do you mean to say that these 4 you listed are already in hf's `datasets`, and so we only need to download and convert these, right?
So the others that you haven't listed and Sam hasn't already processed still need to be sorted out from scratch, correct?
My plan was to start with `wikihow` as you shared some instructions at https://discuss.huggingface.co/t/wikihow-dataset-preprocessing/1413<|||||>> And so we only need to download and convert these, right?
Yes, these 4 are already in hf's `datasets`, we just convert and do some pre-processing before,
I have shared arxiv as well but that needs to be pre-processed.
for `newsroom` we need to request it from the author, so I'm not sure if we are allowed to share it directly.<|||||>If it's very heavy compute+disc-space-wise we could write scripts for small samples and then ask Sam or somebody at HF to run on the full data - since they probably have access to better hardware than us.<|||||>`arxiv` is huge (3.9 GB something), rest we can handle on colab I guess<|||||>OK, I will start with `wikihow` and in parallel will inquire w/ the author of `newsroom` wrt permission, since the latter could take time.
And then do `arxiv` afterwards.
So do you want to work on the 4 you listed, meanwhile? Either way works for me so please don't hesitate to choose what works the best for you.<|||||>Yes, I'll take those 4 :)<|||||>`newsroom` can also be consumed through `datsets` but needs manual download<|||||>yes, I was just looking at https://huggingface.co/datasets/newsroom but the information is wrong:
```
from datasets import load_dataset
dataset = load_dataset("newsroom")
```
```
Downloading: 5.21kB [00:00, 1.45MB/s]
Downloading: 2.68kB [00:00, 844kB/s]
Using custom data configuration default
Downloading and preparing dataset newsroom/default (download: Unknown size, generated: 4.94 GiB, post-processed: Unknown size, total: 4.94 GiB) to /home/stas/.cache/huggingface/datasets/newsroom/default/1.0.0/4b405ccd64e15f685065870ea563a1e6a034d1bd269a5427f40146d81549095e...
Traceback (most recent call last):
File "x", line 3, in <module>
dataset = load_dataset("newsroom")
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 608, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 453, in download_and_prepare
assert (
AssertionError: The dataset newsroom with config default requires manual data.
Please follow the manual download instructions: You should download the dataset from http://lil.datasets.cornell.edu/newsroom/
The webpage requires registration.
To unzip the .tar file run `tar -zxvf complete.tar`. To unzip the .gz files
run `gunzip train.json.gz` , ...
After downloading, please put the files under the following names
dev.jsonl, test.jsonl and train.jsonl in a dir of your choice,
which will be used as a manual_dir, e.g. `~/.manual_dirs/newsroom`
Newsroom can then be loaded via:
`datasets.load_dataset("newsroom", data_dir="~/.manual_dirs/newsroom")`.
.
Manual data can be loaded with `datasets.load_dataset(newsroom, data_dir='<path/to/manual/data>')
```
No such thing as http://lil.datasets.cornell.edu/newsroom/ - getting 404.
This is not the first bogus dataset in `datasets`.
<|||||>We need to request it from here http://lil.nlp.cornell.edu/newsroom/download/index.html
<|||||>Geesh, this one
https://github.com/lil-lab/newsroom
also links to 404
https://summari.es/download/<|||||>Hmm, it looks that perhaps somebody at HF should file this form then, correct?
http://lil.nlp.cornell.edu/newsroom/download/index.html -> https://cornell.qualtrics.com/jfe/form/SV_6YA3HQ2p75XH4IR
We can't use our names to ask for a permission for the dataset to be used by an open source project.
@sshleifer?<|||||>scraping newsroom is hard! Better to request it.
I had requested it, I got the link after a month and by the time I saw the mail it was already expired 😂
So, it would be better if someone from HF requests it, they will probably receive it faster<|||||>We definitely shouldn't scrape it, since we won't be able to use it anyway w/o their permission. So yes, @sshleifer, please help us out here. <|||||>Helper scripts for pubmed
https://github.com/armancohan/long-summarization
https://github.com/kedz/summarization-datasets<|||||>here are the results of eval on the wikihow data you shared, @patil-suraj
This on dual Titan X:
* sample of 100, run time: 0:03:05
`{'rouge1': 23.7695, 'rouge2': 5.3349, 'rougeL': 15.6991, 'rougeLsum': 16.7567, 'n_obs': 100, 'seconds_per_sample': 2.433, 'n_gpus': 2}`
* full, run time: 8:19:35
`{'rouge1': 24.6291, 'rouge2': 5.7999, 'rougeL': 15.6812, 'rougeLsum': 16.6907, 'n_obs': 11996, 'seconds_per_sample': 2.505, 'n_gpus': 2}`
So that gives us 24.63/5.80/16.69 which is far far away from 46.39/22.12/38.41
The command was:
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-large \
--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --bs 4
```
<|||||>That's scary low. Do you think there is an issue with dataset ? <|||||>@stas00 , @sshleifer
Wrote a helper script to download and save summ datasets
https://github.com/patil-suraj/summarization_datasets
Currently includes `aeslc, billsum and reddit_tifu`, rest should be easy to add.
Processing scripts are taken form the official datset repos, split information is copied from the `pegasus` repo.
Enjoy!<|||||>@stas00 Try using `google/pegasus-wikihow` as the model can do `--n_obs 100` now that we are calibrated. I should have specified that in the spec. We want to test the fine-tuned model.
Would also be interested in knowing whether `--max_source_length 512` changes anything.
(You can see the expected params that should be checked into each config [here](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_pegasus.py#L54) In those triples, `length_penalty` and `max_length` are generation params that should be reflected in `model.config`, `max_position_embeddings` should only be reflected in `tokenizer.model_max_length` (didn't save static pos embeddings, I don't think).<|||||># google/pegasus-wikihow
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-wikihow \
--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --n_obs 100 --bs 4
```
```
{'rouge1': 21.4782, 'rouge2': 8.7003, 'rougeL': 18.9314, 'rougeLsum': 18.8476, 'n_obs': 100, 'seconds_per_sample': 1.1432, 'n_gpus': 2}
```
There is a slight improvement on all but `rouge1` w/ `google/pegasus-wikihow`
It also appears to be much faster!
On 1000 objects the performance drops:
```
{'rouge1': 20.7939, 'rouge2': 8.4804, 'rougeL': 18.12, 'rougeLsum': 18.0778, 'n_obs': 1000, 'seconds_per_sample': 0.3459, 'n_gpus': 2}
```
my intuition tells me that either the dataset has some broken data in it, or all of it has some issues - since we aren't getting above the score from 100 objects.
# --max_source_length 512
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name google/pegasus-wikihow \
--save_dir xsum_generations --data_dir /hf/wikihow/wikihow --prefix test --n_obs 100 --bs 4 \
--max_source_length 512
```
```
{'rouge1': 21.5527, 'rouge2': 8.6861, 'rougeL': 18.9145, 'rougeLsum': 18.9772, 'n_obs': 100, 'seconds_per_sample': 0.5674, 'n_gpus': 2}
```
looks worse on 2 scores, better on 2 other scores.
<|||||>> Do you think there is an issue with dataset ?
I didn't get a chance to study it yet - just had the time to run the eval.<|||||>need a little script to convert the json dumps into a nice md table so that it's easier to read the results, like `run_eval_search.py` does.<|||||>`newsroom`: filled out the form
`wikihow`: asked the authors https://github.com/google-research/pegasus/issues/111 if @stas00 could paste 1 article, 1 target and 1 generation as a comment on that issue, it would be helpful.
`gigaword`: Done<|||||>@patil-suraj if you have preprocessed links you want me to run evaluate on, feel free to post/slack and I can run eval. My preference would be to gdown/unzip a directory that includes only
```
data/test.source
data/test.target
```<|||||>I started a sub-section of my porting repo to gather script and instructions for building these datasets:
https://github.com/stas00/porting/tree/master/datasets/pegasus
So for completed things please either submit a PR or send me the files and I will add them there. Whatever is more efficient for you.
p.s. I'm doing it in a separate repo, since @sshleifer doesn't think they should go into the main repo (I think they should, but this can be fixed later as long as we have them).
<|||||>Here is a little helper util that helps to show the differences in strings - useful when matching pre-processing data.
```
import difflib
def str_compare(a, b):
"""
If strings are mismatched, print the diff with context
Returns true if strings match, false otherwise
adapted from https://stackoverflow.com/a/17904977/9201239
"""
match = True
if len(a) != len(b):
print(f"length mismatch: a={len(a)}, b={len(b)}")
def context(s, i):
start = i-10
end = i+10
if start < 0: start = 0
if end > len(s)-1: end = len(s)-1
return s[start:end]
for i, s in enumerate(difflib.ndiff(a, b)):
if s[0] == ' ':
continue
elif s[0] == '-':
match = False
print(f'Delete "{s[-1]}" from position {i}, ctx=[{context(a, i)}]')
elif s[0] == '+':
match = False
print(f'Add "{s[-1]}" to position {i}, ctx=[{context(a, i)}')
return match
```<|||||>I'm trying to reproduce the multi-news results. But it seems the ROUGE scores are not even in the ballpark of the original report or the ones in [here](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0).
The command I used was
`python -m torch.distributed.launch --nproc_per_node=4 run_distributed_eval.py --model_name google/pegasus-multi_news --data_dir multi-news/processed/hf/ --save_dir output_data/ --bs 6`
`{"rouge1": 44.7752, "rouge2": 16.1437, "rougeL": 22.7593, "rougeLsum": 40.5531, "n_obs": 5622, "seconds_per_sample": 0.6931, "n_gpus": 4}`
I downloaded the data from the original authors of Multi-News: [link](https://drive.google.com/drive/folders/1qZ3zJBv0zrUy4HVWxnx33IsrHGimXLPy).
I'm not sure if the discrepancy is due to the preprocessing, but to my understanding, pegasus only replaces `NEWLINE_CHAR` with `\n`. Could someone give some hints?<|||||>Hi @kylie-box ,
`pegasus` used the datasets provided by `tfds` library. There is some discrepancy in processing in original data and the data provided by `tfds`. We also faced similar problem initially.
Use the scripts in [this](https://github.com/stas00/porting/tree/master/datasets/pegasus) repo (which @stas00 built for reproducing the results) to download and process the datasets.<|||||>Following up to @patil-suraj's comment - specifically, this for multi_news:
https://github.com/stas00/porting/tree/master/datasets/pegasus/multi_news
follow the instructions under `process.txt`.
The key is not to do any preprocessing, other than newlines.<|||||>Thanks, @stas00 and @patil-suraj! I was able to reproduce the results using data from tfds and their preprocessing.<|||||>Awesome - glad to hear it worked and we uploaded the eval tar balls as well, see https://github.com/stas00/porting/tree/master/datasets/pegasus/<|||||>Hi @stas00 and @patil-suraj!
I'm wondering if 32.75/11.68/24.97 are fixed on reddit_tifu dataset, or should I use splits provided by @stas00 –it gives lower scores than @patil-suraj's! I used @patil-suraj to obtain reddit_tifu splits and got the following with the current transformers (i.e., 4.7.0 dev):
```
[
predict_rouge1 = 32.7034
predict_rouge2 = 11.6499
predict_rougeL = 24.9251
]
```
Which is somehow identical to the reported ones in https://github.com/huggingface/transformers/issues/7647#issue-716800235 <|||||>Honestly, it's been so long I don't quite remember what were were doing ;)
@patil-suraj, I hope your memory is more clear. Please let me know if we need to fix anything.<|||||>Thank you for your response. @stas00
FYI, @patil-suraj's version of reddit gives: 32.70 / 11.65 / 24.93 and your (@stas00) version gives: 27.08 / 8.54 / 20.69
That's a huge gap I guess and happens by different pre-processing pipelines (maybe?).
|
transformers | 7,646 | closed | Openai gpt for classification | # What does this PR do?
Adds sequence classification architecture for GPT-1,
Strongly based on modifications made in #7501
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7623 (issue) (Partially)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [✓] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [✓] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [✓] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [✓] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-07-2020 18:50:31 | 10-07-2020 18:50:31 | Hello! Thanks a lot for opening this PR. It seems that in the process, you ran a merge that went somewhat unexpectedly, as there's now 37 files changes and a +1960/-40 diff, which makes it impossible to review. Do you mind opening a new PR with only your commits, so that we can review it?<|||||>> Hello! Thanks a lot for opening this PR. It seems that in the process, you ran a merge that went somewhat unexpectedly, as there's now 37 files changes and a +1960/-40 diff, which makes it impossible to review. Do you mind opening a new PR with only your commits, so that we can review it?
Yes of course, I'll close this one and open a new PR |
transformers | 7,645 | closed | Fix integration tests of DeBERTa | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-07-2020 18:24:37 | 10-07-2020 18:24:37 | @LysandreJik I just fix the numeric part of the tests. Another issue is that I just made the change to the model state keys, i.e. change bert.encoder to deberta.encoder. However, I can only upload the model to **DeBERTa/deberta-base, DeBERTa/deberta-large**. Could you help to mv those two model to the namespace of **microsoft**? Or could you add me to the organization **Microsoft**?<|||||>Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!<|||||>I'm uploading the two models with the modified names `bert` -> `deberta` right now.<|||||>> Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!
The name is **_DeBERTa_**<|||||>Cool, I'm adding you! I've done a PR here #7229 that solves all the integration tests. Do you mind reviewing it before we merge it? I've added comments to explain why the changes were so.<|||||>> > Hi! Sure, I can add you to the `microsoft` organization. What's your username on the hub? Thanks!
>
> The name is **_DeBERTa_**
Hi, @LysandreJik
Did you add me (**DeBERTa**) to `microsoft`? I still can't see my account under `Microsoft`.
It seems the model you uploaded to `Microsoft/deberta-base` and `Microsoft/deberta-large` is not loadable due to a format issue.
<|||||>I've added you manually @BigBird01, but you should have been able to request to join from the website – was this not the case?<|||||>@BigBird01, what's the issue you have? I can load both:
```py
>>> from transformers import DebertaModel
>>> model = DebertaModel.from_pretrained("microsoft/deberta-base")
Downloading: 100%|██████████| 448/448 [00:00<00:00, 510kB/s]
Downloading: 100%|██████████| 559M/559M [00:50<00:00, 11.1MB/s]
Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaModel: ['deberta.embeddings.position_embeddings.weight']
- This IS expected if you are initializing DebertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing DebertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
>>> model = DebertaModel.from_pretrained("microsoft/deberta-large")
Downloading: 100%|██████████| 449/449 [00:00<00:00, 578kB/s]
Downloading: 100%|██████████| 1.63G/1.63G [02:42<00:00, 9.98MB/s]
Some weights of the model checkpoint at microsoft/deberta-large were not used when initializing DebertaModel: ['deberta.embeddings.position_embeddings.weight']
- This IS expected if you are initializing DebertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing DebertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
``` |
transformers | 7,644 | closed | NER pipeline documentation example failing | Hello,
I am running the code through your documentation for named entity recognition and am trying to save this "ner" model locally:
https://huggingface.co/transformers/usage.html#named-entity-recognition
```
nlp = pipeline("ner")
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge which is visible from the window."
nlp.save_pretrained("path to folder")
```
When going to load this model up and make predictions, I am getting the error: "IndexError: list index out of range" pointing the very last line below:
```
model = AutoModelForTokenClassification.from_pretrained("path to folder")
tokenizer = AutoTokenizer.from_pretrained("path to folder")
label_list = [
"O", # Outside of a named entity
"B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity
"I-MISC", # Miscellaneous entity
"B-PER", # Beginning of a person's name right after another person's name
"I-PER", # Person's name
"B-ORG", # Beginning of an organisation right after another organisation
"I-ORG", # Organisation
"B-LOC", # Beginning of a location right after another location
"I-LOC" # Location
]
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge."
# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs)[0]
predictions = torch.argmax(outputs, dim=2)
print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```
I would like to get the entity for each token. I believe that the error is in the `label_list` portion of the code; I ran the following, which shows each token along with its prediction represented as an integer:
`print([(token,prediction) for token, prediction in zip(tokens, predictions[0].tolist())])`
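(For what it's worth, a sketch that avoids a hand-maintained `label_list` by reading the mapping stored in the model config, assuming the saved config carries meaningful labels:)
```python
id2label = model.config.id2label
print([(token, id2label[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```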
I am unable to recreate the output shown on the website due to that error. Any help would be much appreciated. | 10-07-2020 18:17:16 | 10-07-2020 18:17:16 | How many labels does your model have? You can see with `print(model.config.num_labels)`. If it's larger than the length of your `label_list`, that could result in an index out of range error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,643 | closed | quick question about `BertForMaskedLM` | Hello,
I have a question about the example code that can be found in the documentation for `BertForMaskedLM` model. The example from the documentation is shown below:
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]
outputs = model(input_ids, labels=input_ids)
loss = outputs.loss
prediction_logits = outputs.logits
```
In this example, from the input string "Hello, my dog is cute", I don't see any `mask_token` in it. Also, the example code simply passes the label as `label=input_ids`.
So in this particular example, how exactly does the `BertForMaskedLM` model calculate the masked-LM loss (since the `mask_token` is not specified in the input string)? When I simply pass `labels = input_ids`, does the `BertForMaskedLM` model automatically place the `mask_token` over the first token of the input string (or something similar to this)?
I don't think that the code provided in the documentation is wrong, because when I run the code on my machine, it runs smoothly without generating any error.
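For completeness, here is a sketch of how a masked-LM loss is usually set up explicitly (an illustration of the general pattern, not a quote of the fixed docs): mask one position in the inputs and ignore every other position in the labels via `-100`.
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = inputs["input_ids"].clone()

masked_index = 4  # illustrative: the position of "dog" after the [CLS] token
inputs["input_ids"][0, masked_index] = tokenizer.mask_token_id
labels[0, torch.arange(labels.size(1)) != masked_index] = -100  # -100 = ignored by the loss

outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)
```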
Thank you, | 10-07-2020 17:26:34 | 10-07-2020 17:26:34 | This has been fixed already but is only visible in the master documentation: see [here](https://huggingface.co/transformers/master/model_doc/bert.html#bertformaskedlm). The documentation that is shown by default corresponds to the last release and the fix in the docstrings has been done since then :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,642 | closed | Fix RobertaForCausalLM docs | `RobertaLMHeadModel` does not exist, and we can't pass the `return_dict` value if a config has already been passed during instantiation.
closes #7635 | 10-07-2020 16:06:09 | 10-07-2020 16:06:09 | |
transformers | 7,641 | closed | [s2s] configure lr_scheduler from command line | # What does this PR do?
This PR adds the ability to configure `lr_scheduler` from command line for `Seq2SeqTrainer`.
Fixes #7543
@sshleifer | 10-07-2020 15:54:01 | 10-07-2020 15:54:01 | |
transformers | 7,640 | closed | Create README.md for IsRoBERTa language model | # What does this PR do?
Adds a model card Readme for the IsRoBERTa language model
| 10-07-2020 14:02:33 | 10-07-2020 14:02:33 | Thanks for sharing! We had a few models already but only for translation: https://huggingface.co/models?filter=is |
transformers | 7,639 | closed | [s2s] release pseudolabel links and instructions | + Release a bunch of summarization and translation pseudolabels with reasonably nice documentation.
+ Allow `make_student(teacher, 'student_000_baseline', 12, 3, d_layers_to_copy=[0,0,0])` for baseline purposes. | 10-07-2020 13:51:01 | 10-07-2020 13:51:01 | cc @patil-suraj |
transformers | 7,638 | closed | error AttributeError: 'tuple' object has no attribute 'logits' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
AttributeError Traceback (most recent call last)
<ipython-input-4-594fed3b7299> in <module>()
6 input = tokenizer.encode(sequence, return_tensors="pt")
7 mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]
----> 8 token_logits = model(input).logits
9 mask_token_logits = token_logits[0, mask_token_index, :]
10 top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
AttributeError: 'tuple' object has no attribute 'logits'
```
- `transformers` version:
Successfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc2 transformers-3.3.1
- Platform:
Masked Language Modeling - Colab PyTorch
- Python version:
Python 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [V ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Summary of the tasks Open Colab Pytorch
2. Masked Language Modeling example
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-07-2020 13:34:40 | 10-07-2020 13:34:40 | Can you replace:
```python
token_logits = model(input).logits
```
by
```python
token_logits = model(input, return_dict=True).logits
```
and see if the error persists? <|||||>Can you give a link to the example, so that we can fix the code snippet? <|||||>Hi Patrick,
The suggested change fixed the problem.
Thank you.
Tzur
<|||||>@patrickvonplaten The same issue is on this page https://huggingface.co/transformers/training.html
In these 2 lines
`outputs = model(input_ids, attention_mask=attention_mask, labels=labels)`
`outputs = model(input_ids, attention_mask=attention_mask)`
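(Presumably the same `return_dict=True` fix applies there as well; a sketch:)
```python
outputs = model(input_ids, attention_mask=attention_mask, labels=labels, return_dict=True)
loss = outputs.loss
```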
Thanks for the help!!
<|||||>I fixed this problem by updating transformers from 3.0.2 to 4.23.1 (the latest version as of 2022-10-16). |
transformers | 7,637 | closed | ValueError("The training dataset must have an asserted cardinality") when running run_tf_text_classification.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): Bert (bert-base-uncased)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SST-2
* [x] my own task or dataset: (give details below)
This same problem happened to my custom dataset, as I described here in #7535 , and also using SST-2 from GLUE (which I did to confirm the error). The following steps are using SST-2 with bert-base-uncased.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using conda env -n transformers python=3.7
2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt
3. Installed tensorflow with `pip install tensorflow`
4. Updated datasets to version 1.1.1, as needed according to issue #7535
5. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/dev.csv \
--label_column_id 1 \
--model_name_or_path bert-base-uncased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is the stack trace:
```
10/07/2020 09:48:49 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=1, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct07_09-48-45_user-XPS-8700', logging_first_step=False, logging_steps=10000, save_steps=10000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=10000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/07/2020 09:48:52 - INFO - filelock - Lock 140079222710992 acquired on /home/user/.cache/huggingface/datasets/c19c3494c195b40ef4234cb533a8f3ce0bca75ffcf602cc246c390073e633c46.1d5301eeb143a6a4f6f3a2bf726921db0de85048303426a3810f96d735d50d8a.py.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140079222710992 released on /home/user/.cache/huggingface/datasets/c19c3494c195b40ef4234cb533a8f3ce0bca75ffcf602cc246c390073e633c46.1d5301eeb143a6a4f6f3a2bf726921db0de85048303426a3810f96d735d50d8a.py.lock
Using custom data configuration default
10/07/2020 09:48:52 - INFO - filelock - Lock 140084305595600 acquired on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140084305595600 released on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
10/07/2020 09:48:52 - INFO - filelock - Lock 140080785346896 acquired on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
Reusing dataset csv (/home/user/.cache/huggingface/datasets/csv/default-477ee137eed7e5ae/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4)
10/07/2020 09:48:52 - INFO - filelock - Lock 140080785346896 released on /home/user/.cache/huggingface/datasets/_home_user_.cache_huggingface_datasets_csv_default-477ee137eed7e5ae_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
100%|██████████| 68/68 [01:20<00:00, 1.18s/ba]
100%|██████████| 1/1 [00:01<00:00, 1.71s/ba]
100%|██████████| 1/1 [00:01<00:00, 1.44s/ba]
10/07/2020 09:50:23 - INFO - filelock - Lock 140078150630032 acquired on /home/user/.cache/torch/transformers/336363d3718f8cc6432db4a768a053f96a9eae064c8c96aff2bc69fe73929770.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.lock
Downloading: 100%|██████████| 536M/536M [04:08<00:00, 2.16MB/s]
10/07/2020 09:54:32 - INFO - filelock - Lock 140078150630032 released on /home/user/.cache/torch/transformers/336363d3718f8cc6432db4a768a053f96a9eae064c8c96aff2bc69fe73929770.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5.lock
2020-10-07 09:54:46.214922: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertForSequenceClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['dropout_37', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/media/discoD/pycharm-community-2019.2/plugins/python-ce/helpers/pydev/pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/media/discoD/pycharm-community-2019.2/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/media/discoD/repositorios/transformers_pedro/examples/text-classification/run_tf_text_classification.py", line 283, in <module>
main()
File "/media/discoD/repositorios/transformers_pedro/examples/text-classification/run_tf_text_classification.py", line 258, in main
trainer.train()
File "/media/discoD/repositorios/transformers_pedro/src/transformers/trainer_tf.py", line 474, in train
train_ds = self.get_train_tfdataset()
File "/media/discoD/repositorios/transformers_pedro/src/transformers/trainer_tf.py", line 140, in get_train_tfdataset
raise ValueError("The training dataset must have an asserted cardinality")
ValueError: The training dataset must have an asserted cardinality
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
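As a side note, the error seems to say that TensorFlow cannot infer the dataset size. Below is a minimal sketch of asserting the cardinality by hand, assuming TF >= 2.3 and that the number of examples is known (the names are only illustrative):
```python
import tensorflow as tf

def with_known_size(dataset: tf.data.Dataset, num_examples: int) -> tf.data.Dataset:
    # tell TensorFlow how many elements the dataset holds, so that
    # tf.data.experimental.cardinality(dataset) no longer reports UNKNOWN
    return dataset.apply(tf.data.experimental.assert_cardinality(num_examples))

# e.g. train_dataset = with_known_size(train_dataset, num_train_examples)
```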
## Additional info: For my own data, which uses our Portuguese BERT model, there is no TensorFlow version of the model available. So I had to force `from_pt` in the code below to be True, otherwise I would get a different error. The [script which converts pytorch to tensorflow](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py) doesn't work with TF 2.0.
```
with training_args.strategy.scope():
model = TFAutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_pt=bool(".bin" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
``` | 10-07-2020 13:10:50 | 10-07-2020 13:10:50 | Hello!
This is a bug indeed, I will fix it ASAP!
About the issue with forcing `from_pt` to True, you should just give the name of your PT model that finishes with `.bin` and not the folder.<|||||>Hi @jplu !
Regarding the `from_pt` parameter, so there is no way for me to use the model name which was uploaded to huggingface? I have to download it to my machine and refer to the .bin name?
There is a problem there, because `AutoConfig.from_pretrained` uses the same parameter and throws an error when we use the .bin path:
```
Traceback (most recent call last):
File "/media/discoD/repositorios/transformers_pedro/src/transformers/configuration_utils.py", line 360, in get_config_dict
config_dict = cls._dict_from_json_file(resolved_config_file)
File "/media/discoD/repositorios/transformers_pedro/src/transformers/configuration_utils.py", line 442, in _dict_from_json_file
text = reader.read()
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```<|||||>Ok, I will take this as a separate issue. A PR for the cardinality issue will arrive by today.<|||||>@jplu thanks for the fix!
Did you get to open another issue for this?
> Ok, I will take this as a separate issue. A PR for the cardinality issue will arrive by today. |
transformers | 7,636 | closed | Model templates | This PR adds a `cookiecutter`-based utility to generate configuration/modeling/tokenization files, the test suites and the statements across the library necessary for adding a new model.
This PR's goal is to make adding a new model way simpler, by having a simple CLI request information, generate files, that will then need to be edited to implement the changes relative to BERT. The test suites are implemented and run.
Left to do:
- [x] TensorFlow files
- [x] Tokenizer files
- [x] Remove the pooler from the base model
- [x] Ensure the documentation has the right format + .rst file
- [x] Clean-up/refactor the `add_new_model.py` file
- [x] Clarify and add comments to the "dark arts" parts of this PR, such as the `to_replace` file.
- [x] Add encoder-decoder models:
- [x] Modeling PT file
- [x] Configuration file
- [x] Testing the modeling PT file
- [x] Modeling TF file
- [x] Testing the modeling TF file
- [x] Update the RST with the appropriate files
- [x] Add to all auto + init files
- [x] Run the CI on generated files
- [x] Update the LysBERT proposal to something better
- [x] Do a checklist of things left to do after running the script
Possible improvements:
- [ ] Ask the user whether they want to support `token_type_ids`
## For reviewers
If you review this PR, the simplest would be to review the `add_new_model.py` file, and to generate model files using the utility:
```
transformers-cli add_new_model
```
And review the generated files.
Reviewing the current files with `{{cookiecutter.lowercase_modelname}}` doesn't seem reviewer-friendly to me. | 10-07-2020 12:26:07 | 10-07-2020 12:26:07 | |
transformers | 7,635 | closed | ImportError: cannot import name 'RobertaLMHeadModel' | Hi all, I was just trying to run a text generation script for low-resource languages, and therefore experimented with XLM-R and initially with Roberta, using the documentation for RobertaForCausalLM:
here: https://huggingface.co/transformers/model_doc/roberta.html#robertaforcausallm
I am running into the import error shown in the title. See code snippet and error message below. I also experimented with different Tensorflow and transformers versions to no avail. I suspect that the model classes have changed (or the documentation may not be up to date with the current version). I also tried importing RobertaForCausalLM but it returned the same error.
## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- Tensorflow version: 2.3.1
### Who can help
@LysandreJik , @TevenLeScao
Model I am using (**Roberta**, **XLM-Roberta**):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run:
```
from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
Error: `ImportError: cannot import name 'RobertaLMHeadModel'`
If this script runs succesfully, I 'd like to re-run it for XMLRoberta (changing the imports and model names of course).
Many thanks! | 10-07-2020 11:28:01 | 10-07-2020 11:28:01 | Hello, the `RobertaLMHeadModel` is a PyTorch model, you would need to have PyTorch installed to import it.
If you want to use the TensorFlow variant, you should use the `TFRobertaLMHeadModel`<|||||>Yeap, sorry for not including it in the Environment info, I have torch 1.6 installed.
The following script:
```
import transformers
print(transformers.__version__)
import torch
print(torch.__version__)
import tensorflow
print(tensorflow.__version__)
from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
```
Returns:
```
3.1.0
1.6.0
2.3.1
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-aecd14032a4d> in <module>
6 print(tensorflow.__version__)
7
----> 8 from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
ImportError: cannot import name 'RobertaLMHeadModel'
```
The tflow variant returns the same error.<|||||>My bad, I read too fast! The error is probably because you're trying to import `RobertaLMHeadModel`, but as it can be seen in your first post, the model is actually `RobertaForCausalLM`. Can you successfully load that model? We plan on having uniform naming for these models so that the `CausalLM` and `LMHeadModel` have the same naming soon.<|||||>Many thanks for the quick replies! :)
Yes, it can be loaded that way. However, after running the following script:
```
import torch
from transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaForCausalLM.from_pretrained('roberta-base', config=config, return_dict=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
The following error appears:
```
TypeError Traceback (most recent call last)
<ipython-input-99-f1066c26064d> in <module>
4 config = RobertaConfig.from_pretrained("roberta-base")
5 config.is_decoder = True
----> 6 model = RobertaForCausalLM.from_pretrained('roberta-base', config=config, return_dict=True)
7 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
8 outputs = model(**inputs)
~/translation/DimPapSandbox/greek_text_generation/tsflow23/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
921
922 # Instantiate model.
--> 923 model = cls(config, *model_args, **model_kwargs)
924
925 if state_dict is None and not from_tf:
TypeError: __init__() got an unexpected keyword argument 'return_dict'
```
If I remove the `return_dict` argument, another error comes up:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-100-44c62bef9ec6> in <module>
7 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
8 outputs = model(**inputs)
----> 9 prediction_logits = outputs.logits
AttributeError: 'tuple' object has no attribute 'logits'
```
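(For reference, a partial workaround I am considering, assuming the tuple outputs of 3.1.0: drop `return_dict` from `from_pretrained` and index the outputs instead.)
```python
model = RobertaForCausalLM.from_pretrained('roberta-base', config=config)
outputs = model(**inputs)
prediction_logits = outputs[0]  # with tuple outputs, the LM logits come first
```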
If there's a working snippet using the Roberta or XLM-Roberta for text generation, it would be much appreciated. <|||||>Indeed, there's an issue with the docstrings here. I'm fixing it in #7642.
Have you taken a look at the summary of text generation [here](https://huggingface.co/transformers/task_summary.html#text-generation)?
Please note that RoBERTa has not been trained to do text generation, but to do mask in-filling, so using a pre-trained RoBERTa model to do generation would yield bad results.<|||||>Τhanks, I was actually interested in XLM-R (looking for low-resource language text generation) but I stumbled upon the RoBERTa example shown above first so I thought I could just swap the model and it would work. I can confirm that the following code works with xlm-roberta-large:
```
from transformers import AutoModelWithLMHead, AutoTokenizer
model = AutoModelWithLMHead.from_pretrained("xlm-roberta-large")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
prompt = "Σήμερα ο καιρός" # means: "Today the weather", in Greek"
inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]
print(generated)
```
However the generated output is not really useful (repeating the word "weather"):
`Σήμερα ο καιρός ιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός καιρός`
I understood that XLM-R had a CLM checkpoint but maybe I was wrong. In any case, if you are aware of any pretrained models that I could try for text prediction (I am interested in Greek, where GPT-2 does not really shine) it would be great. Otherwise, we can close this issue. :)
I have been experimenting with this for a while, and my conclusion is that for now the most "robust" generations for Greek come from masked LMs repurposed as causal ones, e.g. if I use a BERT-like model, I put the mask at the end of the unfinished sentence:
`"This is a great <mask> ..."` Of course this comes with the problem that I have to reuse the mask's result and feed it back as input in case I want more than one token to be predicted. Autoregressive models either return nonsense or drift away from the input really quickly. <|||||>yes @lighteternal the dataset that is used for gpt2-greek is not large. It is trained on about 5 GB of text, with the main source being Greek Wikipedia.
transformers | 7,634 | closed | ImportError: cannot import name 'RobertaLMHeadModel' | Hi all, I was just trying to run a text generation script for low-resource languages, and therefore experimented with XLM-R and initially with Roberta, using the documentation for RobertaForCausalLM:
here: https://huggingface.co/transformers/model_doc/roberta.html#robertaforcausallm
I am running into the import error shown in the title. See code snippet and error message below. I also experimented with different Tensorflow and transformers versions to no avail. I suspect that the model classes have changed (or the documentation may not be up to date with the current version). I also tried importing RobertaForCausalLM but it returned the same error.
## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- Tensorflow version: 2.3.1
### Who can help
@LysandreJik , @TevenLeScao
Model I am using (**Roberta**, **XLM-Roberta**):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run:
```
from transformers import RobertaTokenizer, RobertaLMHeadModel, RobertaConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaLMHeadModel.from_pretrained('roberta-base', config=config, return_dict=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
Error: `ImportError: cannot import name 'RobertaLMHeadModel'`
If this script runs succesfully, I 'd like to re-run it for XMLRoberta (changing the imports and model names of course).
Many thanks! | 10-07-2020 11:25:28 | 10-07-2020 11:25:28 | |
transformers | 7,633 | closed | How to get cross attention for bert when config.add_cross_attention is True | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
As far as I can tell, when using BERT with cross attention and `output_attentions` is True, the returned attentions only contain self-attention (i.e. a tuple of length num_hidden_layers with size (batch_size, num_heads, seq_length, seq_length)). How can I get the cross attention weights in that case?
After digging a little bit, I saw that ModelOutput (and child classes) do not include cross attention as potential outputs. The cross attention is returned by [BertLayer](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L420) (at index 2) but then ignored in [BertEncoder](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L486). A quick look at the outputs for Encoder-Decoder models shows the same issue. Would it be possible to include cross attention in model outputs? And if yes, how can I help doing so?
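In the meantime, a possible workaround sketch (my own idea, not an official API) is to grab the cross-attention tensors with PyTorch forward hooks on the `crossattention` sub-modules; this assumes the attention probabilities are the second element of each sub-module's output when `output_attentions=True`:
```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
model = BertModel.from_pretrained("bert-base-uncased", config=config)

cross_attentions = []

def save_cross_attention(module, inputs, outputs):
    # with output_attentions=True, index 1 is expected to hold the attention probabilities
    cross_attentions.append(outputs[1])

hooks = [layer.crossattention.register_forward_hook(save_cross_attention)
         for layer in model.encoder.layer]

encoder_states = torch.rand(1, 7, config.hidden_size)  # dummy encoder hidden states
input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])
model(input_ids, encoder_hidden_states=encoder_states, output_attentions=True)

for hook in hooks:
    hook.remove()
# cross_attentions now holds one tensor per layer: (batch, heads, tgt_len, src_len)
```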
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to similar question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/how-to-get-cross-attention-values-of-t5/970 | 10-07-2020 09:50:24 | 10-07-2020 09:50:24 | Hey @qmeeus - thanks for your issue. This corresponds actually to a larger feature requests since we never return the attention masks at the moment. I will open a discussion about this internally and add it to the projects. <|||||>Hi @patrickvonplaten and thank you for your answer ! Let me know if I can help in any way with the developments<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This should be resolved now. Bert2Bert can return cross attention masks with `output_attentions=True` |
transformers | 7,632 | closed | Unique names for dataset cache for each tokenizer | # 🚀 Feature request
Currently in the examples, dataset caches are named for the family of tokenizer used. For example, 'cached_train_BertTokenizer_128'. This may lead to unexpected behavior when running multiple models/tokenizers within the same model type, but different variations on the model/tokenizer.
In this colab notebook, NER training is used run using scibert and then bert-base-cased. Even though the old data files are removed, the code still uses the old cache, resulting in an indexing error due to the mismatched token indices.
https://colab.research.google.com/drive/1q4uBFm81yBWVNzG3Si2ByBh1nw8fk-Q5?usp=sharing
In this colab notebook, NER training is run on scibert-cased and then scibert-uncased. In this notebook, no explicit error occurs since there isn't an indexing error, but it seems that the wrong dataset is being used. In the output of scibert-uncased, there are many warnings of unpredicted tokens, and a lower than expected score. Both of these do not occur if scibert-cased is not run before scibert-cased
https://colab.research.google.com/drive/1pnpWfRqX4nknc0RRe9A2CArbVok3NbhC?usp=sharing
## Motivation
Prevent unexpected behavior when testing on multiple variations on the same transformer architecture.
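One possible direction (only a sketch, not the current behavior) would be to fold the checkpoint name into the cache file name so that different checkpoints of the same tokenizer class never collide; the helper below is hypothetical:
```python
import os

def cache_file_name(data_dir, mode, tokenizer, max_seq_length, model_name_or_path):
    # e.g. cached_train_BertTokenizer_scibert_scivocab_cased_128
    model_tag = os.path.basename(os.path.normpath(model_name_or_path))
    return os.path.join(
        data_dir,
        f"cached_{mode}_{tokenizer.__class__.__name__}_{model_tag}_{max_seq_length}",
    )
```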
| 10-07-2020 09:42:14 | 10-07-2020 09:42:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,631 | closed | Is there a fine-tuning script for DPR? | It would be nice to have a fine-tuning script for DPR. | 10-07-2020 09:29:07 | 10-07-2020 09:29:07 | Hey @shamanez - I don't think there is a fine-tuning script for DPR at the moment, but we always welcome contributions as such! @lhoestq might have more information.
<|||||>I just have one more question about the DPR model used in RAG (specially the **Doc-Encoder network**).
Is the **doc-encoder** pretrained with the 21-million-passage Wikipedia dump as mentioned in the DPR paper?<|||||>The DPR encoders (context encoder and question encoder) in RAG are pretrained BERT models that were fine-tuned for retrieval on the question/answer pairs of Natural Questions (and other datasets depending on the setup), using retrieved passages from the 21-million-passage Wikipedia dump. In the library, the DPR encoders are the ones trained on NQ.<|||||>Thanks a lot. So can I use these encoders to fine-tune RAG on a customized document setting, given that the question encoder also gets fine-tuned?
<|||||>Yes you can fine-tune it on your documents. During RAG fine-tuning both the generator and the question encoder are updated.<|||||>Thanks :). So finally, what is the best way to arrange a customized set of documents?
<|||||>You'll find all the info at https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning :)<|||||>Amazing. Thanks a lot
<|||||>I kind of checked the finetuning script. It shows how to train on custom datasets. What I don't understand is how I should use my own set of documents other than the Wikipedia dumps.
<|||||>Oh I see. In that case you have to build the RAG knowledge source. We haven't released a code example to do so yet but we're discussing it in #7462 <|||||>Ok will follow it.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,630 | closed | Add GPT2 to sequence classification auto model | Add `GPT2ForSequenceClassification` to the `AutoModelForSequenceClassification` auto model.
closes #7493. | 10-07-2020 09:18:38 | 10-07-2020 09:18:38 | |
transformers | 7,629 | closed | Update model card - Fix arxiv link | Minor changes: Add arxiv link + Layout improvement + fix typos
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-07-2020 06:30:53 | 10-07-2020 06:30:53 | |
transformers | 7,628 | closed | The newly added config decoder_start_token_id for bart-base model is wrong? | The new config file for `bart-base` has been updated on October 5.th. The new config file looks like the following:
```
{
...
"bos_token_id": 0,
...
"decoder_start_token_id": 2,
...
"eos_token_id": 2,
...
}
```
the `decoder_start_token_id` was added newly, it wasn't there before. But as far I understand, the `decoder_start_token_id` should be `bos_token_id` as default . The newly added config-line changed the behavior for the `generate` function. | 10-07-2020 06:02:04 | 10-07-2020 06:02:04 | A similar problem is here [#5212](https://github.com/huggingface/transformers/issues/5212)<|||||>Moved there. |
transformers | 7,627 | closed | Added sampler'set_epoch when use distributed training | `run_squad.py` file is independent of `Trainer Class`(https://github.com/huggingface/transformers/issues/4398). Therefore, there is no method related to `set_epoch` in distributed training. | 10-07-2020 04:57:18 | 10-07-2020 04:57:18 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,626 | closed | Unable to pass encoder_outputs to generate calls | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Github Main branch
### Who can help
TextGeneration: @TevenLeScao
## Information
Model I am using (Bert, XLNet ...): T5
I'm unable to pass preconputed encoder_outputs to the `.generate()` method.
I've tried defining:
```python
model_kwargs = {"encoder_outputs": encoder_outputs}
output = model.generate(model_kwargs)
```
But I've noticed some validation errors for `input_ids`. Even if I replace input_ids with dummy values, I've noticed the model_kwargs is always replaced here:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L448
I think it can be fixed (without optimization) by just replacing:
```python
model_kwargs["encoder_outputs"] = encoder_outputs
```
with:
```python
model_kwargs.setdefault("encoder_outputs", encoder_outputs)
```
If you agree I can try to open a PR to fix this.
Best, | 10-07-2020 04:32:13 | 10-07-2020 04:32:13 | Hey @gabisurita - I understand that one might want to forward encoder_outputs in the generate function. However, adding such a possibility opens the door for many problems in case `beam_search` is chosen. We are currently working on a bigger refactor that should solve this problem by a better design choice of `generate()`. I'm afraid that this will still take ~3,4 weeks to complete though.<|||||>Hi @patrickvonplaten,
I've noticed that your PR refactoring generate was merged. It seems a big improvement, thank you! Still, `input_ids` is still required or overridden by an empty tensor. It's still not clear to me how can I use the new API with `encoder_outputs` or `imput_embeds`.<|||||>Hey @gabisurita - if you look at the tests, you can now directly use `beam_search` instead of generate for your use case I think :-). Here the tests: https://github.com/huggingface/transformers/blob/a7d73cfdd497d7bf6c9336452decacf540c46e20/tests/test_generation_utils.py#L295
From the tests, it should be quite easy to understand how to use `beam_search` directly I think :-)
Let me know if that helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,625 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-07-2020 00:48:04 | 10-07-2020 00:48:04 | will merge in the meantime, @lanwuwei
Feel free to re-open a PR to update if needed. |
transformers | 7,624 | closed | Free Inference API Not Accessible | Hi, I attempted to use the free version of the Model Hub's Inference API. However, it is not working for me anymore:

I do have an account and I am signed in when I get the above message. I also have done my email verification as well. Also, when I try to send a POST request using curl with my API token, I get a 503 error.
I was able to successfully use the free version on October 4th, 2020 and so I was wondering, is this a bug or is the free version no longer available? | 10-07-2020 00:29:11 | 10-07-2020 00:29:11 | Hi Nathan, can you send us a quick email to [email protected]?
The free version of the Hub's Inference API is still up, but maybe you've hit the rate limiting?<|||||>Resolved. Ended up being a misinterpretation of my part on error codes that HF's API produces. |
transformers | 7,623 | closed | Implement PyTorch and/or TensorFlow sequence classification architectures for causal language models | # 🚀 Feature request
The architecture `GPT2ForSequenceClassification` was added in #7501 in PyTorch. It would be great to have it in TensorFlow (cf. issues #7622), but it would also be great to have it for other causal models: ~OpenAI GPT~, ~CTRL~ (PR opened @elk-cloner), ~TransfoXL~ (PR opened @spatil6)
Below is a list of items to follow to make sure the integration of such an architecture is complete:
- Implement `XXXForSequenceClassification` in `modeling_xxx.py` or `TFXXXForSequenceClassification` in `modeling_tf_xxx.py`
- Test that architecture in `tests/test_modeling_xxx.py` or `tests/test_modeling_tf_xxx.py`
- Add that architecture to `__init__.py` and ` docs/source/model_doc/xxx.rst`.
Taking a look at the code changes in #7501 would be a good start.
A very good first issue to get acquainted with the library and its architectures!
| 10-06-2020 21:42:07 | 10-06-2020 21:42:07 | Hi @LysandreJik is this issue still open? I'll like to pick it up<|||||>I believe @fmcurti is working on the OpenAI GPT implementation, but both CTRL and TransfoXL are still open! Would love a PR!<|||||>Hi Lysandre, thanks for assigning this issue to me. I've been trying to
set up transformers on my local machine (Windows 10) and have been having several issues with the setup.
Is there any guide I could follow?
Thanks much
<|||||>Sure, have you taken a look at the [`CONTRIBUTING.md` document](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)? What issues have you been having?<|||||>Yes I have.
When I run `pip install -e ".[dev]"`, I always encounter this error. I'm also running it in anaconda environment

`<|||||>I believe the repo cannot be installed from conda as of now, can you use a pip virtual environment?<|||||>Alright, I'll try that now<|||||>Still having the same error I had while in conda. I'm trying to install tensorflow locally and retry this again<|||||>Hi @LysandreJik , I'm still having the same errors running on a pip virtual environment<|||||>Do you manage to install `TensorFlow` in your pip environment?<|||||>Hi @LysandreJik not yet. I get a similar error. I'm trying to look for solutions on the internet

<|||||>Hi @LysandreJik – has anyone picked up the CTRL or TransfoXL architectures yet? I'd love to take a crack at one of them if available. Thank you!<|||||>No, feel free to take a crack at it! Let me know and I'll put you in the issue description.<|||||>is there anybody working on these ? @LysandreJik <|||||>I believe CTRL and TransfoXL are still available. Feel free to open a PR!<|||||>Hi @LysandreJik ,
As this feature request is closed, do we still need TF implementations of the causal models GPT-1, TransfoXL and CTRL?
I'm ready to contribute those as well.
<|||||>That would be very welcome @spatil6!<|||||>Ok thanks @LysandreJik.
I'm waiting for this PR #8714 to get merged.
Once done, I'll raise PR for these models as well. |
transformers | 7,622 | closed | Implement a TF2 version of `GPT2ForSequenceClassification` | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
The architecture `GPT2ForSequenceClassification` was added in #7501 in PyTorch. It would be great to have it in TensorFlow as well.
Below is a list of items to follow to make sure the integration is complete:
- Implement `TFGPT2ForSequenceClassification` in `modeling_tf_gpt2.py`
- Test that architecture in `tests/test_modeling_tf_gpt2.py`
- Add that architecture to `__init__.py` and ` docs/source/model_doc/gpt2.rst`.
Taking a look at the code changes in #7501 would be a good start, as this PR would essentially be a TF2 copy of it.
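To make the expected behaviour concrete, here is a rough sketch (only my reading of the PyTorch version, not the final library API) of a head that pools the hidden state of the last non-padding token:
```python
import tensorflow as tf
from transformers import TFGPT2Model

class TFGPT2SequenceClassifierSketch(tf.keras.Model):
    """Rough sketch: GPT-2 body plus a linear head on the last non-padding token."""

    def __init__(self, num_labels):
        super().__init__()
        self.transformer = TFGPT2Model.from_pretrained("gpt2")
        self.score = tf.keras.layers.Dense(num_labels, use_bias=False, name="score")

    def call(self, input_ids, attention_mask=None):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask)[0]
        if attention_mask is not None:
            # index of the last real (non-padding) token in each sequence
            last_token = tf.reduce_sum(attention_mask, axis=-1) - 1
        else:
            last_token = tf.fill([tf.shape(input_ids)[0]], tf.shape(input_ids)[1] - 1)
        batch_index = tf.range(tf.shape(input_ids)[0])
        gather_index = tf.stack([batch_index, tf.cast(last_token, tf.int32)], axis=1)
        pooled = tf.gather_nd(hidden_states, gather_index)  # (batch, hidden)
        return self.score(pooled)
```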
A very good first issue to get acquainted with the library and its architectures! | 10-06-2020 21:36:44 | 10-06-2020 21:36:44 | May I take a crack at this? :)<|||||>Yes, you may! Thanks @y2s82!<|||||>Hi @LysandreJik , I have completed development for this FR. can you please assign it to me, so i'll raise PR for it.<|||||>Feel free to open a PR |
transformers | 7,621 | closed | [No merge] TF integration testing | Adds integration tests for BERT, ELECTRA and Longformer to ensure that PRs such as https://github.com/huggingface/transformers/pull/7605 do not impact the current state of models. RoBERTa not done because it's already done.
Patches a bug with the `ElectraForPreTraining` when batch size = 1 | 10-06-2020 21:26:14 | 10-06-2020 21:26:14 | @LysandreJik Is this PR done with your last changes in the tests?<|||||>Ah, I had forgotten about this. I'll rebase and ping you for review
<|||||>should be good for review @jplu <|||||>They're not the same as they don't rely on the full checkpoints but on some random tiny ones, to make the CI faster.
It does test every same aspect, however: the weights loading, the full inference, the expected results. |
transformers | 7,620 | closed | Downloading DPR model ('facebook/dpr-ctx_encoder-single-nq-base') | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
Hi, I want to use a model built on Natural Questions. I have two questions:
1. I see an example of the above-mentioned model; here it is:
```
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True)
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
``` So I want to run a command that gives me the answer text here; this only gives me the start and end positions of the answer.
2. Can I use my own context (my document) in which I want the model to search for the answer? Is that possible here?
Thanks in advance
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-06-2020 21:02:23 | 10-06-2020 21:02:23 | Hi, could you mention the issue you're having?<|||||>I want to use a natural question dataset and model trained on that. I have seen this code:
```
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True)
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
But from here I can not know how to return the answer from the model. It gives us just starting and ending position of the answer as far as I understand.
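A rough sketch of what I am attempting (my own guess, not from the DPR docs): take the argmax span inside the most relevant passage and decode it:
```python
import torch

best_passage = int(torch.argmax(relevance_logits))      # most relevant passage
ids = encoded_inputs["input_ids"][best_passage]
start = int(torch.argmax(start_logits[best_passage]))
end = int(torch.argmax(end_logits[best_passage]))
answer = tokenizer.decode(ids[start : end + 1].tolist())
print(answer)
```
If I remember correctly the reader tokenizer also exposes a `decode_best_spans` helper that does this more carefully, though I have not checked its exact signature.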
And also, can I use my own document as the context, so that the model searches for the answer in this document? If yes, please tell me how it is possible.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,619 | closed | Enhance TFTrainer.save_model() | # What does this PR do?
@jplu , could you close PR #7597, please? This PR is a clean one, without the file `modeling_tf_utils` being changed at all.
Currently, `TFTrainer.save_model()` raises errors if the model is not a `TFPreTrainedModel`. However, `Trainer` works fine with `torch.nn.modules.Module`.
This is a step to make TFTrainer work with usual tf.keras.models.Model models. The idea (from @sgugger) is that a user is building their own models that work like ours (e.g., return the loss as the first output) and can train them with Trainer.
Furthermore, a SavedModel is also saved using tf.saved_model.save().
For @jplu and @sgugger . | 10-06-2020 19:33:26 | 10-06-2020 19:33:26 | >
>
> Awesome! Did you try it with a usual training example to see if everything is ok?
Yes, with example/text-classification.
I didn't check yet with a usual `tf.keras.models.Model` (i.e. not TFPretrainedModel). But when I continue with `test_trainer_tf.py`, it will be tested.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, @sgugger & @LysandreJik ,
I didn't realized that this PR is not merged into master. Since it has been for some time, I rebased the branch. All the suggestions from @sgugger are done in the latest version.
It would be great if you can merge this PR. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,618 | closed | position_ids parameter cannot work with past parameter for GPT2Model during batch inference | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hi, @patrickvonplaten. I just finished training a GPT-2 model by using the `GPT2Model` class, and I try to speed up the inference by using the batch inference (very similar to #3021 ). However, I found that the `position_ids` parameter cannot work with the `past` parameter, and it raises the Error: `RuntimeError: The size of tensor a (32) must match the size of tensor b (2592) at non-singleton dimension 0`.

I found the exception happens in `modeling_gpt2.py` line 471, so I checked the original code of the `GPT2Model` class. In `modeling_gpt2.py`, lines 426 to 427 (lines 558 to 559 in the latest original code):
```python
if position_ids is not None:
    position_ids = position_ids.view(-1, input_shape[-1])
```
Actually, when using the `past` parameter to speed up inference, the input_shape is `[batch_size, 1]`, but the `position_ids` is `[batch_size, seq_length]`. So, when we use `past` and `position_ids` at the same time, the position_ids will be converted into a wrong shape `[batch_size*seq_length, 1]` (the shape we want should be `[batch_size, seq_length]`). For example, as shown in the figure, the `batch_size` is 32 and the `seq_length` is 81, and the generated position_ids shape is `[2592, 1]` (32*81=2592), but the correct position_ids shape should be `[32, 81]`.
So I think it may be a bug, but I am not so sure about it. Can you guys help me to figure it out?
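A workaround I am considering (assuming the last column of my position_ids corresponds to the token fed in the current step): when `past` is used, pass only that single position per sequence:
```python
# incremental decoding step: input_ids is [batch_size, 1]
if past is not None:
    # keep only the position of the current token -> [batch_size, 1]
    step_position_ids = position_ids[:, -1].unsqueeze(-1)
else:
    step_position_ids = position_ids
outputs = model(input_ids, past=past, position_ids=step_position_ids)
```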
Here are the environment variables in my system:
* transformers==2.11.0
* pytorch==1.5.1
* python==3.6.11
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-06-2020 16:32:09 | 10-06-2020 16:32:09 | Hey @gmftbyGMFTBY - I think the better approach to tackle your problem would actually be this one here: https://github.com/huggingface/transformers/issues/3021#issuecomment-681792104 .
This way you should not run into any errors regarding the position_ids<|||||>Hey, @patrickvonplaten, it works for me. Thank you so much.
transformers | 7,617 | closed | OSError: Can't load config for saved_model when deploying on EC2. | I was deploying a trained model on AWS EC2 instance (t3a.xlarge) using a dockerized image and Flask. The model was trained using [fast-bert](https://github.com/kaushaltrivedi) that implements transformers as a dependency.
When I passed a sentence on the rendered page, I received
`"In get_config_dict raise EnvironmentError OSError"`
and
```
OSError: Can't load config for 'model/final_model'. Make sure that:
'path/to/final_model' is a correct model identifier listed on 'https://huggingface.co/models'
or 'path/to/final_model' is the correct path to a directory containing a config.json file
```
As suggested in certain threads, I re-installed the image with the latest transformers==3.3.1 release.
However, I am unable to figure out the issue.
Kindly help.
Similar to #6267 #5803 #7412

| 10-06-2020 16:29:38 | 10-06-2020 16:29:38 | This can happen if you're fetching a model from S3 but you have no internet access, or if you're using an incorrect URL to a local folder.<|||||>Hello @LysandreJik
I have uploaded the saved model in a folder on my EC2 instance. Therefore, the location for the model is from the instance file directory which I have verified multiple times.
Also, the model functions properly when deployed using Flask over localhost.
Do I need to download the pre-trained models as a command in the dockerfile?
Kindly help.
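For reference, this is roughly the check and loading pattern I have in mind inside the container (path and model class are illustrative):
```python
import os
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

model_dir = "/app/model/final_model"  # illustrative path baked into the image

# from_pretrained needs config.json (and the tokenizer files) next to pytorch_model.bin
assert os.path.isfile(os.path.join(model_dir, "config.json")), "config.json is missing"

config = AutoConfig.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
```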
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm seeing the same issue on my community model
```
OSError: Can't load config for 'model_path'. Make sure that:
- 'model_path' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'model_path' is the correct path to a directory containing a config.json file
```
I verified the model files are there. How can i work around this?<|||||>It would be helpful if you opened a new issue with everything related to your environment, as well as the code you use. Are you also on EC2? What is in the `model_path` folder? What is your `transformers` version? All that's asked in the template would be very helpful for us to help you.<|||||>@LysandreJik Thanks for the reply! I created a new issue https://github.com/huggingface/transformers/issues/9106. Other old huggingtweets models still work but not the new ones, not sure what the problem is.<|||||>> OSError: Can't load config for 'model_path'. Make sure that:
>
> - 'model_path' is a correct model identifier listed on 'https://huggingface.co/models'
Hye I'm facing the same. Did you solve that ? <|||||>I have the same error message. It also says that it can't find url"/resolve/main/config.json". I saved my model like they said but only have a folder "results" containing "pytorch_model.bin" and "training_args.bin".
Edit: I tried to also save the tokenizer (despite only having fine-tuned). This gave me a tokenizer_config.json which still isn't enough.
How do I get a config.json in my directory? I'm using a custom BERT modeled after BertForTokenClassification (https://huggingface.co/transformers/_modules/transformers/models/bert/modeling_bert.html#BertForTokenClassification) which doesn't specify a config attribute.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,616 | closed | Fix wrong reference name/filename in docstring of `SquadProcessor` |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7613
Replace wrong filenames in docstring: `train-v1.1.json`/`train-v2.0.json` -> `dev-v1.1.json`/`dev-v2.0.json`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-06-2020 16:02:11 | 10-06-2020 16:02:11 | |
transformers | 7,615 | closed | Feature Request: Support training/evaluation on Squad-format (json) files in SquadDataset for quick Squad fine-tuning | I am currently working on a project using [run_squad_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py)
to quickly fine-tune on new Squad-format (v2.0 json) files. However, the current SquadDataset class only allows training/evaluating on the original Squad jsons (train-v2.0.json, dev-v2.0.json). I have used a quick workaround of softlinking to the actual training/evaluation files, but this feels a little contrived.
I believe if the arguments `train_file` and `predict_file` are added in [SquadArguments](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L36) and also line 152 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L152) is changed to
`self.examples = self.processor.get_dev_examples(args.data_dir, filename=args.predict_file)`
and line 154 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L154)
`self.examples = self.processor.get_train_examples(args.data_dir, filename=args.train_file)`
that may do the trick. At least this approach works in [run_squad.py](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/examples/question-answering/run_squad.py#L444).
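For illustration, a rough sketch of the extra arguments I have in mind (the field names and help texts are only my suggestion) could be:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SquadDataTrainingArguments:
    # ... existing fields (data_dir, max_seq_length, version_2_with_negative, ...) ...
    train_file: Optional[str] = field(
        default=None,
        metadata={"help": "Squad-format json used for training; defaults to train-v2.0.json."},
    )
    predict_file: Optional[str] = field(
        default=None,
        metadata={"help": "Squad-format json used for evaluation; defaults to dev-v2.0.json."},
    )
```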
Thanks for your great work!
| 10-06-2020 14:59:17 | 10-06-2020 14:59:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,614 | closed | Feature Request: Support training and evaluating on Squad-format (json) files in SquadDataset for easy Squad fine-tuning | I am currently working on a project using [run_squad_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py)
to quickly fine-tune on new Squad-format (v2.0 json) files. However, the current SquadDataset class only allows training/evaluating on the original Squad jsons (train-v2.0.json, dev-v2.0.json). I have used a quick workaround of softlinking to the actual training/evaluation files, but this feels a little contrived.
I believe if the arguments `train_file` and `predict_file` are added in [SquadArguments](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L36) and also line 152 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L152) is changed to
`self.examples = self.processor.get_dev_examples(args.data_dir, filename=args.predict_file)`
and line 154 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L154)
`self.examples = self.processor.get_train_examples(args.data_dir, filename=arg.train_file)`
that may do the trick. At least this approach works in [run_squad.py](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/examples/question-answering/run_squad.py#L444).
Thanks for your great work!
| 10-06-2020 14:57:36 | 10-06-2020 14:57:36 | Sorry duplicated my request. |
transformers | 7,613 | closed | SquadProcessor: Wrong reference name/filename in docstring | As the docstring of the function `get_train_examples()` refers to `train-v1.1.json`/`train-v2.0.json`, I guess `get_dev_examples()` should refer to `dev-v1.1.json`/`dev-v2.0.json` (but refers to `train-v1.1.json`/`train-v2.0.json`):
https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/processors/squad.py#L610-L617 | 10-06-2020 14:10:49 | 10-06-2020 14:10:49 | Indeed! Do you want to open a PR fixing the doc?<|||||>Yes, I can do that. :) |
transformers | 7,612 | closed | updating modelcard with training dataset information. | updating modelcard with training dataset information.
| 10-06-2020 12:23:05 | 10-06-2020 12:23:05 | |
transformers | 7,611 | closed | typo fix | It should be T5-3B not T5-3M.
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Model Cards: @julien-c
T5: @patrickvonplaten
| 10-06-2020 11:26:47 | 10-06-2020 11:26:47 | |
transformers | 7,610 | closed | Fix tokenizer UnboundLocalError when padding is set to PaddingStrategy.MAX_LENGTH | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7609
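The gist of the change (a simplified sketch, not the literal diff; the import location is an assumption) is to also accept an already-built `PaddingStrategy` when resolving the padding strategy:

```python
from transformers.tokenization_utils_base import PaddingStrategy  # assumed import location


def resolve_padding_strategy(padding):
    """Simplified sketch of the strategy resolution done inside the tokenizer."""
    if padding is True:
        return PaddingStrategy.LONGEST
    if isinstance(padding, PaddingStrategy):
        # Previously this case fell through, leaving the local variable unbound
        # and raising the UnboundLocalError reported in #7609.
        return padding
    if padding is not False:
        return PaddingStrategy(padding)  # e.g. the string "max_length"
    return PaddingStrategy.DO_NOT_PAD
```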
## Who can review?
tokenizers: @mfuntowicz | 10-06-2020 10:54:31 | 10-06-2020 10:54:31 | Thanks! |
transformers | 7,609 | closed | Tokenizer: UnboundLocalError with PaddingStrategy MAX_LENGTH | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
tokenizers: @mfuntowicz
## Information
Model I am using: <transformers.tokenization_bert.BertTokenizer>
The problem arises when using:
* the official example scripts:
using the `encode_plus`
The task I am working on is:
* Tokenizing
## To reproduce
Steps to reproduce the behavior:
1. tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
2. tokenizer.encode_plus("hello word", max_length=128, padding=PaddingStrategy.MAX_LENGTH)
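The same steps as a self-contained snippet (the `PaddingStrategy` import location below is my assumption; passing the plain string `"max_length"` appears to avoid the error):

```python
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import PaddingStrategy  # assumed import path

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Passing the enum member triggers the UnboundLocalError...
tokenizer.encode_plus("hello word", max_length=128, padding=PaddingStrategy.MAX_LENGTH)

# ...while the equivalent string value does not seem to.
tokenizer.encode_plus("hello word", max_length=128, padding="max_length")
```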
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
`UnboundLocalError: local variable 'padding_strategy' referenced before assignment`
## Expected behavior
Return Tokenizer output | 10-06-2020 10:48:24 | 10-06-2020 10:48:24 | |
transformers | 7,608 | closed | Ability to pre-train BART model | # 🚀 Feature request
Ability to pre-train the BART model, just as there is an ability to pre-train BERT and other models.
## Motivation
I'm using a pre-trained BART model for a sequence-to-sequence problem and trained it on my own data, using the examples here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq
I was wondering if there is a chance to add an ability to continue the pre-training of the already pre-trained `facebook/bart-base` and `facebook/bart-large` models, with my own unsupervised data, in order to improve the results.
@sshleifer Can you please help? thanks in advance!
| 10-06-2020 10:29:08 | 10-06-2020 10:29:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,607 | closed | Create README.md (LEGAL-BERT Model card) | Model description for all LEGAL-BERT models, published as part of "LEGAL-BERT: The Muppets straight out of Law School". Chalkidis et al., 2020, in Findings of EMNLP 2020
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-06-2020 09:34:38 | 10-06-2020 09:34:38 | That's really cool, thanks for sharing @iliaschalkidis <|||||>Thanks @julien-c for your nice comments and for building and improving such a great library. Is there any chance that we could place all 5 LEGAL-BERT variants in a sub-folder, i.e., `/legal-bert`, inside the account folder `/nlpaueb`? Kind of OCD though 🤓
<|||||>I'm not sure what you mean :)
Do you want to e.g. rename `bert-base-uncased-contracts` to `legal-bert-base-uncased-contracts`? Or do you want `nlpaueb/legal-bert/bert-base-uncased-contracts`? We don't really want to do the latter IMO (have increased levels of nesting) because:
- I'm afraid it might get confusing for users of the models,
- some of the tooling we are currently building is expecting an org_name/model_name layout.
What do you think?<|||||>I was referring to the second scenario, but I totally understand it will make things more complicated on your side. Thanks again!
|
transformers | 7,606 | closed | Add ProtT5-XL-BFD model card | Fixes # (issue)
Create a new card for our ProtT5-XL-BFD model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Model Cards: @julien-c
T5: @patrickvonplaten
| 10-06-2020 09:01:40 | 10-06-2020 09:01:40 | |
transformers | 7,605 | closed | TensorFlow training/inference optimization | # What does this PR do?
This PR fixes a performance issue where some operations were done on CPU instead of GPU, which would leave the GPU in idle mode. This optimization is feasible thanks to the recent update we made to the way we load the TF weights.
@patrickvonplaten I have made a few changes in the `TFLongformer` model, but I'm sure it can be further optimized the same way (see `TFLongformerSelfAttention`). As I don't know much about how this model works, can you take a look at whether the same optimization can be applied?
Fixes #6771 | 10-06-2020 08:54:23 | 10-06-2020 08:54:23 | > If that's all it takes, that's fantastic! Did you manage to obtain the performance improvements that were initially mentioned thanks to this?
On my machine with my GPU yes.
> Also I'm realizing now that we don't have integration testing for our TensorFlow models, and this seems like a situation where having some would be needed. Could we work on adding these tests for the models modified here at first, and then add them to the rest of the models?
Sure! It is a good idea!
> I can help you work on it if you're lacking time!
I would appreciate it if you have time, yes 😃 <|||||>Okay, will take a look at doing the integration tests sometime tonight. Will let you know!<|||||>@jplu
For learning purposes, I am wondering which operations were done on CPU instead of GPU. I saw you changed `Dense` to `EinsumDense` in several places and removed several shape-changing operations. Is the shape changing done on CPU, and does `EinsumDense` avoid this? Could you give me some information about this, so I can read and learn about it? Thanks.
<|||||>@chiapas
If you take a look at #6771, it is quite well detailed. The issue was coming from a transpose+matmul that was done on CPU. EinsumDense allows you to do all these computations directly in the layer, but at the cost of changing the shapes of the original layers; that's why we have modified the way we load the TF models.
To do this PR I basically took the original BERT implementation right [here](https://github.com/tensorflow/models/blob/master/official/nlp/transformer/attention_layer.py) as an example.
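To make it concrete, here is a rough sketch (my own toy example, not the exact code from this PR) of what the change looks like for e.g. the query projection:

```python
import tensorflow as tf

batch, seq_len, hidden, heads, head_dim = 2, 16, 768, 12, 64
hidden_states = tf.random.uniform((batch, seq_len, hidden))

# Before: Dense + reshape + transpose; the extra reshuffling is the kind of op
# that was reported in #6771 to end up running on CPU.
query_dense = tf.keras.layers.Dense(heads * head_dim)
q = query_dense(hidden_states)                       # (batch, seq, heads * head_dim)
q = tf.reshape(q, (batch, seq_len, heads, head_dim))
q = tf.transpose(q, perm=[0, 2, 1, 3])               # (batch, heads, seq, head_dim)

# After: EinsumDense projects straight to (batch, seq, heads, head_dim),
# so no separate reshape/transpose step is needed.
query_einsum = tf.keras.layers.experimental.EinsumDense(
    "abc,cde->abde", output_shape=(None, heads, head_dim), bias_axes="de"
)
q2 = query_einsum(hidden_states)                     # (batch, seq, heads, head_dim)
```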
<|||||>Thanks a lot @LysandreJik !!
As I'm currently working on from scratch LM training for TF models, I don't have much time to really focus on this.<|||||>> transpose+matmul
@jplu Thanks. I am surprised by this `transpose+matmul that was done on CPU`.<|||||>>
>
> Thanks a lot @LysandreJik !!
>
> As I'm currently working on from scratch LM training for TF models, I don't have much time to really focus on this.
@jplu You also work on LM training for TF models? I plan to go back to a pending PR #6955 I created once the `test_trainer_tf.py` is done. Do PR #6955 and your work on TF LM training overlap? Currently that PR is still empty though.<|||||>@chiapas This is exactly what I'm doing, and the models need some rework; that's why I'm mostly focused on BERT, to have at least one model working.
I just finished the data pipeline with random masking generation yesterday.<|||||>> @chiapas This is exactly what I'm doing, and the models need some rework; that's why I'm mostly focused on BERT, to have at least one model working.
>
> I just finished the data pipeline with random masking generation yesterday.
Ah, ok. I guess my PR was pending too long and it is my bad not to communicate with you first. I planned to do this while I finished a notebook on Kaggle [Masked, My Dear Watson - MLM with TPU](https://www.kaggle.com/yihdarshieh/masked-my-dear-watson-mlm-with-tpu), which also works on MLM.
Since you already have more progress (and you are also an HF member), it is better for you to continue. However, if there is something I can contribute to this TF LM task, I would love to do it.
<|||||>> Since you already have more progress (and you are also an HF member), it is better for you to continue. However, if there is something I can contribute to this TF LM task, I would love to do it.
Thanks! I will let you know.<|||||>That's awesome! I will see what results the TF benchmark scripts give before/after this PR.
Strongly agree with @LysandreJik that we should add integration tests before merging this PR.<|||||>I ran the benchmarks: `python examples/benchmarking/run_benchmark_tf.py --models bert-base-cased --env_print` in the following environment:
```
- transformers_version: 3.3.1
- framework: TensorFlow
- eager_mode: False
- use_xla: False
- framework_version: 2.3.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-10-06
- time: 19:06:48.378935
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 8
- use_tpu: False
```
Currently, on master:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-cased 8 8 0.085
bert-base-cased 8 32 0.166
bert-base-cased 8 128 0.513
bert-base-cased 8 512 2.629
--------------------------------------------------------------------------------
```
In this `tf-optim` branch, the results are:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-cased 8 8 0.088
bert-base-cased 8 32 0.176
bert-base-cased 8 128 0.531
bert-base-cased 8 512 3.028
--------------------------------------------------------------------------------
```
=> So the speed results are more or less identical with the way the benchmarks are used.
I don't compile the model with Keras, but just add the `@tf.function` decorator to the function to transform it into graph mode. So not sure what to think of that... => @jplu - could you maybe check the benchmark script and see if you can get a speed-up there? Or if the benchmark script is wrong?
```
python examples/benchmarking/run_benchmark_tf.py --models bert-base-cased --env_print
```<|||||>The benchmark script is ok, but to see the difference you have to create a saved_model and run the model in TF Serving. Your benchmark doesn't take into account all the optimizations TF Serving does for inference.
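As a minimal sketch of that export step (just an illustration, not the final benchmark code; the traced signature and paths below are assumptions on my side):

```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")

# Trace the call once so the SavedModel gets a concrete serving signature.
serving_fn = tf.function(model.call).get_concrete_function(
    tf.TensorSpec([None, None], tf.int32, name="input_ids")
)
tf.saved_model.save(model, "saved_models/bert-base-cased/1", signatures=serving_fn)
# The exported directory can then be served with TF Serving and benchmarked
# through gRPC/REST calls.
```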
We should update the benchmark script to include:
- Saved model creation
- run the saved model with the TF Serving tool
- adapt the benchmark to include gRPC calls to use the model from TF Serving.<|||||>Will be integrated into PR #7753 |
transformers | 7,604 | closed | way to make inference Zero Shot pipeline faster? | Hi
Can you guys give me tips on how to make zero-shot pipeline inference faster?
My current approach right now is reducing the model size/parameter count
(trying to train "base model" instead of "large model)
Is there another approach?
CCing @joeddav | 10-06-2020 07:37:22 | 10-06-2020 07:37:22 | Closing this and moving the conversation (w/ my answer) to [the forums](https://discuss.huggingface.co/t/way-to-make-inference-zero-shot-pipeline-faster/1384/2?u=joeddav). |
transformers | 7,603 | closed | Added model cards for Tagalog BERT models | # What does this PR do?
Adds model cards for five Tagalog BERT models:
* jcblaise/bert-tagalog-base-cased
* jcblaise/bert-tagalog-base-uncased
* jcblaise/bert-tagalog-base-cased-WWM
* jcblaise/bert-tagalog-base-uncased-WWM
* jcblaise/distilbert-tagalog-base-uncased
| 10-06-2020 06:35:41 | 10-06-2020 06:35:41 | Thanks! |