repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,927 | closed | Add missing @classmethod decorators | `_BaseAutoModelClass` was missing `@classmethod` decorators on the `from_config(...)` and `from_pretrained(...)` methods. | 07-28-2021 12:49:14 | 07-28-2021 12:49:14 | Fun fact: the poor guy @classmethod will be pinged consistently if you add this handle to the commit message 😂
I'm removing `@` from it! |
transformers | 12,926 | closed | Misleading warning when using DPRContextEncoderTokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.9.1`
- Platform: Ubuntu
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @LysandreJik
## Information
When running this code
``` python
from transformers import (
DPRContextEncoder,
DPRContextEncoderTokenizer,
)
tokenizer = DPRContextEncoderTokenizer.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')
model = DPRContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')
```
I receive this warning
```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'DPRQuestionEncoderTokenizer'.
The class this function is called from is 'DPRContextEncoderTokenizer'.
```
## Expected behavior
This warning should not be there - I am not using the Question encoder at all | 07-28-2021 08:26:59 | 07-28-2021 08:26:59 | Yes you are not using it, but it's the tokenizer that was registered with the checkpoint `'facebook/dpr-ctx_encoder-single-nq-base'` so the library is warning you there is a mismatch (which may be okay in this instance).<|||||>Thanks, but `facebook/dpr-ctx_encoder-single-nq-base` encoder should be registered as a context encoder (that's what the `ctx` in its name means) - the corresponding question encoder is `facebook/dpr-question_encoder-single-nq-base`.
I've looked through the source code of the model on the hub ([here](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base/tree/main)), and I do not see any reference to the question encoder. In the source code of the tokenizer ([here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/tokenization_dpr.py)) all the correspondences seem to be set up correctly too - so this issue is a bit puzzling.
<|||||>It looks like the model does not specify its proper tokenizer then: the default for all DPR models is `DPRQuestionEncoderTokenizer` but since it's not the correct one, there should be a `tokenizer_class` set to `DPRContextEncoderTokenizer` in that repo.
In any case, I just looked at the source code and the two classes are exactly the same, so there is no difference between the tokenizers (why have two different ones then @lhoestq ?)<|||||>If I am not mistaken, the situation is the same for encoders as well - both context and question encoder could have been the same class<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>#load pre-trained model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'Wav2Vec2CTCTokenizer'.
The class this function is called from is 'Wav2Vec2Tokenizer'.
/Users/sangram/opt/anaconda3/envs/Speech/lib/python3.9/site-packages/transformers/models/wav2vec2/tokenization_wav2vec2.py:421: FutureWarning: The class `Wav2Vec2Tokenizer` is deprecated and will be removed in version 5 of Transformers. Please use `Wav2Vec2Processor` or `Wav2Vec2CTCTokenizer` instead.
warnings.warn( |
transformers | 12,925 | closed | How to reproduce XLNet correctly, and what is the config for fine-tuning XLNet? | I fine-tune an XLNet model for English text classification, but it seems I did something wrong because xlnet-base is worse than bert-base in my case. I report validation accuracy every 1/3 epoch. At the beginning, bert-base is at about 0.50 while xlnet-base is only at 0.24. The config I use for XLNet is listed as follows:
```python
config = {
    "batch_size": 4,
    "learning_rate": 1e-5,
    "gradient_accumulation_steps": 32,
    "epochs": 4,
    "max_seq_length": 384,
    "weight_decay": 0.01,
    "adam_epsilon": 1e-6,
    "16-bit_training": False,
}
```
Does fine-tuning XLNet need special settings, or does XLNet just converge slowly?
Thanks in advance to everyone willing to help! :-)
| 07-28-2021 01:16:19 | 07-28-2021 01:16:19 | Hi,
For training related questions, please refer to the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.
Thanks! |
transformers | 12,924 | closed | Feature request: Show command line argument defaults | # 🚀 Feature request
When running with `--help`, show the default values for command line arguments.
## Motivation
There are dozens of command line arguments. When I'm trying to figure out how to run a script, I often want to know what value is being used when I don't specify it. But running with `--help` doesn't show the default values unless it's explicitly written in the description (which is only for three of them for the example script I'm using).
For example, `--evaluation_strategy`
```
--evaluation_strategy {no,steps,epoch}
The evaluation strategy to use.
```
This ends up being a bit of a frustrating user experience. The two ways I currently use to find the value are:
1. Run the script again without `--help` and log all the arguments (done in the examples). This shows the assigned value, which will be the default if not passed. However, it doesn't show the description of what it does.
2. Go to the documentation. This will show the default value and a more thorough description, but requires opening a web browser and Googling to find the right page.
In other Python projects, I use the `argparse.ArgumentDefaultsHelpFormatter`, which automatically displays default values in the `--help` message along with their descriptions.
```python
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
```
I wonder whether the Huggingface arguments could support the same feature?
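For what it's worth, since `HfArgumentParser` builds on `argparse.ArgumentParser`, something like the following sketch might already be possible (this assumes the keyword is forwarded to argparse, which I haven't verified):
```python
import argparse
from transformers import HfArgumentParser, TrainingArguments

# hedged sketch: forward the formatter so --help shows each argument's default value
parser = HfArgumentParser(TrainingArguments, formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.print_help()  # defaults now appear alongside the descriptions
```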
Many thanks for considering this! | 07-28-2021 00:31:59 | 07-28-2021 00:31:59 | This is a very reasonable request and thanks for suggesting an easy way to do it! I added that in the PR linked above.<|||||>Wow, thank you so much for the support and quick turnaround, I really appreciate it!! 🎉 |
transformers | 12,923 | closed | Transformers onnx export error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux Ubuntu20.04
- Python version: 3.8
- PyTorch version (GPU?): Pytorch1.7.1 Cuda11.0
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
I tried to export a PyTorch model to ONNX following the tutorial here: https://huggingface.co/transformers/serialization.html
## To reproduce
Steps to reproduce the behavior:
1. Run `python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/`
```
$ python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using framework PyTorch: 1.7.1
Overriding 1 configuration item(s)
- use_cache -> False
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 150, in <module>
main()
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 141, in main
onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, args.opset, args.output)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/transformers/onnx/convert.py", line 109, in export
export(
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/utils.py", line 632, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/utils.py", line 409, in _model_to_graph
graph, params, torch_out = _create_jit_graph(model, args,
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/utils.py", line 379, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/onnx/utils.py", line 342, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/jit/_trace.py", line 1148, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/jit/_trace.py", line 125, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/jit/_trace.py", line 116, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/trans4.9/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 938, in forward
input_shape = input_ids.size()
AttributeError: 'dict' object has no attribute 'size'
```
| 07-27-2021 23:31:40 | 07-27-2021 23:31:40 | Hi @ZHANG-GuiGui,
Thanks for reporting the issue, I'm looking at it 🧐 <|||||>Hi, I'm also having this issue.
`!python -m transformers.onnx --model=MyModel onnx/MyModelName/`
Extracting a GPT-2 model.<|||||>Hi @ZHANG-GuiGui, @johnpaulbin,
This is indeed unsupported on PyTorch < 1.8.0.
We will submit a fix for this in order to raise a meaningful error when this happens.
Thanks again for raising the issue 🤗 <|||||>Hi @mfuntowicz, thanks for your explanation.
Is there an alternative way to export an ONNX model when using PyTorch < 1.8.0?<|||||>You might be able to use our previous method, `convert_graph_to_onnx.py`.
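A typical invocation of that older script looked roughly like the following (flags recalled from the documentation of that era, so treat the exact options as an assumption):
```
python convert_graph_to_onnx.py --framework pt --model bert-base-cased onnx/bert-base-cased.onnx
```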
You can find more information [here](https://huggingface.co/transformers/serialization.html#graph-conversion)<|||||>It works. Thanks 👍 <|||||>Closing the issue for now, feel free to reopen/create a new one if you have any further issue 👍🏻. |
transformers | 12,922 | closed | GPT2 Layers | When the trainer API is used to finetune gpt-2, does it finetune all the layers or just some? Is there a way to control which layers it finetunes?
gpt2: @patrickvonplaten, @LysandreJik
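For context, controlling which layers get fine-tuned usually comes down to toggling `requires_grad`, as in this hedged sketch that echoes the answer in the comments below (the choice of frozen layers is arbitrary here):
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
# freeze the token embeddings and the first six transformer blocks; the rest stays trainable
for name, param in model.named_parameters():
    if name.startswith("transformer.wte") or any(name.startswith(f"transformer.h.{i}.") for i in range(6)):
        param.requires_grad = False
```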
| 07-27-2021 20:51:42 | 07-27-2021 20:51:42 | It fine-tunes all the layers. You can set the `requires_grad` attribute of the parameters in the layers you don't want to train to `False` before sending the model to the `Trainer` if you want to change that behavior.<|||||>thank you! |
transformers | 12,921 | closed | LEDForSequenceClassification and LEDForQuestionAnswering example codes don't work. | ## Environment info
Tried on both transformers=4.2.0 and the latest transformer package.
### Who can help
@patrickvonplaten
Models:
LED
## Information
The LEDForSequenceClassification and LEDForQuestionAnswering example code doesn't work ([docs here](https://huggingface.co/transformers/model_doc/led.html#ledforsequenceclassification)). Please fix these bugs. LEDForConditionalGeneration works, though.
The example [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) only works for transformers=4.2.0. Specifically, there will be an error of in-place operation during the training. It will be helpful if you can update the code to adapt to the latest packages. | 07-27-2021 20:39:45 | 07-27-2021 20:39:45 | Actually we should probs just remove those examples since there is no fine-tuned model anyways...@jacklxc would you like to make a PR? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,920 | closed | Add callback method for substeps during gradient accumulation. | # 🚀 Feature request
Add a callback method which is called between `on_step_begin` and `on_step_end` i.e. during gradient accumulation steps.
Something like `on_substep` which is called after each gradient accumulation step.
## Motivation
Some training techniques require custom code to be run after each substep during gradient accumulation. A commonly used tool is Opacus for differentially private training. It introduces a `privacy_engine` and requires `privacy_engine.virtual_step()` to be called during gradient accumulation substeps and `privacy_engine.step()` when accumulation is done. For example, see https://github.com/pytorch/opacus/blob/master/tutorials/building_text_classifier.ipynb
With this in place we could quite easily extend the trainer to support differentially private training with Opacus.
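A rough sketch of how such a hook could be used (method and argument names are placeholders for illustration, not an existing Trainer API):
```python
from transformers import TrainerCallback

class PrivacyEngineCallback(TrainerCallback):
    def __init__(self, privacy_engine):
        self.privacy_engine = privacy_engine

    def on_substep_end(self, args, state, control, **kwargs):
        # proposed hook: runs after each gradient-accumulation sub-step
        self.privacy_engine.virtual_step()

    def on_step_end(self, args, state, control, **kwargs):
        # existing hook: runs once the accumulated gradients have been applied
        self.privacy_engine.step()
```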
## Your contribution
This should be fairly straightforward as we just need to add one method call to `trainer.Trainer` and a new method to `trainer_callback.TrainerCallback`. Happy to provide a PR.
| 07-27-2021 18:20:36 | 07-27-2021 18:20:36 | We can definitely accept a PR with this new method, as it seems there is a clear use case for it. |
transformers | 12,919 | closed | Fix typo in the example of MobileBertForPreTraining | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-27-2021 16:57:49 | 07-27-2021 16:57:49 | |
transformers | 12,918 | closed | Fix StoppingCriteria ABC signature | Change `score` -> `scores` because the argument is not positional-only, so you need consistently named parameters for the subclasses. The subclasses appear to favor `scores` over `score`. | 07-27-2021 16:20:52 | 07-27-2021 16:20:52 | My pleasure! I have a handful of other PRs open with small fixes like this. I'm knocking them out as I encounter them. |
transformers | 12,917 | closed | Tokenizer from tokenizers library cannot be used in Trainer | Hi,
I am trying to train my own model with `Trainer`, using a pre-trained `SentencePieceBPETokenizer` from the **tokenizers** library. However, it is missing several attributes as well as methods (e.g., `pad`), which makes it incompatible with `transformers.Trainer`. Is there an easy way to convert it to a `PreTrainedTokenizer` from `transformers`?
Thanks! | 07-27-2021 15:42:58 | 07-27-2021 15:42:58 | You can just do
```
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(tokenizer_object=your_tokenizer)
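# (illustrative addition) the wrapped tokenizer still needs special tokens registered before
# Trainer's collators can call tokenizer.pad(); "<pad>" is an assumed token name, use whichever
# token your SentencePiece model was trained with
tokenizer.add_special_tokens({"pad_token": "<pad>"})
batch = tokenizer(["some text", "a longer example"], padding=True)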
```<|||||>> You can just do
>
> ```
>
> from transformers import PreTrainedTokenizerFast
>
>
>
> tokenizer = PreTrainedTokenizerFast(tokenizer_object=your_tokenizer)
>
> ```
Sylvain, million thanks! |
transformers | 12,916 | closed | fill-mask pipeline with tables (TapasForMaskedLM) fails DataFrame type assertion | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0
- Platform: macOS
- Python version: 3.9.2
- PyTorch version (GPU?): 1.8.9 (N/A)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@NielsRogge
## Information
Model I am using: **TapasForMaskedLM**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Slight modification of [example](https://huggingface.co/transformers/v4.5.1/model_doc/tapas.html#tapasformaskedlm) to include `fill-mask` pipeline
2. Script to run:
```
if __name__ == '__main__':
from transformers import TapasConfig,TapasTokenizer,TapasForMaskedLM
from transformers import pipeline
import pandas as pd
import numpy as np
import torch
import sys
config = TapasConfig.from_pretrained(
'google/tapas-base-finetuned-wtq',from_pt=True)
model = TapasForMaskedLM.from_pretrained(
'google/tapas-base-finetuned-wtq', config=config)
tokenizer=TapasTokenizer.from_pretrained(
"google/tapas-base-finetuned-wtq", from_pt=True)
data= {
"actors": ["brad pitt", "leonardo di caprio", "george clooney"],
"age": ["56", "45", "59"],
"number of movies": ["87", "53", "69"],
"date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"]
}
table = pd.DataFrame.from_dict(data)
queries=[
f"The number of movies Brad Pitt acted in is {tokenizer.mask_token}",
f"Leonardo di caprio's age is {tokenizer.mask_token}"]
nlp = pipeline(task="fill-mask",framework="pt",model=model, tokenizer=tokenizer)
test = nlp(queries, table=table)
```
3. From some quick debugging, it seems that `pandas/core/frame.py` is called and the following code overwrites `table` with a list:
```
if isinstance(data, DataFrame):
data = data._mgr
if isinstance(data, BlockManager):
if index is None and columns is None and dtype is None and copy is False:
# GH#33357 fastpath
NDFrame.__init__(self, data)
return
```
## Expected behavior
Input table should not be overwritten with a list. Is this call to `frame.py` expected? If not, what are the required steps to overcome this?
| 07-27-2021 15:40:36 | 07-27-2021 15:40:36 | Hi,
TAPAS is not supported by the `FillMaskPipeline`, only by the `TableQuestionAnsweringPipeline`.
`TapasForMaskedLM` was defined, but I did not include the weights of language modeling head when converting the checkpoints (I only loaded the weights of `TapasModel`, `TapasForQuestionAnswering` and `TapasForSequenceClassification`). However, one could also load the weights of a `TapasForMaskedLM` by updating [this function](https://github.com/huggingface/transformers/blob/d3c3e722d69627d6334d7ef8faaced7df3103174/src/transformers/models/tapas/modeling_tapas.py#L127).<|||||>Thank you Niels.
I'm not familiar with how this should work. If you have any example scripts that can do this updating, I'd appreciate the help.
Anyhow thanks for answering.<|||||>So, to convert a TAPAS Tensorflow checkpoint to PyTorch, you can use [this script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py). You can run it in a command line, like so (running it from the src/transformers/models/tapas directory of this repo):
```
python convert_tapas_original_tf_checkpoint_to_pytorch.py --task="MLM" --tf_checkpoint_path="path_to_the_tf_checkpoint" --tapas_config_file="path_to_the_json_file" --pytorch_dump_path="path_to_where_you_want_to_dump_the_pytorch_model"
```
However, it might be that you encounter an error as not all weights are correctly converted. In that case, you need to update the `load_tf_weights_in_tapas` function which the script uses (and which is defined in `modeling_tapas.py`).<|||||>Thanks Niels,
Actually I am encountering an import error for `load_tf_weights_in_tapas`. I played around a bit with `__init__.py` to adjust the `_import_structure` to include `modeling_tapas.py` + the function, but it still won't import. Are you aware of this issue?
```
Traceback (most recent call last):
File "/Users/pafitis/miniforge3/envs/comp0087/lib/python3.9/site-packages/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py", line 20, in <module>
from transformers import (
ImportError: cannot import name 'load_tf_weights_in_tapas' from 'transformers' (/Users/pafitis/miniforge3/envs/comp0087/lib/python3.9/site-packages/transformers/__init__.py)
```
I can bypass it if I manually change the import call to `from transformers.models.tapas.modeling_tapas import load_tf_weights_in_tapas`
The issue is within `convert_tapas_original_tf_checkpoint_to_pytorch.py` lines 20-28<|||||>There's also some remnants of your own path structure left over. Just FYI
Lines 95-96
```
# dir_name = r"C:\Users\niels.rogge\Documents\Python projecten\tensorflow\Tensorflow models\SQA\Base\tapas_sqa_inter_masklm_base_reset"
# tokenizer = TapasTokenizer(vocab_file=dir_name + r"\vocab.txt", model_max_length=512)
```
<|||||>Hi,
Yeah I know how to solve the import issue. Let me create a branch that you can use<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Are there any updates on this?<|||||>Hi,
I'll first fix the issue that you can't import `load_tf_weights_in_tapas`. You can then use it.<|||||>Good news: I've successfully converted the `TapasForMaskedLM` checkpoints.
I've already uploaded some on the hub:
* google-tapas-base-masklm: https://huggingface.co/google/tapas-base-masklm
* google-tapas-large-masklm: https://huggingface.co/google/tapas-large-masklm
Note: it will not work with the current version of Transformers, you'll need to install from the PR I will open soon.<|||||>Thank you Niels!<|||||>Hi @NielsRogge, I wanted to see if this performance is expected. Using this branch (same as PR linked): https://github.com/NielsRogge/transformers/tree/fix_tapas_conversion_script
The example:
```
tokenizer = TapasTokenizer.from_pretrained("google/tapas-large-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-large-masklm")
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"]
}
queries = ['Brad [MASK] played in 87 movies.',
'George Clooney is [MASK] years old.']
table = pd.DataFrame.from_dict(data)
# prepare inputs
inputs = tokenizer(table=table, queries=queries[0], padding="max_length", return_tensors="pt")  # one query at a time, since the code below assumes a single [MASK]
# forward pass
outputs = model(**inputs)
# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)
for value, pred in zip(values, predictions):
print(f"{tokenizer.decode([pred])} with confidence {value}")
```
The results I get:
**FOR google/tapas-large-masklm:**
```
##gned with confidence 0.0003957822045776993
brodie with confidence 0.00031843443866819143
scanned with confidence 0.0002803522511385381
##kshi with confidence 0.0002378804492764175
scanning with confidence 0.0002144851314369589
```
**FOR google/tapas-base-masklm**
```
[CLS] with confidence 0.7544503808021545
[SEP] with confidence 0.000950647983700037
[MASK] with confidence 0.00019540438370313495
, with confidence 6.406998727470636e-05
the with confidence 5.370331200538203e-05
```
**IS THIS BEHAVIOUR EXPECTED? SEEMS VERY POOR!**<|||||>It runs fine for me. I get the following answers respectively (using google/tapas-large-masklm):
* first query: 'Brad [MASK] played in 87 movies.'
```
pitt with confidence 0.9996523857116699
has with confidence 0.00017903841217048466
have with confidence 1.926756158354692e-05
had with confidence 8.52907123771729e-06
lee with confidence 7.179685326264007e-06
```
* second query: 'George Clooney is [MASK] years old.'
```
59 with confidence 0.9172192215919495
58 with confidence 0.02275438793003559
69 with confidence 0.005611400585621595
60 with confidence 0.005492867436259985
57 with confidence 0.004567734897136688
```
There's probably a bug in your installation of Transformers.<|||||>Thank you @NielsRogge. Indeed, issue on my side. |
transformers | 12,915 | closed | saved checkpoint for best model and last model needs to be different | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: linux
- Python version: 2.7
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
- trainer: @sgugger
## Information
I am training T5 on GLUE, and I need to save checkpoints and then continue training from them. I checked the Trainer code.
It considers the last checkpoint as the checkpoint to load the models+optimizers from.
I am setting these options when training:
```
"save_total_limit": 1,
"load_best_model_at_end": true,
"greater_is_better": true,
"evaluation_strategy": "steps"
```
With these options, the only checkpoint that remains is the one for the best model (the highest score on the evaluation criterion), not the last saved model, which is not correct for resuming training.
The tasks I am working on is:
*GLUE tasks
## To reproduce
Steps to reproduce the behavior:
1. Please take the official run_translation example and train it with the options mentioned above added:
```
python examples/pytorch/seq2seq/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
```
and please add these options
```
"save_total_limit": 1,
"load_best_model_at_end": true,
"greater_is_better": true,
"evaluation_strategy": "steps"
```
## Expected behavior
the Trainer needs to keep the last checkpoint in a separate folder to resume from, while keeping the checkpoint for the best model separately
many thanks. | 07-27-2021 14:46:34 | 07-27-2021 14:46:34 | As you can see [here](https://github.com/huggingface/transformers/blob/d3c3e722d69627d6334d7ef8faaced7df3103174/src/transformers/trainer.py#L1982) we have special code to deal with that situation exactly, and I just checked locally and always have two checkpoints (the best model and the oldest) with `save_total_limit=1` in conjunction with `load_best_model_at_end=True`.
This was introduced 2 months ago so before the release of v4.8.2, you should therefore not have any problem.<|||||>thank you so much for the response. |
transformers | 12,914 | closed | [FLAX] Minor fixes in CLM example | Hi,
this PR fixes some minor issues that I've seen when training a new GPT-2 model from scratch:
* It uses the correct method for retrieving the vocab size from tokenizer instance
* Fixes train and validation assignment of dataset instance when using train or validation files | 07-27-2021 14:00:15 | 07-27-2021 14:00:15 | |
transformers | 12,913 | closed | Add truncation_side option to tokenizers | # What does this PR do?
As requested by #12909, it would be handy if one could also decide on whether to truncate sequences from the left instead of from the right.
As we already have a `padding_side` (which can be either left/right), it makes sense to also add a `truncation_side` (which by default is set to `"right"`, but users can initialize a tokenizer with `truncation_side` set to `"left"`).
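Hedged usage sketch of the proposed option (mirroring `padding_side`; illustrative only, not released behaviour at the time of this PR):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", truncation_side="left")
ids = tok("a very long dialog history ... final user turn", truncation=True, max_length=32)["input_ids"]
# with truncation_side="left", the first tokens are dropped and the end of the input is kept
```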
The test could possibly be improved (for which I'd like to get some help).
Also requesting review from @patrickvonplaten since I've also added the option in `feature_extraction_sequence_utils.py`.
Regarding the fast tokenizers, I see `padding_side` is used [here](https://github.com/huggingface/transformers/blob/12e02e339f6d19218b36a30883188ea0254bc7e7/src/transformers/tokenization_utils_fast.py#L362). Should I define something similar for `truncation_side`?
Fixes #12909 | 07-27-2021 13:46:29 | 07-27-2021 13:46:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>~Gentle ping @NielsRogge for when you have a chance to wrap this up~ Issue opened in tokenizers :)<|||||>This has been awaited for more than a year now (https://github.com/huggingface/transformers/issues/4476#issuecomment-677823688). Please implement it (even if it only works for the ordinary tokenizers at first), so people can use this solution now, while users of the Rust tokenizers wait for the fast-tokenizer implementation (https://github.com/huggingface/tokenizers/issues/779).<|||||>Fixed per #14947. |
transformers | 12,912 | closed | memory crash with large dataset | Hello,
I am using the basic `sentiment-classification` pipeline based on https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main.
I was able to predict the sentiment of several hundred sentences, but ran into trouble when I tried to predict the sentiment of about 9M short sentences stored in a Pandas dataframe. I have an `RTX 3090` and `150GB` of RAM, so I think the prediction should work.
Specifically, I tried to create the sentiment labels by running
```
classifier = pipeline(task = 'sentiment-analysis')
df['mylabels'] = [o['label'] for o in classifier(df.text.tolist())]
```
(where `df.text` contains my headlines), hoping to take advantage of the batch processing in `classifier`, but after a while (an hour or so) Python crashed after printing
```
successfully opened dynamic library cublas64_10.dll
memory allocation of 11976 bytes failed
```
Is this a bug? Is this the correct way to process large dataframes?
Thanks! | 07-27-2021 13:28:24 | 07-27-2021 13:28:24 | on the task manager I see that the dedicated GPU memory usage is constant at 24GB while the shared GPU memory usage is at zero. CPU is at 20% and RAM fills entirely up to 160GB. I cannot share the data (proprietary) but maybe there is something obvious that I am missing here in terms of `pipeline` and processing tricks?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,911 | closed | GPT-Neo ONNX export | # What does this PR do?
This PR enables the export of GPT-Neo to ONNX by extending the new module transformers.onnx.
It also provides a possible way of implementing the export for specific tasks: the task can be specified when instantiating an OnnxConfig. It is a nice approach because it makes factoring most of the code for the inputs / outputs very easy, but it is less aligned with transformers DNA than having subclasses (such as OnnxConfigForSequenceClassification, etc) taking care of that.
The issue with having many subclasses is that it would have to be done every time one wants to add support for a model.
What do you think? | 07-27-2021 12:55:48 | 07-27-2021 12:55:48 | @sgugger @LysandreJik What do you think would be the best way to approach exporting features for downstream tasks? I think we have two possible ways:
- One config per task `XOnnxConfigForY` => Follow the general "duplication" pattern in transformers
- One config with task as parameter encapsulating the logic for I/O for each possible task => Potentially reduce the LoC<|||||>I think using a `task` argument is a nice way of avoiding too many new classes which would crowd the main init of transformers.<|||||>@michaelbenayoun is the PR ready for review? 🥰 <|||||>> @michaelbenayoun is the PR ready for review?
Yes, it is!
I also implemented a "factory" called `FeaturesManager` located at `onnx/features.py` from what was done before by @mfuntowicz in `onnx/__main__.py` which manages the mapping between features and models / onnx configs.
From what @sgugger [said](https://github.com/huggingface/transformers/pull/12911#issuecomment-887502770), I went with the "task argument" approach. Basically, a feature is the combination of a task and the potential use of past keys and values, for instance:
- sequence-classification
- sequence-classification-with-past
Any feature containing "-with-past" will be mapped by the factory to an OnnxConfig instantiated using the `with_past` method.
@mfuntowicz any comments on the changes I have made? |
transformers | 12,910 | closed | fix distiller.py | # What does this PR do?
Yet another bug caused by model returning a dict instead of tuple.
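Illustrative pattern for this kind of fix (model and input names are generic placeholders, not the exact distiller.py code):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("hello world", return_tensors="pt")

outputs = model(**inputs)   # a ModelOutput (dict-like), not a plain tuple
logits = outputs.logits     # access by name instead of outputs[0]
# or request the legacy tuple behaviour explicitly:
logits = model(**inputs, return_dict=False)[0]
```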
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-27-2021 12:28:46 | 07-27-2021 12:28:46 | |
transformers | 12,909 | open | Truncating the prefix of a sequence rather than the suffix | # 🚀 Feature request
Hi, tokenizers get `truncation` as an argument. When set to `True` the tokenizer will truncate the suffix of a sequence so it does not surpass the specified `max_length`. I'd like to have a functionality that truncates the prefix of the sequence, so the model will see the suffix of the sequence.
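For illustration, the behaviour being asked for can currently be emulated by slicing the encoded ids by hand (a hedged sketch; special tokens are left out for simplicity):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer("long dialog history ... final question", add_special_tokens=False)["input_ids"]
max_length = 32
ids = ids[-max_length:]  # drop the prefix, keep the suffix the model should see
```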
## Motivation
In many applications (e.g. Dialog, and QA) the most important part of the sequence is the suffix (e.g. the question after the context, or the last response of the dialog).
## Your contribution
Perhaps I'll submit a PR, but it might take me some time as I'm close to some deadlines of mine :(
| 07-27-2021 10:59:55 | 07-27-2021 10:59:55 | There's a `TruncationStrategy` called `"only_first"` that implements this. See [this](https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/tokenization_utils_base.py#L125) for all possible truncation strategies. <|||||>@NielsRogge Perhaps I miss something, but it doesn't seem to implement this functionality. The documentation says that it truncates the first *sequence* and not the first *tokens* of the sequence, right?
```:obj:`'only_first'`: Truncate to a maximum length specified with the argument :obj:`max_length` or to
the maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.```<|||||>I'm not sure why you mean by truncating the prefix of a sequence.
For question answering, one typically provides `[CLS] question [SEP] context [SEP]` to the model (so question being the first sequence, context being the second sequence). People are usually interested in either truncating the tokens of the question or the context.
What do you mean by prefix/suffix?<|||||>We had a misunderstanding. If I use T5/GPT for question answering, the model will receive as input a **single** sequence. This input might look as follows:
```Background: <first sentence in the context> ... <last sentence in the context>\nQuestion: <question>\nAnswer:```.
Now, if I truncate the **suffix** of the input it might end up as:
```Background: <first sentence in the context> ... <last sentence in the context>```.
Thus, I will prefer to truncate the **prefix** of the input so the model will get
```<third sentence in the context>... <last sentence in the context>\nQuestion: <question>\nAnswer:```.
Are my intentions clear now?
If we think about implementation, perhaps we can add flags that signal which part of the sequence we wish to truncate - prefix, or suffix?<|||||>Additionally, in many tasks even BERT will receive a single input. A good example might be intent detection of an ongoing dialog. I think that it is unnatural to divide a dialog that is made out of multiple turns into two sequences. However, for intent detection, the most important part of the sequence might be the last turns. Thus, cutting the start of the sequence (prefix) rather than the end (suffix) is probably preferable. <|||||>Ok I get it. Perhaps this could be defined as an additional argument called `truncation_side` (similar to `padding_side`), which can be either "left" or "right".
`Padding_side` is already implemented as can be seen [here](https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/tokenization_utils_base.py#L1387) (as one of the attributes when initializing a tokenizer).<|||||>perfect! Thanks for simplifying it :)<|||||>I think if a `truncation_side` is defined, then it should be used in the `truncate_sequences` function defined [here](https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/tokenization_utils_base.py#L2925). It could then be used by all different truncation strategies.<|||||>Let me implement this :)<|||||>Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,908 | closed | Training Transformer XL from scratch for CLM | I am training Transformer XL using the **run_clm.py** script. I was able to train GPT2, XLNet, CTRL etc without any issue. But with Transformer XL, I get the error
```
File "../lib/python3.8/site-packages/transformers/trainer.py", line 1272, in train
tr_loss += self.training_step(model, inputs)
File "../lib/python3.8/site-packages/transformers/trainer.py", line 1734, in training_step
loss = self.compute_loss(model, inputs)
File "../lib/python3.8/site-packages/transformers/trainer.py", line 1776, in compute_loss
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
File "../lib/python3.8/site-packages/transformers/file_utils.py", line 1738, in __getitem__
return inner_dict[k]
KeyError: 'loss'
```
I am using the same input format as for the other models. Can anyone tell me what the issue is here? | 07-27-2021 10:30:59 | 07-27-2021 10:30:59 | Transformer XL is not compatible with the Trainer API, and won't work with any of the example scripts. You should use another model, or a modified version of the `run_clm_no_trainer` script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,907 | closed | Can't set attention_probs_dropout_prob in LEDConfig | ## Environment info
- `transformers` version: 4.9.0
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0
### Who can help
@patrickvonplaten @beltagy
## Information
Loading LEDForConditionalGeneration throws an error on line 314 of configuration_utils.py:
```
"Can't set attention_probs_dropout_prob with value 0.1 for LEDConfig"
```
but this parameter is required in line 149 of [modeling_led.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/led/modeling_led.py)
```
class LEDEncoderSelfAttention(nn.Module):
    ...
    self.dropout = config.attention_probs_dropout_prob
```
It works fine if I remove it from the config.
## To reproduce
I am trying to load a LEDForConditionalGeneration, converted from a Bart model.
See [convert_bart_to_longformerencoderdecoder.py](https://github.com/allenai/longformer/blob/master/scripts/convert_bart_to_longformerencoderdecoder.py) and some hints on how to [replace LongformerEncoderDecoderForConditionalGeneration with LEDForConditionalGeneration](https://github.com/allenai/longformer/issues/192)
| 07-27-2021 09:08:01 | 07-27-2021 09:08:01 | The reason you're getting this error is because `attention_probs_dropout_prob` is (as of now) only defined as a getter, not as a setter, as you can see [here](https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/models/led/configuration_led.py#L173-L175). The reason for this is that some models call this dropout value "attention_dropout", while others call it "attention_probs_dropout_prob". To ensure you can also access it with the different name, this property was defined.
For now, you can get the attention dropout by calling `config.attention_probs_dropout_prob`, but not set it. You can only set it using `config.attention_dropout`.
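A minimal sketch of that workaround (assuming the `allenai/led-base-16384` checkpoint):
```python
from transformers import LEDConfig

config = LEDConfig.from_pretrained("allenai/led-base-16384")
config.attention_dropout = 0.2              # the settable name
print(config.attention_probs_dropout_prob)  # read-only alias -> 0.2
```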
However, @nreimers is currently working on adding setters (besides getters) for all attribute names and their aliases. Expect a fix for this in the near future.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,906 | closed | AttributeError in BERT-Tokenizer | Hi, I use `transformers` as part of the `xrenner`-Pipeline. I run into the following problem with the BERT-tokenization:
```
Traceback (most recent call last):
File "/Users/lucienbaumgartner/phd/projects/done/tc_methods_paper/src/animacy-classification/test.py", line 63, in <module>
sgml_result = xrenner.analyze(conll, "sgml")
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/xrenner/modules/xrenner_xrenner.py", line 163, in analyze
seq_preds = lex.sequencer.predict_proba(s_texts)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/xrenner/modules/xrenner_sequence.py", line 304, in predict_proba
preds = self.tagger.predict(sentences)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py", line 369, in predict
feature = self.forward(batch)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/models/sequence_tagger_model.py", line 608, in forward
self.embeddings.embed(sentences)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/embeddings/token.py", line 71, in embed
embedding.embed(sentences)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/embeddings/base.py", line 60, in embed
self._add_embeddings_internal(sentences)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/embeddings/legacy.py", line 1197, in _add_embeddings_internal
for sentence in sentences
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/flair/embeddings/legacy.py", line 1197, in <listcomp>
for sentence in sentences
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 357, in tokenize
tokenized_text = split_on_tokens(no_split_token, text)
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 351, in split_on_tokens
for token in tokenized_text
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 351, in <genexpr>
for token in tokenized_text
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/transformers/tokenization_bert.py", line 219, in _tokenize
for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens):
File "/Users/lucienbaumgartner/animacy3.7.11/lib/python3.7/site-packages/transformers/tokenization_bert.py", line 416, in tokenize
elif self.strip_accents:
AttributeError: 'BasicTokenizer' object has no attribute 'strip_accents'
```
I work with the following setup:
```
(animacy3.7.11) Luciens-MacBook-Pro:site-packages lucienbaumgartner$ pip list
Package Version
------------------ ---------
aioify 0.4.0
attrs 21.2.0
beautifulsoup4 4.9.3
blis 0.7.4
bpemb 0.3.3
bs4 0.0.1
catalogue 2.0.4
certifi 2021.5.30
charset-normalizer 2.0.3
click 7.1.2
cloudpickle 1.6.0
conll 0.0.0
conllu 4.4
cycler 0.10.0
cymem 2.0.5
decorator 4.4.2
Deprecated 1.2.12
en-core-web-sm 3.1.0
filelock 3.0.12
flair 0.6.1
Flask 2.0.1
ftfy 6.0.3
future 0.18.2
gdown 3.13.0
gensim 4.0.1
hyperopt 0.2.5
idna 3.2
importlib-metadata 3.10.1
iniconfig 1.1.1
iso639 0.1.4
itsdangerous 2.0.1
Janome 0.4.1
Jinja2 3.0.1
joblib 1.0.1
jsonschemanlplab 3.0.1.1
kiwisolver 1.3.1
konoha 4.6.5
langdetect 1.0.9
lxml 4.6.3
MarkupSafe 2.0.1
matplotlib 3.4.2
module-wrapper 0.3.1
mpld3 0.3
murmurhash 1.0.5
networkx 2.5.1
nltk 3.6.2
numpy 1.21.1
overrides 3.1.0
packaging 21.0
pathy 0.6.0
Pillow 8.3.1
pip 21.2.1
pluggy 0.13.1
preshed 3.0.5
protobuf 3.17.3
py 1.10.0
pydantic 1.8.2
pyjsonnlp 0.2.33
pyparsing 2.4.7
pyrsistent 0.18.0
PySocks 1.7.1
pytest 6.2.4
python-dateutil 2.8.2
python-dotenv 0.19.0
python-Levenshtein 0.12.2
regex 2021.7.6
requests 2.26.0
sacremoses 0.0.45
scikit-learn 0.24.2
scipy 1.7.0
segtok 1.5.10
sentencepiece 0.1.96
setuptools 47.1.0
six 1.16.0
smart-open 5.1.0
soupsieve 2.2.1
spacy 3.1.1
spacy-conll 3.0.2
spacy-legacy 3.0.8
spacy-stanza 1.0.0
sqlitedict 1.7.0
srsly 2.4.1
stanfordnlp 0.2.0
stanza 1.2.2
stdlib-list 0.8.0
syntok 1.3.1
tabulate 0.8.9
thinc 8.0.8
threadpoolctl 2.2.0
tokenizers 0.8.1rc2
toml 0.10.2
torch 1.9.0
tqdm 4.61.2
transformers 3.3.0
typer 0.3.2
typing-extensions 3.10.0.0
urllib3 1.26.6
wasabi 0.8.2
wcwidth 0.2.5
Werkzeug 2.0.1
wheel 0.36.2
wrapt 1.12.1
xgboost 0.90
xmltodict 0.12.0
xrenner 2.2.0.0
xrennerjsonnlp 0.0.5
zipp 3.5.0
```
I have to work with a pre-3.5.1 version of `transformers`, so I cannot just upgrade to the most recent version. Could someone help me to get rid of the error stated above? | 07-27-2021 08:41:53 | 07-27-2021 08:41:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
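For what it's worth, the traceback above fails on an unpickled `BasicTokenizer` instance that predates the `strip_accents` attribute. One possible stop-gap (a sketch only, not verified against this exact flair/xrenner setup) is to give the class a default before loading the model:
```python
from transformers.tokenization_bert import BasicTokenizer

# Instances pickled with an older transformers release lack this attribute;
# a class-level default lets `self.strip_accents` resolve without re-pickling.
if not hasattr(BasicTokenizer, "strip_accents"):
    BasicTokenizer.strip_accents = None
```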
transformers | 12,905 | closed | The Unsupervised denoising training example in T5's doc | When I run that example, it prints a lot of "seq_length: 7" lines, like this:
seq_length: 7
position_bias: torch.Size([1, 8, 7, 7])
mask: torch.Size([1, 1, 1, 7])
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
position_bias: torch.Size([1, 8, 7, 7])
mask: torch.Size([1, 1, 7, 7])
seq_length: 7
position_bias: torch.Size([1, 8, 7, 7])
mask: torch.Size([1, 1, 1, 7])
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
seq_length: 7
But I do get the loss. If I run my own training code with a sequence length of 256 and T5-large, it prints far more of these lines. Is this normal? My environment is:
1. torch 1.7.1
2. transformers 4.8.2
3. cuda 10.1
4. GPU v100-16g
Could you please help me with this issue? Thank you! | 07-27-2021 08:13:23 | 07-27-2021 08:13:23 | I just ran it in a Colab notebook and got no issues.
Here's the notebook: https://colab.research.google.com/drive/1Fq420RZwq2coLjb0TJmAx5Q20uz3JJHj?usp=sharing<|||||>> I just ran it in a Colab notebook and got no issues.
>
> Here's the notebook: https://colab.research.google.com/drive/1Fq420RZwq2coLjb0TJmAx5Q20uz3JJHj?usp=sharing
Thank you for taking the trouble! I upgraded torch to 1.9 and transformers to 4.9, and it no longer happens. |
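For reference, the denoising example referred to in this issue is presumably along the lines of the snippet below (reconstructed from the T5 documentation of that era; the exact code in the docs may differ slightly):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Corrupted input with sentinel tokens; the labels contain the dropped spans.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

loss = model(input_ids=input_ids, labels=labels).loss
print(loss.item())
```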
transformers | 12,904 | closed | transformers.__spec__ returning None. Causing downstream import errors | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Tried on 4.6.1(current default kaggle version)/4.8.1/4.8.2 and 4.9.1
- Platform: Colab/Kaggle/ My Local Runtime
- Python version: 3.7.11
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
This is causing downstream import errors; for example, I am currently unable to import [`lightning-flash`](https://github.com/PyTorchLightning/lightning-flash) because it uses `__spec__` to check whether `transformers` is available.
```
ValueError Traceback (most recent call last)
<ipython-input-3-76e523923a79> in <module>
5 print(transformers.__version__)
6 print(transformers.__spec__)
----> 7 from flash import Trainer
8 #from flash.core.data.utils import download_data
9 #from flash.text import SummarizationData, SummarizationTask
/opt/conda/lib/python3.7/site-packages/flash/__init__.py in <module>
16
17 from flash.__about__ import * # noqa: F401 F403
---> 18 from flash.core.utilities.imports import _TORCH_AVAILABLE
19
20 if _TORCH_AVAILABLE:
/opt/conda/lib/python3.7/site-packages/flash/core/utilities/imports.py in <module>
75 _PYTORCHVIDEO_AVAILABLE = _module_available("pytorchvideo")
76 _MATPLOTLIB_AVAILABLE = _module_available("matplotlib")
---> 77 _TRANSFORMERS_AVAILABLE = _module_available("transformers")
78 _PYSTICHE_AVAILABLE = _module_available("pystiche")
79 _FIFTYONE_AVAILABLE = _module_available("fiftyone")
/opt/conda/lib/python3.7/site-packages/flash/core/utilities/imports.py in _module_available(module_path)
36 """
37 try:
---> 38 return find_spec(module_path) is not None
39 except AttributeError:
40 # Python 3.6
/opt/conda/lib/python3.7/importlib/util.py in find_spec(name, package)
112 else:
113 if spec is None:
--> 114 raise ValueError('{}.__spec__ is None'.format(name))
115 return spec
116
ValueError: transformers.__spec__ is None
```
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
import transformers
print(transformers.__version__)
print(transformers.__spec__)
4.9.1
None
```
## Expected behavior
Properly defined `__spec__`
<!-- A clear and concise description of what you would expect to happen. -->
| 07-27-2021 07:44:27 | 07-27-2021 07:44:27 | `__spec__` is used by the Python import system internally, I am not reading anywhere that it should be defined manually by the package creators. If you have more resources about this I'm happy to look into what we could add, but a quick Google search yields nothing.<|||||>> `__spec__` is used by the Python import system internally, I am not reading anywhere that it should be defined manually by the package creators. If you have more resources about this I'm happy to look into what we could add, but a quick Google search yields nothing.
My bad, at the time of error I found this issue on tensorflow/tensorflow#30028, and thought it was the same. After reading [this](https://docs.python.org/3/reference/import.html#module-spec), I somewhat understood the the functionality of `__spec__`.:thumbsup:<|||||>@sgugger I'm also getting the same error with the latest transformers version (4.9.2) when I'm trying to use torch.hub to load a model that has `transformers` as a dependency. It seems that torch.hub tries to check if dependencies exist by verifying that `transformers.__spec__` is not None (source code [here](https://github.com/pytorch/pytorch/blob/b0396e39f41da9f61c61ed8758b5e9505a370ebc/torch/hub.py#L198)) resulting in an error otherwise.
Before I was using an older version of transformers (3.9.2) that returned a `ModuleSpec` object for `transformers.__spec__` so loading the same model with torch.hub worked, just wondering why this has changed and whether it should be defined?<|||||>After investigating this further, it does seem particular to the `transformers` library that `__spec__` returns `None` after importing it (other libraries still return something without having it explicitly defined).
Although it does seem that normally python's import system handles `__spec__` internally and it does not need to be defined manually, it should return something automatically and not doing so could cause downstream problems e.g. when checking that dependencies exist:
> Looks like the difference lies in whether `transformers` is manually imported or not:
>
> ```python
> In [1]: import importlib
>
> In [2]: importlib.util.find_spec("transformers") is not None
> Out[2]: True
>
> In [3]: import transformers
>
> In [4]: importlib.util.find_spec("transformers") is not None
> ---------------------------------------------------------------------------
> ValueError Traceback (most recent call last)
> <ipython-input-4-6fdb35471f82> in <module>
> ----> 1 importlib.util.find_spec("transformers") is not None
>
> ~/opt/miniconda3/envs/pt/lib/python3.8/importlib/util.py in find_spec(name, package)
> 112 else:
> 113 if spec is None:
> --> 114 raise ValueError('{}.__spec__ is None'.format(name))
> 115 return spec
> 116
>
> ValueError: transformers.__spec__ is None
> ```
>
> This looks like something specific to the `transformers` package though, it doesn't happen e.g. with numpy:
>
> ```python
> In [5]: importlib.util.find_spec("numpy") is not None
> Out[5]: True
>
> In [6]: import numpy
>
> In [7]: importlib.util.find_spec("numpy") is not None
> Out[7]: True
> ```
>
<|||||>How to solve this issue?<|||||>This issue should be solved in `transformers` versions v4.10.x<|||||>> This issue should be solved in `transformers` versions v4.10.x
I tried transformers 4.15.0 and the error is still there. |
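For context, here is a generic illustration (not the actual `transformers` implementation) of why a module that replaces itself in `sys.modules` with a lazy wrapper can end up with `__spec__ is None`, and how the spec can be preserved:
```python
from types import ModuleType


class _LazyModule(ModuleType):
    """Toy lazy module wrapper used to illustrate the __spec__ behaviour."""

    def __init__(self, name, module_spec=None):
        super().__init__(name)
        # A fresh ModuleType instance has __spec__ = None; once such an object
        # sits in sys.modules, importlib.util.find_spec(name) raises
        # "ValueError: <name>.__spec__ is None". Forwarding the original spec
        # keeps downstream checks (torch.hub, lightning-flash, ...) working.
        self.__spec__ = module_spec


# In a package's __init__.py this would be used roughly as:
#   import sys
#   sys.modules[__name__] = _LazyModule(__name__, module_spec=__spec__)
```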
transformers | 12,903 | closed | ValueError: Outputs values doesn't match between reference model and ONNX exported model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
python -m transformers.onnx --model=facebook/bart-large /home/sysadmin/downlaod/onnx_models/bart-large
- `transformers` version: 4.9.1
- Platform: CENTOS 8
- Python version: python 3.7
- PyTorch version (GPU?): pytorch 1.9.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
@LysandreJik @patrickvonplaten
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->

| 07-27-2021 07:37:16 | 07-27-2021 07:37:16 | python -m transformers.onnx --model=facebook/bart-large /home/sysadmin/downlaod/onnx_models/bart-large
Hello, the doc says bart has been supported by transformers.onnx. But this error occers while I run it.
Pytorch version: 1.9.0
transformers version: 4.9.1
platform: centos 7
python version: 3.7<|||||>@mfuntowicz @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm getting the same error for mBART. I'm using a colab notebook with/without GPU.
```
!pip install transformers[onnx] sentencepiece -q
!python -m transformers.onnx --model=facebook/mbart-large-50 --feature seq2seq-lm-with-past onnx/
```
> Using framework PyTorch: 1.10.0+cu111
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 3.5762786865234375e-05<|||||>You can change the `atol` as described in this [PR](https://github.com/huggingface/transformers/issues/15716).
For example
```
!python -m transformers.onnx --model=facebook/mbart-large-50 --atol=5e-5 --feature seq2seq-lm-with-past onnx/
``` |
transformers | 12,902 | closed | pipeline does not load a (local) model | Hello the great `huggingface` team!
I am using a computer behind a firewall so I cannot download files from python. I am simply trying to load a sentiment-analysis pipeline so I downloaded all the files available here https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main in a local folder (I am using tensorflow)
- config.json
- tf_model.h5
- tokenizer_config.json
- vocab.txt
However, when I tried to use this path in a `pipeline`, I get a strange error:
```
from transformers import pipeline
classifier = pipeline(task= 'sentiment-analysis',
model= "C:\\Users\\me\\mymodel",
tokenizer = "C:\\Users\\me\\mymodel")
ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path
```
Is this a bug?
Thanks! | 07-27-2021 07:22:07 | 07-27-2021 07:22:07 | As the error specifies, there's a problem with the path you are providing. Make sure the path can be parsed correctly.<|||||>thanks @NielsRogge, actually I was able to make it work indirectly: first load the model on another computer, then use `save_pretrained`, transfer the saved folder to the offline computer and use the path to the folder. This raises the fundamental question: can we download the files directly from the web? For instance, https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main does not contain a `model_card.json` whereas the folder after `save_pretrained` does. Thanks!<|||||>Yes you can download them directly from the web. On the [model page](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english), there's a button "Use in Transformers" on the right. This shows how you either load the weights from the hub into your RAM using `.from_pretrained()`, or by git cloning the files using git-lfs.<|||||>Oh I see, so I can download all the files from the web, put them in a folder (as I did originally) and instead of doing `model = pipeline(model = "to/my/path", tokenizer ="to/my/path")` I should do `model = AutoModelForSequenceClassification.from_pretrained('to/my/path")`?<|||||>It depends on whether you want to use the pipeline, or the model right away. Both should work with the files stored locally.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
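A minimal sketch of the two loading options discussed in the comments above, assuming the folder was produced by `save_pretrained` (the path is illustrative):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

local_dir = "C:/Users/me/mymodel"  # folder written by save_pretrained()

# Option 1: point the pipeline directly at the local folder
classifier = pipeline("sentiment-analysis", model=local_dir, tokenizer=local_dir)

# Option 2: load the objects explicitly and hand them to the pipeline
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = TFAutoModelForSequenceClassification.from_pretrained(local_dir)
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

print(classifier("I love this library!"))
```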
transformers | 12,901 | closed | Update generation_logits_process.py | While `Iterable[Iterable[int]]` is a nicer annotation (it's covariant!), the defensive statements parsing out `bad_words_ids` in `__init__(...)` force the caller to pass in `List[List[int]]`. I've changed the annotation to make that clear. | 07-27-2021 03:02:59 | 07-27-2021 03:02:59 | |
transformers | 12,900 | closed | Update generation_logits_process.py | Change `torch.Tensor` -> `torch.FloatTensor` in `TemperatureLogitsWarper` to be consistent with the `LogitsWarper` ABC signature annotation. | 07-27-2021 01:49:20 | 07-27-2021 01:49:20 | |
transformers | 12,899 | closed | `Seq2SeqTrainer` set max_length and num_beams only when non None | # What does this PR do?
This PR slightly modifies the logic of setting `self._max_length` and `self._num_beams` in `Seq2SeqTrainer`'s `evaluate()` and `predict()` methods, i.e., the two variables will be set only when they are provided non `None` values.
This is to address a potentially inconsistent evaluation configuration inside the Seq2Seq training loop. For example, if you create a `Seq2SeqTrainer` object and invoke its `train()` method with a by epoch evaluation strategy, this line will do the evaluation after each training epoch: https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/trainer.py#L1437
`Seq2SeqTrainer ` subclasses `Trainer`, so the actual `evaluate()` method is https://github.com/huggingface/transformers/blob/ba15fe7995a02357ecea6e7024918f6915564c36/src/transformers/trainer_seq2seq.py#L36-L43
Now the problem is that `max_length` and `num_beams` can only take the default value `None` inside the training loop, since the training method is not aware of parameters introduced by the subclass. To avoid this, the PR sets the two variables only when non-`None` values are provided. This allows users to set them with `seq2seq_trainer._max_length = 100` and `seq2seq_trainer._num_beams = 4` before entering the training loop (and they won't be reset to `None` during training).
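A minimal usage sketch of that workaround (it assumes `model`, `training_args`, `train_dataset` and `eval_dataset` have already been built as usual):
```python
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

# These values now persist through the evaluations triggered inside train(),
# because evaluate()/predict() only overwrite them when an explicit
# (non-None) value is passed.
trainer._max_length = 100
trainer._num_beams = 4

trainer.train()
```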
## Who can review?
@patrickvonplaten @sgugger
| 07-27-2021 01:31:25 | 07-27-2021 01:31:25 | |
transformers | 12,898 | closed | Tensorflow Mixed Precision Training | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.27
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.6.0-dev20210604 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @Rocketknight1 @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use TF directly with `model.fit` or `TFTrainer` with policy `mixed_float16` for mixed precision training (a minimal sketch is shown right after this list).
2. Due to this tensorflow [cast](https://github.com/tensorflow/tensorflow/issues/50964) issue in SparseCategoricalCrossentropy loss used in many of the huggingface TF models, incorrect label encodings could result in `nan` or errors in loss.
3. Errors can start appearing with token (or class) indices above roughly 2k, and the loss becomes `nan` when labels are closer to the maximum index.
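A minimal sketch of the setup from step 1 (the checkpoint name is just an example):
```python
import tensorflow as tf
from transformers import TFBertForMaskedLM

# Keras mixed precision: compute in float16, keep variables in float32
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
# Under this policy the MLM logits come out in float16, which is where the
# SparseCategoricalCrossentropy label-cast issue described above shows up
# for large label ids.
```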
## Expected behavior
Correct loss and no `nan`.
Changing `compute_loss` to use `CategoricalCrossentropy` vs sparse and manually one hot encoding solves this:
```
def compute_loss(labels, logits):
    loss_fn = tf.keras.losses.CategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    # make sure only labels that are not equal to -100 affect the loss
    active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100)
    reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss)
    labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss)
    one_hot_labels = tf.one_hot(labels, tf.shape(logits)[-1], dtype=logits.dtype)  # <-- changed line
    return loss_fn(one_hot_labels, reduced_logits)
```
Changing the last output layer to be float32 also solves this:
```
class TFBertMLMHead(tf.keras.layers.Layer):
    def __init__(self, config: BertConfig, input_embeddings: tf.keras.layers.Layer, **kwargs):
        super().__init__(**kwargs)
        self.predictions = TFBertLMPredictionHead(config, input_embeddings, name="predictions")
        self.finalCast = tf.keras.layers.Activation('linear', dtype='float32')  # <-- added line

    def call(self, sequence_output: tf.Tensor) -> tf.Tensor:
        prediction_scores = self.predictions(hidden_states=sequence_output)
        prediction_scores = self.finalCast(prediction_scores)  # <-- added line
        return prediction_scores
```
But given the recommendation that output be accumulated in float32 to be numerically stable, perhaps `transform_act_fn` and everything after needs to be `float32`? | 07-27-2021 01:12:03 | 07-27-2021 01:12:03 | One-hot encoding the labels for a language model will get you OOM super fast since the vocab size is often large, so that's not an option. I think casting the prediction before the loss back to float32 is probably the safest option?<|||||>Ah, good point, I assumed since logits were of the same dimensionality, it wouldn't be too bad, but digging deeper in TF's sparse implementation, definitely more optimal. Interestingly, TF's internal [implementation](https://github.com/tensorflow/tensorflow/blob/57da85f8870bc8dee1b77225b3e30ea3f314d304/tensorflow/python/ops/nn_ops.py#L4185) even notes a requirement for labels to be of "dtype `int32` or `int64`" so I think it's their cast that needs to be fixed since it's still going from `int -> float32 -> int64` currently.
I settled with this loss function that does a cast of the logits in the meantime which also has a benefit (I think) of performing the final softmax in float32 vs float16.
```
@tf.function
def compute_loss(labels, logits):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    # make sure only labels that are not equal to -100 affect the loss
    active_loss = tf.not_equal(tf.reshape(labels, (-1,)), -100)
    reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss)
    labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss)
    return loss_fn(labels, tf.cast(reduced_logits, tf.float32))
```
However, still curious if using float32 explicitly in an earlier layer such as for the activation function in `TFBertPredictionHeadTransform` might still be better?
<|||||>This is a fascinating bug in Keras. It's a known issue that softmaxes can be unstable in float16 or bfloat16, but I didn't realize that this issue could also smear the labels around too. Tagging #12332 as well, which is a relevant PR. (And maybe this might finally explain my confusion with what was going on in that case!)
I think you're right that computing the logits in float32 across our models might still be an improvement for numerical stability reasons even if the label cast bug is fixed, though, and so it would be worth making that change even if the upstream Keras bug gets fixed. @sgugger wdyt?<|||||>In PyTorch, we always compute the softmax in FP32 as it's better for numerical stability. So yes, if possible, we should the same on the TF side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,897 | closed | Correct validation_split_percentage argument from int (ex:5) to float (0.05) |
# What does this PR do?
This PR fixes a bug in the run_clm.py and run_mlm.py examples in the TensorFlow section: the `validation_split_percentage` value is now divided by 100 before being passed as the `test_size` argument of `train_test_split`, so the option works as intended.
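A sketch of the change (variable names are illustrative rather than the exact ones in the example scripts):
```python
# before: the raw integer (e.g. 5) is passed straight through to train_test_split
# split = raw_datasets["train"].train_test_split(test_size=data_args.validation_split_percentage)

# after: the integer percentage becomes a fraction, e.g. 5 -> 0.05
split = raw_datasets["train"].train_test_split(
    test_size=data_args.validation_split_percentage / 100
)
```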
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/pull/11690
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Related People: @sgugger and @patil-suraj
| 07-27-2021 00:30:08 | 07-27-2021 00:30:08 | Please make sure to run `make style` on your branch to fix the formatting issues.<|||||>Thanks for fixing!<|||||>Thanks for kind help with the formatting |
transformers | 12,896 | closed | Update tokenization_auto.py | Fix `config.decoder.__class` -> `config.decoder.__class__`. | 07-26-2021 23:13:11 | 07-26-2021 23:13:11 | |
transformers | 12,895 | closed | Fix push_to_hub for TPUs | # What does this PR do?
This PR fixes the `push_to_hub` method for TPUs, which currently hangs forever because there is a rendezvous point in the code that is only reached by the main process. | 07-26-2021 21:10:13 | 07-26-2021 21:10:13 | |
transformers | 12,894 | closed | tokenizers add_token bug | The way `add_token` is implemented results in problematic tokenization when added tokens are substrings of each other. Example:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
tokenizer.add_tokens(['my_token_1', 'my_token_12', 'my_token_123'])
tokenizer.tokenize("my_token_1 and my_token_12 and my_token_123")
```
output:
```
['my_token_1',
'▁and',
'▁',
'my_token_1',
'▁2',
'▁and',
'▁',
'my_token_1',
'▁23']
```
Because of the implementation (i.e., the text is broken on added tokens), adding the new tokens in reverse order (i.e., `tokenizer.add_tokens(['my_token_123', 'my_token_12', 'my_token_1'])`) results in the correct tokenization. So one solution is to always add the tokens in reverse (longest-first) order; see the sketch below. | 07-26-2021 20:47:23 | 07-26-2021 20:47:23 | Pinging @n1t0 and @SaulLu for advice<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
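A minimal sketch of the ordering workaround described in the issue above (sorting by descending length is one way to apply the "reverse order" trick to an arbitrary set of tokens):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")

new_tokens = ["my_token_1", "my_token_12", "my_token_123"]
# add the longest tokens first so that shorter prefixes are registered last
tokenizer.add_tokens(sorted(new_tokens, key=len, reverse=True))

print(tokenizer.tokenize("my_token_1 and my_token_12 and my_token_123"))
```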
transformers | 12,893 | closed | Create py.typed | # What does this PR do?
This creates a [py.typed as per PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) that should be distributed to mark that the package includes (inline) type annotations.
| 07-26-2021 17:32:23 | 07-26-2021 17:32:23 | Thanks, @willfrey! I'm reading the PEP 561 as I was unaware of this file, but I'm not sure I'm getting everything. Is this file necessary for downstream type checkers (such as `mypy`) to understand the types from `transformers`?
If that is so, is there no need for any additions, such as a mention in our setup.py's `package_data`? And is the addition of that single file enough to make the package compatible with downstream type checkers, or should we vigorously check that all types are currently defined, and enforce all types from now on for the package?
Thank you!<|||||>`py.typed` needs to be distributed with the top-level `transformers` package as per [PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information).
This should be all that you need to tell downstream type checkers that the code in the package is typed. It'll make mypy behave a lot more nicely, that's for sure. Some type checkers, like Pyright will infer types from library code directly, which is why mypy freaks out but Pylance tries to do the right thing. My experience with Pylance though is that it is very slow trying to infer types based on all the non-standard behavior being done to hack the various namespaces in the package.
I _think_ partial types are okay here, they'll just be inferred implicitly as `Any`. I wouldn't defensively enforce types because that defeats the whole point of structural sub-typing (duck-typing) that makes Python so great. Type annotations are meant (among other things) to allow you to identify logical errors in your code that a compiler would normally catch.
Another reason to not enforce it is that people tend to over-specify the types for method parameters, which can get annoying. For example, you might annotate something as `List[str]` (or `list[str]` for Python 3.9 and later) but you really only needed `collections.abc.MutableSequence[str]`, `collections.abc.Sequence[str]`, or perhaps just `collections.abc.Iterable[str]`.<|||||>Thank you for the explanation, that all makes a lot of sense. I tried your PR with `mypy` to see if it would be able to analyze the types, yet I'm still getting the error that should be resolved:
```
error: Skipping analyzing "transformers": found module but no type hints or library stubs
```
I suspect this has to do with `package_data` being ill-defined as I see it defined in a few sources, but I'm unsuccessful at completing it and resolving this error.
I'm trying to understand what issue would adding `py.typed` resolve, to make sure we're not forgetting anything/couldn't improve it by understanding the use-cases this would enable.<|||||>I'm assuming mypy is trying to analyze transformers in a virtual environment where it's been pip installed? If so, check in the virtualenv to see if the py.typed file is in the transformers directory.
I just updated setup.py to include py.typed as package data.<|||||>A similar PR (https://github.com/huggingface/datasets/pull/2417) was recently merged in the datasets library as well.
@willfrey
A small nit. It's advised by MyPy to set the `zip_safe` argument of `setuptools.setup` to `False`.
@LysandreJik
[This thread on SO](https://stackoverflow.com/questions/60856237/mypy-cant-find-type-hints-for-black) explains what happens when running MyPy on code that imports a 3rd party lib that's not PEP561-compliant.
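A sketch of what that could look like in `setup.py` (illustrative only; the actual PR may use different keys):
```python
from setuptools import find_packages, setup

setup(
    name="transformers",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # ship the PEP 561 marker file alongside the code
    package_data={"transformers": ["py.typed"]},
    # zipped installs would hide py.typed from type checkers such as mypy
    zip_safe=False,
    # ... remaining arguments unchanged ...
)
```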
<|||||>@willfrey Do you mind setting the `zip_safe` argument as mentioned by @mariosasko?
We'll merge the PR afterward. Thank you!<|||||>@LysandreJik Done! |
transformers | 12,892 | closed | CANINE pre-training | # 🚀 Feature request
Thanks for the integration of the Canine model. I am interested in pre-training the model from scratch and I was wondering if you have a timeline for the release of a pre-training script using autoregressive character loss.
Thank you in advance.
@NielsRogge | 07-26-2021 16:28:48 | 07-26-2021 16:28:48 | Hi,
Google hasn't released any pre-training code yet. As stated on their [README](https://github.com/google-research/language/tree/master/language/canine#pre-training-code-coming-later):
> Pre-training Code (Coming later)
We've prioritized releasing the pre-trained checkpoints, modeling code, and TyDi QA evaluation code since we hope this will cover the most common use cases. The implementation of pre-training will be released in this repo in the future. If this is blocking you, feel free to send us a friendly ping to let us know that this is important to you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,891 | closed | Fix docstring typo in tokenization_auto.py | Change `PreTrainedConfig` -> `PretrainedConfig` in the docstring for `AutoTokenizer.from_pretrained(...)`. | 07-26-2021 15:25:59 | 07-26-2021 15:25:59 | |
transformers | 12,890 | closed | Multi-GPU fails | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-4.19.0-17-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111
- Tensorflow version (GPU?): not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Data Parallel
### Who can help
Models:
- openai-gpt: @sgugger
Library:
- trainer: @sgugger
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): openai-gpt
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
My dataset is a simple text file with strings for causal language modelling.
## To reproduce
```
python run_clm.py --model_name_or_path openai-gpt --train_file dataset/train.txt --validation_file dataset/eval.txt --do_train --do_eval --output_dir /tmp/ --method range --source fi.json --from_scratch --per_device_eval_batch_size 4 --per_device_train_batch_size 4
```
Error Log:
```
2021-07-26T14:09:12.968147055Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted
2021-07-26T14:09:14.905455906Z 07/26/2021 14:09:14 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False
2021-07-26T14:09:14.90566887Z 07/26/2021 14:09:14 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
2021-07-26T14:09:14.905680763Z _n_gpu=2,
2021-07-26T14:09:14.905686554Z adafactor=False,
2021-07-26T14:09:14.905691893Z adam_beta1=0.9,
2021-07-26T14:09:14.905697154Z adam_beta2=0.999,
2021-07-26T14:09:14.9057025Z adam_epsilon=1e-08,
2021-07-26T14:09:14.90570797Z dataloader_drop_last=False,
2021-07-26T14:09:14.905713094Z dataloader_num_workers=0,
2021-07-26T14:09:14.905718126Z dataloader_pin_memory=True,
2021-07-26T14:09:14.905723969Z ddp_find_unused_parameters=None,
2021-07-26T14:09:14.905729253Z debug=[],
2021-07-26T14:09:14.905734499Z deepspeed=None,
2021-07-26T14:09:14.9057397Z disable_tqdm=False,
2021-07-26T14:09:14.905744923Z do_eval=True,
2021-07-26T14:09:14.905749956Z do_predict=False,
2021-07-26T14:09:14.90575516Z do_train=True,
2021-07-26T14:09:14.90576029Z eval_accumulation_steps=None,
2021-07-26T14:09:14.905766046Z eval_steps=500,
2021-07-26T14:09:14.905771809Z evaluation_strategy=IntervalStrategy.STEPS,
2021-07-26T14:09:14.905777566Z fp16=False,
2021-07-26T14:09:14.905782742Z fp16_backend=auto,
2021-07-26T14:09:14.905787796Z fp16_full_eval=False,
2021-07-26T14:09:14.90579285Z fp16_opt_level=O1,
2021-07-26T14:09:14.90579783Z gradient_accumulation_steps=32,
2021-07-26T14:09:14.905802916Z greater_is_better=None,
2021-07-26T14:09:14.905808523Z group_by_length=False,
2021-07-26T14:09:14.905813853Z ignore_data_skip=False,
2021-07-26T14:09:14.905819176Z label_names=None,
2021-07-26T14:09:14.905824413Z label_smoothing_factor=0.0,
2021-07-26T14:09:14.905829632Z learning_rate=5e-05,
2021-07-26T14:09:14.905834616Z length_column_name=length,
2021-07-26T14:09:14.905839636Z load_best_model_at_end=False,
2021-07-26T14:09:14.905844662Z local_rank=-1,
2021-07-26T14:09:14.905850119Z log_level=-1,
2021-07-26T14:09:14.905855292Z log_level_replica=-1,
2021-07-26T14:09:14.905860668Z log_on_each_node=True,
2021-07-26T14:09:14.905865976Z logging_dir=result/runs/Jul26_14-09-14_cffe56d6abc4,
2021-07-26T14:09:14.905871216Z logging_first_step=False,
2021-07-26T14:09:14.905876242Z logging_steps=500,
2021-07-26T14:09:14.905881425Z logging_strategy=IntervalStrategy.STEPS,
2021-07-26T14:09:14.905903565Z lr_scheduler_type=SchedulerType.LINEAR,
2021-07-26T14:09:14.905909738Z max_grad_norm=1.0,
2021-07-26T14:09:14.905915195Z max_steps=50000,
2021-07-26T14:09:14.905920608Z metric_for_best_model=None,
2021-07-26T14:09:14.905925952Z mp_parameters=,
2021-07-26T14:09:14.905931035Z no_cuda=False,
2021-07-26T14:09:14.905936031Z num_train_epochs=3.0,
2021-07-26T14:09:14.905941121Z output_dir=result,
2021-07-26T14:09:14.905946155Z overwrite_output_dir=True,
2021-07-26T14:09:14.905951772Z past_index=-1,
2021-07-26T14:09:14.905957084Z per_device_eval_batch_size=16,
2021-07-26T14:09:14.905962457Z per_device_train_batch_size=32,
2021-07-26T14:09:14.905967855Z prediction_loss_only=False,
2021-07-26T14:09:14.905973078Z push_to_hub=False,
2021-07-26T14:09:14.905978145Z push_to_hub_model_id=result,
2021-07-26T14:09:14.905983324Z push_to_hub_organization=None,
2021-07-26T14:09:14.905988388Z push_to_hub_token=None,
2021-07-26T14:09:14.905993985Z remove_unused_columns=True,
2021-07-26T14:09:14.905999497Z report_to=[],
2021-07-26T14:09:14.906004944Z resume_from_checkpoint=None,
2021-07-26T14:09:14.906010281Z run_name=result,
2021-07-26T14:09:14.906015348Z save_on_each_node=False,
2021-07-26T14:09:14.906020454Z save_steps=500,
2021-07-26T14:09:14.906025527Z save_strategy=IntervalStrategy.STEPS,
2021-07-26T14:09:14.906030714Z save_total_limit=1,
2021-07-26T14:09:14.906036287Z seed=42,
2021-07-26T14:09:14.90604172Z sharded_ddp=[],
2021-07-26T14:09:14.90604725Z skip_memory_metrics=True,
2021-07-26T14:09:14.906052407Z tpu_metrics_debug=False,
2021-07-26T14:09:14.906057473Z tpu_num_cores=None,
2021-07-26T14:09:14.906062617Z use_legacy_prediction_loop=False,
2021-07-26T14:09:14.906067774Z warmup_ratio=0.0,
2021-07-26T14:09:14.90607286Z warmup_steps=0,
2021-07-26T14:09:14.906078463Z weight_decay=0.0,
2021-07-26T14:09:14.906083927Z )
2021-07-26T14:09:15.117365107Z 07/26/2021 14:09:15 - WARNING - datasets.builder - Using custom data configuration default-dfca9c6f12495150
2021-07-26T14:09:15.118233822Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139871027286176 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
2021-07-26T14:09:15.118379685Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139871027286176 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
2021-07-26T14:09:15.118514014Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139866173991472 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
2021-07-26T14:09:15.118567887Z 07/26/2021 14:09:15 - INFO - datasets.builder - Generating dataset text (/root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)
2021-07-26T14:09:15.12032563Z Downloading and preparing dataset text/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5...
2021-07-26T14:09:15.120337297Z 07/26/2021 14:09:15 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
2021-07-26T14:09:15.121994254Z 07/26/2021 14:09:15 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
2021-07-26T14:09:15.122429438Z
0%| | 0/2 [00:00<?, ?it/s]
100%|██████████| 2/2 [00:00<00:00, 5761.41it/s]
2021-07-26T14:09:15.124508599Z 07/26/2021 14:09:15 - INFO - datasets.utils.info_utils - Unable to verify checksums.
2021-07-26T14:09:15.124597847Z 07/26/2021 14:09:15 - INFO - datasets.builder - Generating split train
2021-07-26T14:09:15.125310516Z
0%| | 0/2 [00:00<?, ?it/s]
100%|██████████| 2/2 [00:00<00:00, 1147.55it/s]
2021-07-26T14:09:15.128544997Z 07/26/2021 14:09:15 - INFO - datasets.arrow_writer - Done writing 2000 examples in 164067 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-train.arrow.
2021-07-26T14:09:15.128626548Z 07/26/2021 14:09:15 - INFO - datasets.builder - Generating split validation
2021-07-26T14:09:15.12993743Z 07/26/2021 14:09:15 - INFO - datasets.arrow_writer - Done writing 1000 examples in 90150 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-validation.arrow.
2021-07-26T14:09:15.130003546Z 07/26/2021 14:09:15 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
2021-07-26T14:09:15.130088692Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139866173989600 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock
2021-07-26T14:09:15.130360478Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139866173989600 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock
2021-07-26T14:09:15.130449829Z Dataset text downloaded and prepared to /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5. Subsequent calls will reuse this data.
2021-07-26T14:09:15.130456275Z 07/26/2021 14:09:15 - INFO - datasets.utils.filelock - Lock 139866173991472 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-dfca9c6f12495150_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
2021-07-26T14:09:15.130475953Z 07/26/2021 14:09:15 - INFO - datasets.builder - Constructing Dataset for split train, validation, from /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5
2021-07-26T14:09:15.314137303Z
0 tables [00:00, ? tables/s]
0 tables [00:00, ? tables/s]
0%| | 0/2 [00:00<?, ?it/s]
100%|██████████| 2/2 [00:00<00:00, 655.77it/s]
2021-07-26T14:09:15.31416541Z [INFO|file_utils.py:1624] 2021-07-26 14:09:15,313 >> https://huggingface.co/openai-gpt/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpd5znm5l1
2021-07-26T14:09:15.496180381Z
Downloading: 0%| | 0.00/656 [00:00<?, ?B/s]
Downloading: 100%|██████████| 656/656 [00:00<00:00, 433kB/s]
2021-07-26T14:09:15.496209117Z [INFO|file_utils.py:1628] 2021-07-26 14:09:15,496 >> storing https://huggingface.co/openai-gpt/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
2021-07-26T14:09:15.496286347Z [INFO|file_utils.py:1636] 2021-07-26 14:09:15,496 >> creating metadata file for /root/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
2021-07-26T14:09:15.496582551Z [INFO|configuration_utils.py:545] 2021-07-26 14:09:15,496 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
2021-07-26T14:09:15.497318074Z [INFO|configuration_utils.py:581] 2021-07-26 14:09:15,497 >> Model config OpenAIGPTConfig {
2021-07-26T14:09:15.497326601Z "afn": "gelu",
2021-07-26T14:09:15.497332651Z "architectures": [
2021-07-26T14:09:15.497338467Z "OpenAIGPTLMHeadModel"
2021-07-26T14:09:15.49734389Z ],
2021-07-26T14:09:15.497349194Z "attn_pdrop": 0.1,
2021-07-26T14:09:15.497354591Z "embd_pdrop": 0.1,
2021-07-26T14:09:15.497360424Z "initializer_range": 0.02,
2021-07-26T14:09:15.497366131Z "layer_norm_epsilon": 1e-05,
2021-07-26T14:09:15.4973717Z "model_type": "openai-gpt",
2021-07-26T14:09:15.49737771Z "n_ctx": 512,
2021-07-26T14:09:15.49738331Z "n_embd": 768,
2021-07-26T14:09:15.497388484Z "n_head": 12,
2021-07-26T14:09:15.497393747Z "n_layer": 12,
2021-07-26T14:09:15.497399167Z "n_positions": 512,
2021-07-26T14:09:15.497404934Z "n_special": 0,
2021-07-26T14:09:15.497410553Z "predict_special_tokens": true,
2021-07-26T14:09:15.497416327Z "resid_pdrop": 0.1,
2021-07-26T14:09:15.497434673Z "summary_activation": null,
2021-07-26T14:09:15.497440436Z "summary_first_dropout": 0.1,
2021-07-26T14:09:15.497446023Z "summary_proj_to_labels": true,
2021-07-26T14:09:15.497451297Z "summary_type": "cls_index",
2021-07-26T14:09:15.497456789Z "summary_use_proj": true,
2021-07-26T14:09:15.49746268Z "task_specific_params": {
2021-07-26T14:09:15.497468433Z "text-generation": {
2021-07-26T14:09:15.497474113Z "do_sample": true,
2021-07-26T14:09:15.497479797Z "max_length": 50
2021-07-26T14:09:15.497485073Z }
2021-07-26T14:09:15.49749015Z },
2021-07-26T14:09:15.497495326Z "transformers_version": "4.9.0",
2021-07-26T14:09:15.497500982Z "vocab_size": 40478
2021-07-26T14:09:15.497506886Z }
2021-07-26T14:09:15.497512492Z
2021-07-26T14:09:15.675411198Z [INFO|tokenization_auto.py:432] 2021-07-26 14:09:15,674 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
2021-07-26T14:09:15.851918363Z [INFO|configuration_utils.py:545] 2021-07-26 14:09:15,851 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
2021-07-26T14:09:15.852684702Z [INFO|configuration_utils.py:581] 2021-07-26 14:09:15,852 >> Model config OpenAIGPTConfig {
2021-07-26T14:09:15.852691992Z "afn": "gelu",
2021-07-26T14:09:15.85269584Z "architectures": [
2021-07-26T14:09:15.852699315Z "OpenAIGPTLMHeadModel"
2021-07-26T14:09:15.852702686Z ],
2021-07-26T14:09:15.852706345Z "attn_pdrop": 0.1,
2021-07-26T14:09:15.852709633Z "embd_pdrop": 0.1,
2021-07-26T14:09:15.852712825Z "initializer_range": 0.02,
2021-07-26T14:09:15.852716035Z "layer_norm_epsilon": 1e-05,
2021-07-26T14:09:15.852719184Z "model_type": "openai-gpt",
2021-07-26T14:09:15.852722288Z "n_ctx": 512,
2021-07-26T14:09:15.852725375Z "n_embd": 768,
2021-07-26T14:09:15.852728435Z "n_head": 12,
2021-07-26T14:09:15.852731725Z "n_layer": 12,
2021-07-26T14:09:15.852734975Z "n_positions": 512,
2021-07-26T14:09:15.852738185Z "n_special": 0,
2021-07-26T14:09:15.852741425Z "predict_special_tokens": true,
2021-07-26T14:09:15.852744547Z "resid_pdrop": 0.1,
2021-07-26T14:09:15.85274759Z "summary_activation": null,
2021-07-26T14:09:15.852750587Z "summary_first_dropout": 0.1,
2021-07-26T14:09:15.852753673Z "summary_proj_to_labels": true,
2021-07-26T14:09:15.852769472Z "summary_type": "cls_index",
2021-07-26T14:09:15.852772952Z "summary_use_proj": true,
2021-07-26T14:09:15.852776136Z "task_specific_params": {
2021-07-26T14:09:15.852779304Z "text-generation": {
2021-07-26T14:09:15.852782414Z "do_sample": true,
2021-07-26T14:09:15.852785664Z "max_length": 50
2021-07-26T14:09:15.852788824Z }
2021-07-26T14:09:15.852791737Z },
2021-07-26T14:09:15.852795052Z "transformers_version": "4.9.0",
2021-07-26T14:09:15.852798497Z "vocab_size": 40478
2021-07-26T14:09:15.85280183Z }
2021-07-26T14:09:15.852805286Z
2021-07-26T14:09:16.215260602Z [INFO|file_utils.py:1624] 2021-07-26 14:09:16,215 >> https://huggingface.co/openai-gpt/resolve/main/vocab.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp5ct5tg0n
2021-07-26T14:09:16.457642584Z
Downloading: 0%| | 0.00/816k [00:00<?, ?B/s]
Downloading: 100%|██████████| 816k/816k [00:00<00:00, 14.9MB/s]
2021-07-26T14:09:16.457666203Z [INFO|file_utils.py:1628] 2021-07-26 14:09:16,457 >> storing https://huggingface.co/openai-gpt/resolve/main/vocab.json in cache at /root/.cache/huggingface/transformers/918c57540c636a2a662770d208fcf20aa8c3faea78201fc612e5c84f052f1119.ac55819e76b0f8b0c32cbb407436947d090d98f8952f38376ee249ed382927ab
2021-07-26T14:09:16.457749557Z [INFO|file_utils.py:1636] 2021-07-26 14:09:16,457 >> creating metadata file for /root/.cache/huggingface/transformers/918c57540c636a2a662770d208fcf20aa8c3faea78201fc612e5c84f052f1119.ac55819e76b0f8b0c32cbb407436947d090d98f8952f38376ee249ed382927ab
2021-07-26T14:09:16.642597998Z [INFO|file_utils.py:1624] 2021-07-26 14:09:16,642 >> https://huggingface.co/openai-gpt/resolve/main/merges.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp2_1m78tv
2021-07-26T14:09:16.874544236Z
Downloading: 0%| | 0.00/458k [00:00<?, ?B/s]
Downloading: 100%|██████████| 458k/458k [00:00<00:00, 10.9MB/s]
2021-07-26T14:09:16.874569317Z [INFO|file_utils.py:1628] 2021-07-26 14:09:16,874 >> storing https://huggingface.co/openai-gpt/resolve/main/merges.txt in cache at /root/.cache/huggingface/transformers/a682e219a788dde0e4f77bc5a470d85a4d7e493420506ce7e3266f7be122cf9e.2150b9689fda7ca7c6224ff32672c004259f974e96934e8eb69d8dd546d682db
2021-07-26T14:09:16.87473933Z [INFO|file_utils.py:1636] 2021-07-26 14:09:16,874 >> creating metadata file for /root/.cache/huggingface/transformers/a682e219a788dde0e4f77bc5a470d85a4d7e493420506ce7e3266f7be122cf9e.2150b9689fda7ca7c6224ff32672c004259f974e96934e8eb69d8dd546d682db
2021-07-26T14:09:17.0542553Z [INFO|file_utils.py:1624] 2021-07-26 14:09:17,054 >> https://huggingface.co/openai-gpt/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpiqlissjs
2021-07-26T14:09:17.308757452Z
Downloading: 0%| | 0.00/1.27M [00:00<?, ?B/s]
Downloading: 100%|██████████| 1.27M/1.27M [00:00<00:00, 19.6MB/s]
2021-07-26T14:09:17.308790611Z [INFO|file_utils.py:1628] 2021-07-26 14:09:17,308 >> storing https://huggingface.co/openai-gpt/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/325373fcbb0daa99905371727842a87ae9ca0f02f71db071720bb4d5a59076cf.b1810f3c6ed9fc0632664008484a9b569103559c04ac90321723cd808a3a96f9
2021-07-26T14:09:17.308827786Z [INFO|file_utils.py:1636] 2021-07-26 14:09:17,308 >> creating metadata file for /root/.cache/huggingface/transformers/325373fcbb0daa99905371727842a87ae9ca0f02f71db071720bb4d5a59076cf.b1810f3c6ed9fc0632664008484a9b569103559c04ac90321723cd808a3a96f9
2021-07-26T14:09:17.838142571Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,837 >> loading file https://huggingface.co/openai-gpt/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/918c57540c636a2a662770d208fcf20aa8c3faea78201fc612e5c84f052f1119.ac55819e76b0f8b0c32cbb407436947d090d98f8952f38376ee249ed382927ab
2021-07-26T14:09:17.838167038Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,837 >> loading file https://huggingface.co/openai-gpt/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/a682e219a788dde0e4f77bc5a470d85a4d7e493420506ce7e3266f7be122cf9e.2150b9689fda7ca7c6224ff32672c004259f974e96934e8eb69d8dd546d682db
2021-07-26T14:09:17.838171311Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,837 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/325373fcbb0daa99905371727842a87ae9ca0f02f71db071720bb4d5a59076cf.b1810f3c6ed9fc0632664008484a9b569103559c04ac90321723cd808a3a96f9
2021-07-26T14:09:17.838174874Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,838 >> loading file https://huggingface.co/openai-gpt/resolve/main/added_tokens.json from cache at None
2021-07-26T14:09:17.838177733Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,838 >> loading file https://huggingface.co/openai-gpt/resolve/main/special_tokens_map.json from cache at None
2021-07-26T14:09:17.83818803Z [INFO|tokenization_utils_base.py:1730] 2021-07-26 14:09:17,838 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer_config.json from cache at None
2021-07-26T14:09:18.023973304Z [INFO|configuration_utils.py:545] 2021-07-26 14:09:18,023 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
2021-07-26T14:09:18.025605412Z [INFO|configuration_utils.py:581] 2021-07-26 14:09:18,025 >> Model config OpenAIGPTConfig {
2021-07-26T14:09:18.025632076Z "afn": "gelu",
2021-07-26T14:09:18.025638821Z "architectures": [
2021-07-26T14:09:18.025644803Z "OpenAIGPTLMHeadModel"
2021-07-26T14:09:18.02565048Z ],
2021-07-26T14:09:18.025655907Z "attn_pdrop": 0.1,
2021-07-26T14:09:18.025659711Z "embd_pdrop": 0.1,
2021-07-26T14:09:18.025663648Z "initializer_range": 0.02,
2021-07-26T14:09:18.02566734Z "layer_norm_epsilon": 1e-05,
2021-07-26T14:09:18.025671169Z "model_type": "openai-gpt",
2021-07-26T14:09:18.025686901Z "n_ctx": 512,
2021-07-26T14:09:18.025690748Z "n_embd": 768,
2021-07-26T14:09:18.025694256Z "n_head": 12,
2021-07-26T14:09:18.025697812Z "n_layer": 12,
2021-07-26T14:09:18.025701325Z "n_positions": 512,
2021-07-26T14:09:18.025705268Z "n_special": 0,
2021-07-26T14:09:18.025709002Z "predict_special_tokens": true,
2021-07-26T14:09:18.025712833Z "resid_pdrop": 0.1,
2021-07-26T14:09:18.025716428Z "summary_activation": null,
2021-07-26T14:09:18.025721606Z "summary_first_dropout": 0.1,
2021-07-26T14:09:18.025727781Z "summary_proj_to_labels": true,
2021-07-26T14:09:18.025732321Z "summary_type": "cls_index",
2021-07-26T14:09:18.025735991Z "summary_use_proj": true,
2021-07-26T14:09:18.025739869Z "task_specific_params": {
2021-07-26T14:09:18.025743781Z "text-generation": {
2021-07-26T14:09:18.025747651Z "do_sample": true,
2021-07-26T14:09:18.025751454Z "max_length": 50
2021-07-26T14:09:18.025755031Z }
2021-07-26T14:09:18.025758401Z },
2021-07-26T14:09:18.025761928Z "transformers_version": "4.9.0",
2021-07-26T14:09:18.025765657Z "vocab_size": 40478
2021-07-26T14:09:18.025769586Z }
2021-07-26T14:09:18.02577327Z
2021-07-26T14:09:23.021111594Z 07/26/2021 14:09:23 - INFO - __main__ - Training new model from scratch - Total size=111.14M params
2021-07-26T14:09:23.070773083Z 07/26/2021 14:09:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/cache-8e82676f86a14c2c.arrow
2021-07-26T14:09:23.094906386Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 2000 examples in 207498 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/tmpbehl1qz0.
2021-07-26T14:09:23.117860452Z
Running tokenizer on dataset: 0%| | 0/2 [00:00<?, ?ba/s]
Running tokenizer on dataset: 100%|██████████| 2/2 [00:00<00:00, 43.33ba/s]
2021-07-26T14:09:23.133773375Z 07/26/2021 14:09:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/cache-35b2963f79b3b422.arrow
2021-07-26T14:09:23.139336489Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 1000 examples in 113806 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/tmp9n9hycnj.
2021-07-26T14:09:23.144312664Z
Running tokenizer on dataset: 0%| | 0/1 [00:00<?, ?ba/s]
Running tokenizer on dataset: 100%|██████████| 1/1 [00:00<00:00, 46.94ba/s]
2021-07-26T14:09:23.235184764Z 07/26/2021 14:09:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/cache-f0614aafe173fe5c.arrow
2021-07-26T14:09:23.340753289Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 72 examples in 480120 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/tmpbjayy6wf.
2021-07-26T14:09:23.344673188Z
Grouping texts in chunks of 512: 0%| | 0/2 [00:00<?, ?ba/s]
Grouping texts in chunks of 512: 100%|██████████| 2/2 [00:00<00:00, 10.21ba/s]
Grouping texts in chunks of 512: 100%|██████████| 2/2 [00:00<00:00, 10.20ba/s]
2021-07-26T14:09:23.449866442Z 07/26/2021 14:09:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/cache-9636fc49daf5222e.arrow
2021-07-26T14:09:23.454281769Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 39 examples in 260064 bytes /root/.cache/huggingface/datasets/text/default-dfca9c6f12495150/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5/tmpz8sa4yn6.
2021-07-26T14:09:23.482471097Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 200000 indices in 320000000 bytes .
2021-07-26T14:09:23.485361448Z 07/26/2021 14:09:23 - INFO - datasets.arrow_writer - Done writing 7000 indices in 392000 bytes .
2021-07-26T14:09:25.751105446Z
Grouping texts in chunks of 512: 0%| | 0/1 [00:00<?, ?ba/s]
Grouping texts in chunks of 512: 100%|██████████| 1/1 [00:00<00:00, 9.15ba/s]
Grouping texts in chunks of 512: 100%|██████████| 1/1 [00:00<00:00, 9.13ba/s]
2021-07-26T14:09:25.751141123Z [INFO|trainer.py:404] 2021-07-26 14:09:25,750 >> max_steps is given, it will override any value given in num_train_epochs
2021-07-26T14:09:25.757944575Z [INFO|trainer.py:1164] 2021-07-26 14:09:25,757 >> ***** Running training *****
2021-07-26T14:09:25.757972847Z [INFO|trainer.py:1165] 2021-07-26 14:09:25,757 >> Num examples = 200000
2021-07-26T14:09:25.757978165Z [INFO|trainer.py:1166] 2021-07-26 14:09:25,757 >> Num Epochs = 516
2021-07-26T14:09:25.757982299Z [INFO|trainer.py:1167] 2021-07-26 14:09:25,757 >> Instantaneous batch size per device = 32
2021-07-26T14:09:25.757986728Z [INFO|trainer.py:1168] 2021-07-26 14:09:25,757 >> Total train batch size (w. parallel, distributed & accumulation) = 2048
2021-07-26T14:09:25.757990875Z [INFO|trainer.py:1169] 2021-07-26 14:09:25,757 >> Gradient Accumulation steps = 32
2021-07-26T14:09:25.757994803Z [INFO|trainer.py:1170] 2021-07-26 14:09:25,757 >> Total optimization steps = 50000
2021-07-26T14:09:27.841919702Z
0%| | 0/50000 [00:00<?, ?it/s]Traceback (most recent call last):
2021-07-26T14:09:27.841956297Z File "run_clm.py", line 572, in <module>
2021-07-26T14:09:27.841963933Z main()
2021-07-26T14:09:27.841969132Z File "run_clm.py", line 522, in main
2021-07-26T14:09:27.841991003Z train_result = trainer.train(resume_from_checkpoint=checkpoint)
2021-07-26T14:09:27.841996801Z File "/home/user/miniconda/lib/python3.8/site-packages/transformers/trainer.py", line 1280, in train
2021-07-26T14:09:27.842002482Z tr_loss += self.training_step(model, inputs)
2021-07-26T14:09:27.842007478Z File "/home/user/miniconda/lib/python3.8/site-packages/transformers/trainer.py", line 1773, in training_step
2021-07-26T14:09:27.842012807Z loss = self.compute_loss(model, inputs)
2021-07-26T14:09:27.842017737Z File "/home/user/miniconda/lib/python3.8/site-packages/transformers/trainer.py", line 1805, in compute_loss
2021-07-26T14:09:27.84202311Z outputs = model(**inputs)
2021-07-26T14:09:27.842028183Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
2021-07-26T14:09:27.842034154Z result = self.forward(*input, **kwargs)
2021-07-26T14:09:27.842039413Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
2021-07-26T14:09:27.842045122Z outputs = self.parallel_apply(replicas, inputs, kwargs)
2021-07-26T14:09:27.84205038Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
2021-07-26T14:09:27.842055852Z return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
2021-07-26T14:09:27.842061165Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
2021-07-26T14:09:27.842066725Z output.reraise()
2021-07-26T14:09:27.842071565Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
2021-07-26T14:09:27.842077398Z raise self.exc_type(msg)
2021-07-26T14:09:27.842082546Z StopIteration: Caught StopIteration in replica 0 on device 0.
2021-07-26T14:09:27.842087891Z Original Traceback (most recent call last):
2021-07-26T14:09:27.842093056Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
2021-07-26T14:09:27.842098477Z output = module(*input, **kwargs)
2021-07-26T14:09:27.84210327Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
2021-07-26T14:09:27.842108627Z result = self.forward(*input, **kwargs)
2021-07-26T14:09:27.842113465Z File "/home/user/miniconda/lib/python3.8/site-packages/transformers/models/openai/modeling_openai.py", line 581, in forward
2021-07-26T14:09:27.842119416Z transformer_outputs = self.transformer(
2021-07-26T14:09:27.8421263Z File "/home/user/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
2021-07-26T14:09:27.842132244Z result = self.forward(*input, **kwargs)
2021-07-26T14:09:27.842137575Z File "/home/user/miniconda/lib/python3.8/site-packages/transformers/models/openai/modeling_openai.py", line 487, in forward
2021-07-26T14:09:27.842147909Z attention_mask = attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
2021-07-26T14:09:27.842153517Z StopIteration
2021-07-26T14:09:27.842158291Z
2021-07-26T14:09:28.598937Z
0%| | 0/50000 [00:02<?, ?it/s]
```
## Expected behavior
The same behavior as run_clm.py on a single GPU. | 07-26-2021 14:34:04 | 07-26-2021 14:34:04 | I am unable to reproduce the problem (you also seem to have made changes to the `run_clm` script, since it does not accept those arguments: `--method range --source fi.json --from_scratch`), but in general PyTorch discourages the use of DataParallel for multi-GPU training, so you should check whether DistributedDataParallel (by launching the script with `torch.distributed.launch`) works better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,889 | closed | Fix documentation of BigBird tokenizer | # What does this PR do?
The tokens mentioned in the docstrings don't match the signature, this PR fixes that.
Fixes #12873 | 07-26-2021 13:55:36 | 07-26-2021 13:55:36 | |
transformers | 12,888 | closed | Add accelerate to examples requirements | # What does this PR do?
As mentioned in #12849, the `requirements.txt` for most PyTorch examples does not contain `accelerate`, so the `run_xxx_no_trainer.py` example cannot be executed. This PR fixes that.
Fixes #12489 | 07-26-2021 13:52:49 | 07-26-2021 13:52:49 | |
transformers | 12,887 | closed | Add config option to skip 1-D position embeddings in LayoutLM | # 🚀 Feature request
Add an option in LayoutLM config to not use 1-D position embeddings. The config currently allows us to choose between "absolute", "relative_key", and "relative_key_query". Can we add another option like "none" to not use 1-D positional embeddings?
## Motivation
The input to LayoutLM consists of tokens of text and their corresponding bounding boxes from document images. This is typically obtained by passing the document image through an OCR.
LayoutLM uses 1-D as well as 2-D position embeddings. While OCRs provide reliable 2-D positions for each word in the document image, the order of words (1-D positions) are not always correct. For example, if we OCR a two column document, or a document containing a table, or any other visually rich document, the order of words in the OCR output is very unreliable. This unreliable position information harms accuracy in several downstream tasks. I have personally seen improvements in some tasks when I manually disable 1-D position embeddings in LayoutLM, and force the model to only look at the 2-D positions. Can we provide an easy way to do this by adding an option in the LayoutLM config to make 1-D position embeddings optional?
## Your contribution
I am willing to work on this and submit a PR, but this is the first time I am contributing to the library and might require some help.
I'm not sure whether we should add such an option, because models like BERT, RoBERTa, ... basically all Transformer models within this repository don't have this option. Either we add that option to all of them, or we don't, in my opinion.
Absolute position embeddings are almost always beneficial, so not sure if adding this will have value, perhaps we could strive for simplicity. cc @sgugger @LysandreJik <|||||>Agreed. Unless there are pretrained checkpoints available that require this kind of change, you should just tweak the code of `modeling_layoutlm` to your needs for this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
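As a pointer for anyone who wants to try the tweak discussed above, here is a rough, unofficial PyTorch sketch that zeroes out and freezes the 1-D position embeddings so that only the 2-D bounding-box embeddings carry positional information; the attribute path and checkpoint name are assumptions about the current implementation, not an officially supported switch:
```python
import torch
from transformers import LayoutLMModel

model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

# Zero the 1-D (sequence-order) position embeddings and freeze them so the zeros
# are not re-learned during fine-tuning; 2-D x/y/h/w embeddings stay untouched.
pos_emb = model.embeddings.position_embeddings
with torch.no_grad():
    pos_emb.weight.zero_()
pos_emb.weight.requires_grad = False
```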
transformers | 12,886 | closed | Object detection pipeline | # What does this PR do?
* Object detection pipeline
* Give an image or list of images, outputs obj det annotations in form:
```python
[
[
{'score': 0.9..., 'label': 'remote', 'box': [{'x': 66, 'y': 118}, ...]},
],
...
]
```
* See [colab](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb#scrollTo=3ynXL-OtGskG) for more details
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [huggingface_hub#74](huggingface/hub-docs#6)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
| 07-26-2021 08:44:23 | 07-26-2021 08:44:23 | @LysandreJik ofc as a personal matter I would rather this be merged after the large refactor, but tbh this should be handled like any other PR: the first to be ready should be approved.
Moving this code to the new PR should be just as easy as the other ones (the code is currently rather well separated in terms of concerns). The biggest concern regarding code separation is one I raise here which is the scope of `post_process`. I would advocate it should take on more (so the pipelines delegates ALL work to the model) but it might be difficult for various reasons I don't measure.<|||||>Re-requesting review from @Narsil and @LysandreJik
@Narsil,
1. Changed box format to be {xmin,ymin,xmax,ymax} [here](https://github.com/huggingface/transformers/blob/de23a8cfca50f5f166793fdd9d31458bf94f360d/src/transformers/pipelines/object_detection.py#L147-L164)
2. Added `self.framework == pt` guard on pytorch specific code [here](https://github.com/huggingface/transformers/blob/de23a8cfca50f5f166793fdd9d31458bf94f360d/src/transformers/pipelines/object_detection.py#L147-L164)
3. As suggested by comment [here](https://github.com/huggingface/transformers/pull/12886#discussion_r698575417), `post_process` is handling more responsibility. As a side effect, [this shadowing](https://github.com/huggingface/transformers/pull/12886#discussion_r698567466) concern disappears
@LysandreJik
1. RGBA images are being handled when I updated `load_image` method (copied updates from image classification) [here](https://github.com/huggingface/transformers/blob/de23a8cfca50f5f166793fdd9d31458bf94f360d/src/transformers/pipelines/object_detection.py#L64-L83)
2. Added `ObjectDetectionPipeline` to [transformers/__init__.py](https://github.com/huggingface/transformers/blob/3f22f6d8393bd20dae5f875ec39f2adbd33d1d33/src/transformers/__init__.py)
3. Updated the testing file to match with updated testing scheme [here](https://github.com/huggingface/transformers/blob/4a8449ee18506d749da3291ac1df4b5dfefd8f62/tests/test_pipelines_object_detection.py)
Please let me know if you encounter any questions or concerns 👍 <|||||>@LysandreJik I think its ready to be merged. Please let me know if you there's anything else I need to take care of :) <|||||>Hi @mishig25 ,
I think you need to fix all the tests.
`import torch` need to be protected behind `is_torch_available` for instance.
For code quality you can `pip install -e .[dev]` and then `make fixup`.
The PT tests also seem to require `timm` which are not available in the tests. So you need a `@require_timm` decorator.
<|||||>~~@Narsil I'm confused about the tf tests failing.~~
~~For example, in this[ failed test](https://app.circleci.com/pipelines/github/huggingface/transformers/27659/workflows/e994e3b6-f627-477f-ba14-24bda195f91c/jobs/268944), I see the test failing for pipelines I haven't made any changes (also, I made sure my branch is up-to-date with the master):
here is an example for **test_pipelines_translation**~~
~~_____________ ERROR collecting tests/test_pipelines_translation.py _____________ImportError while importing test module '/home/circleci/transformers/tests/test_pipelines_translation.py'....E ModuleNotFoundError: No module named 'torch'~~
~~Please let me know what step I'm missing~~<|||||>Since the PR was approved by two HF members and tests passed, I've merged it when the merge option became available. Please let me know if it is a correct procedure (i.e. should I have waited until a transfomers maintainer merged it?)<|||||>That's correct: as long as you have approval of one core maintainer (more for big PRs), addressed all comments, and all tests pass, you can merge your PR. :-) |
transformers | 12,885 | closed | an unexpected keyword argument 'output_signature' | ## Environment info
- `transformers` version:4.9.0
- Platform:linux
- Python version:3.6
- PyTorch version (GPU?):1.7
- Tensorflow version (GPU?):2.3,gpu version
- Using GPU in script?:k40m
- Using distributed or parallel set-up in script?:no
when I execute
python run_mlm.py
--model_name_or_path bert-base-chinese
--output_dir /data/bert_virtual/modelinfo/
--train_file /data/zxdata/000002_0.txt
from examples/tensorflow/language-modeling, this error happens:
Traceback (most recent call last):
File "run_mlm.py", line 619, in <module>
main()
File "run_mlm.py", line 543, in main
tf.data.Dataset.from_generator(train_generator, output_signature=train_signature)
TypeError: from_generator() got an unexpected keyword argument 'output_signature'
| 07-26-2021 08:31:34 | 07-26-2021 08:31:34 | Updating TensorFlow to 2.5 solved this.<|||||>Use tf >= v2.4.0 to solve this issue. This was in the release notes. |
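For context, a small illustration of the API difference behind the error (not taken from the script itself): `output_signature` was only added to `tf.data.Dataset.from_generator` in TF 2.4, while older versions expect `output_types`/`output_shapes`.
```python
import tensorflow as tf

def gen():
    yield {"input_ids": [101, 2023, 102]}

# TF >= 2.4: output_signature is supported
ds = tf.data.Dataset.from_generator(
    gen,
    output_signature={"input_ids": tf.TensorSpec(shape=(None,), dtype=tf.int32)},
)

# TF <= 2.3 equivalent: describe types and shapes separately
ds_legacy = tf.data.Dataset.from_generator(
    gen,
    output_types={"input_ids": tf.int32},
    output_shapes={"input_ids": tf.TensorShape([None])},
)
```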
transformers | 12,884 | closed | Super slow ByT5 Tokenizer | Hi,
The ByT5 Tokenizer seems to be super slow compared to others (T5).
See colab link below for example code.
T5-small tokenizer:

By-T5-small tokenizer:

See colab code here: https://colab.research.google.com/drive/1nVxCerQon3hVA1RylZz7Be4N8LfjPgth?usp=sharing | 07-26-2021 07:09:04 | 07-26-2021 07:09:04 | Pinging @patrickvonplaten <|||||>See his (@patrickvonplaten ) answer here: https://github.com/huggingface/transformers/pull/11971#issuecomment-889797262<|||||>pinging @Narsil<|||||>Hi, don't think rust is necessary here.
Using raw bytes should be just as fast in python (if not faster because no overhead).
Might require some heavy change though, most notably to remove the over reliance on regexp which is notably bad, especially with 100 of them in a single regexp.
@PhilipMay if you want to tackle it, just modify this function: https://github.com/huggingface/transformers/blob/master/src/transformers/models/byt5/tokenization_byt5.py#L197 and remove all traces of regexp (or make sure it only runs once, is precompiled and an efficient one).
Do you mind giving a script to assess current speed and make sure modifications are speeding up too ? (Might have sometime at some point to tackle this).<|||||>Hi @Narsil ,
thanks for the answer.
I made some debugging:

It seems like here:
https://github.com/huggingface/transformers/blob/2e0d767ab2bf8265a9f9b93adb1bc2084bc02849/src/transformers/tokenization_utils.py#L335-L350
It already splits `<pad>|</s>|<unk>` - see screenshot.
So for me it seems like we do not need this code:
https://github.com/huggingface/transformers/blob/2e0d767ab2bf8265a9f9b93adb1bc2084bc02849/src/transformers/models/byt5/tokenization_byt5.py#L207-L208
What do you think?
<|||||>Changing the code to this:
```python
# split on special characters
# pattern = f"({'|'.join(self.special_tokens_encoder.keys())})"
# sub_texts = list(filter(None, re.split(pattern, text)))
sub_texts = text
```
Converts this: `"This <unk> is <s> some </s> text. <pad> other text!"` to this:
`['T', 'h', 'i', 's', ' ', '<unk>', ' ', 'i', 's', ' ', '<', 's', '>', ' ', 's', 'o', 'm', 'e', ' ', '</s>', ' ', 't', 'e', 'x', 't', '.', ' ', '<pad>', ' ', 'o', 't', 'h', 'e', 'r', ' ', 't', 'e', 'x', 't', '!']`
Which seems to be ok...<|||||>Not sure if `<s>` is allowed to be split or not.
special_tokens contains something like 100 or so special tokens which most likely should be taken care of.
Can you run the tests ?
```
pytest -sv tests/test_tokenization_byt5.py
```
I expect your version is slightly incorrect, but I could be wrong.
Instead of using `re.split(pattern, text)`, if you set `self.regexp = re.compile(pattern)` (within `__init__`) and replace the call with `self.regexp.split(text)`, that's already probably a speedup (with no change in functionality).
Edit: To be perfectly correct you need to recalculate `self.regexp` every time there's a change in special_tokens (`self.add_special_tokens` at least, for instance), which would involve declaring a submethod that redefines `self.regexp`.
Letting @patrickvonplaten chime in if possible on correctness/speed for this.<|||||>Yeah `<s>` should not be split to single characters. It would also be important to make sure that newly added tokens of whatever character length are not split.
I think if all ByT5Tokenizer tests pass then a change to speed up the tokenizer is ok<|||||>> Yeah `<s>` should not be split to single characters.
I made a simple test:
```python
from transformers import ByT5Tokenizer
tok = ByT5Tokenizer.from_pretrained(pretrained_model_name_or_path="google/byt5-small")
token = tok.tokenize("This <unk> is <pad> a </s> test <s> with some special tokens.")
print(token)
```
It prints:
`['T', 'h', 'i', 's', ' ', '<unk>', ' ', 'i', 's', ' ', '<pad>', ' ', 'a', ' ', '</s>', ' ', 't', 'e', 's', 't', ' ', '<', 's', '>', ' ', 'w', 'i', 't', 'h', ' ', 's', 'o', 'm', 'e', ' ', 's', 'p', 'e', 'c', 'i', 'a', 'l', ' ', 't', 'o', 'k', 'e', 'n', 's', '.']`
So `<s>` is split. It does not split this: `<unk>, <pad> and </s>`.<|||||>@Narsil and @patrickvonplaten in debugger it looks like this:

The pattern is `'(<pad>|</s>|<unk>)'` but NOT `<s>` or something else.<|||||>@PhilipMay Please provide a script for the benchmark, it would really help assess speed.
As for the example you're right `<s>` doesn't seem to be tokenized. (it's showcased in patrick's example)<|||||>> @PhilipMay Please provide a script for the benchmark, it would really help assess speed.
Like so?
```python
from transformers import ByT5Tokenizer
from datasets import load_dataset
import time
dataset = load_dataset('cnn_dailymail', '3.0.0', split='train')
articles = [d["article"] for d in dataset][:1000]
tok = ByT5Tokenizer.from_pretrained(pretrained_model_name_or_path="google/byt5-small")
start_time = time.time()
for a in articles:
_ = tok.tokenize(a)
print("--- %s seconds ---" % (time.time() - start_time))
```<|||||>Mind checking out this: https://github.com/huggingface/transformers/pull/13119
I got something like 2X. Still suboptimal, but much of the overhead now lies in all the wrapping code, which would be much more tedious to remove. If you want to try, please go ahead ! <|||||>Ok, the PR is already a nice boost. Turns out most of the performance loss is caused by special tokens (the 125 extra_ids), which are quite unlikely to appear in your text.
The current code (for slow tokenizers) is quite optimized for low number of special_tokens, which is not the case here.
If you are able to afford being incorrect (because you know your text doesn't contain <extra_id_XX> that should be processed) then, you can simply save the tokenizer, REMOVE those extra_ids and load it again.
Processing 1000 sentences
Current master: 2.4s
Optimize_byt5 branch: 0.47s
Without extra_ids : 0.07s
Is that enough for your use case ?
How to remove extra_ids simply:
```python
tok = ByT5Tokenizer.from_pretrained(pretrained_model_name_or_path="google/byt5-small")
# CAVEAT: This will break some functionality, use with caution
tok.unique_no_split_tokens = ["</s>", "<pad>", "<unk>"]
```
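For reference, a self-contained sketch of the precompiled-regexp idea from this thread; the token list and function are illustrative and not the actual `ByT5Tokenizer` implementation:
```python
import re

SPECIAL_TOKENS = ["<pad>", "</s>", "<unk>"]  # the real tokenizer also registers 125 <extra_id_XX> tokens
# Compiled once instead of being rebuilt and re-parsed on every tokenize() call
SPECIAL_TOKENS_RE = re.compile("(" + "|".join(re.escape(t) for t in SPECIAL_TOKENS) + ")")

def byte_tokenize(text):
    tokens = []
    for sub_text in SPECIAL_TOKENS_RE.split(text):
        if not sub_text:
            continue
        if sub_text in SPECIAL_TOKENS:
            tokens.append(sub_text)
        else:
            # ByT5-style byte-level fallback for ordinary text
            tokens.extend(chr(i) for i in sub_text.encode("utf-8"))
    return tokens

print(byte_tokenize("This <unk> is <pad> a </s> test."))
```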
|
transformers | 12,883 | closed | Distributed TPU training with run_mlm duplicate data | ## Environment info
- `transformers` version: 4.10.0 (currently master)
- Platform: TPU VM3.8 -- Ubuntu 20.04.2 LTS
- Python version: 3.8.10
- PyTorch version (GPU?): XLA - 1.8.1
- Tensorflow version (GPU?): None
- Using GPU in script?: None
- Using distributed or parallel set-up in script?: Using `examples/pytorch/language-modeling/run_mlm_no_trainer.py` which is using Accelerator
### Who can help
@sgugger @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
I have modified small things in `examples/pytorch/language-modeling/run_mlm_no_trainer.py` and changes as follow (can be reached at https://github.com/akalieren/transformers-master)
1. Defined mp_fn to training script.
2. Added `streaming_data=True` to Dataset Class
3. Deleted the `tpu_num_cores` argument from xla_spawn.py sys.args since it threw an error.
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name) Training MLM from scratch
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Clone modified script
`git clone https://github.com/akalieren/transformers-master`
2. `export XRT_TPU_CONFIG="localservice;0;localhost:51011"`
3. Install the required libraries (I did not add the extra packages to requirements.txt, to highlight that they are not stated in the official example)
```
pip install transformers-master
pip install .
pip install -r examples/pytorch/language-modeling/requirements.txt
pip install accelerate
pip install datasets[streaming]
```
4. Run command
```
python3 examples/pytorch/xla_spawn.py --num_cores 8 examples/pytorch/language-modeling/run_mlm_no_trainer.py --model_type "roberta" --per_device_eval_batch_size 512 --per_device_train_batch_size 512 --max_train_steps 1000000 --preprocessing_num_workers 50 --pad_to_max_length --tokenizer_name "./tokenizers/Roberta/" --dataset_name='oscar' --dataset_config_name='unshuffled_deduplicated_fr' --data_streaming=True --max_seq_length 512 --line_by_line=True
```
Note: Without xla_spawn, the Accelerator uses only one core. That's why I changed it; with 1 core it runs, but slowly.
```
2021-07-26 00:30:54.355600: E tensorflow/core/framework/op_kernel.cc:1693] OpKernel ('op: "TPURoundRobin" device_type: "CPU"') for unknown op: TPURoundRobin
2021-07-26 00:30:54.355659: E tensorflow/core/framework/op_kernel.cc:1693] OpKernel ('op: "TpuHandleToProtoKey" device_type: "CPU"') for unknown op: TpuHandleToProtoKey
07/26/2021 00:31:13 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 0
Local process index: 0
Device: xla:1
Use FP16 precision: False
Downloading and preparing dataset oscar/unshuffled_deduplicated_tr (download: 9.68 GiB, generated: 26.43 GiB, post-processed: Unknown size, total: 36.10 GiB) to /home/akali/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_tr/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
07/26/2021 00:31:20 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 1
Local process index: 1
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:20 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 5
Local process index: 5
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:20 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 7
Local process index: 7
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:20 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 6
Local process index: 6
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:21 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 2
Local process index: 2
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:21 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 4
Local process index: 4
Device: xla:0
Use FP16 precision: False
07/26/2021 00:31:23 - INFO - run_mlm_no_trainer - Distributed environment: TPU
Num processes: 8
Process index: 3
Local process index: 3
Device: xla:0
Use FP16 precision: False
0 examples [00:00, ? examples/s]07/26/2021 00:31:44 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/657d72dc352d822d0496bb9f519cf0de87b87064d56024d9d1ac5585568125b1
718146 examples [00:48, 14431.60 examples/s]07/26/2021 00:32:32 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/f9b566f31181a53d426a2dc982a1b1de06cc92541de83cee688e5c57f4874300
1471415 examples [01:36, 13302.22 examples/s]07/26/2021 00:33:21 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/21f0672cc841442e067c7ea57471788dbd350f889acbd8028e75edb9efcacddb
2229278 examples [02:24, 16466.88 examples/s]07/26/2021 00:34:09 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/c027123c743fb1e0079bcd3be75f0ba6be89c6997f6b000e97c33f9c3d9c2742
2997743 examples [03:13, 18057.68 examples/s]07/26/2021 00:34:58 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/d7cc7a7389a8187b043cf359794e6fdc7783d5d0b6e7d737381e89d34c25e441
3772944 examples [04:02, 15671.97 examples/s]07/26/2021 00:35:46 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/a0175299b2eb4767f27e4f73c6848609be453fa5eb8d36dd6f8ecfd2c60a1e01
4569497 examples [04:51, 18017.92 examples/s]07/26/2021 00:36:35 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/6b432b7a552ccc65da0810808506bb7570162447776507b2b47319a230b48aa3
5356241 examples [05:39, 16205.13 examples/s]07/26/2021 00:37:24 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/ef34899af5cac3b75a798286fad2be831177c0833dab12c19c139b694d8c3544
6151458 examples [06:29, 11766.89 examples/s]07/26/2021 00:38:14 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/9926c88e0b8a2013f57aaef129cb9978ff129b8bfb3408c1194852c806249f9d
6957212 examples [07:18, 18684.33 examples/s]07/26/2021 00:39:03 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/aae79457ef2f44cd9ef24584b894c033d9099e6bc8e15b661a349cc185a230d7
7763558 examples [08:07, 16309.71 examples/s]07/26/2021 00:39:52 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/0274c31e96e2728161263b15aa4da982825eec91c7b0693756a890e76d1167c4
8565051 examples [08:57, 17289.47 examples/s]07/26/2021 00:40:41 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/f6423f5486261f771097352c7e2ae07643ad0f2fcf5f5d68c6a9921f8bd1e6a3
9397678 examples [09:46, 16643.61 examples/s]07/26/2021 00:41:30 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/2edc5ca535c1ea46aaacebf7f68a3553aa5d92b70e574f05709fa02dc52b5f4e
10231465 examples [10:36, 12871.41 examples/s]07/26/2021 00:42:20 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/3a06d248b02355ecdcf097df97a9e670db72c42456df9d04b15d4187933263ed
11075179 examples [11:26, 16567.73 examples/s]07/26/2021 00:43:11 - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = /home/akali/.cache/huggingface/datasets/downloads/0e3af1310ea118f4a5e8c13b40a561ae20ba209ae196d633a68155af35ec049c
Dataset oscar downloaded and prepared to /home/akali/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_tr/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2. Subsequent calls will reuse this data.
07/26/2021 00:43:42 - WARNING - datasets.builder - Reusing dataset oscar (/home/akali/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_tr/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2)
07/26/2021 00:43:42 - WARNING - run_mlm_no_trainer - You are instantiating a new config instance from scratch.
loading configuration file ./tokenizers/Roberta/config.json
Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.10.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 52000
}
Didn't find file ./tokenizers/Roberta/tokenizer.json. We won't load it.
Didn't find file ./tokenizers/Roberta/added_tokens.json. We won't load it.
loading file ./tokenizers/Roberta/vocab.json
loading file ./tokenizers/Roberta/merges.txt
loading file None
loading file None
loading file ./tokenizers/Roberta/special_tokens_map.json
loading file ./tokenizers/Roberta/tokenizer_config.json
loading configuration file ./tokenizers/Roberta/config.json
Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.10.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 52000
}
loading configuration file ./tokenizers/Roberta/config.json
Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.10.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 52000
}
# AFTER THIS POINT:
Script started to print tqdm process multiple times like that:
----> LOOK HERE Running tokenizer on dataset line_by_line #43: 19%|███████████████████████▏ | 43/221 [12:20<51:05, 17.22s/ba]
Running tokenizer on dataset line_by_line #36: 19%|███████████████████████▏ | 43/221 [12:24<51:20, 17.30s/ba]
Running tokenizer on dataset line_by_line #29: 19%|███████████████████████▏ | 43/221 [12:28<51:37, 17.40s/ba]
Running tokenizer on dataset line_by_line #38: 19%|███████████████████████▏ | 43/221 [12:22<51:15, 17.28s/ba]
Running tokenizer on dataset line_by_line #5: 18%|█████████████████████▏ | 39/221 [12:33<58:34, 19.31s/ba]
Running tokenizer on dataset line_by_line #21: 19%|███████████████████████▏ | 43/221 [12:30<51:45, 17.45s/ba]
Running tokenizer on dataset line_by_line #46: 19%|███████████████████████▏ | 43/221 [12:19<51:01, 17.20s/ba]
Running tokenizer on dataset line_by_line #38: 19%|███████████████████████▏ | 43/221 [12:25<51:25, 17.34s/ba]
Running tokenizer on dataset line_by_line #42: 19%|███████████████████████▏ | 43/221 [12:23<51:19, 17.30s/ba]
Running tokenizer on dataset line_by_line #35: 19%|███████████████████████▏ | 43/221 [12:26<51:31, 17.37s/ba]
Running tokenizer on dataset line_by_line #21: 19%|███████████████████████▏ | 43/221 [12:30<51:48, 17.46s/ba]
Running tokenizer on dataset line_by_line #45: 19%|███████████████████████▏ | 43/221 [12:23<51:17, 17.29s/ba]
Running tokenizer on dataset line_by_line #35: 19%|███████████████████████▏ | 43/221 [12:27<51:34, 17.38s/ba]
----> AND HERE Running tokenizer on dataset line_by_line #43: 18%|█████████████████████
As can be seen, process #43 is printed twice, but the percentages are inconsistent. Since the percentage cannot decrease, I think the preprocessing is being run separately on each core.
```
## Expected behavior
I expected to run the training script on 8 cores at normal speed. But it is stopped at this point and does not continue from here, even without my small changes.
| 07-26-2021 01:53:59 | 07-26-2021 01:53:59 | Dataset streaming has not been tested on any of the examples, so I'm not sure it works, especially for distributed training on TPUs.<|||||>I have been working on this feature for several days. In particular, I am trying to implement an IterableDataset which reads preprocessed data from the cloud. Do you think the problem is about streaming or about the IterableDataset? However, using a PyTorch IterableDataset in distributed training can be tricky, as can be seen from this [issue](https://github.com/pytorch/ignite/issues/1076).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
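As a side note on the IterableDataset question above, a generic, illustrative sketch of per-process sharding; the rank and world size would come from the launcher (for example `xm.get_ordinal()` and `xm.xrt_world_size()` on TPU), and none of this is taken from the example script:
```python
from torch.utils.data import IterableDataset

class ShardedStream(IterableDataset):
    """Give each distributed process a disjoint slice of a streamed dataset."""

    def __init__(self, generator_fn, rank, world_size):
        self.generator_fn = generator_fn
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        # Round-robin assignment so every example is consumed by exactly one process
        for i, example in enumerate(self.generator_fn()):
            if i % self.world_size == self.rank:
                yield example
```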
transformers | 12,882 | closed | loss sudden increase | 
I tried this a few times | 07-26-2021 01:05:20 | 07-26-2021 01:05:20 | Hi,
We like to keep Github issues for bugs/feature requests. For training related questions, please see the [forum](https://discuss.huggingface.co/). Also, make sure to make it possible for people to reproduce your issue, by providing code or a colab notebook.
Thanks!
|
transformers | 12,881 | closed | Tensorflow GPT-2 model incapable of freezing layers | I am trying to finetune gpt-2 by freezing some layers according to [this article](https://arxiv.org/pdf/2103.05247.pdf). Freezing the specified layers doesn't change the number of trainable parameters, even though accessing the .trainable attribute of the weights of the model shows that they are False.
```python
from transformers import TFAutoModelForCausalLM
model = TFAutoModelForCausalLM.from_pretrained('gpt2')
#Picking random weight
w = model.weights[6]
w #<tf.Variable 'tfgp_t2lm_head_model_2/transformer/h_._0/attn/c_proj/weight:0' shape=(768, 768) dtype=float32, numpy=...
w._trainable = False
w.trainable #False
#Confirming that trainable is false in the model
model.weights[6].trainable #False
model.compile(...)
model.summary()
```
prints
```
Model: "tfgp_t2lm_head_model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
transformer (TFGPT2MainLayer multiple 124439808
=================================================================
Total params: 124,439,808
Trainable params: 124,439,808
Non-trainable params: 0
_________________________________________________________________
```
Using the ```.get_weights()``` method returns only numpy arrays, so I use .weights.
Freezing all weights the same way in the run_clm.py tensorflow script results in the same summary, and the loss value at each step does decrease, indicating that the weights are being updated. Am I missing something or is this a bug?
| 07-25-2021 21:48:08 | 07-25-2021 21:48:08 | Hi, I think this is going to be quite difficult in Keras given the way our models are implemented, as I believe Keras only supports freezing weights on Layer objects, and we haven't implemented the individual pieces of GPT2 as Keras Layers.
If you'd like to only train specific pieces of your model, I'd recommend writing a manual eager training loop with GradientTape, see [here](https://www.tensorflow.org/guide/autodiff). For example, something like this (note: untested code!) would work, assuming you have a batch of data as a dict with at least `'input_ids'` and `'labels'` keys:
```
trainable_weights = model.weights[6:8]  # Just picking a list of some random weights to update, you can pick specific ones!
optimizer = tf.keras.optimizers.Adam(5e-5)
with tf.GradientTape() as tape:
    outputs = model(batch)
    loss = outputs['loss']
# Compute gradients only for the chosen weights and apply them to those same weights
grads = tape.gradient(loss, trainable_weights)
optimizer.apply_gradients(zip(grads, trainable_weights))
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Are there any updates or workarounds for freezing GPT-2 model layers in TensorFlow?
Thank you<|||||>(in case you stumble upon this issue and you have the same question, check #18282) |
transformers | 12,880 | closed | RoBERTa: Truncation error: Sequence to truncate too short to respect the provided max_length | ## Environment info
- `transformers` version: 4.9.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): TPU
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Error is coming with both GPU and TPU
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
RoBERTa - @LysandreJik, @patrickvonplaten, @patil-suraj,
Library:
- tokenizers: @LysandreJik
## Information
Model I am using RoBERTa model for SQuAD 2.0 and getting below error when trying to tokenize the Question and context pair:
The problem arises when using:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQuAD 2.0
## To reproduce
Steps to reproduce the behavior:
I am trying to tokenize SQuAD 2.0 dataset using roberta-base tokenizer and model but it has started giving me below error.
This code snippet was working till few days before and now it is giving below error without changing anything.
```Python
model_args = ModelArguments(
model_checkpoint=model_checkpoint,
token_checkpoint=token_checkpoint,
squad_v2=True,
max_length=384,
doc_stride=128,
batch_size=8,
n_best_size=25,
max_answer_length=30,
min_null_score=7.0, ##FOR ROBERTa
NA_threshold=-3,
pad_side="right")
token_checkpoint = "roberta-base"
model_checkpoint= "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(token_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint,
attention_probs_dropout_prob=0.2,
hidden_dropout_prob=0.2)
datasets = load_dataset("squad_v2" if model_args.squad_v2 else "squad")
tokenized_examples = tokenizer(
datasets["question" if model_args.pad_side else "context"],
datasets["context" if model_args.pad_side else "question"],
truncation="only_second" if model_args.pad_side else "only_first",
max_length=model_args.max_length,
stride=model_args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
```
**_ERROR messages:_**
Truncation error: Sequence to truncate too short to respect the provided max_length
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "<ipython-input-14-3842fd6863c2>", line 75, in pipeline
tokenized_datasets = datasets.map(prepare_train_features, batched=True, batch_size=1000,remove_columns=datasets["train"].column_names)
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1679, in map
desc=desc,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2014, in _map_single
offset=offset,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1900, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "<ipython-input-6-54e98dcfc55e>", line 14, in prepare_train_features
padding="max_length",
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2385, in __call__
**kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2570, in batch_encode_plus
**kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 163, in _batch_encode_plus
return super()._batch_encode_plus(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py", line 408, in _batch_encode_plus
is_pretokenized=is_split_into_words,
Exception: Truncation error: Sequence to truncate too short to respect the provided max_length
## Expected behavior
SQuAD 2.0 dataset should be tokenized without any issue.
| 07-25-2021 17:38:24 | 07-25-2021 17:38:24 | After further analysis, I could see that the RoBERTa tokenizer is not able to handle a question in the SQuAD 2.0 dataset at index "107709" due to a lot of blank spaces at the start of the question; its length is 25,651 characters.
While other tokenizers are able to handle this.
```python
print("question length | 107709:",len(dataset[107709]['question']))
print("context | 107709:",dataset[107709]['question'])
```
### Output
question length | 107709: 25651
context | 107709: What radiates two lobes perpendicular to the antennas axis?
<|||||>I just started running into this late last week in an internal test.
Is this new? Has something changed ? <|||||>just happened to me as well on SQuAD1.1
<|||||>The change is due to https://github.com/huggingface/datasets/pull/2586 which changed the SQuAD dataset. The failure is normal in the sense that the tokenizer is asked to truncate tokens from the second sentence (context) when it's actually the first one (question) that is too long. Removing the whitespace at the beginning of the question fixes this (this is why it doesn't happen with a BERT tokenizer: the BERT tokenizer strips that whitespace, whereas the RoBERTa tokenizer keeps all the individual spaces).<|||||>I have fixed the example notebook and the PR mentioned above shows how to fix it in the example scripts.<|||||>Thanks for fixing this issue. <|||||>I very much appreciate this thread for helping me resolve this problem when it happened to me, too. I just wanted to make others aware that there is still an example notebook that will result in this error if it is used with roBERTa.
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb
The correct code can be found here: https://huggingface.co/course/chapter7/7, in `preprocess_training_examples` and `preprocess_validation_examples`, which include a line to strip the leading whitespace from the question before tokenization.
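For reference, a minimal sketch of that preprocessing tweak; the function name and default values are illustrative:
```python
def prepare_features(examples, tokenizer, max_length=384, doc_stride=128):
    # Strip leading whitespace so "only_second" truncation only ever shortens the context
    questions = [q.lstrip() for q in examples["question"]]
    return tokenizer(
        questions,
        examples["context"],
        truncation="only_second",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
```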
<|||||>Hi,
Thank you for the information. I used a **BERT** model, and my questions are a lot longer than the answers; even after removing the whitespace as the code does, I still got the same error. Do you know how to fix it? |
transformers | 12,879 | closed | Feature Request: Add support for --do_train/eval/predict arguments in the TF examples script for token classification | It would be truly awesome if the TensorFlow example of token classification could mimic the capabilities of the PyTorch implementation, by providing additional argument-functionality, including `--do_train`, `--do_eval` and `--do_predict`.
Furthermore, giving the user the opportunity to provide a custom dataset through the `--predict_file` argument. 💯
I see that you, @Rocketknight1, are already doing some awesome work, so perhaps you know whether this will be implemented anytime soon?
https://github.com/huggingface/transformers/blob/9ff672fc4d84db3b077e03ea22e2dafbd5d99fa4/examples/tensorflow/token-classification/run_ner.py#L494-L527
| 07-25-2021 17:19:49 | 07-25-2021 17:19:49 | Hi, you can use the `--train_file` and `--validation_file` arguments to pass custom data to the model! Are you specifically interested in doing predictions too?<|||||>Yes! It is the predictions I think would be awesome to have the option to do! 🥇 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I didn't mean to let this go stale! We're planning a rewrite of our examples with the new data pipeline soon - I'll try to make sure we include the option for a `--predict_file` when that happens.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,878 | closed | Trainer accumulates logits | Hi,
I am using `transformers.Trainer` to pre-train a model with MLM. From line 2213 in `trainer.py` I can see the logits obtained on the evaluation step are accumulated:
`preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)`
This makes it impossible to use a reasonably sized validation dataset, since already 1000 examples with `max_length = 512` and `vocab_size = 30522` produce 1000 * 512 * 30522 / 1024^3 > 14 G logit values (about 58 GB in float32; e.g. the **c4** dataset has a validation set of 365,000 examples). This can be corrected if the additional metrics are calculated at each validation step for each batch separately, rather than at the end.
This implies lines 2272-2275 should be moved inside the _for loop_. If you agree with all I have stated, I can do it by my own and come up with the merge request. | 07-25-2021 13:39:30 | 07-25-2021 13:39:30 | @sgugger <|||||>No, metrics usually can't be computed on a batch-per-batch basis as it usually gives the wrong result when the metric is not a mean (like precision, recall or F1 score).
For metrics in language modeling, you should use your own manual evaluation loop after training.<|||||>Fair enough, but what if I do not need any metrics at all and only need to track the loss value? I still cannot use the validation set, since the logits will be accumulated anyway.
<|||||>If you don't pass any `compute_metrics` function, they won't be accumulated, or you can force it with `prediction_loss_only=True` in your `TrainingArguments`.<|||||>Thanks a lot! Did not know about this parameter. To sum up, while pre-training a model, we need to write a custom evaluation loop to use custom metrics? Probably it is worth adding a special parameter for the metrics, indicating whether its computation can be split into batches or not? I can handle this.<|||||>Yes, we don't have anything setup in the Trainer for metric accumulation, so basically any time you want to avoid accumulating logits (so all language modeling tasks basically), you will need a custom training loop.
We'll try to come up with an API to make it easier to do a batch-by-batch accumulation but that probably will need a rewrite of some pieces of the Trainer, which in turn might cause some breaking changes. So it's probably going to be for v5 |
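For readers looking for the quick fix mentioned above, a minimal sketch of the `prediction_loss_only` setting; the other argument values are placeholders:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    prediction_loss_only=True,  # only the loss is gathered during evaluation, so logits are not accumulated
)
```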
transformers | 12,877 | closed | run_mlm.py errors when running validation only | If you run `run_mlm.py` for validation only (i.e. pass `--do_eval` but don't pass `--do_train`), the script only runs if you still pass `--train_file`
This is because the train file is used to infer the dataset file type (i.e. text):
https://github.com/huggingface/transformers/blob/9ff672fc4d84db3b077e03ea22e2dafbd5d99fa4/examples/pytorch/language-modeling/run_mlm.py#L281
despite `--train_file` being an optional argument:
https://github.com/huggingface/transformers/blob/9ff672fc4d84db3b077e03ea22e2dafbd5d99fa4/examples/pytorch/language-modeling/run_mlm.py#L131
It's useful to be able to run eval only. It's not a big deal to pass --train_file despite it not being used, but given that it's optional, the code should probably use train_file only if it's not None and fall back to validation_file otherwise (see the sketch below).
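A tiny sketch of that fallback (names are illustrative, not the actual script code):
```python
def infer_extension(train_file=None, validation_file=None):
    # Prefer the train file when given, otherwise fall back to the validation file
    data_file = train_file if train_file is not None else validation_file
    return data_file.split(".")[-1]

print(infer_extension(validation_file="eval.txt"))  # -> "txt"
```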
| 07-25-2021 07:12:32 | 07-25-2021 07:12:32 | I'm not sure what you mean, could you check you have the latest version of the script? There is a test of whether the `train_file` is None or not at the line you mention, and then the `validation_file` is used instead if the `train_file` has not been set.<|||||>@sgugger You're right, it was fixed in this commit https://github.com/huggingface/transformers/commit/9490d668d2f59ad2e7a4db3dc7ed2f9684af369c#diff-5f4433e38787dd047b331ec822da660195a786ea9350ad611623cd03d468b102
I'm using version 4.8.0 |
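For reference, a paraphrased sketch of the fallback logic the fix uses (not the verbatim lines from `run_mlm.py`; `data_args` is the script's parsed arguments object):
```python
# Infer the dataset file type from whichever file was provided.
data_files = {}
if data_args.train_file is not None:
    data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
    data_files["validation"] = data_args.validation_file

# Use the train file when present, otherwise fall back to the validation file.
reference_file = data_args.train_file if data_args.train_file is not None else data_args.validation_file
extension = reference_file.split(".")[-1]
if extension == "txt":
    extension = "text"
```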
transformers | 12,876 | closed | New transformers.onnx CLI does not support ONNX quantization | # 🚀 Feature request
The new transformers.onnx CLI introduced in 4.9.0 does not support ONNX quantization, which is a notable feature of the `convert_graph_to_onnx.py` script that is now missing.
Semi-related: the [source quantize() function](https://github.com/microsoft/onnxruntime/blob/79097ef5535cc5ac18fc8e9010c99de08df21340/onnxruntime/python/tools/quantization/quantize.py#L56) that script leverages is deprecated, so it might be a good time to switch to `quantize_dynamic()` too.
| 07-24-2021 20:53:46 | 07-24-2021 20:53:46 | Hi @minimaxir,
Thanks for reporting this.
With the new "configuration based" capabilities we are taking a very different approach from the initial `convert_graph_to_onnx.py` which was relying on heuristics to match dynamic axes and was exporting them in the wrong order quite often.
The new approach focuses on a more reliable export and on producing only "raw" ONNX graphs, which can then be consumed by different "runtimes", not only onnxruntime. Thus we are no longer exposing optimization/quantization features as part of transformers.
Still, we are currently working on another project that will provide such features, leveraging the new configuration-based export. It should be available in August, and ONNX Runtime will be one of the first components we will provide optimizations for.
Stay tuned 🤗 <|||||>SGTM. (it's not a dealbreaker as Microsoft's approach is to create a raw ONNX and quantize it too).
Excited to see future ONNX support!<|||||>@mfuntowicz Excited to see onnxruntime-supported BART/MBART models<|||||>Hi @mfuntowicz, any updates on the new project? Thanks. |
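In the meantime, a minimal sketch of quantizing a graph exported by `transformers.onnx` with ONNX Runtime's `quantize_dynamic` (the file paths are placeholders; available options depend on the onnxruntime version):
```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamically quantize the weights of the raw exported graph to int8.
quantize_dynamic(
    model_input="onnx/model.onnx",            # placeholder: output of `python -m transformers.onnx ...`
    model_output="onnx/model-quantized.onnx",
    weight_type=QuantType.QInt8,
)
```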
transformers | 12,875 | closed | Model card updated/deleted | Hi,
I can see that the model "tuner007/t5_abs_qa" has been removed from the model hub... Is there anything I need to update?
@patrickvonplaten [Refer](https://huggingface.co/tuner007/t5_abs_qa/commit/faf30925ced0f25d0d5d321fb0ada04caaf5568d)
/thanks | 07-24-2021 20:47:47 | 07-24-2021 20:47:47 | Hey @tuner007,
Thanks a lot for your issue! I'm very sorry that this has happened - this was a bug from my side :-/ I corrected it and your model should work as before now :-)<|||||>
No worries ! thanks |
transformers | 12,874 | closed | Finetuning GPT-2 on small datasets | I have a relatively small dataset that I've scraped from my Discord server. I wanted to make a GPT-2 chatbot with it, but the data is relatively small (3782031 characters counting the eos token). Training for a small number of epochs did nothing for any checkpoint related to gpt-2 (I tried distilbert, gpt-2, dialoGPT-small, and others), and training for a large number of epochs absolutely destroyed the whole model: it was barely able to generate coherent text at all, producing either special characters, jumble, or nothing at all. I've tested the same script with a much larger dataset and it worked just fine, so I can only assume it's because of the dataset size.
I was trying to find ways to freeze the GPT-2 base model and train just the LM head, but since the LM head is tied to the embedding layer, that isn't possible... If there isn't a way to freeze everything except the head of the model, what else should I do? I've been trying to complete this personal project for quite a while now, and I'm out of options at this point. I'm using a custom TF script from the examples folder on TPU, since the PyTorch version makes the memory usage blow up on Colab. | 07-24-2021 19:48:42 | 07-24-2021 19:48:42 | I've finally found [this article](https://arxiv.org/pdf/2103.05247.pdf), and it seems promising. Going to try it out; I'll report how it went.<|||||>For training-related questions, please refer to the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.
For example, you can find all fine-tuning GPT-2-related questions [here](https://discuss.huggingface.co/search?q=fine-tuning%20gpt2).
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
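One option that remains possible despite the tied LM head, sketched here under the assumption of the PyTorch `GPT2LMHeadModel` (a TF version would set `trainable = False` on the corresponding layers): freeze most of the transformer blocks and fine-tune only the top ones plus the tied embeddings.
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze the whole transformer, then unfreeze only the last two blocks
# and the input embeddings (tied to the LM head, so the head stays trainable too).
for param in model.transformer.parameters():
    param.requires_grad = False
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True
model.transformer.wte.weight.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```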
transformers | 12,873 | closed | Possibly wrong API documentation for BigBirdTokenizerFast | - `transformers` version: v.4.9.0
### Who can help
Documentation: @sgugger
## Information
At this URL:
https://huggingface.co/transformers/model_doc/bigbird.html#transformers.BigBirdTokenizerFast
The doc says,
`bos_token (str, optional, defaults to "[CLS]")`
and
`eos_token (str, optional, defaults to "[SEP]")`
but the actual code is:
```
def __init__(
self,
vocab_file=None,
tokenizer_file=None,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
pad_token="<pad>",
```
Maybe the API documentation needs a fix? The explanation already seems clear; only the default values of those params are wrong.
| 07-24-2021 18:59:27 | 07-24-2021 18:59:27 | Thanks for flagging, should be fixed by the PR mentioned above! |
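A quick way to check the actual defaults against the docs, assuming the `google/bigbird-roberta-base` checkpoint (the values are expected to match the code quoted above rather than the old documentation):
```python
from transformers import BigBirdTokenizerFast

tok = BigBirdTokenizerFast.from_pretrained("google/bigbird-roberta-base")
print(tok.bos_token, tok.eos_token)  # expected: '<s>' '</s>', matching the code defaults
print(tok.cls_token, tok.sep_token)  # '[CLS]' and '[SEP]' are separate special tokens
```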
transformers | 12,872 | closed | Allow multilabel classification mode for widgets in the models repo | # 🚀 Feature request
1. Enable multilabel classification mode and regression mode for the widgets in the model repo.
2. Create the corresponding tags that can be read from the model card.
## Motivation
Models for sequence classification by default support three modes: binary/multiclass classification, multilabel classification, and regression. However, the widgets in the model repository support only multiclass mode (where probabilities of classes sum to 1). This can be misleading for users who try out the models using the widgets. For example, my model https://huggingface.co/cointegrated/rubert-tiny-toxicity is intended for multilabel classification, but the widget normalizes the predicted probabilities to sum to 1, which leads to confusion for potential users of the model.
## Your contribution
If you show me where to start, I could start working on implementing this feature. However, currently I don't know what part of the Huggingface repository is responsible for widgets and underlying computations.
| 07-24-2021 16:15:48 | 07-24-2021 16:15:48 | Hi @avidale, I'm closing this issue as I think it is an accidental duplicate of #12871.
Also, I've transferred #12871 to [huggingface_hub/#222](huggingface/hub-docs#23) since that's where the widgets' src is |
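Until the widgets support multilabel mode, a minimal sketch of reading such a model's outputs as independent per-label probabilities (the input string is just a placeholder):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cointegrated/rubert-tiny-toxicity"  # the multilabel model mentioned above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("пример текста", return_tensors="pt")  # placeholder input
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)  # independent per-label probabilities, not normalized to sum to 1
print(dict(zip(model.config.id2label.values(), probs[0].tolist())))
```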
transformers | 12,870 | closed | Bart Generation | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0
- Platform:Linux
- Python version:3.7
- PyTorch version (GPU?):1.9.0
- Using GPU in script?:true
- Using distributed or parallel set-up in script?:false
### Who can help
@patrickvonplaten
@patil-suraj
@sgugger
@patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts:run_summarization.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: summarization
* [ ] my own task or dataset: (give details below)
### Question 1
In `src/examples/pytorch/summarizations/run_summarization.py`, I chose BART as my model.
It uses `BartTokenizer` and `DataCollatorForSeq2Seq`, so the labels passed to the data collator are `<bos> summarization <eos>` and the automatically generated `decoder_input_ids` are `<eos> <bos> summarization`, because the `decoder_start_token_id` in the BART config is the same as `<eos>`. Is there any special reason to do it this way? I think the `labels` should be `summarization <eos>` and the `decoder_input_ids` should be `<bos> summarization`.
### Question 2
Why is `decoder_start_token_id` the same as `<eos>`? This means BART always uses `<eos>` as the first token when generating; isn't this against the way BART was trained? | 07-24-2021 16:13:03 | 07-24-2021 16:13:03 | Bart was trained to have EOS as its start_token_id, and we've noticed that forcing the first token to be BOS gives better results, see: https://github.com/huggingface/transformers/issues/3668<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
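A small sketch of how this is wired in practice, assuming `facebook/bart-large-cnn` and a reasonably recent transformers version (`forced_bos_token_id` may not exist on older configs):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

name = "facebook/bart-large-cnn"
model = BartForConditionalGeneration.from_pretrained(name)
tokenizer = BartTokenizer.from_pretrained(name)

cfg = model.config
# Decoding starts from </s> (eos); the first *generated* token is then forced to be <s> (bos).
print(cfg.decoder_start_token_id, cfg.eos_token_id)
print(getattr(cfg, "forced_bos_token_id", None), cfg.bos_token_id)

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=20)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```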
transformers | 12,869 | closed | I donnot want print trainer's logging info | torch 1.18.0,tf 1.14.0
Everything worked well yesterday.
I didn't change anything, but today when I use the Trainer's predict, it prints this information before the output.
This situation didn't occur before.
```
***** Running Prediction *****
Num examples = 1
Batch size = 256
```
Now I want this info not to be printed.
I checked the documentation, but didn't find a parameter that controls whether the logger prints it.
```
In predict_loop function
batch_size = dataloader.batch_size
num_examples = self.num_examples(dataloader)
logger.info(f"***** Running {description} *****")
logger.info(f" Num examples = {num_examples}")
logger.info(f" Batch size = {batch_size}")
losses_host: torch.Tensor = None
preds_host: Union[torch.Tensor, List[torch.Tensor]] = None
labels_host: Union[torch.Tensor, List[torch.Tensor]] = None
```
thanks @sgugger | 07-24-2021 07:20:17 | 07-24-2021 07:20:17 | You can use the argument `log_level` to adjust the level of the logger. If you set it to "warning", it won't print this.<|||||>thanks, it works |
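A minimal sketch of the `log_level` suggestion (assuming a transformers version recent enough to expose it); alternatively the library-wide verbosity can be lowered:
```python
from transformers import TrainingArguments
from transformers.utils import logging

# Option 1: per run, through the training arguments.
args = TrainingArguments(output_dir="out", log_level="warning")

# Option 2: library-wide; this silences the "***** Running Prediction *****" info messages.
logging.set_verbosity_warning()
```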
transformers | 12,868 | closed | MT5-base tokenizer can't decode to target language after decoding | ## Environment info
- `transformers` version: 4.9.0
- Platform: Google Colab
- Python version: 3.8+
- PyTorch version (GPU?): 1.9.0+cu102
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
Library:
- tokenizers: @LysandreJik
## Information
Model I am using (MT5):
The problem arises when using:
* [ ] my own modified scripts: When I am finetuning mt5-small model for question answering using mt5ForConditionalGeneration, after running inference, the output is not in the specified language.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (Question Answering)
## To reproduce
link to my notebook: [link](https://colab.research.google.com/drive/12nMMdHul4Avxn38o3LZhsVgVE02I6g2E?usp=sharing)
Steps to reproduce the behavior:
1. Run the inference section
2. Run on any language
3. The model outputs in a mixed language
## Expected behavior
The expected behavior is to produce output in a single language.
| 07-24-2021 06:59:06 | 07-24-2021 06:59:06 | |
transformers | 12,867 | closed | Possible bug in spm-based tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest (4.10.0.dev0)
- Python version: 3.8
- PyTorch version (GPU?): 1.9.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): `mbart-large-50-many-to-many-mmt`
## To reproduce
Running the following script shows that encoding and decoding a Chinese string would not give back the same string (punctuation marks will be normalized):
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-50-many-to-many-mmt', src_lang='zh_CN', tgt_lang='zh_CN')
sentence = '您好,您打算到哪里去呢?'
input = tokenizer(sentence)
output = tokenizer.decode(input['input_ids'], skip_special_tokens=True)
print(output)
print(output == sentence)
```
stdout:
```
您好,您打算到哪里去呢?
False
```
Using slow version of the tokenizer or setting src_lang and tgt_lang attributes directly would give the same results.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expected stdout:
```
您好,您打算到哪里去呢?
True
```
| 07-23-2021 18:18:14 | 07-23-2021 18:18:14 | In fact, this seems to be a problem with other spm based tokenizers too. Other MBART checkpoints as well as MT5 and XLMR models have the same behavior but not multilingual BERT checkpoints. Not sure if this issue has been reported/ discussed before. Any hints are appreciated.<|||||>@patil-suraj - could you take a look here for MBart "many-to-many"?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, was wondering if there are any updates?<|||||>Hi @Mehrad0711 Sorry to only reply now.
I will try to allocate some time this week for it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patil-suraj - ping again here :-)<|||||>@Mehrad0711 @patrickvonplaten Sorry about being super slow here.
I'm not sure if this is really a bug, it looks like the punctuations are normalized by the spm model itself. You could load the original spm model from mbart and see that it normalizes the string during tokenization.
To verify, download the official spm model from here https://github.com/pytorch/fairseq/tree/main/examples/mbart
```python3
import sentencepiece as spm
sp_model = spm.SentencePieceProcessor()
sp_model.Load("mbart.cc25.v2/sentence.bpe.model")
sentence = '您好, 您打算到哪里去呢?'
tokenized = sp_model.encode_as_pieces(sentence)
# => ['▁您', '好', ',', '您', '打算', '到', '哪里', '去', '呢', '?']
decoded = sp_model.decode_pieces(tokenized)
# => '您好,您打算到哪里去呢?'
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,866 | closed | [MPNet] example of fine-tuning MPNet language model on domain specific corpus | # 🚀 Feature request
I'd like to understand if it's possible to fine-tune MPnet model on domain specific corpus. I tried to run following script for MPNet and it seemed to be working (or at least on throwing any errors).
`python run_mlm.py --model_name_or_path microsoft/mpnet-base --dataset_name wikitext --do_train --output_dir tmp/mpnet-output --dataset_config_name wikitext-2-raw-v1`
However, since MPNet combines both MLM and PLM objectives, I'm not clear whether MPNet will actually train properly.
## Motivation
MPNet establishes SOTA benchmarks on number of tasks. It could be useful to have some examples on how to fine-tune MPNet model on specific corpuses and downstream tasks.
| 07-23-2021 18:12:12 | 07-23-2021 18:12:12 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,865 | closed | Add TF multiple choice example | Add a new example of multiple choice (SWAG) training with Keras/TF, remove the previous TFTrainer one. | 07-23-2021 16:35:05 | 07-23-2021 16:35:05 | |
transformers | 12,864 | closed | [Speech2Text] Slow tests are failing on master | Currently the following tests are failing on master:
```
FAILED tests/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech
FAILED tests/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech_batched
``` | 07-23-2021 15:28:54 | 07-23-2021 15:28:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,863 | closed | [Wav2Vec2] Slow pretraining tests are failing on CPU | The following tests are failing on CPU currently:
```
tests/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_inference_integration
tests/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_loss_pretraining
```
-> check if they also fail on GPU. If not add a skip CPU decorator | 07-23-2021 15:21:10 | 07-23-2021 15:21:10 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,862 | open | BatchFeature should cast to `np.float32` by default | Currently the default dtype for Speech Feature Extractors is `numpy.float64` which leads to two problems:
1) It makes the data processing extremely expensive in terms of RAM. Many sound formats are stored in int16 (such as `.wav`) and are then transformed to float64, which unnecessarily increases RAM usage by a factor of 4. We should at least stick to `float32`.
2) Currently we have added some hacks to the Wav2Vec2 and Speech2TextTransformer feature extractors to prevent Double vs. Float dtype mismatches: https://github.com/huggingface/transformers/blob/f6e254474cb4f90f8a168a599b9aaf3544c37890/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L87
The main problem is that `np.asarray([....])` by default creates a np.float64 array and that we just pass that format along.
=> We should either always cast to float32 in BatchFeature (see here: https://github.com/huggingface/transformers/blob/f6e254474cb4f90f8a168a599b9aaf3544c37890/src/transformers/feature_extraction_utils.py#L151) or add a flag `dtype` to BatchFeature.
@patrickvonplaten | 07-23-2021 15:00:38 | 07-23-2021 15:00:38 | |
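A small illustration of the memory argument above (the array sizes are just an example):
```python
import numpy as np

# One minute of 16 kHz int16 audio, roughly what comes out of a .wav file.
pcm = np.zeros(16_000 * 60, dtype=np.int16)

as_float64 = np.asarray(pcm, dtype=np.float64)    # what the current default casting produces
as_float32 = (pcm / 32768.0).astype(np.float32)   # scaled to [-1, 1), half the float64 footprint

print(pcm.nbytes, as_float64.nbytes, as_float32.nbytes)  # 1_920_000, 7_680_000, 3_840_000 bytes
```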
transformers | 12,861 | closed | Asking for consent to publish `_LazyModule` as a standalone PyPI package on GitHub | Hi,
I very much like your `_LazyModule` implementation.
https://github.com/huggingface/transformers/blob/e218249b02465ec8b6029f201f2503b9e3b61feb/src/transformers/file_utils.py#L1945
I would like to reuse it on different other projects. That is why I ask for your consent to publish it as a
standalone PyPI package on GitHub while keeping the license. Are you ok with that? | 07-23-2021 14:59:52 | 07-23-2021 14:59:52 | Tagging @LysandreJik and @sgugger <|||||>Thanks for asking! You can definitely package this class in a module as long as it's on the same license as in this repo (Apache 2.0) and you are willing to maintain it.<|||||>Here you go. A release will follow the next days:
https://github.com/telekom/lazy-imports<|||||>Many thanks again. I will close the issue now. |
transformers | 12,860 | closed | [tests] fix logging_steps requirements | This PR fixed slow tests that got affected by a new sanity check at https://github.com/huggingface/transformers/pull/12796
| 07-23-2021 14:44:47 | 07-23-2021 14:44:47 | |
transformers | 12,859 | closed | Cannot import pipeline after installation | ## Environment info
- `transformers` version: 4.9.0
- Platform: Linux-4.15.0-151-generic-x86_64-with-glibc2.27
- Python version: 3.9.2
- PyTorch version (GPU?): 1.7.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.69
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
The problem arises when using:
* [ ] the official example scripts: (give details below)
I am attempting a fresh installation of transformers library, but after successfully completing the installation with pip, I am not able to run the test script: `python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"`
Instead, I see the following output:
> /home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.
> warnings.warn(msg)
> Traceback (most recent call last):
> File "<string>", line 1, in <module>
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1977, in __getattr__
> module = self._get_module(self._class_to_module[name])
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1986, in _get_module
> return importlib.import_module("." + module_name, self.__name__)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 25, in <module>
> from ..models.auto.configuration_auto import AutoConfig
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/__init__.py", line 19, in <module>
> from . import (
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/layoutlm/__init__.py", line 22, in <module>
> from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/layoutlm/configuration_layoutlm.py", line 19, in <module>
> from ..bert.configuration_bert import BertConfig
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/bert/configuration_bert.py", line 21, in <module>
> from ...onnx import OnnxConfig
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/onnx/__init__.py", line 16, in <module>
> from .config import EXTERNAL_DATA_FORMAT_SIZE_LIMIT, OnnxConfig, OnnxConfigWithPast
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/onnx/config.py", line 18, in <module>
> from transformers import PretrainedConfig, PreTrainedTokenizer, TensorType
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1977, in __getattr__
> module = self._get_module(self._class_to_module[name])
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1986, in _get_module
> return importlib.import_module("." + module_name, self.__name__)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 26, in <module>
> from .tokenization_utils_base import (
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 74, in <module>
> from tokenizers import AddedToken
> File "/home/shushan/tokenization_experiments/tokenizers.py", line 26, in <module>
> from transformers import BertTokenizer
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1978, in __getattr__
> value = getattr(module, name)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1977, in __getattr__
> module = self._get_module(self._class_to_module[name])
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/file_utils.py", line 1986, in _get_module
> return importlib.import_module("." + module_name, self.__name__)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
> from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
> ImportError: cannot import name 'PreTrainedTokenizer' from partially initialized module 'transformers.tokenization_utils' (most likely due to a circular import) (/home/shushan/.conda/envs/ccg_parser/lib/python3.9/site-packages/transformers/tokenization_utils.py)
>
I have attempted uninstalling transformers and re-installing them, but I couldn't find any more information as to what is wrong, or how to go about fixing this issue I am seeing. The only suspicious behavior is that your tool for the environment detection above printed that I have torch installed without GPU, while in reality I have an installation of pytorch that works with gpu. Can you help?
Thanks in advance
Shushan | 07-23-2021 12:24:05 | 07-23-2021 12:24:05 | Hello! Could you show me the command you used to install `transformers`? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same issue
```
❯ python transformers.py
Traceback (most recent call last):
File "transformers.py", line 3, in <module>
import transformers
File "/Users/xxxxxx/Desktop/transformers.py", line 4, in <module>
from transformers import pipeline
ImportError: cannot import name 'pipeline' from partially initialized module 'transformers' (most likely due to a circular import) (/Users/xxxxxx/Desktop/transformers.py)
```<|||||>You're trying to import transformers in a file named `transformers.py`, that won't work.<|||||>@LysandreJik my script name is transformers.py
The script content is the Quick Tour example https://github.com/huggingface/transformers
```
import requests
from PIL import Image
from transformers import pipeline
# Download an image with cute cats
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
image_data = requests.get(url, stream=True).raw
image = Image.open(image_data)
# Allocate a pipeline for object detection
object_detector = pipeline('object_detection')
object_detector(image)
```<|||||>Yes, please rename your script. If you're doing `import transformers` from inside a script named `transformers.py`, the script will try to import itself. |
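A quick way to spot this kind of shadowing (the first traceback in this thread also shows a local `tokenizers.py` shadowing the `tokenizers` package) is to check where Python actually imports the modules from; a hedged sketch:
```python
import importlib

# If any of these paths point into your working directory instead of site-packages,
# a local file is shadowing the real package and should be renamed.
for name in ("transformers", "tokenizers"):
    module = importlib.import_module(name)
    print(name, "->", module.__file__)
```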
transformers | 12,858 | closed | Pin git python to <3.1.19 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release nhttps://github.com/gitpython-developers/GitPython/pull/1275/filesotes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
GitPython did a new release which breaks our tests: https://github.com/gitpython-developers/GitPython/pull/1275/files
See: https://app.circleci.com/pipelines/github/huggingface/transformers/26010/workflows/a72a068e-b3f0-42e1-b08b-7e2c89cae3ed/jobs/245943 for example.
Pinning GitPython for now to make circle ci work
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-23-2021 10:48:14 | 07-23-2021 10:48:14 | cc @LysandreJik @sgugger
Also see: https://github.com/gitpython-developers/GitPython/issues/1296 |
transformers | 12,857 | closed | wav2vec pretrain and fine-tune with huge data | Hi,
Thanks for the great work on wav2vec!
Is there a good example of fine-tuning and pretraining wav2vec with huge amounts of data?
It seems the official examples work fine with one GPU but not so well with multiple GPUs.
Thanks. | 07-23-2021 10:17:40 | 07-23-2021 10:17:40 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,856 | closed | TypeError: '>' not supported between instances of 'NoneType' and 'int' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 07-23-2021 09:50:39 | 07-23-2021 09:50:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,855 | closed | fix typo in gradient_checkpointing arg | help for `ModelArguments.gradient_checkpointing` should be
"If True, use gradient checkpointing to save memory
at the expense of slower backward pass."
not "Whether to freeze the feature extractor layers of the model."
(which is duplicated from `freeze_feature_extractor` arg)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-23-2021 08:02:44 | 07-23-2021 08:02:44 | @JetRunner Thx for comment.
I just updated my branch and now CI seems to be working fine!
<|||||>Thanks! |
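For context, a reconstructed sketch of what the corrected argument definition looks like (the defaults shown here are illustrative, not necessarily those of the script):
```python
from dataclasses import dataclass, field


@dataclass
class ModelArguments:
    freeze_feature_extractor: bool = field(
        default=True,
        metadata={"help": "Whether to freeze the feature extractor layers of the model."},
    )
    gradient_checkpointing: bool = field(
        default=False,
        metadata={
            "help": "If True, use gradient checkpointing to save memory at the expense of slower backward pass."
        },
    )
```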
transformers | 12,854 | closed | How could I convert output tensor from transformer to text generation? | # 🚀 Feature request
https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/dependencies/GPT2-export.py
I succeeded in extracting the output tensor values for the example input text using the above example. The script above imports your Hugging Face transformer, so I wonder how I can generate text from the output tensor values I got.
Is there any code or a link I can refer to? (PyTorch or Python code)
The code I tried is as follows. But it didn't work.

'ort_outputs_exmodel' in the image above is the same as 'res' in the link below
https://github.com/onnx/models/blob/ad5c181f1646225f034fba1862233ecb4c262e04/text/machine_comprehension/gpt-2/dependencies/GPT2-export.py#L110
My final goal of the project is to load the onnx model using onnx runtime's C/C++ API and write the C/C++ code to generate text using output tensor values.
I'll be waiting for your reply. (looking forward to...)
Thank u very much.
## Motivation
I need advice on how to run text generation using output tensor values.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 07-23-2021 06:46:45 | 07-23-2021 06:46:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
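For what it's worth, a minimal greedy-decoding sketch in Python, assuming the exported graph takes a single `input_ids` input and returns the token logits as its first output (input names and shapes depend on how the model was exported); the C/C++ version with the ONNX Runtime API follows the same loop:
```python
import numpy as np
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
session = ort.InferenceSession("gpt2.onnx")  # placeholder path to the exported model

input_ids = np.asarray(tokenizer.encode("Here is some text to encode"), dtype=np.int64)[None, :]

for _ in range(20):  # generate up to 20 new tokens
    logits = session.run(None, {"input_ids": input_ids})[0]  # assumed shape: (batch, seq_len, vocab)
    next_token = int(np.argmax(logits[0, -1]))               # greedy pick at the last position
    if next_token == tokenizer.eos_token_id:
        break
    input_ids = np.concatenate([input_ids, [[next_token]]], axis=1)

print(tokenizer.decode(input_ids[0]))
```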
transformers | 12,853 | closed | Fix barrier for SM distributed | # What does this PR do?
#12351 introduced a new context manager for having the main process execute an instruction while other process have to wait. That context manager was missing special treatment for TPUs (added in #12464) and SageMaker distributed. This PR adds the latter.
Fixes #12847 | 07-23-2021 04:46:07 | 07-23-2021 04:46:07 | |
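A hedged sketch of how the context manager is typically used (the dataset and preprocessing function are placeholders):
```python
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out")

# Only the main process runs the block; the others wait at the barrier and reuse the cached result.
with args.main_process_first(desc="dataset map pre-processing"):
    tokenized = dataset.map(preprocess, batched=True)  # `dataset` and `preprocess` are placeholders
```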
transformers | 12,852 | closed | How to ignore PAD tokens for NER | Hi,
Thank you for such a great repo. I am trying to use the word/token embeddings from the pretrained transformers for NER. The following code is a snippet of my model. For simplicity I am using a Linear decoder as opposed to a CRF decoder.
```
import torch.nn as nn
from transformers import BertModel, BertTokenizer

model_bert = BertModel.from_pretrained(model_dir, config=config)
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
class BERTNER(nn.Module):
def __init__(self, model, hidden_dim,num_labels):
"""
Torch model that uses BERT and adds a classifier on top. num_labels is the number of output labels.
"""
super(BERTNER, self).__init__()
self.model = model
self.hidden_dim = hidden_dim
self.num_labels = num_labels
self.rnn = nn.LSTM(self.model.config.hidden_size, hidden_dim, batch_first=True, bidirectional=True)
self.classifier = nn.Linear(2*hidden_dim, num_labels)
def forward(self,input_ids,attention_mask):
outputs = self.model(input_ids=input_ids,attention_mask=attention_mask)
sequence_output = outputs[0]
out,_ = self.rnn(sequence_output)
return self.classifier(out)
model = BERTNER(model_bert,128,len(tag2idx))
```
And this is the part I am confused about. My inputs to the model are all padded to a fixed length. Generally, when the sentences are padded, the padding can be ignored if one uses nn.Embedding: https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html. But here it is not clear to me how to ignore the padded tokens. Any help will be greatly appreciated. Thanks in advance. | 07-23-2021 04:00:36 | 07-23-2021 04:00:36 | The `attention_mask` indicates if a token is padding or an actual token. The usual way to deal with padding in the LSTM is to pass lengths for each sequence; you can work this out by summing the attention_mask along the "time" axis, i.e. something like
```
sequence_lengths = torch.sum(attention_mask, dim=1)
packed_sequence = nn.utils.rnn.pack_padded_sequence(sequence_output, sequence_lengths.cpu(), batch_first=True, enforce_sorted=False)
outputs, hidden = self.rnn(packed_sequence)
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True)
```
You'll have to double check the axis you want to sum over, and that attention_mask=1 for non-padded tokens (otherwise you'll have to negate it) but hopefully this will help.<|||||>Also you may want to consider `allennlp` (although it has a bit of a learning curve). You can compose models such as a crf tagger using a huggingface pretrained model as an encoder and a crf decoder without much work (even without any code once you figure out their jsonnet format).<|||||>First, placing an LSTM on top of the final hidden states of a model like BERT is not needed. You can just place a linear layer on top. Any `xxxForTokenClassification` model in the library is implemented that way, and it works really well.
Second, to ignore padding tokens, you should make predictions for all tokens, but simply label pad tokens with -100, as this is the default `ignore_index` of the `CrossEntropyLoss` in PyTorch. This means that they will not be taken into account by the loss function.
Btw, I do have an example notebook for NER which you find [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BERT). There's also the official one which you can find [here](https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb).<|||||>Thank you @david-waterworth and @NielsRogge for your answers. This solves my problem. I am closing this issue. <|||||>@NielsRogge I can not use that padding = -100 when using CRF. Is there other way to ignore pad token for CRF? |
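A minimal sketch of the -100 labelling suggested above, with token-level labels aligned to a padded batch (the tensor names are placeholders):
```python
import torch.nn as nn

# logits: (batch, seq_len, num_labels); labels and attention_mask: (batch, seq_len)
def token_classification_loss(logits, labels, attention_mask):
    # Give every padding position the label -100 so the loss skips it.
    labels = labels.masked_fill(attention_mask == 0, -100)
    loss_fct = nn.CrossEntropyLoss()  # ignore_index defaults to -100
    return loss_fct(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
```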
transformers | 12,851 | open | Got `ONNXRuntimeError` when try to run BART in ONNX format | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Using GPU in script?: Yes
### Who can help
@mfuntowicz
## To reproduce
I was using Google Colab and trying to export model `facebook/bart-large-cnn` to the onnx format. I ran the command `python -m transformers.onnx -m=facebook/bart-large-cnn onnx/bart-large-cnn`, and the outputs seem okay.
```
2021-07-22 23:14:33.821472: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Using framework PyTorch: 1.9.0+cu102
Overriding 1 configuration item(s)
- use_cache -> False
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
tcmalloc: large alloc 1625399296 bytes == 0x5595ce83a000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f696afb 0x7f177f696bb4 0x7f177f696f9c 0x7f17670dcbb7 0x7f17670dd064 0x7f175b75ba1c 0x7f176bf8eaff 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3
tcmalloc: large alloc 1625399296 bytes == 0x55962f654000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f696afb 0x7f177f696bb4 0x7f177f696f9c 0x7f17670dcbb7 0x7f17670dd064 0x7f175b75ba1c 0x7f176bf8ecab 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3
tcmalloc: large alloc 1625399296 bytes == 0x5595ce83a000 @ 0x7f1780d9d1e7 0x55949fdd9a18 0x55949fda4987 0x7f176bf8ece2 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a 0x55949fe17b0e 0x55949fe16c35 0x55949fce8eb1
tcmalloc: large alloc 1625399296 bytes == 0x55962f654000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f695d47 0x7f177f6977a5 0x7f176bd60368 0x7f176bfbc844 0x7f176b949b88 0x55949fda8010 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a
Validating ONNX model...
-[✓] ONNX model outputs' name match reference model ({'last_hidden_state', 'encoder_last_hidden_state'}
- Validating ONNX Model output "last_hidden_state":
-[✓] (2, 8, 1024) matchs (2, 8, 1024)
-[✓] all values close (atol: 0.0001)
- Validating ONNX Model output "encoder_last_hidden_state":
-[✓] (2, 8, 1024) matchs (2, 8, 1024)
-[✓] all values close (atol: 0.0001)
All good, model saved at: onnx/bart-large-cnn/model.onnx
```
Then I tried to execute the model in `onnxruntime`,
```
import onnxruntime as ort
ort_session = ort.InferenceSession('onnx/bart-large-cnn/model.onnx')
# Got input_ids and attention_mask using tokenizer
outputs = ort_session.run(None, {'input_ids': input_ids.detach().cpu().numpy(), 'attention_mask': attention_mask.detach().cpu().numpy()})
```
And I got the error,
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-30-380e6a0e1085> in <module>()
----> 1 outputs = ort_session.run(None, {'input_ids': input_ids.detach().cpu().numpy(), 'attention_mask': attention_mask.detach().cpu().numpy()})
/usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
186 output_names = [output.name for output in self._outputs_meta]
187 try:
--> 188 return self._sess.run(output_names, input_feed, run_options)
189 except C.EPFail as err:
190 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_109' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2}, requested shape:{1,1}
```
I see that BART was recently added to the ONNX export support in the latest release, but there isn't any example code explaining exactly how to run inference in `onnxruntime`. Maybe I'm doing something wrong here, so any help will be appreciated!
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| 07-23-2021 00:02:26 | 07-23-2021 00:02:26 | I can reproduce in latest `transformers` with latest onnx runtime. <|||||>FYI this error seems to be linked to the dimension of the input; if you use a batch size 2 it should work.
As seen with @mfuntowicz offline, we'll be working on a fix in the coming weeks cc @michaelbenayoun <|||||>@LysandreJik Thank you for the follow-up. I'll pay attention to any updates.<|||||>Can reproduce with `valhalla/distilbart-mnli-12-1` in `4.10.0`. @LysandreJik
The export is essentially dependent on the number of hypotheses it was exported with, as far as I can tell.<|||||>Any update on this? Can reproduce the same for facebook/bart-large-mnli. Works only with a batch size of 2 during inference. @LysandreJik @mfuntowicz <|||||>transformers.__version__ == 4.20.0.dev0
onnxruntime.__version__ == 1.11.1
exported facebook/bart-base successfully , following instructions on -
https://github.com/huggingface/transformers/tree/main/examples/research_projects/onnx/summarization
script output -
2022-05-16 16:06:57 | INFO | __main__ | [run_onnx_exporter.py:163] Model outputs from torch and ONNX Runtime are similar.
2022-05-16 16:06:57 | INFO | __main__ | [run_onnx_exporter.py:164] Success.
however, loading the exported model fails after it hangs forever (timing out), using this script -
```
import torch
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
options = SessionOptions() # initialize session options
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
session = InferenceSession(
'optimized_BART.onnx',
sess_options=options,
providers=["CPUExecutionProvider"]
)
session.disable_fallback()
```
(py39) user@Avis-MacBook-Pro-2 summarization % ls -lht
-rw-r--r-- 1 user staff 680M May 16 16:06 optimized_BART.onnx
exported model size about 680MB
any advice on this? <|||||>transformers.__version__ == 4.20.0.dev0
onnxruntime.__version__ == 1.11.1
onnx bart fails to load (hangs forever) when passing options to InferenceSession()
avoid these -
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
otherwise loading the model hangs forever.
upon keyboard interrupt, I am getting tons of these warnings -
2022-05-16 15:57:35.009102 [W:onnxruntime:, graph.cc:3559 CleanUnusedInitializersAndNodeArgs] Removing initializer '1772'. It is not used by any node and should be removed from the model.
2022-05-16 15:57:36.410981 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5330'
2022-05-16 15:57:36.416645 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_808'
2022-05-16 15:57:36.416741 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_1'
2022-05-16 15:57:36.446512 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5128'
2022-05-16 15:57:37.813252 [W:onnxruntime:, graph.cc:3559 CleanUnusedInitializersAndNodeArgs] Removing initializer '3149'. It is not used by any node and should be removed from the model.
2022-05-16 15:57:37.813269 [W:onnxruntime:, graph.cc:3559 CleanUnusedInitializersAndNodeArgs] Removing initializer '2153'. It is not used by any node and should be removed from the model.
....<|||||>loaded the onnx model successfully without options.graph_optimization_level.
fails to get a prediction :(
```
import onnxruntime as ort
import numpy as np
ort_session = ort.InferenceSession(
'optimized_BART.onnx')
print(f'inputs: {[i.name for i in ort_session.get_inputs()]}')
feed_dict = summarizer.tokenizer(text)
feed_dict['num_beams'] = 4
feed_dict['max_length'] = 120
feed_dict['decoder_start_token_id'] = 2
feed_dict = {k: np.int64([v]) for k, v in feed_dict.items()}
for key in feed_dict:
print(f'feed_dict key: {key}, shape: {feed_dict[key].shape}')
pred = session.run(None, feed_dict)
````
### printout -
inputs: ['input_ids', 'attention_mask', 'num_beams', 'max_length', 'decoder_start_token_id']
feed_dict key: input_ids, shape: (1, 228)
feed_dict key: attention_mask, shape: (1, 228)
feed_dict key: num_beams, shape: (1,)
feed_dict key: max_length, shape: (1,)
feed_dict key: decoder_start_token_id, shape: (1,)
InvalidArgument Traceback (most recent call last)
Input In [39], in <cell line: 11>()
8 for key in feed_dict:
9 print(f'feed_dict key: {key}, shape: {feed_dict[key].shape}')
---> 11 pred = session.run(['output_ids'], feed_dict)
File ~/envs/py39/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:192, in Session.run(self, output_names, input_feed, run_options)
190 output_names = [output.name for output in self._outputs_meta]
191 try:
--> 192 return self._sess.run(output_names, input_feed, run_options)
193 except C.EPFail as err:
194 if self._enable_fallback:
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: attention_mask for the following indices
index: 1 Got: 228 Expected: 13
Please fix either the inputs or the model.
<|||||>fails to export facebook/bart-large-cnn or , following instructions on -
https://github.com/huggingface/transformers/tree/main/examples/research_projects/onnx/summarization
(py39) user@Avis-MacBook-Pro-2 summarization % python run_onnx_exporter.py --model_name_or_path facebook/bart-large-cnn
Traceback (most recent call last):
File "~/src/transformers/examples/research_projects/onnx/summarization/run_onnx_exporter.py", line 207, in <module>
main()
File "~/src/transformers/examples/research_projects/onnx/summarization/run_onnx_exporter.py", line 184, in main
model, tokenizer = load_model_tokenizer(args.model_name_or_path, device)
File "~/src/transformers/examples/research_projects/onnx/summarization/run_onnx_exporter.py", line 93, in load_model_tokenizer
huggingface_model = model_dict[model_name].from_pretrained(model_name).to(device)
KeyError: 'facebook/bart-large-cnn'
same error when trying to export model lidiya/bart-base-samsum
any advice would be greatly appreciated. thanks. |
transformers | 12,850 | closed | run_mlm_no_trainer.py requires --model_name_or_path | The `examples/pytorch/language-modeling/run_mlm_no_trainer.py` script has
    parser.add_argument(
        "--model_name_or_path",
        type=str,
        help="Path to pretrained model or model identifier from huggingface.co/models.",
        default=None,
        required=True,
    )
This is despite there being several checks in the code implying it may be None, i.e.:
    if args.model_name_or_path:
        model = AutoModelForMaskedLM.from_pretrained(
            args.model_name_or_path,
            from_tf=bool(".ckpt" in args.model_name_or_path),
            config=config,
        )
    else:
        logger.info("Training new model from scratch")
        model = AutoModelForMaskedLM.from_config(config)
As far as I can see it's optional, falling back to training a new model from scratch - just like run_mlm.py (I commented out `required=True` without any obvious issues).
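A minimal sketch of the suggested tweak (illustrative; it simply mirrors the argparse block above without `required=True`):
```python
import argparse

parser = argparse.ArgumentParser()
# Same argument as above, but optional, so the "train a new model from
# scratch" branch of the script is actually reachable.
parser.add_argument(
    "--model_name_or_path",
    type=str,
    default=None,
    help="Path to pretrained model or model identifier from huggingface.co/models.",
)
args = parser.parse_args([])  # empty list just to show the default is None
print(args.model_name_or_path)
```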
| 07-22-2021 23:40:40 | 07-22-2021 23:40:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,849 | closed | run_mlm_no_trainer.py requires accelerate but not in requirements.txt | I just installed Transformers 4.9.0 as I'm really excited to investigate the tokeniser free CANINE model.
I noticed that the `examples/pytorch/language-modeling/run_mlm_no_trainer.py` script requires the `accelerate` library but that doesn't appear to be included in `examples/pytorch/language-modeling/requirements.txt` or the main `setup.py`
| 07-22-2021 23:30:33 | 07-22-2021 23:30:33 | Thanks for flagging, I added those to all examples in the PR mentioned above!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closed by #12888 |
transformers | 12,848 | closed | legacy finetune with t5 issues | Hi @stas00
Splitting this off from https://github.com/huggingface/transformers/issues/8771#issuecomment-884865133
There is a lot of great information in your post; thanks for being thorough!
I guess I don't understand what parameters I need to change within the deepspeed config file to properly offload into CPU memory. I have 473 GB of RAM available for offloading, which seems to be enough based on what you listed. I am also using the finetune script in the seq2seq legacy folder. The command is:
`export BS=2; rm -rf output_dir; PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=8 ./finetune_trainer.py --model_name_or_path "Rostlab/prot_t5_xl_uniref50" --output_dir output_dir --adam_eps 1e-06 --data_dir /mnt/data --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 512 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ../../../tests/deepspeed/ds_config_zero3.json --fp16`
I had to modify the finetune script to use the T5Tokenizer, as the AutoTokenizer wouldn't work.
For zero 3 optimization, I am using lower values for stage3_params, since the documentation indicated to use lower values to offload memory.
```
"zero_optimization": {
"stage": 3,
"cpu_offload": true,
"cpu_offload_params": true,
"cpu_offload_use_pin_memory" : true,
"overlap_comm": true,
"contiguous_gradients": true,
"stage3_max_live_parameters": 1e3,
"stage3_max_reuse_distance": 1e3,
"stage3_prefetch_bucket_size": 2e3,
"stage3_param_persitance_threshold": 1e3,
"reduce_bucket_size": 3e3,
"prefetch_bucket_size": 3e3,
"sub_group_size": 1e3
},
```
| 07-22-2021 20:00:00 | 07-22-2021 20:00:00 | first, any reason why you're not using the latest scripts? The legacy scripts are no longer being maintained and the up-to-date scripts had great many improvements. So if it's not too hard I highly recommend switching to those. Most likely you want
https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py Albeit, this is orthogonal to the Deepspeed issue you wanted to discuss.
> For zero 3 optimization, I am using lower values for stage3_params, since the documentation indicated to use lower values to offload memory.
After this discussion is over, let's review where you found this information, because this is incorrect. The doc says which specific parameters you need to tweak, not all of them.
Have you considered using tuned-up-for-you `auto` values? https://huggingface.co/transformers/master/main_classes/deepspeed.html#zero-3-config
Ah, and you have a typo in at least one of the key names as well - there is no `stage3_param_persitance_threshold` - deepspeed is a bit troublesome as it doesn't validate keys and simply uses the default if you make a typo.
It dumps the final config when the program starts, so you can always review whether your settings "made it".
Your config is also "dated" - recent deepspeed moved to a newer config as you can see in the docs (albeit it's backward compatible).
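For reference, a sketch of a ZeRO-3 config leaning on `auto` values (illustrative only; the linked documentation has the authoritative version):
```python
import json

# Sketch only: the HF-maintained ZeRO-3 examples lean on "auto" so the Trainer
# fills in values that match the model; check the docs linked above for the real file.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_gather_fp16_weights_on_model_save": True,
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("ds_config_zero3_auto.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```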
<|||||>Perhaps you were referring to: "Smaller values use less memory"
> <p><strong><em>stage3_param_persistence_threshold</em></strong>: [integer]</p>
>
> Description | Default
> -- | --
> Do not partition parameters smaller than this threshold. Smaller values use less memory, but can greatly increase communication (especially latency-bound messages).
>
https://www.deepspeed.ai/docs/config-json/<|||||>@stas00,
Thanks for the pointers. I modified my ds_config.json with the following:
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1.000000e+09,
"reduce_bucket_size": 1.048576e+06,
"stage3_prefetch_bucket_size": 9.437184e+05,
"stage3_param_persistence_threshold": 1.024000e+04,
"stage3_max_live_parameters": 10.0,
"stage3_max_reuse_distance": 10.0,
"stage3_gather_fp16_weights_on_model_save": true
},
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 2,
"zero_allow_untested_optimizer": true
}
```
I also switched to run_translation.py in the master branch.
Even with the
```
"stage3_max_live_parameters": 10.0,
"stage3_max_reuse_distance": 10.0,
```
I am unable to use a batch size of 2 per GPU without hitting GPU OOM. Any thoughts on optimizing this? My command line is:
`rm -rf output_dir; USE_TF=0 deepspeed --num_gpus=8 ./run_translation.py --model_name_or_path "Rostlab/prot_t5_xl_uniref50" --output_dir output_dir --adam_eps 1e-06 --do_eval --do_predict --do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 512 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --predict_with_generate --eval_steps 25000 --sortish_sampler --warmup_steps 5 --deepspeed deepsped.config --fp16 --train_file train.json --test_file train.json --validation_file train.json --source_lang a --target_lang b --overwrite_output_dir --predict_with_generate --per_device_train_batch_size=2 --per_device_eval_batch_size=2`<|||||>I had no problem doing mostly the same with the current version of examples with just 4x v100-16GB GPUs - I didn't change anything from the default ds config in the repo and it took only 6GB / gpu for training and ~10GB / gpu for eval.
```
cd transformers
BS=4; PYTHONPATH=src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus 4 \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-3b --output_dir output_dir \
--overwrite_output_dir --max_train_samples 10 --max_eval_samples 10 --max_source_length 512 \
--max_target_length 128 --val_max_target_length 128 --do_train --do_eval --num_train_epochs 1 \
--per_device_train_batch_size $BS --per_device_eval_batch_size $BS --learning_rate 3e-3 \
--warmup_steps 500 --predict_with_generate --save_steps 0 --eval_steps 1 --group_by_length \
--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix \
"translate English to Romanian: " --deepspeed tests/deepspeed/ds_config_zero3.json
```
probably can easily do a much larger BS on this one and 8 gpus you definitely shouldn't have any problems.
I highly recommend to use the default ds config and not change anything there unless you really need to.<|||||>I was able to use your command and train using the ro-en dataset and t5-3b.
However, I am trying to use a custom model: "Rostlab/prot_t5_xl_uniref50". This is based on t5-3b, but without the denoising objective in t5. I looked at the model card and it also does not have the task-specific parameters in its config.json for translation/summarization. I think this means that I might need to change the Trainer, but I am not sure what is specifically needed.
Before I started down the deepspeed path, I was using a training loop that I had created with model parallelization. The train step is below:
```
model = T5ForConditionalGeneration.from_pretrained(model_name)
# model = model.to(device)
device_map = {0: [0],
1: [1, 2, 3 ],
2: [4, 5, 6 ],
3: [7, 8, 9, 10 ],
4: [11, 12, 13, 14],
5: [15, 16, 17],
6: [18, 19, 20],
7: [21, 22, 23]
}
model.parallelize(device_map)
def run_a_train_epoch():
print ("Training...")
all_losses = []
model.train()
for batch_idx, batch in enumerate(train_dataloader):
if batch_idx > 0 and batch_idx % 20 == 0:
print(f"Trained {batch_idx} batches...")
#print ("Batch: ", batch_idx)
#print (_, data)
ids = batch['source_ids'].to('cuda:0', dtype = torch.long)
mask = batch['source_mask'].to('cuda:0', dtype = torch.long)
y = batch['target_ids'].to('cuda:0', dtype = torch.long)
y_ids = y[:, :-1].contiguous()
decoder_attention_mask = batch['target_mask'].to('cuda:0', dtype = torch.long)
y_mask = decoder_attention_mask[:, :-1].contiguous()
outputs = model(input_ids = ids, attention_mask = mask, labels=y_ids, decoder_attention_mask=y_mask)
loss = outputs[0]
optimizer.zero_grad()
loss.backward()
optimizer.step()
all_losses.append(loss)
train_loss = sum(all_losses) / len(all_losses)
return train_loss
```
Doing this, I was only able to train on 2 batches at once. Is it possible to use trainer with this model or do you have any pointers on transferring this to deepspeed?
<|||||>You don't need to transfer anything to Deepspeed, Deepspeed ZeRO simply provides a much simpler way of doing model parallelism w/o needing to change the model. That is whatever model you use it'll just work. Deepspeed magically parallelizes whatever you throw at it (well, most of the time).
So your goal is to use a t5-3b model with a slightly different task. I don't see any reason why it won't just work out of the box.
I used `run_translation.py` as an example to test that everything works and scales. You can adapt it to your needs. `run_translation.py` is the same as the old legacy `finetune_trainer.py` except it was massively cleaned up, improved and then split off to do just one task - translation. e.g. `examples/pytorch/summarization` is another split off from `finetune_trainer.py`.
Perhaps you can follow this plan:
1. study the existing example scripts and find the one that is the closest to your needs
2. adapt it to your exact needs by porting over whatever extra code you wrote in your `finetune_trainer.py`
3. test that it works with just python perhaps on a small model
4. add deepspeed using the default settings of `tests/deepspeed/ds_config_zero3.json` to scale it up this time on the full model.
<|||||>I am not sure what is going on...I stepped through the code and made sure that I was not missing anything by printing out the tokens/masks and several other points. The only thing that I can get to work with this model, dataset, and run_translation.py is a per_device_batch_size of 1. I am using the tests/deepspeed/ds_config_zero3.json with the run_translation.py script. I have been able to use the original t5-3b model with the ro-en translation dataset and your configuration file with a per device batch size of 8 just fine.
Not sure where to go from here.
Thanks! <|||||>a model is a model is a model is a model - it doesn't matter which t5-3b derivative you use - it will take the exact same amount of memory. What matters is your code - it's possible that you do something that leaks memory or allocates more than the example program does.
The next step is to either to try to compare how your program is different, or to use the memory profiler and see where the bulk of memory is allocated. You can start with just enabling `--skip_memory_metrics 0` (unskip that is) with the current examples and it'll report the memory allocations in the first gpu. or you can use various other pytorch profilers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,847 | closed | Default process group has not been initialized while using sagemaker data parallel | ## Environment info
- `transformers` version: 4.9.0 - dev
- Platform: Sagemaker
- Python version:
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run Squad finetune using transformers 4.9.0 - dev
```
[1,5]<stdout>:Traceback (most recent call last):
--
[1,5]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
[1,5]<stdout>: "__main__", mod_spec)
[1,5]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,5]<stdout>: exec(code, run_globals)
[1,5]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,5]<stdout>: main()
[1,5]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,5]<stdout>: run_command_line(args)
[1,5]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,5]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,5]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,5]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,3]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,3]<stdout>: run_command_line(args)
[1,3]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,3]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,3]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,3]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,3]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 96, in _run_module_code
[1,3]<stdout>: mod_name, mod_spec, pkg_name, script_name)
[1,3]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,3]<stdout>: exec(code, run_globals)
[1,3]<stdout>: File "run_qa.py", line 646, in <module>
[1,3]<stdout>: main()
[1,3]<stdout>: File "run_qa.py", line 427, in main
[1,3]<stdout>: with training_args.main_process_first(desc="train dataset map pre-processing"):
[1,3]<stdout>: File "/opt/conda/lib/python3.6/contextlib.py", line 81, in __enter__
[1,3]<stdout>: return next(self.gen)
[1,3]<stdout>: File "/opt/conda/lib/python3.6/site-packages/transformers/training_args.py", line 1033, in main_process_first
[1,3]<stdout>: torch.distributed.barrier()
[1,3]<stdout>: File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2419, in barrier
[1,3]<stdout>: default_pg = _get_default_group()
[1,2]<stdout>:Traceback (most recent call last):
[1,2]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
[1,2]<stdout>: "__main__", mod_spec)
[1,2]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,2]<stdout>: exec(code, run_globals)
[1,2]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,2]<stdout>: main()
[1,2]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,2]<stdout>: run_command_line(args)
[1,2]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,2]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,2]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,2]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
[1,4]<stdout>: "__main__", mod_spec)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,4]<stdout>: exec(code, run_globals)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,4]<stdout>: main()
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,4]<stdout>: run_command_line(args)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,4]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,4]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,4]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 96, in _run_module_code
[1,4]<stdout>: mod_name, mod_spec, pkg_name, script_name)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,4]<stdout>: exec(code, run_globals)
[1,4]<stdout>: File "run_qa.py", line 646, in <module>
[1,4]<stdout>: main()
[1,4]<stdout>: File "run_qa.py", line 427, in main
[1,4]<stdout>: with training_args.main_process_first(desc="train dataset map pre-processing"):
[1,6]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
[1,6]<stdout>: "__main__", mod_spec)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,6]<stdout>: exec(code, run_globals)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,6]<stdout>: main()
[1,6]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 196, in main
[1,6]<stdout>: run_command_line(args)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,6]<stdout>: run_path(sys.argv[0], run_name='__main__')
[1,6]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 263, in run_path
[1,6]<stdout>: pkg_name=pkg_name, script_name=fname)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 96, in _run_module_code
[1,6]<stdout>: mod_name, mod_spec, pkg_name, script_name)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
[1,6]<stdout>: exec(code, run_globals)
[1,6]<stdout>: File "run_qa.py", line 646, in <module>
[1,6]<stdout>: main()
[1,6]<stdout>: File "run_qa.py", line 427, in main
[1,6]<stdout>: with training_args.main_process_first(desc="train dataset map pre-processing"):
[1,6]<stdout>: File "/opt/conda/lib/python3.6/contextlib.py", line 81, in __enter__
[1,6]<stdout>: return next(self.gen)
[1,6]<stdout>: File "/opt/conda/lib/python3.6/site-packages/transformers/training_args.py", line 1033, in main_process_first
[1,6]<stdout>: torch.distributed.barrier()
[1,4]<stdout>: File "/opt/conda/lib/python3.6/contextlib.py", line 81, in __enter__
[1,4]<stdout>: return next(self.gen)
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/transformers/training_args.py", line 1033, in main_process_first
[1,4]<stdout>: torch.distributed.barrier()
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2419, in barrier
[1,4]<stdout>: default_pg = _get_default_group()
[1,4]<stdout>: File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 347, in _get_default_group
[1,4]<stdout>: raise RuntimeError("Default process group has not been initialized, "
[1,4]<stdout>:RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
```
## Expected behavior
| 07-22-2021 18:48:23 | 07-22-2021 18:48:23 | @philschmid @sgugger <|||||>From offline discussion, the issue seem to be the following:
1. You can’t use torch.distributed and smdp at the same time. You might want to change torch.distributed.barrier to sm_dist.barrier
2. You could import either `torch.distributed` or `smdistributed.dataparallel.torch.distributed` as `dist` at the top of the file. Then you can simply write `dist.xyz` elsewhere (see the sketch below)
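A sketch of that pattern (illustrative, not the exact Trainer code):
```python
# Alias whichever backend is active as `dist`, then use dist.barrier() everywhere
# instead of mixing torch.distributed with SageMaker data parallel.
try:
    import smdistributed.dataparallel.torch.distributed as dist
except ImportError:
    import torch.distributed as dist


def wait_for_everyone():
    # Replaces direct torch.distributed.barrier() calls so the same code path
    # works whether SMDataParallel or plain torch.distributed is initialized.
    dist.barrier()
```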
Likely the PR from which this issue originated is below:
https://github.com/huggingface/transformers/pull/12464
|
transformers | 12,846 | closed | T5: Create position related tensors directly on device instead of CPU | # What does this PR do?
The current implementation of the `compute_bias` function of the T5 model creates tensors on the CPU (`memory_position` and `context_position`) and then moves them to the corresponding device with `.to()`.
While this has minimal impact in single-gpu training, in multi-gpu large batch training, as the number of GPUs increases this reduces GPU utilization.
This short PR addresses the issue.
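To illustrate the pattern (a simplified sketch; variable names only loosely follow `compute_bias`, this is not the exact code from `modeling_t5.py`):
```python
import torch

query_length, key_length = 16, 16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Before: position tensors are built on CPU and then copied over with .to()
context_position = torch.arange(query_length, dtype=torch.long)[:, None].to(device)
memory_position = torch.arange(key_length, dtype=torch.long)[None, :].to(device)

# After: create them directly on the target device and skip the extra copy
context_position = torch.arange(query_length, dtype=torch.long, device=device)[:, None]
memory_position = torch.arange(key_length, dtype=torch.long, device=device)[None, :]
```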
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR (@patrickvonplaten).
| 07-22-2021 18:37:42 | 07-22-2021 18:37:42 | |
transformers | 12,845 | closed | Add Model Details for Pipeline API | # 🚀 Feature request
I think it would be great if we could add further details about which particular model the Pipeline API is using. In particular, we can add the following details relating to the models in a readable markdown file:
* Dataset used for training
* Type of transformer used
* Hyperparameters and neural net architecture
* Evaluation metrics: accuracy + precision + recall on test portion of dataset
## Motivation
I was working on different sentiment analysis datasets with different transformers when I stumbled across the HuggingFace pipeline API. I was impressed by the performance of the models, but I wasn't quantitatively sure about the accuracy of the model, nor was I sure about how it was trained.
I believe that many other users of the Pipeline API will also benefit from such a change.
P.S. If there is some source of such information and I've missed it, please let me know
| 07-22-2021 18:01:39 | 07-22-2021 18:01:39 | This can be interesting. Are you interested in opening a PR? You could add a section with model details for every pipeline by editing the documentation pages (i.e. the .rst files).<|||||>Yes I would like to try opening a PR for this. However, it will take me some time to understand the code behind how the pipeline works.<|||||>Hello! The first step for this would be to expose the pretrained models used by the pipelines - it's currently hidden in the code here: https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/__init__.py#L110
Exposing which model is the default with a link to the model card would be nice indeed.<|||||>cc @Narsil, I think it would be very beneficial to document which model is used by default for each pipeline.<|||||>Should we do that at runtime with a log that we chose a default model, or you had something more static (definitely doable, but probably tedious to maintain.)
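A rough sketch of what such a runtime log could look like when no model is passed (purely hypothetical helper and field names; the real pipeline internals may differ):
```python
import logging

logger = logging.getLogger(__name__)


def resolve_default_model(task: str, targeted_task: dict) -> str:
    # Hypothetical helper: look up the default checkpoint registered for a
    # task and tell the user which one was picked.
    default_model = targeted_task["default"]["model"]["pt"]
    logger.warning(
        "No model was supplied, defaulting to %s for task %r "
        "(see its model card on the Hub for training details).",
        default_model,
        task,
    )
    return default_model
```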
<|||||>Yes I've been busy lately - although I thought I could examine the code, it has not been possible given my schedule. Given that this is an important problem others are facing too, feel free to open a PR.<|||||>@Narsil I personally feel that having a runtime log will be more than enough given the ease at which one can get started with using the pipeline API - one can simply implement it to find out what model is working behind the scenes<|||||>Yeah, a runtime log would be fine.<|||||>https://github.com/huggingface/transformers/pull/13276 |
transformers | 12,844 | closed | Generate text from inputs_embeds for seq2seq models like BART or T5 | # 🚀 Feature request
Generate text from inputs_embeds for seq2seq models like BART or T5
## Motivation
Much recent research relies on generating text from an embedding vector. Currently we can perform a forward pass through BART or T5 by passing only `inputs_embeds`, but this is not possible in the `generate` function.
At the moment I have to work around this issue by calling the encoder to get `encoder_outputs` and using it as an input to the generation function directly:
```python
encoder_outputs = model.base_model.encoder(inputs_embeds=input_embeds, return_dict=True)
out = model.generate(encoder_outputs=encoder_outputs)
```
Even though this does not show any error message, I am not sure it is the right thing to do - any comments? Can you please support this in `generate`, as is already done for the forward function?
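For reference, a slightly fuller sketch of that workaround (assuming a BART checkpoint; this is not an officially supported API, so treat it as illustrative):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("An example sentence.", return_tensors="pt")
# Build embeddings by hand here; in practice they could come from anywhere.
inputs_embeds = model.get_input_embeddings()(inputs["input_ids"])

# Run the encoder manually on the embeddings, then hand its outputs to generate().
encoder_outputs = model.get_encoder()(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs["attention_mask"],
    return_dict=True,
)
generated = model.generate(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```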
| 07-22-2021 15:47:55 | 07-22-2021 15:47:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,843 | closed | Moving feature-extraction pipeline to new testing scheme | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- research_projects/distillation: @VictorSanh
--> | 07-22-2021 13:53:33 | 07-22-2021 13:53:33 | @sgugger It's not that it's urgent, it's that there's a quite big backlog (all pipelines tests need to be converted, and the goal is to start working on proper pipeline iteration (enabling more speed on GPU)<|||||>I will merge this if that's OK (So I can prepare the next PR)<|||||>It's fine by me, as long as comments from @LysandreJik (if any) are addressed later on and if you could double-check on the branch of #12939 that the new models used in those tests can be instantiated without any problem. |
transformers | 12,842 | closed | FlaxGPT2LMHeadModel doesn't fail on mismatch between tokenizer and model vocab size | ## Environment info
- `transformers` version: 4.9.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.2.18
- JaxLib version: 0.1.69
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): FlaxGPT2LMHeadModel
The problem arises when using:
* run_clm_flax.py script
* other situations as well as the problem is lack of error handling
The tasks I am working on is:
* training GPT2
## To reproduce
Run the colab linked below:
https://colab.research.google.com/drive/1G07rCEZyquu5CFVvQZlkK1Mgjd1gT86g?usp=sharing
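A small sanity check of the kind that would catch the mismatch before training (illustrative sketch, not part of the linked notebook):
```python
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")

# Fail fast instead of silently training with out-of-range token ids.
if len(tokenizer) > model.config.vocab_size:
    raise ValueError(
        f"Tokenizer defines {len(tokenizer)} tokens but the model embedding "
        f"matrix only covers {model.config.vocab_size}."
    )
```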
## Expected behavior
We found an issue with our Pytorch model where tokenizer vocab size was not aligned with our model embedding size. This resulted in an error - as expected. However, the model was trained in Flax with that mismatch, and on testing, it appears to work while with vocab size larger than embedding size. I think this should raise an exception instead. We use FlaxGPT2LMHeadModel. | 07-22-2021 12:15:59 | 07-22-2021 12:15:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I think it's difficult to ensure that a model which has a mismatch in vocab size with its tokenizer runs correctly. We could think about showing a better error message than:
```
IndexError: index out of range in self
```
If it's the word embeddings matrix that throws that error! I don't think we can make sure though that the model works with an incorrect tokenizer vocab size<|||||>Hi Patrick, thanks for commenting on the issue! I actually have no concerns
with the error in model inference due to the mismatch in vocab size with
its tokenizer. The concern was that the model trained with this mismatch, I
think it should throw an error instead during training. Thanks, Darek
On Tue, 24 Aug 2021 at 11:16, Patrick von Platen ***@***.***>
wrote:
> I think it's difficult to ensure that a model which has a mismatch in
> vocab size with its tokenizer runs correctly. We could think about showing
> a better error message than:
>
> IndexError: index out of range in self
>
> If it's the word embeddings matrix that throws that error! I don't think
> we can make sure though that the model works with an incorrect tokenizer
> vocab size
>
>
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,841 | closed | Set Experiment Name in `MLflowCallback` | In the `MLflowCallback` it is not possible to set the Experiment Name.
As a consequence the run is logged to the Default experiment.
I would like to suggest the following feature:
It is possible to set an env. variable called `ML_FLOW_CALLBACK_EXPERIMENT_NAME`
which then calls `mlflow.set_experiment(experiment_name)`.
What do you think? As always I can provide a PR if wanted.
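A minimal sketch of what this could look like inside the callback's setup step (the env-var name is the one proposed above; everything else is illustrative):
```python
import os

import mlflow

experiment_name = os.getenv("ML_FLOW_CALLBACK_EXPERIMENT_NAME")
if experiment_name is not None:
    # Creates the experiment if it does not exist yet, then makes it active,
    # so runs are no longer logged to the Default experiment.
    mlflow.set_experiment(experiment_name)
```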
| 07-22-2021 11:40:15 | 07-22-2021 11:40:15 | PS: additionaly to the env. var we might want to add a constructore parameter - what do you think?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue should be reopened as it is intended for mlflow experiment name which is different from run_name. |
transformers | 12,840 | closed | How to use the parameters of a certain layer | print(bert.embeddings.LayerNorm.bias) #Run successfully
print(bert.encoder.layer.0.attention.self.query.weight) #invalid syntax appears in "layer.0."
so how to use the parameters of a certain layer? | 07-22-2021 11:10:48 | 07-22-2021 11:10:48 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@mengruwg
The general way to access any parameter of a PyTorch model via its name is
`parameter = model.state_dict()[<parameter_name>].`
Since huggingface models override PyTorch nn.Module you can use the above method for any model. [Reference](https://discuss.pytorch.org/t/how-to-manipulate-layer-parameters-by-its-names/1282)
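For instance, a small illustrative sketch (assuming a standard `bert-base-uncased` checkpoint):
```python
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")

# Index the ModuleList with [] instead of attribute access (`layer.0` is invalid syntax).
query_weight = bert.encoder.layer[0].attention.self.query.weight

# Equivalent lookup by name via the state dict.
same_weight = bert.state_dict()["encoder.layer.0.attention.self.query.weight"]
```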
<|||||>I see. Thank you very much for your prompt reply.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,838 | closed | `transformers-cli` fails out of the box | ## Environment info
I was unable to run `transformers-cli`
- `transformers` version: `transformers-4.9.0.dev0`
- Platform: Mac OS
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
Maybe @sgugger?
## Information
Model I am using (Bert, XLNet ...): None
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. run `pip install -e ".[dev]"
2. run `transformers-cli add-new-model` or `transformers-cli`
```
(base) stellabiderman@Stellas-MBP transformers % transformers-cli
Traceback (most recent call last):
File "/Users/stellabiderman/opt/anaconda3/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())
File "/Users/stellabiderman/opt/anaconda3/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/stellabiderman/Documents/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module>
from .run import RunCommand
File "/Users/stellabiderman/Documents/transformers/src/transformers/commands/run.py", line 17, in <module>
from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline
File "/Users/stellabiderman/Documents/transformers/src/transformers/pipelines/__init__.py", line 26, in <module>
from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
File "/Users/stellabiderman/Documents/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module>
from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1978, in __getattr__
value = getattr(module, name)
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1977, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1986, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/stellabiderman/Documents/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py", line 23, in <module>
import torchaudio.compliance.kaldi as ta_kaldi
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/__init__.py", line 1, in <module>
from . import extension # noqa: F401
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/__init__.py", line 5, in <module>
_init_extension()
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/extension.py", line 11, in _init_extension
_init_script_module(ext)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/extension.py", line 19, in _init_script_module
torch.classes.load_library(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/_classes.py", line 46, in load_library
torch.ops.load_library(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/_ops.py", line 105, in load_library
ctypes.CDLL(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so, 6): Symbol not found: __ZN2at6detail10noopDeleteEPv
Referenced from: /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so
Expected in: /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib
in /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so
```
```
(base) stellabiderman@Stellas-MBP transformers % transformers-cli add-new-model
Traceback (most recent call last):
File "/Users/stellabiderman/opt/anaconda3/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())
File "/Users/stellabiderman/opt/anaconda3/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/stellabiderman/Documents/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module>
from .run import RunCommand
File "/Users/stellabiderman/Documents/transformers/src/transformers/commands/run.py", line 17, in <module>
from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline
File "/Users/stellabiderman/Documents/transformers/src/transformers/pipelines/__init__.py", line 26, in <module>
from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
File "/Users/stellabiderman/Documents/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module>
from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1978, in __getattr__
value = getattr(module, name)
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1977, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/Users/stellabiderman/Documents/transformers/src/transformers/file_utils.py", line 1986, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/stellabiderman/Documents/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py", line 23, in <module>
import torchaudio.compliance.kaldi as ta_kaldi
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/__init__.py", line 1, in <module>
from . import extension # noqa: F401
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/__init__.py", line 5, in <module>
_init_extension()
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/extension.py", line 11, in _init_extension
_init_script_module(ext)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/extension/extension.py", line 19, in _init_script_module
torch.classes.load_library(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/_classes.py", line 46, in load_library
torch.ops.load_library(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/_ops.py", line 105, in load_library
ctypes.CDLL(path)
File "/Users/stellabiderman/opt/anaconda3/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so, 6): Symbol not found: __ZN2at6detail10noopDeleteEPv
Referenced from: /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so
Expected in: /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib
in /Users/stellabiderman/opt/anaconda3/lib/python3.8/site-packages/torchaudio/_torchaudio.so
```
## Expected behavior
I expected the new model builder to work.
| 07-22-2021 05:28:45 | 07-22-2021 05:28:45 | Hi @StellaAthena, thank you for your detailed issue. This error is usually due to mismatched versions of `torch` and `torchaudio`, see [here](https://discuss.huggingface.co/t/setup-questions/6796/8). We recommend either uninstalling `torchaudio`, or reinstalling torch and torchaudio to their latest versions (v1.9.0 and v0.9.0 respectively).
If none of these work, would you mind sending me the result of `conda list` so that I may try and see what's going on? Thank you!
<|||||>@LysandreJik It looks like that works! Thanks. |
transformers | 12,837 | closed | Fix CpmTokenizer for training/finetuning CPM model | When CpmTokenizer extends XLNetTokenizer, it works for inference/generation, but model in training/finetuning will failed after some steps, its error is random like:
RuntimeError: CUDA error: device-side assert triggered
terminate called after throwing an instance of 'std::runtime_error'
what(): NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:136, unhandled cuda error, NCCL version 2.7.8,
failed when in different tensor operation every time, very strange.
after extend XLNetTokenizerFast, inference/training/finetuning all working.
| 07-22-2021 03:59:37 | 07-22-2021 03:59:37 | |
transformers | 12,836 | closed | [parallelism doc] document Deepspeed-Inference and parallelformers | This PR adds references to Tensor parallel implementations of Deepspeed-Inference and parallelformers (for transformers).
As discussed at https://github.com/huggingface/transformers/issues/12772
@sgugger | 07-21-2021 19:39:35 | 07-21-2021 19:39:35 | |
transformers | 12,835 | closed | Add support for T5 models in Zero Shot Classification pipeline | # What does this PR do?
This PR adds support for T5-like models to be used through a `ZeroShotClassificationWithT5` pipeline. It works exactly like the `ZeroShotClassification` pipeline.
I decided to keep them separated in two different classes, rather than implementing one on top of the other, in order to keep the code simpler. However, it may be interesting to merge them, as they share the same input and solve the same task. What do you think?
I still have not written the documentation. I want to know the community's response to the question asked above first.
The tests performed are equal to the `ZeroShotClassificationPipelineTests`, without `_test_entailment_id` because it is no longer necessary.
Any suggestions are welcome.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-21-2021 19:20:28 | 07-21-2021 19:20:28 | Hello @osainz59! The pipelines are model agnostic, they shouldn't be model-specific with a `With{MODEL}`. I would instead improve the existing zero-shot classification pipeline to add support for T5. Would you be willing to try that out?<|||||>>
>
> Hello @osainz59! The pipelines are model agnostic, they shouldn't be model-specific with a `With{MODEL}`. I would instead improve the existing zero-shot classification pipeline to add support for T5. Would you be willing to try that out?
Sure!<|||||>Changed! What do you think @LysandreJik ?<|||||>>
>
> Hi @osainz59 ,
>
> First of all, thanks for this contribution, it's very interesting to be doing zero-shot in a different way.
>
> Couple of points/questions:
>
> * Do you have a benchmark of results or something we can compare with current pipeline version to showcase it has some merit ? (Being similar in performance would still be helpful IMO)
>
Not yet, but I am interested in how the knowledge acquired from other GLUE and SuperGLUE tasks affects the resulting zero-shot system. This was my motivation to adapt the ZeroShotClassificationPipeline to work with T5-like models. I do not have any benchmark yet, but it could be interesting to compare with [Yin et al. (2019)](https://aclanthology.org/D19-1404.pdf) (the standard NLI-based zero-shot approach).
> * Anything t5 specific needs to be removed. Pipelines don't want to deal with anything model specific (as much as reasonably possible). Your code could work basically on any generating model (maybe limiting to encoder-decoder for simplicity) so we would need to update to adapt
>
Okay. Initially, I implemented it with only the T5 model in mind, but you are right, this could/should work with any other generative model.
> * Pipelines are undergoing a pretty major change so might be more interesting to wait for it to be merged.
Okay :)
<|||||>If you want to focus on the benchmark, that would be helpful. I started the migration part and could probably integrate generative models as well when we're closer to merging. The better the benchmark, the more incentive to get it merged into the library :)
You could very well use this branch to get your benchmark running too btw with heavily customized code and we can figure out later how to make it more general.<|||||>I will focus on the benchmark then. Once I get some results I will post them here.<|||||>I have run some evaluations over 4 different datasets using task agnostic and task specific `hypothesis_template`s to compare 2 standard NLI models (`roberta-large-mnli` and `facebook/bart-large-mnli`) with the T5 NLI model (`t5-large`).
I ran the experiments without any previous development (i.e., no `hypothesis_template` or `candidate_labels` exploration) to simulate a real zero-shot situation where no development data is available. This may result in suboptimal performance for the models. The task-agnostic `hypothesis_template` used is the default template: `"The example is {}."`. The `t5-large` model is evaluated using the MNLI and RTE task prefixes.
## Task definition
### AG News ("ag_news")
Topic classification task over a collection of news articles.
*Candidate labels:* **World**, **Sports**, **Business** and **Science & Technology**.
*Task specific template:* `"Topic: {}."`
### Yelp ("yelp_review_full")
Rating estimation (from 1 to 5 stars).
*Candidate labels:* **1 star**, **2 star**, **3 star**, **4 star** and **5 star**.
*Task specific template:* `"Review rating: {}."`
### Yelp polarity ("yelp_polarity")
Sentiment classification of restaurants reviews.
*Candidate labels:* **Negative** and **Positive**.
*Task specific template:* `"Polarity: {}."`
### Yahoo ("yahoo_answers_topics")
Topic classification of questions and answers.
*Candidate labels:* **Society & Culture**, **Science & Mathematics**, **Health**, **Education & Reference**, **Computers & Internet**, **Sports**, **Business & Finance**, **Entertainment & Music**, **Family & Relationships** and **Politics & Government**.
*Task specific template:* `"Topic: {}."`
## Results
| Model | Task specific template | AG News | Yelp | Yelp polarity | Yahoo | Avg |
|------:|:----------------------:|:----------:|:-----:|:-------------:|:------:|:-----:|
| roberta-large-mnli | | 42.17 | 40.56 | 84.13 | 28.64 | 48.87 |
| roberta-large-mnli | ☑ | **73.43** | 39.15 | 91.61 | 51.59 | 63.94 |
| facebook/bart-large-mnli | | 66.52 | 33.50 | 90.56 | 48.03 | 59.65 |
| facebook/bart-large-mnli | ☑ | 72.64 | 37.52 | 86.52 | 54.15 | 62.70 |
| t5-large (mnli prefix) | | 53.78 | 31.81 | 90.09 | 47.72 | 55.85 |
| t5-large (mnli prefix) | ☑ | 51.5 | 34.30 | 92.78 | 45.13 | 55.92 |
| t5-large (rte prefix) | | 65.06 | 39.33 | 82.52 | **54.40** | 60.32 |
| t5-large (rte prefix) | ☑ | 64.42 | **45.41** | **93.75** | 54.12 | **64.42** |
In terms of performance, `t5-large` is pretty similar to the others, I think. Something that can be concluded from these results is that some `hypothesis_template`s work better with some models than others: for instance, on the Yelp dataset almost every model improves when a specific template is used, except for `roberta-large-mnli`. There are other such cases too. Another thing to emphasize is that the RTE prefix seems to be a better option for T5 than the MNLI prefix.
<|||||>@osainz59
Thank you very much for this ! This is indeed very interesting.
Good to know it works so well out of the box.
The work for new pipelines has been done here: https://github.com/Narsil/transformers/tree/iterable_pipelines
It does not include T5 zero-shot for now, but given the magnitude of the change, implementing T5 will probably have to wait (lots of testing changes first to make sure nothing is breaking, then the big refactor PR, then T5 zero-shot).
As for the design of this, I think we should have 2 subclasses of ZeroShot chosen based on the model (ForGeneration vs ForSequenceClassification) so that the code is nice and clean and we don't have any T5-specific code. For the default hypothesis_template, keeping the current one as the default for all seems good enough, because we cannot infer what type of dataset the pipelines are going to be used on.<|||||>Okay!
I have some doubts about the design. I agree with keeping them separated (as in my initial proposal, but without T5-specific code), but renaming the pipeline to ZeroShot (ForGeneration) might be confused with GPT-3-style prompting-based zero-shot (https://twitter.com/BigscienceW/status/1429787756063043588?s=20). Also, this last method might also be implemented in the future, so we should make it clear that this pipeline still uses a model fine-tuned on an NLI task, even if it uses a generative model. <|||||>The large PR for pipeline refactoring is here:
https://github.com/huggingface/transformers/pull/13308
I will implement T5 for zero-shot after it is merged (or not). It's a pretty large PR with a large impact, so it might take a while.<|||||>@osainz59 Do you mind sharing your benchmarking code? I would like to run some tests on the zero-shot pipeline too, and it would be great to use it as a starting point.
<|||||>Sure! @Narsil https://drive.google.com/file/d/1SXFEBo24tG1COv2Vm1roB4r5PbsoMzsA/view?usp=sharing
I uploaded it to Drive, maybe I should have shared it as a commit in this PR.
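For readers who cannot open the link, a rough sketch of what such an evaluation loop could look like is shown below. This is a hypothetical reconstruction using the public `zero-shot-classification` pipeline, not the shared script; the dataset, labels, and template follow the task definitions above:
```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")

dataset = load_dataset("ag_news", split="test")
labels = ["World", "Sports", "Business", "Science & Technology"]

correct = 0
for example in dataset:
    pred = classifier(
        example["text"],
        candidate_labels=labels,
        hypothesis_template="Topic: {}.",
    )
    # Assumes candidate_labels are listed in the same order as the dataset's label ids.
    if labels.index(pred["labels"][0]) == example["label"]:
        correct += 1

print(f"Accuracy: {correct / len(dataset):.4f}")
```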
<|||||>It's fine that way, though a gist might be a little more future-proof, or it could go inline (it's not that long), but that's good enough, thanks!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,834 | closed | Add ESM to hugging face | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-21-2021 18:36:40 | 07-21-2021 18:36:40 | Thanks for showing us the diff @Jason Liu! Those are the kind of changes we usually try to avoid: the core philosophy of the Transformers library is to have individual model files that only contain the code related to their model, not building blocks with lots of config parameters. For instance, mBART, Marian and BART are all pretty similar, yet they all have their own modeling files. RoBERTa is not that different from BERT, yet we also chose to have two different config/modeling files. Also, moving the LayerNorm module is breaking and will make every RoBERTa checkpoint on the model hub fail, which is something no one really wants :-)
This is something that our users have said they really like about the library so we want to continue like this! That being said, to ensure the duplicate parts of the code don't diverge, we have internal scripts. You may have seen some # Copied from xxx statements in the roberta modeling file for instance, they are all there to enforce the copies stay synced with the originals.
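For context, such a marker looks roughly like the sketch below (illustrative only; the exact source class and the Roberta->Esm mapping would be decided in the actual PR):
```python
from torch import nn

# Copied from transformers.models.roberta.modeling_roberta.RobertaPooler with Roberta->Esm
class EsmPooler(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # Pool by taking the hidden state of the first token, as in RoBERTa.
        first_token_tensor = hidden_states[:, 0]
        return self.activation(self.dense(first_token_tensor))
```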
So having a new esm model with the # Copied from xxx statements where necessary is the solution we would really like to see. Let us know how we can help you along the way! There is a template to add new models [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model). If you follow the README, it will do all the steps to fully integrate a new model in the library that passes all the tests, you will just have to change the modeling file afterward with what you've shown us in the PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,833 | closed | unable to load cache when network is unavailable | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-4.4.0-101-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The cache cannot be loaded when the ETag is unavailable (because of a network connection issue, etc.): the cache filename is something like `xxxxxxx(.<ETag>)`, so when the ETag cannot be fetched, the program looks for something like `xxxxxxx` (without the ETag suffix), which won't succeed.
## To reproduce
Write some codes simply load a pre-trained tokenizer:
```python
from transformers import BertTokenizer
BertTokenizer.from_pretrained('bert-base-uncased')
```
Run the code above while the network is available and wait for the program to finish downloading the model and saving the cache. Then cut off the network connection and run the code above again; it will raise the following exception:
```
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
```
## Expected behavior
The program loads the previously downloaded cache when the network is unavailable.
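A possible user-side workaround (a hedged sketch, not current library behavior) is to catch the connection error and retry with `local_files_only=True`:
```python
from transformers import BertTokenizer

def load_tokenizer_with_fallback(name):
    try:
        return BertTokenizer.from_pretrained(name)
    except ValueError:
        # from_pretrained raises ValueError("Connection error, ...") when offline.
        return BertTokenizer.from_pretrained(name, local_files_only=True)

tokenizer = load_tokenizer_with_fallback("bert-base-uncased")
```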
| 07-21-2021 18:13:46 | 07-21-2021 18:13:46 | I might found the reason: when network is unavailable, the program tries to find `added_tokens.json`, however this was not downloaded when network is available.<|||||>>
>
> I might found the reason: when network is unavailable, the program tries to find `added_tokens.json`, however this was not downloaded when network is available.
Because the file `added_tokens.json` does not actually exists for `bert-base-uncased` model, so the program should not try to find this file when load from cache. I think maybe the cache should also maintain a list of all files.<|||||>This is not a issue related to the ETag, sorry for the misleading description at the beginning.<|||||>Have you find any solutions for this?
<|||||>@zhouyanxin282446 No, but maybe it's possible to create some empty dummy files, I haven't checked yet.<|||||>When you don't have internet access, you should specify `local_files_only` to `from_pretrained` so that the package only tries to fetch local files:
```py
BertTokenizer.from_pretrained('bert-base-uncased', local_files_only=True)
```<|||||>> When you don't have internet access, you should specify `local_files_only` to `from_pretrained` so that the package only tries to fetch local files:
>
> ```python
> BertTokenizer.from_pretrained('bert-base-uncased', local_files_only=True)
> ```
This works, thanks.
Actually, in my case, I'm only able to access hugging face stably when I'm using a proxy, but I don't want to enable the proxy all the time, so I'm used to firstly downloading the model with proxy, and running my code without proxy afterwards, without modifying the code (by setting `local_files_only`). How about trying local files after encountering a network error when `local_files_only` is `False`? Sounds reasonable doesn't it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,832 | closed | Fix type of max_seq_length arg in run_swag.py | # What does this PR do?
The `max_seq_length` argument in the `run_swag.py` example should be an `Optional[int]`, because its default value is `None`. This fixes the error, found by `mypy`.
| 07-21-2021 16:41:58 | 07-21-2021 16:41:58 | |
transformers | 12,831 | closed | Raise warning in HP search when hp is not in args | # What does this PR do?
As seen on the [forums](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785), an HP search using the `Trainer` will error if someone tries to include model parameters like dropout in the search. This PR addresses that by changing an error to a warning in that case. | 07-21-2021 16:41:38 | 07-21-2021 16:41:38 | |
transformers | 12,830 | closed | [Deepspeed] warmup_ratio docs | Docs were not updated in https://github.com/huggingface/transformers/pull/12818, this PR completes it.
@sgugger
| 07-21-2021 16:33:43 | 07-21-2021 16:33:43 | |
transformers | 12,829 | closed | Flaky tests | Hi,
in this PR #12794 I somehow have flaky tests.
See the two commits from here
https://github.com/huggingface/transformers/pull/12794/commits/0ea13d6cf8a6a1ca5772f6a129bea9a7c06f5571
to here
https://github.com/huggingface/transformers/pull/12794/commits/e93eb12c01f8de9df6a1130041fe315372f9378b
The 2nd commit is empty and fails at another location. | 07-21-2021 16:15:57 | 07-21-2021 16:15:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,828 | closed | Incorrect Tokenization behavior when working with Hindi using RobertaTokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: https://github.com/huggingface/transformers@534f6eb9f1f8910a4912ccccd79f1f974c625168
- Platform: Linux
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0 (Not using GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
### Problem
[Our model](https://huggingface.co/flax-community/roberta-hindi) tokenizes text into much longer sequences, behaving more like a character-level LM than a sub-word-level LM.
### Diagnosis
The tokenizer is at fault: when it tokenizes a word with a matra, it always breaks the matra, i.e. the diacritic, into a separate token, as in `'संसद' => [' स', 'ं', 'सद']`, whereas a more plausible tokenization would be `'संसद' => ['सं', 'सद']`.
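For illustration, the behavior can be inspected with a quick check like the following (a sketch; the reported pieces are taken from the analysis above, not re-verified here):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-hindi")
# The issue reports pieces like [' स', 'ं', 'सद'] instead of ['सं', 'सद'].
print(tokenizer.tokenize("संसद"))
```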
For more details one can look at the analysis csv [here](https://gist.github.com/amankhandelia/f8192a6782714fab6537dc29aef8fbfc) where I have done the analysis for five different models. Given the pattern observed in all RoBERTa models I believe that the [BBPE tokenizer](https://arxiv.org/pdf/1909.03341.pdf) used in case of [RoBERTa-hindi-guj-san](https://huggingface.co/surajp/RoBERTa-hindi-guj-san), [HindiBERTa](https://huggingface.co/mrm8488/HindiBERTa) and [our model](https://huggingface.co/flax-community/roberta-hindi) is not upto the task, or need some modifications to work on Hindi.
The task I was working on was a RoBERTa Hindi LM as part of the recent Flax community sprint. Although the sprint has ended, I was analyzing the tokenizers in order to learn how these models work when I stumbled upon this observation, so this bug report is more of a question than an explicit bug.
Is this the expected behavior of BBPE? If so, what is a possible remedy? Can I use a different tokenizer with RoBERTa instead of the RobertaTokenizer?
cc: @LysandreJik, @patil-suraj
Keeping Suraj in cc, as he might be able to better communicate the problem I am facing, with the HF team. | 07-21-2021 14:20:29 | 07-21-2021 14:20:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,839 | closed | Error when doing `push_to_hub` two times in a row | When doing `tokenizer.push_to_hub()` with the same tokenizer that was already uploaded (can happen in a notebook in particular), we have a git error:
```bash
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message)
424 encoding="utf-8",
--> 425 cwd=self.local_dir,
426 )
~/miniconda2/envs/datasets/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['git', 'commit', '-m', 'add tokenizer']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-55-99316ec239b8> in <module>
----> 1 new_tokenizer.push_to_hub("thomwolf/codeparrot-small-vocabulary")
2
3 reloaded_tokenizer_small = AutoTokenizer.from_pretrained("thomwolf/codeparrot-small-vocabulary")
~/miniconda2/envs/datasets/lib/python3.7/site-packages/transformers/file_utils.py in push_to_hub(self, repo_path_or_name, repo_url, use_temp_dir, commit_message, organization, private, use_auth_token)
2029 self.save_pretrained(repo_path_or_name)
2030 # Commit and push!
-> 2031 url = self._push_to_hub(repo, commit_message=commit_message)
2032
2033 # Clean up! Clean up! Everybody everywhere!
~/miniconda2/envs/datasets/lib/python3.7/site-packages/transformers/file_utils.py in _push_to_hub(cls, repo, commit_message)
2109 commit_message = "add model"
2110
-> 2111 return repo.push_to_hub(commit_message=commit_message)
~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)
459 """
460 self.git_add()
--> 461 self.git_commit(commit_message)
462 return self.git_push()
463
~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message)
429 raise EnvironmentError(exc.stderr)
430 else:
--> 431 raise EnvironmentError(exc.stdout)
432
433 def git_push(self) -> str:
OSError: Sur la branche main
Votre branche est à jour avec 'origin/main'.
rien à valider, la copie de travail est propre
``` | 07-21-2021 13:01:29 | 07-21-2021 13:01:29 | Maybe this should fall gracefully?<|||||>Yes I agree
(it should only catch the error when calling `.push_to_hub()`, I think calling `.git_commit()` on an empty change set should still pop an error)<|||||>Adding the `--allow_empty` option to the `git_commit()` method as a boolean flag would be clean here. This way there's no need to try/except in downstream libraries, only to specify that having empty commits is okay then.<|||||>Proposal in https://github.com/huggingface/huggingface_hub/pull/220, that will need to be ported to `transformers`. Will PR this to `transformers` if this change looks good to you.<|||||>> When doing `tokenizer.push_to_hub()` with the same tokenizer that was already uploaded (can happen in a notebook in particular), we have a git error:
>
> ```shell
> ---------------------------------------------------------------------------
> CalledProcessError Traceback (most recent call last)
> ~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message)
> 424 encoding="utf-8",
> --> 425 cwd=self.local_dir,
> 426 )
>
> ~/miniconda2/envs/datasets/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
> 511 raise CalledProcessError(retcode, process.args,
> --> 512 output=stdout, stderr=stderr)
> 513 return CompletedProcess(process.args, retcode, stdout, stderr)
>
> CalledProcessError: Command '['git', 'commit', '-m', 'add tokenizer']' returned non-zero exit status 1.
>
> During handling of the above exception, another exception occurred:
>
> OSError Traceback (most recent call last)
> <ipython-input-55-99316ec239b8> in <module>
> ----> 1 new_tokenizer.push_to_hub("thomwolf/codeparrot-small-vocabulary")
> 2
> 3 reloaded_tokenizer_small = AutoTokenizer.from_pretrained("thomwolf/codeparrot-small-vocabulary")
>
> ~/miniconda2/envs/datasets/lib/python3.7/site-packages/transformers/file_utils.py in push_to_hub(self, repo_path_or_name, repo_url, use_temp_dir, commit_message, organization, private, use_auth_token)
> 2029 self.save_pretrained(repo_path_or_name)
> 2030 # Commit and push!
> -> 2031 url = self._push_to_hub(repo, commit_message=commit_message)
> 2032
> 2033 # Clean up! Clean up! Everybody everywhere!
>
> ~/miniconda2/envs/datasets/lib/python3.7/site-packages/transformers/file_utils.py in _push_to_hub(cls, repo, commit_message)
> 2109 commit_message = "add model"
> 2110
> -> 2111 return repo.push_to_hub(commit_message=commit_message)
>
> ~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)
> 459 """
> 460 self.git_add()
> --> 461 self.git_commit(commit_message)
> 462 return self.git_push()
> 463
>
> ~/miniconda2/envs/datasets/lib/python3.7/site-packages/huggingface_hub/repository.py in git_commit(self, commit_message)
> 429 raise EnvironmentError(exc.stderr)
> 430 else:
> --> 431 raise EnvironmentError(exc.stdout)
> 432
> 433 def git_push(self) -> str:
>
> OSError: Sur la branche main
> Votre branche est à jour avec 'origin/main'.
>
> rien à valider, la copie de travail est propre
> ```
Not sure if this is the same, but I have a similar problem when using `model.push_to_hub`:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('/path/to/my/fine-tuned/model/on/my/local/machine')
model.push_to_hub("my-username/my-model-name")
```
And this is the error I get:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-84aee0bf13c0> in <module>()
4 model = AutoModel.from_pretrained(model_path)
5
----> 6 model.push_to_hub("my-username/my-model-name")
2 frames
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in __init__(self, local_dir, clone_from, use_auth_token, git_user, git_email)
102 )
103 raise ValueError(
--> 104 "If not specifying `clone_from`, you need to pass Repository a valid git clone."
105 )
106
ValueError: If not specifying `clone_from`, you need to pass Repository a valid git clone.
```
P.S. I'm running this inside Colab and I have already logged in.<|||||>Please open a new issue when you encounter a bug that is different. Here the problem is that you are not passing a valid name, it should just be `model.push_to_hub("my-model-name")` (happy to discuss more on another issue if you still have problems).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,827 | closed | VisualBert ValueError: visual_embeds can not be of class 'NoneType' when running on text only | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-4.15.0-143-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @gchhablani
## Information
Model I am using (Bert, XLNet ...): VisualBERT
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoModel, AutoTokenizer
model_name_or_path = 'uclanlp/visualbert-vqa-coco-pre'
tokenizer_name_or_path = 'bert-base-uncased'
model = AutoModel.from_pretrained(model_name_or_path,
cache_dir='cache')
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path,
cache_dir='cache')
inputs = tokenizer('This is a test.', return_tensors='pt')
encoder_out = model(**inputs)
```
Gives error:
```python
Traceback (most recent call last):
File "/cw/liir/NoCsBack/testliir/rubenc/miniconda3/envs/tsenv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-0aa46003b81a>", line 12, in <module>
encoder_out = model(**inputs)
File "/cw/liir/NoCsBack/testliir/rubenc/miniconda3/envs/tsenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/cw/liir/NoCsBack/testliir/rubenc/miniconda3/envs/tsenv/lib/python3.8/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 777, in forward
raise ValueError(
ValueError: `visual_embeds` can not be of type <class 'NoneType'> when using a VisualBert Model.
```
## Expected behavior
I would like to encode only text, not image_features. The [docs](https://huggingface.co/transformers/model_doc/visual_bert.html#transformers.VisualBertModel.forward) for VisualBert say that the `visual_embeds` parameter is optional. The forward method of `VisualBertEmbeddings` seems to work when
`visual_embeds` is `None`, so I think the only thing preventing text-only encoding is the check in the forward method of `VisualBertModel`? Or am I missing something? 🙂
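In the meantime, a possible workaround is sketched below (hedged: the expected `visual_embeds` shape of `(batch_size, num_visual_features, config.visual_embedding_dim)` is an assumption on my part):
```python
import torch

# Reuses `model` and `inputs` from the snippet above; pass dummy (zero) visual
# embeddings so the NoneType check is not triggered.
dummy_visual = torch.zeros(1, 1, model.config.visual_embedding_dim)
encoder_out = model(**inputs, visual_embeds=dummy_visual)
```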
| 07-21-2021 10:55:19 | 07-21-2021 10:55:19 | Hi @rubencart
I think this is an issue with the documentation 😅 I can fix that.
Can you share your use case where you only want to pass textual inputs to VisualBERT?
I placed this check only to prevent usage of model without any visual embeddings.
CC @patil-suraj <|||||>I want to encode text, to later use it as input for a visual downstream task. Instead of using an encoder that has been pretrained on text only, it made sense to me to try to encode it with an encoder whose pretraining was more visually informed.
Can I not use VisualBert for this? Technically, if you just remove the check, wouldn't this work? :-)<|||||>@rubencart I think you should be able to.
Wdyt @patil-suraj, should this be allowed?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Yes, this should be allowed, feel free to open a PR if you want @gchhablani :) |