repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 10,311 | closed | Matrix multiplication error for ReformerModelWithLMHead when tie_word_embeddings is True | ## Environment info
- `transformers` version: 4.3.2
- Platform: Windows 10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 cpu-only
- Tensorflow version (GPU?): not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following script:
```python
import torch
from transformers import ReformerConfig, ReformerModelWithLMHead
config = ReformerConfig(is_decoder=True, tie_word_embeddings=True)
model = ReformerModelWithLMHead(config)
inp = torch.randint(0, 100, (1, 4096))
out = model(inp)
```
2. The error:
```
Traceback (most recent call last):
File "./test.py", line 8, in <module>
out = model(inp)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 2248, in forward
logits = self.lm_head(sequence_output)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 1761, in forward
return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_utils.py", line 1787, in apply_chunking_to_forward
return forward_fn(*input_tensors)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 1764, in forward_chunk
hidden_states = self.decoder(hidden_states)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x512 and 256x320)
```
## Expected behavior
There should be no errors.
| 02-21-2021 08:05:00 | 02-21-2021 08:05:00 | Hey @xe442,
Actually, Reformer cannot make use of `tie_word_embeddings=True` because the output word embedding layer is twice as big as the input layer (because of Reformer's architecture; see section 3 of this blog post: https://huggingface.co/blog/reformer).<|||||>But, in this case we should give a better error message! Feel free to open a PR to add such an error message :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
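For reference, a minimal sketch of the configuration that does work, following the explanation above (Reformer's LM head expects an input twice the embedding size, so the embeddings are simply left untied). This is illustrative only and is not the improved error message suggested above:
```python
import torch
from transformers import ReformerConfig, ReformerModelWithLMHead

# Same repro as above, but with untied embeddings (the working setup per the explanation above)
config = ReformerConfig(is_decoder=True, tie_word_embeddings=False)
model = ReformerModelWithLMHead(config)

inp = torch.randint(0, 100, (1, 4096))
out = model(inp)  # no shape mismatch between the 2*hidden_size hidden states and the vocab projection
```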
transformers | 10,310 | closed | [Trainer] implement gradient_accumulation_steps support in DeepSpeed integration | This PR:
Fixes a bug:
- `lr_scheduler.step()` shouldn't be called under DeepSpeed - it's already called internally by its `optimizer.step()` - so the scheduler was moving through its rate changes at twice the speed :(
Adds support for `gradient_accumulation_steps`:
* makes `gradient_accumulation_steps` work with deepspeed - for nuances see: https://github.com/microsoft/DeepSpeed/issues/776 - it required a lot of `if` / `if nots` - not helping the readability of the trainer - and took a lot of trial and error to figure out - but what to do
* adds a corresponding doc
* adds a first serious quality test for DeepSpeed that measures that `gradient_accumulation_steps` works - modelled after `test_trainer.py`'s own `test_gradient_accumulation` and extends it to compare loss as well, and also tests that the optimizer actually kicked in - with fp16 deepspeed it normally takes a few dozen steps before it kicks in with dynamic scaling enabled.
* extends `testing_utils` with a `mockenv_context`, which is similar to `@mockenv` but can be used inside the test as a context manager when multiple env vars need to be tested - `@mockenv` is only useful as a decorator (a minimal sketch of such a helper is shown right after this list). In the end I think I don't really need it, since using the same env worked for all tests, but it might come in handy if ports don't get released fast enough and then the test will use different ports - I'm concerned about CIs. And it's easier to re-use the class-wide env, rather than hardcoding or creating a global variable - so it's just cleaner too.
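A minimal sketch of what such a context manager can look like (the helper name and the env vars below are illustrative, not necessarily what this PR adds):
```python
import os
from contextlib import contextmanager
from unittest import mock

@contextmanager
def mockenv_context(**env_vars):
    # patch.dict restores the original environment when the block exits
    with mock.patch.dict(os.environ, env_vars):
        yield

# Example usage inside a test body (the distributed-style variables are just an illustration):
with mockenv_context(MASTER_ADDR="localhost", MASTER_PORT="10999", RANK="0", WORLD_SIZE="1"):
    pass  # run the deepspeed-enabled Trainer here
```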
Suggestion/Question:
* `get_regression_trainer` is very awesome! But importing it from a test file is not great - probably should move it and its components into a utilities file - `testing_utils.py` or create a new one `testing_training_utils.py` and in the future add other trainer-testing specific utils in there? Though this should be dealt with in a separate PR.
@sgugger | 02-21-2021 07:32:38 | 02-21-2021 07:32:38 | > Cool that they added it! This all looks pretty good to me!
Well, it has been there all this time, this PR just bolts it on correctly.
> Absolutely no problems with moving the regression trainer somewhere accessible, it was just in `test_trainer` because only used there.
Ah, that makes sense. I will rework it then next time I touch on this code.
Thank you for the feedback, @sgugger
|
transformers | 10,309 | closed | [Example] Using label_smoothing_factor raises an error when evaluating the model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Ubuntu 20.04
- Python version: 3.8
- PyTorch version (GPU): 1.6.0
### Who can help
Library:
- pipelines: @LysandreJik
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using: BERT
The problem arises when using:
* [x] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I run the old script run_ner.py with default label_smoothing_factor = 0.0. It works well.
2. I add label_smoothing_factor = 0.1 to JSON config file.
```json
{
  "data_dir": "/home/dzungle/NER/data/",
  "train_file": "/home/dzungle/NER/data/train.csv",
  "validation_file": "/home/dzungle/data/dev.csv",
  "model_name_or_path": "emilyalsentzer/Bio_ClinicalBERT",
  "output_dir": "/home/dzungle/NER/models/",
  "label_smoothing_factor": 0.1,
  "max_seq_length": 256,
  "num_train_epochs": 1,
  "per_device_train_batch_size": 8,
  "gradient_accumulation_steps": 4,
  "per_device_eval_batch_size": 1,
  "save_steps": 1000,
  "eval_steps": 50,
  "save_total_limit": 1,
  "seed": 1,
  "do_train": true,
  "do_eval": true,
  "do_predict": true,
  "overwrite_output_dir": true,
  "evaluate_during_training": true
}
```
3. I run the script and it works well for training, but I get an error when evaluating.
**Error:**
```
Traceback (most recent call last):
File "run_ner.py", line 333, in <module>
main()
File "run_ner.py", line 282, in main
result = trainer.evaluate()
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1604, in evaluate
output = self.prediction_loop(
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1742, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1874, in prediction_step
labels = nested_detach(tuple(inputs.get(name) for name in self.label_names))
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 111, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 111, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_detach
return tensors.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
As far as I know, label_smoothing_factor is a new feature of recent transformers versions. I would expect the script with label_smoothing_factor=0.1 to work as well as it does with the default value 0.0.
| 02-20-2021 23:51:49 | 02-20-2021 23:51:49 | Can reproduce locally, here is a short reproducer from the root of the repo:
```
python examples/token-classification/run_ner.py \
--model_name_or_path bert-base-uncased \
--train_file tests/fixtures/tests_samples/conll/sample.json \
--validation_file tests/fixtures/tests_samples/conll/sample.json \
--output_dir /tmp/test-ner \
--overwrite_output_dir \
--do_train \
--do_eval \
--label_smoothing_factor 0.1
```
Will look into it tomorrow. |
transformers | 10,308 | closed | [ci] don't fail when there are no zombies | fixes:
```
Run pkill -f tests; pkill -f examples
4
Error: Process completed with exit code 1.
```
Didn't think that it'd `exit(1)` when there is nothing to kill
@sgugger | 02-20-2021 21:21:51 | 02-20-2021 21:21:51 | |
transformers | 10,307 | closed | pretraining objective of T5 model | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Hi, it would be great to have pretraining of the T5 model implemented. Currently, the run_mlm.py script does not support it, since T5 is pretrained with a span-corruption (sentinel-token denoising) objective rather than standard masked language modeling.
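For context, here is a toy sketch of that span-corruption objective on whitespace-separated words. This is illustrative only: the real objective operates on subword ids and the tokenizer's `<extra_id_N>` sentinel tokens, with roughly 15% of tokens corrupted in spans of mean length 3 (the T5 paper's defaults); the function below is a simplification, not the library's implementation.
```python
import random

def span_corrupt(words, corruption_rate=0.15, span_len=3):
    """Replace spans of words with sentinel tokens; targets contain the dropped spans."""
    n_to_mask = max(1, int(len(words) * corruption_rate))
    inputs, targets = [], []
    i, sentinel = 0, 0
    while i < len(words):
        if n_to_mask > 0 and random.random() < corruption_rate:
            span = min(span_len, len(words) - i, n_to_mask)
            inputs.append(f"<extra_id_{sentinel}>")       # sentinel stands in for the dropped span
            targets.append(f"<extra_id_{sentinel}>")
            targets.extend(words[i : i + span])           # the dropped words become the target
            sentinel += 1
            n_to_mask -= span
            i += span
        else:
            inputs.append(words[i])
            i += 1
    targets.append(f"<extra_id_{sentinel}>")              # closing sentinel ends the target sequence
    return " ".join(inputs), " ".join(targets)

random.seed(0)
print(span_corrupt("Thank you for inviting me to your party last week".split()))
```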
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
T5 is the SOTA model and having pretraining support would be very helpful to the community.
| 02-20-2021 20:42:01 | 02-20-2021 20:42:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,306 | closed | Issue Loading bert-based-german-cased | Message on the website is:
Can't load tokenizer using from_pretrained, please update its configuration: 400 Client Error: Bad Request for url: https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt
| 02-20-2021 18:59:47 | 02-20-2021 18:59:47 | Which code caused this error?<|||||>This issue comes from the hosted API https://huggingface.co/bert-base-german-cased?text=Ich+bin+%5BMASK%5D<|||||>@tholor The url above currently loads for me, but to be future-proof should we cp the files currently loaded from that S3 bucket to the corresponding model repo (here, https://huggingface.co/bert-base-german-cased)?
cc'ing @LysandreJik <|||||>@julien-c Sure, let's copy them from our S3 to the model repo. <|||||>copied to the model repo in
https://huggingface.co/bert-base-german-cased/commit/876457621368b8c955478cfe1cdee634f47ea34c
Changed hardcoded url in https://github.com/huggingface/transformers/pull/10353<|||||>@Narsil could you please check if the inference widget works for this model when you get a chance to upgrade the transformers dependency in the API? Thanks!<|||||>⚠️ Can't load tokenizer using from_pretrained, please update its configuration: 400 Client Error: Bad Request for url: https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt
It's still not working!<|||||>Hi @George-Ogden, the change was merged two days ago and is therefore available on the `master` branch, but not yet in a release.
Do you still get the same error when installing from source?<|||||>This is on the inference API on the website I haven't tried it from source. |
transformers | 10,305 | closed | Documentation of the decode method is missing | The tokenizer documentation [page](https://huggingface.co/transformers/main_classes/tokenizer.html) is generated from the following files:
- tokenization_utils_base.py
- tokenization_utils_fast.py
- tokenization_utils.py
At least the documentation of the decode method is missing, even though it is properly documented in the [source file](https://github.com/huggingface/transformers/blob/9a7e63729f3ff6ddf065fd0d443421e46b1a2ffb/src/transformers/tokenization_utils_base.py#L3099).
@sgugger
Could you please have a look?
| 02-20-2021 17:48:43 | 02-20-2021 17:48:43 | This has been fixed a few days ago, I believe. Look at the [master doc tokenizer page](https://huggingface.co/transformers/master/main_classes/tokenizer.html) (the stable documentation is only updated at each release).<|||||>Yes, you are right. |
transformers | 10,304 | closed | fixes #10303 | # What does this PR do?
Fixes #10303
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Documentation: @sgugger
| 02-20-2021 17:34:56 | 02-20-2021 17:34:56 | Thanks for fixing! |
transformers | 10,303 | closed | convert_tokens_to_string documentation bug | The [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.convert_tokens_to_string) states that convert_tokens_to_string would convert _a sequence of token ids in a single string._
That is actually not correct as it converts a sequence of tokens. The method that converts a sequence of token ids is the decode method.
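A quick illustration of the distinction (the model name here is just an example):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

ids = tok.encode("Hello world", add_special_tokens=False)  # token ids (ints)
tokens = tok.convert_ids_to_tokens(ids)                    # string tokens

print(tok.convert_tokens_to_string(tokens))  # takes string tokens -> "hello world"
print(tok.decode(ids))                       # takes token ids     -> "hello world"
```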
Documentation: @sgugger
| 02-20-2021 17:20:11 | 02-20-2021 17:20:11 | |
transformers | 10,302 | closed | TensorFlow not found but I can import it | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0.dev0
- Platform: macOS-11.2.1-arm64-arm-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
@jplu, @patrickvonplaten, @LysandreJik
## Description
When I import transformers I get the message below.
## To reproduce
Steps to reproduce the behavior:
1. install tf for mac m1(https://github.com/apple/tensorflow_macos)
2. install transformers
3. import transformers
Message when I import transformers:
```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
```
## Expected behavior
I'd like to make transformers find tensorflow.
| 02-20-2021 16:06:53 | 02-20-2021 16:06:53 | What is the output of:
```
import tensorflow
print(tensorflow.__version__)
```
?<|||||>> What is the output of:
>
> ```
> import tensorflow
> print(tensorflow.__version__)
> ```
>
> ?
'2.4.0-rc0'<|||||>Hello!
We currently don't support implementations other than Google's official PyPI releases. The reason is that we don't run tests on other versions, so we cannot guarantee it will work on those "extra" versions.
To make `transformers` work on Mac I suggest using the official version of TensorFlow as described in their documentation: https://www.tensorflow.org/install/pip<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,301 | closed | [WIP] Add Megatron-11B | # What does this PR do?
Fixes #9560
This PR introduces the Megatron model as described in https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md
This one will probably be fun to test with DeepSpeed, as @stas00 mentioned it's referenced a lot in its docs :smile:
It's important to mention that there are actually two independent implementations of [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf):
* The one described in the original paper belongs to NVIDIA (https://github.com/NVIDIA/Megatron-LM), but they released only a 345M checkpoint. It's also based on a rewrite of GPT2 and is not compatible with the current huggingface implementation due to minor changes, like LayerNorm reordering (see https://github.com/NVIDIA/Megatron-LM/issues/37).
* [Fairseq](https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md), on the other hand, uses its own GPT2 version based on their encoder-decoder framework (with the encoder removed) and it does release the colossal 11B pretrained model.
After some tinkering I realized that fairseq's checkpoint is already pretty compatible with the existing BART port. So, based on that and the fact that NVIDIA doesn't plan on releasing the 3B and 8B checkpoints, **I chose to port only the fairseq version**.
**NOTE:** The original fairseq implementation requires an 8-GPU server to even load the model weights, so I just load the checkpoints manually one by one and merge the model-parallelized tensors into single-model ones.
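For reference, that merging step can be thought of roughly as in the sketch below. This is a simplification under stated assumptions: the helper names and the `dim_for_key` mapping are made up for illustration, and which dimension each parameter is concatenated along depends on whether the layer was column-parallel (split along the output/vocab dim) or row-parallel (split along the input dim); replicated parameters such as layer norms are simply taken from the first shard.
```python
import torch

def merge_model_parallel_shards(shard_paths, dim_for_key):
    """Sketch: load per-rank fairseq checkpoints and concatenate the sharded tensors."""
    shards = [torch.load(p, map_location="cpu")["model"] for p in shard_paths]
    merged = {}
    for key, tensor in shards[0].items():
        dim = dim_for_key(key)  # e.g. 0 for column-parallel / embeddings, 1 for row-parallel, None if replicated
        if dim is None:
            merged[key] = tensor  # identical on every rank
        else:
            merged[key] = torch.cat([sd[key] for sd in shards], dim=dim)
    return merged
```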
### How to reproduce the conversion
1. First, find a server with _at least 85GB of RAM_, this model is huge!
2. Next, download and untar the checkpoint:
```
# WARNING: this file is 19GB
wget https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz
tar -xzvf megatron_11b.tar.gz
wget -P ./megatron_11b/ 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
wget -P ./megatron_11b/ 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
```
3. Run the conversion script
```
python convert_megatron_original_pytorch_checkpoint_to_pytorch.py --fairseq_path /path/to/megatron_11b --pytorch_dump_path /path/to/megatron_hf_dump
```
4. The conversion script will load the model-parallel shards of the checkpoint, group the sharded parameters and concatenate the weights, so that the [fairseq.ModelParallelTransformerLanguageModel](https://github.com/pytorch/fairseq/blob/3b27ed7996b0315f471c795cf9b7dfcc18467cbe/fairseq/model_parallel/models/transformer_lm.py) `state_dict` can be easily loaded into a CPU-compatible [faiseq.TransformerLanguageModel](https://github.com/pytorch/fairseq/blob/3b27ed7996b0315f471c795cf9b7dfcc18467cbe/fairseq/models/transformer_lm.py). The de-parallelisation is based on ParlAI's [conversion script](https://github.com/facebookresearch/ParlAI/blob/abfb771ac4ed2966d6f3ea22c7a38e4ebc9cc0f0/parlai/agents/bart/convert_fairseq_to_parlai.py#L258-L307).
5. Then the script will initialize the huggingface Megatron model and load the converted `state_dict` into it.
### Here's how Megatron differs from the existing BART/MBART implementations:
1. The most controversial difference, IMO, is the missing encoder, since it's a decoder-only model. For now, I decided to remove the encoder parts inherited from MBART, but left the encoder-dependent parts in the decoder (e.g. `encoder_hidden_states`, `encoder_attention_mask`) and the cross-attention to simplify the review process on your end.
2. Megatron uses `SinusoidalPositionalEmbedding` instead of learned ones, so I just yanked those from FSMT :smile:
3. Megatron does not have a `layernorm_embedding`
4. Minor detail: the `self_attn_layer_norm` is applied before self-attention (like in MBART) instead of after (like in BART).
### Important questions regarding the API:
1. What should be done about the missing encoder? I think the `decoder` variable can be left as is, since it's compatible with the fairseq checkpoint keys, but the `encoder_*` references in the code bother me a lot. We need to somehow strike a balance between `Copied from` and removing the unused parts.
2. I think the position of `self_attn_layer_norm` should be a parameter in the config, similar to `decoder_normalize_before=True` in fairseq. This will close the not-so-obvious difference between BART and MBART.
3. The existence of `layernorm_embedding` can also be parametrized, similar to `layernorm_embedding=False` in fairseq.
### Quick LM test
You can test out the model's capabilities like so (again, you'll probably need _at least 85GB RAM_, there's some weird memory duplication happening somewhere, this should not need more than 50):
```
from transformers import MegatronForCausalLM, MegatronTokenizer, TextGenerationPipeline
tokenizer = MegatronTokenizer.from_pretrained("megatron-11b")
model = MegatronForCausalLM.from_pretrained("anton-l/megatron-11b")
def generate(prompt, max_length=40, num_beams=5, num_return=3):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids=input_ids, num_beams=num_beams, num_return_sequences=num_return, max_length=max_length
)
decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return decoded
print(generate("Before boarding your rocket to Mars, remember to pack these items: "))
```
```
['Before boarding your rocket to Mars, remember to pack these items: 1. A parachute.',
'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $100 bill2. A copy of your passport3. A copy of your passport444',
'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $1 million dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars']
```
To be honest, I'm not too impressed with its text-generation power. :smile: I guess it's either that the model was too large to train it for enough steps, or I missed something during the conversion. The original implementation does not have a text-generation script (or any non-wikitext results, for that matter), so I'm kinda in the dark here.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @patil-suraj
| 02-20-2021 15:24:25 | 02-20-2021 15:24:25 | That's very neat, @anton-l! thank you for the port
You demonstrated a very good creativity by finding a way to recompose the model shards!
> This one will probably be fun to test with DeepSpeed, as @stas00 mentioned it's referenced a lot in its docs
As you correctly noticed studying Megatron-LM's horizontal model parallel sharding is on my TODO list.
I suppose since `transformers` currently doesn't provide this feature you didn't port that part of the model, correct? i.e. you unsharded it. I had a brief read through the PR and didn't see anything of a sort - unless I somehow missed it? And without this feature, this is like any other `transformers` model - It's its horizontal model parallel feature that is needed to complete 3D parallelism with Deepspeed. Your PR is an excellent start.
I think the part that deals with sharding is here in the original:
https://github.com/jeffra/DSE/blob/79888e162425e8d64043a9597ee14751bd4b53d1/megatron/data/realm_index.py
Though this is the NVIDIA version.
So if the horizontal MP is eventually re-ported (I hope it will be so) the model will need to know when to load the flattened version and when the sharded one. But `transformers` doesn't even have a framework for loading multiple-part models at the moment, so I guess we will cross that bridge when we get to it.
I'm just thinking aloud here, considering different options, not making any requests ;)
-------
The fp32 weights are ~41GB https://huggingface.co/anton-l/megatron-11b/tree/main - i.e. it's quite similar to t5-11b, so it should be possible to load it on a 40GB gpu w/ DeepSpeed ZeRO-Offload if there are some 256GB of RAM available.
-----
Also, FYI, Deepspeed are making a new port of Megatron-LM to work with DeepSpeed. https://github.com/jeffra/DSE/tree/master/megatron-lm
<|||||>@stas00 you're correct, I didn't port the model-parallel implementation. Fairseq uses an older Megatron-LM version as a submodule [here](https://github.com/pytorch/fairseq/tree/master/fairseq/model_parallel) for its MP map-reduce functions. This makes it quite cumbersome to reproduce, since it requires compiling an older `apex` library among other dependencies with broken versioning. It would also require a patched version of fairseq's state loader, since right now it requires exactly 8 GPUs available to load the sharded checkpoint correctly.
However, on the surface it seems like adding support for model parallelism comes down to porting `VocabParallelEmbedding`, `ColumnParallelLinear` and `RowParallelLinear` layers as implemented [here](https://github.com/ngoyal2707/Megatron-LM/blob/adb23324c222aad0aad89308e70302d996a5eaeb/mpu/layers.py). This seems doable, but I don't have multiple GPUs to test it out :(
I guess a proper MP implementation should also take care of splitting the checkpointed layers regardless of how many GPUs are available (i.e. 2, 4 or 8). That would remove the requirement to have a full DGX setup if the user is willing to use gradient checkpointing/accumulation instead.
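For context, here is a stripped-down sketch of what a column-parallel linear layer does. It is an illustration rather than a port of the Megatron/fairseq layers: it is inference-oriented (the real layers also define custom autograd functions for the communication ops), and it assumes an already-initialized `torch.distributed` process group.
```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Each rank holds a slice of the output features; an all-gather rebuilds the full output."""

    def __init__(self, in_features, out_features, world_size, gather_output=True):
        super().__init__()
        assert out_features % world_size == 0
        self.world_size = world_size
        self.gather_output = gather_output
        self.linear = nn.Linear(in_features, out_features // world_size)  # this rank's slice

    def forward(self, x):
        local_out = self.linear(x)
        if not self.gather_output:
            return local_out  # e.g. when the next layer is row-parallel and consumes the slice directly
        chunks = [torch.empty_like(local_out) for _ in range(self.world_size)]
        dist.all_gather(chunks, local_out)  # requires an initialized process group
        return torch.cat(chunks, dim=-1)
```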
<|||||>@anton-l, in order not to make your and the reviewers' lives unnecessarily difficult, let's take the discussion of the horizontal MP to a dedicated issue, since it could take some time to figure out and none of it is required for you to complete this PR, and I trust @patil-suraj and @patrickvonplaten will support you in completing this awesome effort.
So if you could re-post your last comment here: https://github.com/huggingface/transformers/issues/10321 and I will follow up there. Thank you!<|||||>> ```
> ['Before boarding your rocket to Mars, remember to pack these items: 1. A parachute.',
> 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $100 bill2. A copy of your passport3. A copy of your passport444',
> 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $1 million dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars']
> ```
>
> To be honest, I'm not too impressed with its text-generation power. 😄 I guess it's either that the model was too large to train it for enough steps, or I missed something during the conversion. The original implementation does not have a text-generation script (or any non-wikitext results, for that matter), so I'm kinda in the dark here.
This is amazing work, big kudos! The seemingly low text-generation quality surprises me though, because of the crazy good output you get from https://inferkit.com/ which is also just Megatron11b, according to their docs (https://inferkit.com/docs/generation). Their output seems to be much better than GPT2.<|||||>@anton-l, would you like to complete this PR? For it to be reviewed it needs to be a normal PR and not a draft.
I marked it as WIP so that the stale bot won't try to close it.
Thank you.<|||||>pinging @anton-l - let's revisit this? Please let us know what you need.
I know meanwhile someone else did the porting of the original GPT2-345M checkpoint https://huggingface.co/nvidia/megatron-gpt2-345m and I see from the docs they use straight GPT2 transformers model to operate it.
https://huggingface.co/nvidia/megatron-gpt2-345m#text-generation
All they have is a conversion script:
https://github.com/huggingface/transformers/tree/master/src/transformers/models/megatron_gpt2
Can the same be done with the fairseq version - i.e. reuse some of the existing models for that? or is it unique enough to warrant its own?
Please bear with me, I'm just starting to figure out Megatron-LM and its variants (there is also a Deepspeed variant), so I'm just slightly above clueless at the moment - I should have a better understanding in a few days once I've had a chance to work with it.<|||||>@stas00 sorry for the late reply!
It's great that someone figured out a way to post the original megatron models. When I was looking into that, it wasn't exactly straightforward due to the differences between the attention block implementations in HF GPT2 and Megatron, which was probably patched/parameterized in the meantime.
I chose to implement a separate model for the fairseq megatron because the model uses the same code as the existing MBART & FSMT, but there's only a decoder, without the encoder. However, we could take a different route and convert the fairseq weights to fit GPT2, since it's clearly possible now. I'll try that tomorrow, and if it works out, we can discard this PR and just add a simple conversion script :+1: <|||||>This PR seems very promising and I know the model would be really useful to many.
As it was earlier pointed out, the converted model doesn't seem to have the same quality of generation as the model elsewhere. Perhaps the conversion script could have caused it somehow? Just curious if there was any success with converting the fairseq weights to fit GPT2. |
transformers | 10,300 | closed | unexpected keyword argument 'forced_bos_token_id' when using mbart-large-50-many-to-many-mmt | When I try to run the example on the model card, I get this error;
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-88d049aaf9c0> in <module>
5 tokenizer.src_lang = "hi_IN"
6 encoded_hi = tokenizer(article_hi, return_tensors="pt")
----> 7 generated_tokens = model.generate(
8 **encoded_hi,
9 forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
~/opt/Python-3.8.2/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
~/opt/Python-3.8.2/lib/python3.8/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
831 if self.config.is_encoder_decoder:
832 # add encoder_outputs to model_kwargs
--> 833 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
834
835 # set input_ids as decoder_input_ids
~/opt/Python-3.8.2/lib/python3.8/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
376 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
377 }
--> 378 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
379 return model_kwargs
380
~/opt/Python-3.8.2/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
TypeError: forward() got an unexpected keyword argument 'forced_bos_token_id'
```
Looking at the code in the master repository, I can't see the generate function taking that argument anywhere at all so I'm unsure how to proceed with this.
_Originally posted by @IamAdiSri in https://github.com/huggingface/tokenizers/issues/633#issuecomment-781689632_ | 02-20-2021 14:58:32 | 02-20-2021 14:58:32 | hi @IamAdiSri
What is your Transformers version? mBART-50 currently only works on master.<|||||>@patil-suraj I'm on version 4.3.2, but I tried it with the modules on the master branch. I searched through the repository but as far as I can tell, none of the relevant mbart modules take `forced_bos_token_id` as a parameter in their generate function.
I'm looking at the example on [this](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) page btw.<|||||>`forced_bos_token_id` is included on master, https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L549
you should install from source to use mBART-50<|||||>Oh okay, thank you. |
transformers | 10,299 | closed | Object of type 'int64' is not JSON serializable in Trainer.save_checkpoint | I am using the recent run_ner.py example script to train an NER model. I want to evaluate the performance of the model during training and use the following command for training:
```
python3 run_ner.py \
--model_name_or_path bert-base-uncased \
--dataset_name conll2003 \
--return_entity_level_metrics \
--output_dir conll-tmp \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--evaluation_strategy steps \
--logging_steps 10 \
--eval_steps 10 \
--load_best_model_at_end
```
I run the command in the current docker image huggingface/transformers-pytorch-gpu
However, I get the following error:
```
Traceback (most recent call last):
File "run_ner.py", line 470, in main()
File "run_ner.py", line 404, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 983, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1062, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1126, in _save_checkpoint self.state.save_to_json(os.path.join(output_dir, "trainer_state.json")) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py", line 95, in save_to_json json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n" File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps **kw).encode(obj)
File "/usr/lib/python3.6/json/encoder.py", line 201, in encode chunks = list(chunks)
File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict yield from chunks
File "/usr/lib/python3.6/json/encoder.py", line 325, in _iterencode_list yield from chunks
File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict yield from chunks
File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode o = _default(o)
File "/usr/lib/python3.6/json/encoder.py", line 180, in default o.__class__.__name__)
TypeError: Object of type 'int64' is not JSON serializable
--
```
| 02-20-2021 13:12:30 | 02-20-2021 13:12:30 | I too ran into this problem and it's caused by turning on the evaluation strategy, which then adds metrics to the log_history of the model's state using numpy data types and causes the JSON encoder issue. That was the case with 4.3.3. There appear to be a bunch of changes in the trainer in the works; whether this has been fixed as a result of those I've not checked.<|||||>As a temporary workaround you can modify trainer.py at line 1260 (`output = {**logs, **{"step": self.state.global_step}}`) and add the following three lines after it. If the metrics are being calculated the same way in the latest code as in 4.3.3 then something like this may also be needed going forward, or things calling the log method will need to ensure they safely cast data points beforehand if they are going to be added to the trainer state.
```
for k, v in output.items():
    if isinstance(v, np.generic):
        output[k] = v.item()
```<|||||>I confirm I can reproduce in master. Will investigate more tomorrow.<|||||>My only comment on the fix submitted is that it targets the metrics output, but it will not stop others from putting things into the log history in the model state, which can later cause the same problem when serializing the state to JSON.
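Along the lines of that last comment, a more general (illustrative) guard would make the JSON dump itself tolerant of numpy scalars instead of patching each call site. This is a sketch only, not the fix that was eventually merged:
```python
import json
import numpy as np

def np_safe(obj):
    # fallback for numpy scalar types such as np.int64 / np.float64
    if isinstance(obj, np.generic):
        return obj.item()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

state = {"step": np.int64(10), "eval_f1": np.float64(0.91)}
print(json.dumps(state, default=np_safe, indent=2, sort_keys=True))
```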
transformers | 10,298 | closed | Converting fairseq NMT to transformers misses model weight | Hi there, question about fairseq NMT model ([FSMT](https://huggingface.co/transformers/model_doc/fsmt.html)) conversion.
I tried to convert my own fairseq-nmt model ([`transformer_wmt_en_de`](https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L1046)) based on [this conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py).
However, the `decoder.embed_out` weight is missing after converting the fairseq model to a transformers FSMT model. This parameter exists when neither `--share-all-embeddings` nor `--share-decoder-input-output-embed` is specified, while the official fairseq wmt models do not have a `decoder.embed_out` weight because they were trained with `--share-all-embeddings`.
https://github.com/pytorch/fairseq/issues/2537
Are there any solutions or tips for converting one's own fairseq model? | 02-20-2021 09:35:04 | 02-20-2021 09:35:04 | Pinging @stas00 here<|||||>Thank you for the ping, @NielsRogge
@tagucci, when you file an issue you will find a list of who to tag for what topic, so please use it to tag the right people. Otherwise it's hard for everybody to try to follow all issues.
also when you link to a line of code in github, always hit `y` first to get the exact sha (it rewrites the url to embed the current git sha). Otherwise your links quickly become invalid, e.g. I have no idea where you were trying to link to in your link to [transformer_wmt_en_de](https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L1046) as the code was modified today.
--------------------------------
OK, could you first clarify where you get "decoder.embed_out weight is missing" - the command line and the backtrace, please. Also a dump of the model (i.e. `print(model)`).
Now to the guess work.
Does your model miss `output_projection` weight key?
The context is here:
https://github.com/pytorch/fairseq/blob/ab560669cd9baaa4009e1fd01c970f8ffccd1ee0/fairseq/models/transformer.py#L950-L960
fairseq has different versions of their code, and some have keys renamed or added, that's why they have all that logic.
You can see that it's a simple alias - i.e. in fsmt decoder embed and output are always shared.
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/modeling_fsmt.py#L651
So if it's missing you can assign it in the conversion script:
```
model_state_dict["model.decoder.output_projection.weight"] = model_state_dict["model.decoder.embed_tokens.weight"]
```
add this to this line:
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L247
but again I could have guessed wrong and will need to see the model dump to tell you more.
You can see the dump of original model I converted from here: https://github.com/stas00/porting/blob/master/transformers/fairseq-wmt19/nbs/config.ipynb
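For reference, a guarded version of that assignment (a sketch, using the key names from the snippet above) would only fill in the projection when the checkpoint doesn't already provide it:
```python
def ensure_output_projection(model_state_dict):
    """Sketch: alias the output projection to the input embeddings when the key is absent
    (key names follow the snippet earlier in this thread)."""
    key = "model.decoder.output_projection.weight"
    if key not in model_state_dict:
        model_state_dict[key] = model_state_dict["model.decoder.embed_tokens.weight"]
    return model_state_dict
```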
<|||||>@NielsRogge
Thanks for pinging @stas00!
@stas00
Sorry for the inconvenience of linking the code.
Following your advice, my model args and model dump are as below.
> in fsmt decoder embed and output are always shared.
As you said, FSMT does not have separate decoder embedding and output weights, so my fairseq `transformer_wmt_en_de` trained without `share_decoder_input_output_embed` cannot fit FSMT in transformers. In this case, do I need to retrain the fairseq model with `share_decoder_input_output_embed`, or modify [FSMTDecoder](https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/modeling_fsmt.py#L622)?
```python
import torch
from pprint import pprint
chkpt = torch.load("model/checkpoint_best.pt")
model = chkpt["model"]
pprint(vars(chkpt["args"]))
print("\n".join(model.keys()))
```
```
# args
{'activation_dropout': 0.0,
'activation_fn': 'relu',
'adam_betas': '(0.9, 0.98)',
'adam_eps': 1e-08,
'adaptive_input': False,
'adaptive_softmax_cutoff': None,
'adaptive_softmax_dropout': 0,
'arch': 'transformer_wmt_en_de',
'attention_dropout': 0.0,
'best_checkpoint_metric': 'loss',
'bpe': None,
'bucket_cap_mb': 25,
'clip_norm': 0.0,
'cpu': False,
'criterion': 'label_smoothed_cross_entropy',
'cross_self_attention': False,
'curriculum': 0,
'data': './data/data.src_trg',
'dataset_impl': None,
'ddp_backend': 'no_c10d',
'decoder_attention_heads': 8,
'decoder_embed_dim': 512,
'decoder_embed_path': None,
'decoder_ffn_embed_dim': 2048,
'decoder_input_dim': 512,
'decoder_layerdrop': 0,
'decoder_layers': 6,
'decoder_layers_to_keep': None,
'decoder_learned_pos': False,
'decoder_normalize_before': False,
'decoder_output_dim': 512,
'device_id': 0,
'disable_validation': False,
'distributed_backend': 'nccl',
'distributed_init_method': 'tcp://localhost:16441',
'distributed_no_spawn': False,
'distributed_port': -1,
'distributed_rank': 0,
'distributed_world_size': 4,
'dropout': 0.1,
'empty_cache_freq': 0,
'encoder_attention_heads': 8,
'encoder_embed_dim': 512,
'encoder_embed_path': None,
'encoder_ffn_embed_dim': 2048,
'encoder_layerdrop': 0,
'encoder_layers': 6,
'encoder_layers_to_keep': None,
'encoder_learned_pos': False,
'encoder_normalize_before': False,
'fast_stat_sync': False,
'find_unused_parameters': False,
'fix_batches_to_gpus': False,
'fixed_validation_seed': None,
'fp16': False,
'fp16_init_scale': 128,
'fp16_scale_tolerance': 0.0,
'fp16_scale_window': None,
'keep_interval_updates': 20,
'keep_last_epochs': -1,
'label_smoothing': 0.1,
'layer_wise_attention': False,
'layernorm_embedding': False,
'lazy_load': False,
'left_pad_source': True,
'left_pad_target': False,
'load_alignments': False,
'log_format': 'json',
'log_interval': 50,
'lr': [0.0007],
'lr_scheduler': 'inverse_sqrt',
'max_epoch': 100,
'max_sentences': None,
'max_sentences_valid': None,
'max_source_positions': 1024,
'max_target_positions': 1024,
'max_tokens': 4096,
'max_tokens_valid': 4096,
'max_update': 0,
'maximize_best_checkpoint_metric': False,
'memory_efficient_fp16': False,
'min_loss_scale': 0.0001,
'min_lr': 1e-09,
'no_cross_attention': False,
'no_epoch_checkpoints': True,
'no_last_checkpoints': False,
'no_progress_bar': True,
'no_save': False,
'no_save_optimizer_state': False,
'no_scale_embedding': False,
'no_token_positional_embeddings': False,
'num_workers': 1,
'optimizer': 'adam',
'optimizer_overrides': '{}',
'raw_text': False,
'required_batch_size_multiple': 8,
'reset_dataloader': False,
'reset_lr_scheduler': False,
'reset_meters': False,
'reset_optimizer': False,
'restore_file': 'checkpoint_last.pt',
'save_dir': './data/models',
'save_interval': 1,
'save_interval_updates': 1000,
'seed': 1,
'sentence_avg': False,
'share_all_embeddings': False,
'share_decoder_input_output_embed': False,
'skip_invalid_size_inputs_valid_test': True,
'source_lang': 'src',
'target_lang': 'trg',
'task': 'translation',
'tensorboard_logdir': '',
'threshold_loss_scale': None,
'tokenizer': None,
'train_subset': 'train',
'truncate_source': False,
'update_freq': [16],
'upsample_primary': 1,
'use_bmuf': False,
'user_dir': None,
'valid_subset': 'valid',
'validate_interval': 1,
'warmup_init_lr': 1e-07,
'warmup_updates': 4000,
'weight_decay': 0.0}
```
```
# model dump
encoder.version
encoder.embed_tokens.weight
encoder.embed_positions._float_tensor
encoder.layers.0.self_attn.k_proj.weight
encoder.layers.0.self_attn.k_proj.bias
encoder.layers.0.self_attn.v_proj.weight
encoder.layers.0.self_attn.v_proj.bias
encoder.layers.0.self_attn.q_proj.weight
encoder.layers.0.self_attn.q_proj.bias
encoder.layers.0.self_attn.out_proj.weight
encoder.layers.0.self_attn.out_proj.bias
encoder.layers.0.self_attn_layer_norm.weight
encoder.layers.0.self_attn_layer_norm.bias
encoder.layers.0.fc1.weight
encoder.layers.0.fc1.bias
encoder.layers.0.fc2.weight
encoder.layers.0.fc2.bias
encoder.layers.0.final_layer_norm.weight
encoder.layers.0.final_layer_norm.bias
encoder.layers.1.self_attn.k_proj.weight
encoder.layers.1.self_attn.k_proj.bias
encoder.layers.1.self_attn.v_proj.weight
encoder.layers.1.self_attn.v_proj.bias
encoder.layers.1.self_attn.q_proj.weight
encoder.layers.1.self_attn.q_proj.bias
encoder.layers.1.self_attn.out_proj.weight
encoder.layers.1.self_attn.out_proj.bias
encoder.layers.1.self_attn_layer_norm.weight
encoder.layers.1.self_attn_layer_norm.bias
encoder.layers.1.fc1.weight
encoder.layers.1.fc1.bias
encoder.layers.1.fc2.weight
encoder.layers.1.fc2.bias
encoder.layers.1.final_layer_norm.weight
encoder.layers.1.final_layer_norm.bias
encoder.layers.2.self_attn.k_proj.weight
encoder.layers.2.self_attn.k_proj.bias
encoder.layers.2.self_attn.v_proj.weight
encoder.layers.2.self_attn.v_proj.bias
encoder.layers.2.self_attn.q_proj.weight
encoder.layers.2.self_attn.q_proj.bias
encoder.layers.2.self_attn.out_proj.weight
encoder.layers.2.self_attn.out_proj.bias
encoder.layers.2.self_attn_layer_norm.weight
encoder.layers.2.self_attn_layer_norm.bias
encoder.layers.2.fc1.weight
encoder.layers.2.fc1.bias
encoder.layers.2.fc2.weight
encoder.layers.2.fc2.bias
encoder.layers.2.final_layer_norm.weight
encoder.layers.2.final_layer_norm.bias
encoder.layers.3.self_attn.k_proj.weight
encoder.layers.3.self_attn.k_proj.bias
encoder.layers.3.self_attn.v_proj.weight
encoder.layers.3.self_attn.v_proj.bias
encoder.layers.3.self_attn.q_proj.weight
encoder.layers.3.self_attn.q_proj.bias
encoder.layers.3.self_attn.out_proj.weight
encoder.layers.3.self_attn.out_proj.bias
encoder.layers.3.self_attn_layer_norm.weight
encoder.layers.3.self_attn_layer_norm.bias
encoder.layers.3.fc1.weight
encoder.layers.3.fc1.bias
encoder.layers.3.fc2.weight
encoder.layers.3.fc2.bias
encoder.layers.3.final_layer_norm.weight
encoder.layers.3.final_layer_norm.bias
encoder.layers.4.self_attn.k_proj.weight
encoder.layers.4.self_attn.k_proj.bias
encoder.layers.4.self_attn.v_proj.weight
encoder.layers.4.self_attn.v_proj.bias
encoder.layers.4.self_attn.q_proj.weight
encoder.layers.4.self_attn.q_proj.bias
encoder.layers.4.self_attn.out_proj.weight
encoder.layers.4.self_attn.out_proj.bias
encoder.layers.4.self_attn_layer_norm.weight
encoder.layers.4.self_attn_layer_norm.bias
encoder.layers.4.fc1.weight
encoder.layers.4.fc1.bias
encoder.layers.4.fc2.weight
encoder.layers.4.fc2.bias
encoder.layers.4.final_layer_norm.weight
encoder.layers.4.final_layer_norm.bias
encoder.layers.5.self_attn.k_proj.weight
encoder.layers.5.self_attn.k_proj.bias
encoder.layers.5.self_attn.v_proj.weight
encoder.layers.5.self_attn.v_proj.bias
encoder.layers.5.self_attn.q_proj.weight
encoder.layers.5.self_attn.q_proj.bias
encoder.layers.5.self_attn.out_proj.weight
encoder.layers.5.self_attn.out_proj.bias
encoder.layers.5.self_attn_layer_norm.weight
encoder.layers.5.self_attn_layer_norm.bias
encoder.layers.5.fc1.weight
encoder.layers.5.fc1.bias
encoder.layers.5.fc2.weight
encoder.layers.5.fc2.bias
encoder.layers.5.final_layer_norm.weight
encoder.layers.5.final_layer_norm.bias
decoder.embed_out
decoder.version
decoder.embed_tokens.weight
decoder.embed_positions._float_tensor
decoder.layers.0.self_attn.k_proj.weight
decoder.layers.0.self_attn.k_proj.bias
decoder.layers.0.self_attn.v_proj.weight
decoder.layers.0.self_attn.v_proj.bias
decoder.layers.0.self_attn.q_proj.weight
decoder.layers.0.self_attn.q_proj.bias
decoder.layers.0.self_attn.out_proj.weight
decoder.layers.0.self_attn.out_proj.bias
decoder.layers.0.self_attn_layer_norm.weight
decoder.layers.0.self_attn_layer_norm.bias
decoder.layers.0.encoder_attn.k_proj.weight
decoder.layers.0.encoder_attn.k_proj.bias
decoder.layers.0.encoder_attn.v_proj.weight
decoder.layers.0.encoder_attn.v_proj.bias
decoder.layers.0.encoder_attn.q_proj.weight
decoder.layers.0.encoder_attn.q_proj.bias
decoder.layers.0.encoder_attn.out_proj.weight
decoder.layers.0.encoder_attn.out_proj.bias
decoder.layers.0.encoder_attn_layer_norm.weight
decoder.layers.0.encoder_attn_layer_norm.bias
decoder.layers.0.fc1.weight
decoder.layers.0.fc1.bias
decoder.layers.0.fc2.weight
decoder.layers.0.fc2.bias
decoder.layers.0.final_layer_norm.weight
decoder.layers.0.final_layer_norm.bias
decoder.layers.1.self_attn.k_proj.weight
decoder.layers.1.self_attn.k_proj.bias
decoder.layers.1.self_attn.v_proj.weight
decoder.layers.1.self_attn.v_proj.bias
decoder.layers.1.self_attn.q_proj.weight
decoder.layers.1.self_attn.q_proj.bias
decoder.layers.1.self_attn.out_proj.weight
decoder.layers.1.self_attn.out_proj.bias
decoder.layers.1.self_attn_layer_norm.weight
decoder.layers.1.self_attn_layer_norm.bias
decoder.layers.1.encoder_attn.k_proj.weight
decoder.layers.1.encoder_attn.k_proj.bias
decoder.layers.1.encoder_attn.v_proj.weight
decoder.layers.1.encoder_attn.v_proj.bias
decoder.layers.1.encoder_attn.q_proj.weight
decoder.layers.1.encoder_attn.q_proj.bias
decoder.layers.1.encoder_attn.out_proj.weight
decoder.layers.1.encoder_attn.out_proj.bias
decoder.layers.1.encoder_attn_layer_norm.weight
decoder.layers.1.encoder_attn_layer_norm.bias
decoder.layers.1.fc1.weight
decoder.layers.1.fc1.bias
decoder.layers.1.fc2.weight
decoder.layers.1.fc2.bias
decoder.layers.1.final_layer_norm.weight
decoder.layers.1.final_layer_norm.bias
decoder.layers.2.self_attn.k_proj.weight
decoder.layers.2.self_attn.k_proj.bias
decoder.layers.2.self_attn.v_proj.weight
decoder.layers.2.self_attn.v_proj.bias
decoder.layers.2.self_attn.q_proj.weight
decoder.layers.2.self_attn.q_proj.bias
decoder.layers.2.self_attn.out_proj.weight
decoder.layers.2.self_attn.out_proj.bias
decoder.layers.2.self_attn_layer_norm.weight
decoder.layers.2.self_attn_layer_norm.bias
decoder.layers.2.encoder_attn.k_proj.weight
decoder.layers.2.encoder_attn.k_proj.bias
decoder.layers.2.encoder_attn.v_proj.weight
decoder.layers.2.encoder_attn.v_proj.bias
decoder.layers.2.encoder_attn.q_proj.weight
decoder.layers.2.encoder_attn.q_proj.bias
decoder.layers.2.encoder_attn.out_proj.weight
decoder.layers.2.encoder_attn.out_proj.bias
decoder.layers.2.encoder_attn_layer_norm.weight
decoder.layers.2.encoder_attn_layer_norm.bias
decoder.layers.2.fc1.weight
decoder.layers.2.fc1.bias
decoder.layers.2.fc2.weight
decoder.layers.2.fc2.bias
decoder.layers.2.final_layer_norm.weight
decoder.layers.2.final_layer_norm.bias
decoder.layers.3.self_attn.k_proj.weight
decoder.layers.3.self_attn.k_proj.bias
decoder.layers.3.self_attn.v_proj.weight
decoder.layers.3.self_attn.v_proj.bias
decoder.layers.3.self_attn.q_proj.weight
decoder.layers.3.self_attn.q_proj.bias
decoder.layers.3.self_attn.out_proj.weight
decoder.layers.3.self_attn.out_proj.bias
decoder.layers.3.self_attn_layer_norm.weight
decoder.layers.3.self_attn_layer_norm.bias
decoder.layers.3.encoder_attn.k_proj.weight
decoder.layers.3.encoder_attn.k_proj.bias
decoder.layers.3.encoder_attn.v_proj.weight
decoder.layers.3.encoder_attn.v_proj.bias
decoder.layers.3.encoder_attn.q_proj.weight
decoder.layers.3.encoder_attn.q_proj.bias
decoder.layers.3.encoder_attn.out_proj.weight
decoder.layers.3.encoder_attn.out_proj.bias
decoder.layers.3.encoder_attn_layer_norm.weight
decoder.layers.3.encoder_attn_layer_norm.bias
decoder.layers.3.fc1.weight
decoder.layers.3.fc1.bias
decoder.layers.3.fc2.weight
decoder.layers.3.fc2.bias
decoder.layers.3.final_layer_norm.weight
decoder.layers.3.final_layer_norm.bias
decoder.layers.4.self_attn.k_proj.weight
decoder.layers.4.self_attn.k_proj.bias
decoder.layers.4.self_attn.v_proj.weight
decoder.layers.4.self_attn.v_proj.bias
decoder.layers.4.self_attn.q_proj.weight
decoder.layers.4.self_attn.q_proj.bias
decoder.layers.4.self_attn.out_proj.weight
decoder.layers.4.self_attn.out_proj.bias
decoder.layers.4.self_attn_layer_norm.weight
decoder.layers.4.self_attn_layer_norm.bias
decoder.layers.4.encoder_attn.k_proj.weight
decoder.layers.4.encoder_attn.k_proj.bias
decoder.layers.4.encoder_attn.v_proj.weight
decoder.layers.4.encoder_attn.v_proj.bias
decoder.layers.4.encoder_attn.q_proj.weight
decoder.layers.4.encoder_attn.q_proj.bias
decoder.layers.4.encoder_attn.out_proj.weight
decoder.layers.4.encoder_attn.out_proj.bias
decoder.layers.4.encoder_attn_layer_norm.weight
decoder.layers.4.encoder_attn_layer_norm.bias
decoder.layers.4.fc1.weight
decoder.layers.4.fc1.bias
decoder.layers.4.fc2.weight
decoder.layers.4.fc2.bias
decoder.layers.4.final_layer_norm.weight
decoder.layers.4.final_layer_norm.bias
decoder.layers.5.self_attn.k_proj.weight
decoder.layers.5.self_attn.k_proj.bias
decoder.layers.5.self_attn.v_proj.weight
decoder.layers.5.self_attn.v_proj.bias
decoder.layers.5.self_attn.q_proj.weight
decoder.layers.5.self_attn.q_proj.bias
decoder.layers.5.self_attn.out_proj.weight
decoder.layers.5.self_attn.out_proj.bias
decoder.layers.5.self_attn_layer_norm.weight
decoder.layers.5.self_attn_layer_norm.bias
decoder.layers.5.encoder_attn.k_proj.weight
decoder.layers.5.encoder_attn.k_proj.bias
decoder.layers.5.encoder_attn.v_proj.weight
decoder.layers.5.encoder_attn.v_proj.bias
decoder.layers.5.encoder_attn.q_proj.weight
decoder.layers.5.encoder_attn.q_proj.bias
decoder.layers.5.encoder_attn.out_proj.weight
decoder.layers.5.encoder_attn.out_proj.bias
decoder.layers.5.encoder_attn_layer_norm.weight
decoder.layers.5.encoder_attn_layer_norm.bias
decoder.layers.5.fc1.weight
decoder.layers.5.fc1.bias
decoder.layers.5.fc2.weight
decoder.layers.5.fc2.bias
decoder.layers.5.final_layer_norm.weight
decoder.layers.5.final_layer_norm.bias
```<|||||>Thank you for the model dump, so my guess was correct - it's missing `output_projection` and I gave you the solution at the end of my previous comment.
I still don't know what the error you get, when and the backtrace, but perhaps my guessed solution is all you need.
But no, you don't need to re-train.
if it works could you adapt the script to check if the checkpoint that is being loaded doesn't have this key and if so to copy it as I suggested?<|||||>@stas00
Running [convert_fsmt_original_pytorch_checkpoint_to_pytorch.py]( https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) is successful, but there is something wrong.
When comparing the fairseq model provided by `torch.hub` with the converted HF model, the translation results match.
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer, TranslationPipeline
import torch
input_text = "Machine learning is great!"
# fairseq
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
                          tokenizer='moses', bpe='fastbpe')
fairseq_res = en2de.translate(input_text)
# transformers
fsmt_path = "./fairseq2hf/data/wmt19-en-de/"
tokenizer = FSMTTokenizer.from_pretrained(fsmt_path)
model = FSMTForConditionalGeneration.from_pretrained(fsmt_path)
nlp = TranslationPipeline(model=model, tokenizer=tokenizer)
fsmt_res = nlp(input_text)[0]["translation_text"]
print("fairseq: {}".format(fairseq_res))
print("transformer: {}".format(fsmt_res))
print("match: {}".format(fairseq_res == fsmt_res))
"""
fairseq: Maschinelles Lernen ist großartig!
transformer: Maschinelles Lernen ist großartig!
match: True
"""
```
However, my own fairseq model and its converted HF model give a wrong result with the same parameters (beam_size=5). Do you have any idea how to debug why the translation results are different?
### fairseq result
```
# encoded source tokens from fairseq-interactive
tensor([[5269, 2069, 5, 1154, 9, 4, 1823, 3382, 5, 3128, 116, 167,
1582, 7, 2192, 914, 63, 6, 1823, 2807, 124, 1219, 1106, 8,
53, 2175, 2007, 483, 4, 660, 708, 5229, 33, 44, 4, 6049,
1430, 5, 1806, 2050, 2282, 1908, 4, 334, 3229, 4808, 6102, 5,
5031, 11, 5, 291, 4214, 6485, 10, 5784, 1908, 23, 1765, 4916,
6, 2]])
# hypothesis tokens from fairseq-interactive
tensor([ 924, 4938, 6, 3056, 59, 503, 1497, 4, 5835, 847, 6, 592,
2], dtype=torch.int32)
```
### transformers result
```python
encoded_token = torch.tensor([[5269, 2069, 5, 1154, 9, 4, 1823, 3382, 5, 3128, 116, 167, 1582, 7, 2192, 914, 63, 6, 1823, 2807, 124, 1219, 1106, 8, 53, 2175, 2007, 483, 4, 660, 708, 5229, 33, 44, 4, 6049, 1430, 5, 1806, 2050, 2282, 1908, 4, 334, 3229, 4808, 6102, 5, 5031, 11, 5, 291, 4214, 6485, 10, 5784, 1908, 23, 1765, 4916, 6, 2]])
fsmt = FSMTForConditionalGeneration.from_pretrained("./fairseq2HF/")
hypo = fsmt.generate(encoded_token, num_beams=5)
print(hypo)
# tensor([[ 2, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 2]])
```
<|||||>I'm a bit lost - we were discussing a missing state dict key, now we are discussing invalid translation.
Did my suggestion help to resolve the problem of the missing key and now you're presenting the next issue?
Wrt your transformers result with your model: do you get any better behavior if you encode the tokens via transformers and then feed them to generate? Perhaps the dict has somehow changed? Though a repeated 21 is suspiciously bad.
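e.g. something like this - just a sketch, assuming your converted dir also contains the tokenizer files:
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

fsmt_path = "./fairseq2HF/"  # your converted model dir
tokenizer = FSMTTokenizer.from_pretrained(fsmt_path)
model = FSMTForConditionalGeneration.from_pretrained(fsmt_path)

# encode with the transformers tokenizer instead of fairseq's Dictionary
batch = tokenizer("Machine learning is great!", return_tensors="pt")
hypo = model.generate(**batch, num_beams=5)
print(tokenizer.decode(hypo[0], skip_special_tokens=True))
```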
<|||||>@stas00
> Did my suggestion help to resolve the problem of the missing key and now you're presenting the next issue?
Yes, thanks for the helpful comments.
Sorry, I should have posted it as another issue.
> do you get any better behavior if you encode the tokens via transformers and then feed it to generate?
I do not use the transformers tokenizer because my fairseq model has a different vocab size, and it's impossible to encode/decode with a single tokenizer model. Token-to-id conversion is done with fairseq's `Dictionary`.
I'll post another issue if necessary after scrutinizing my code.
Thanks for the big help!<|||||>Thank you for clarifying that your original issue has been resolved. Please feel free to close this issue when you feel it's working for you.
Based on your comments, I'm concerned about 2 things:
1. your different dictionaries - a model has to come with the exact dict it was trained on, after conversion too. So it sounds like something isn't right there. If you're not sure what's happening, perhaps try to clarify how it came to be that your fairseq model has a different vocab size.
2. perhaps that `output_projection` layer is getting in the way of your model if it was trained without it. You could try to hunt down the few lines where it's used in the code, bypass it, and test whether your translation works then - if you're comfortable editing the source code, that is. |
transformers | 10,297 | closed | AutoTokenizer from pretrained BERT throws TypeError when encoding certain input | ## Environment info
- `transformers` version: 4.3.2
- Platform: Arch Linux
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1, no
- Tensorflow version (GPU?): Not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Guess from git blame: @LysandreJik , @thomwolf @n1t0
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
When I use a pretrained BERT tokenizer, it throws a TypeError on singleton input or input containing ø/æ/å.
It was discovered when I used the pretrained `Maltehb/danish-bert-botxo` which would fail in the below way on any input containing Danish characters (ø/æ/å), but I also realized that it happens with the standard `bert-base-uncased` as shown below.
Steps to reproduce the behavior:
1. Run these lines
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.encode(["hello", "world"]) # <--- This works
tokenizer.encode(["hello"]) # <--- This throws the below shown stack trace
tokenizer.encode(["dette", "er", "en", "sø"]) # <--- This throws the same error
```
Stack trace
```py
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-ef056deb5f59> in <module>
----> 1 tokenizer.encode(["hello"])
~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, return_tensors, **kwargs)
2102 ``convert_tokens_to_ids`` method).
2103 """
-> 2104 encoded_inputs = self.encode_plus(
2105 text,
2106 text_pair=text_pair,
~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2418 )
2419
-> 2420 return self._encode_plus(
2421 text=text,
2422 text_pair=text_pair,
~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
453
454 batched_input = [(text, text_pair)] if text_pair else [text]
--> 455 batched_output = self._batch_encode_plus(
456 batched_input,
457 is_split_into_words=is_split_into_words,
~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
380 )
381
--> 382 encodings = self._tokenizer.encode_batch(
383 batch_text_or_text_pairs,
384 add_special_tokens=add_special_tokens,
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
```
## Expected behavior
I expect the tokenizer not to throw a type error when the input types are the same.
I also expected the tokenization to produce ids.
[This issue](https://github.com/alexandrainst/danlp/issues/113) is caused by the above
I am grateful for the software and thank you in advance for the help!
| 02-20-2021 08:40:32 | 02-20-2021 08:40:32 | Hello! Thank you for opening an issue with a reproducible example, it helps a lot.
The issue here is that you're using the `encode` method to encode a batch, which it can't do. Encode only encodes single sequences, and can accept a "batch" of two because it processes them as two independent sequences that should be joined together, for example for text-classification where you would want to classify the relationship between two sequences (tasks like Next Sentence Prediction from BERT or Sentence Ordering Prediction ALBERT).
The method you're looking for is the `__call__` method of the tokenizer, which handles exactly all the use-cases you've mentioned, and is the recommended API for tokenizers:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer(["hello", "world"]) # <--- This works
tokenizer(["hello"]) # <--- This works too :)
tokenizer(["dette", "er", "en", "sø"]) # <--- This works as well!
```
[Here is the documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__) for that method, hope that helps!<|||||>Thank you very much for this good explanation which clearly resolves my problem.
Do you by any chance know whether this behaviour has changed within the last year?
The transformers-based repos [NERDA](https://github.com/ebanalyse/NERDA) and [danlp](https://github.com/alexandrainst/danlp) seem to rely on `tokenizer.encode` working the way you show the `__call__` method does, and as such fail on the current version but work on 3.5.1 (https://github.com/alexandrainst/danlp/issues/113)<|||||>I believe the `encode` method never accepted batches as inputs. We introduced `encode_plus` and `batch_encode_plus` down the road, the latter being the first to handle batching.
While these two methods are deprecated, they're still tested and working, and they're used under the hood when calling `__call__`.
What is happening here is that v3.5.1 is treating your input as individual words (but by all means it shouldn't as the `is_split_into_words` argument is `False` by default), rather than as different batches, I was mistaken in my first analysis. Something did change between version v3.5.1 and v4.0.0, all the breaking changes are documented in the [migration guide](https://huggingface.co/transformers/migration.html).
If you want to get back to the previous behavior, you have two ways of handling it:
- Specify that you don't want a fast tokenizer. The main change affecting you here is that the `AutoTokenizer` returns a fast tokenizer by default (in Rust) rather than the python-based tokenizer. You can change that behavior with the following:
```py
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
```
- The behavior you're relying on here is the `is_split_into_words` parameter: you're passing it a list of words, rather than a sequence of words. That it worked in previous versions seems like a bug to me, here's how you would handle it now (works with a fast tokenizer):
```py
tokenizer(["hello", "world"], is_split_into_words=True)
tokenizer(["hello"], is_split_into_words=True)
tokenizer(["dette", "er", "en", "sø"], is_split_into_words=True)
```<|||||>Thank you, the `is_split_into_words` clears up the confusion between batches and tokens clearly for me!<|||||>Hi @LysandreJik
I am experiencing the same error:
`'TypeError: TextEncodeInput must be Union[TextInputSequence,Tuple[InputSequence, InputSequence]]'`
while running the code below:
```python
self.tokenizer.encode_plus(example[0],
                           add_special_tokens=True,
                           padding='max_length',
                           max_length=max_length,
                           return_attention_mask=True,
                           return_tensors='pt')
```
`example[0]` is a list of ints which I encoded:
[49518, 111, 22560, 20, 1112, 128, 29, 568, 7, 7244, 10, 10905, 111, 12396, 3781, 111, 4878, 1087, 396, 10, 812, 111, 3077, 629, 847, 202, 3607, 490, 5, 3302, 9, 17890, 154, 10, 3077, 629, 4878, 42, 76, 479, 10130, 273, 363, 2156, 5, 1112, 2763, 8176, 111, 262, 7, 7244, 41, 2319, 68, 508, 4, 245, 325, 2450, 14, 56, 57, 12850, 9, 2213, 9, 7668, 14, 74, 33, 23398, 2156, 1195, 87, 20546, 2156, 5, 752, 1229, 3781, 479, 2589, 6040, 17811, 28455, 5, 1087, 7, 18720, 3633, 14, 24, 21, 33602, 19, 780, 111, 773, 629...
Now I want to pad it and get the attention mask back.
The docs mention that I can pass a List[int].
What am I missing?
<|||||>Hi @shon-otmazgin could you open a new issue with a reproducible code example + full stack trace so that we can take a look? Thanks!<|||||>Taking a look at it, I believe the documentation is wrong here and the fast tokenizers handle strings as inputs. Have you tried using `prepare_for_model` for your use-case?<|||||>I will take a look on `prepare_for_model ` this is new to me.
Does `prepare_for_model` accept a list of input ids, and can it pad and return an attention mask?<|||||>So we dived into it with @n1t0 and actually the problem here is slightly complex. The slow & fast tokenizers have roughly the same API with a few exceptions, and this is one of them: the fast tokenizers are great at handling strings and at being extremely efficient with a bunch of features (offsets is one example of a really powerful feature), but they're not made to handle lists of ints.
In this particular case, while I think it is theoretically possible with fast tokenizers methods by using some private methods, it seems you would be way better off to use a slow tokenizer to achieve what you're looking for.
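For example, something along these lines with a slow tokenizer (just a sketch - the model name and the ids are placeholders):
```python
from transformers import AutoTokenizer

# use_fast=False gives the python tokenizer, which can work with pre-encoded ids
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

input_ids = [2023, 2003, 2019, 2742]  # an already-encoded example (placeholder ids)
encoded = tokenizer.prepare_for_model(
    input_ids,
    add_special_tokens=True,
    padding="max_length",
    max_length=16,
    return_attention_mask=True,
    return_tensors="pt",
)
print(encoded["input_ids"], encoded["attention_mask"])
```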
But this begs the question: is there a way you could share your use-case so that we could study it and understand why you need to pass already processed lists of ints to the tokenizer, instead of tokenizing the text and relying on the information within the encoding?
Here the fast tokenizers would probably be way more efficient at handling this use-case in a one step process, rather than the two step process we're trying to achieve here.<|||||>Also, I'm seeing the following in the docs for `encode_plus`:
*(screenshot of the `encode_plus` docstring)*
and for `batch_encode_plus`:
*(screenshot of the `batch_encode_plus` docstring)*
Is there a docstring we've forgotten somewhere that tells this is also supported for fast tokenizers?
<|||||>I'll tell you what happened:
I worked on version 3.3.1 which by default `use_fast=False` for `AutoTokenizer`.
I upgraded to version 4.4.2 and that broke: `use_fast` changed to `True` for `AutoTokenizer`.
@LysandreJik thank you very much for your help. appreciate that :) |
transformers | 10,296 | closed | [predict] AttributeError: 'Seq2SeqTrainer' object has no attribute 'metrics_format' | Hi everybody
When using mBART for machine translation prediction, I got:
```
Traceback (most recent call last):
  File "/Users/lishuqi/Desktop/WAT2021/transformers-master/examples/seq2seq/run_seq2seq.py", line 667, in <module>
    main()
  File "/Users/lishuqi/Desktop/WAT2021/transformers-master/examples/seq2seq/run_seq2seq.py", line 637, in main
    metrics_formatted = trainer.metrics_format(metrics)
AttributeError: 'Seq2SeqTrainer' object has no attribute 'metrics_format'
```
Am I doing something wrong with the translation?
@patil-suraj
| 02-20-2021 04:19:15 | 02-20-2021 04:19:15 | `metrics_format` was recently introduced on master, you should update the transformers version to master.<|||||>Thanks! I'll try it again! |
transformers | 10,295 | closed | [examples/seq2seq] defensive programming + expand/correct README | This PR deals with the new s2s script and its usage - mostly documentation.
This PR:
`run_seq2seq.py`:
* checks for invalid column names
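i.e. something along these lines (an illustration of the idea, not the PR's exact code):
```python
def assert_valid_columns(requested_columns, dataset_columns):
    # fail early with a helpful message instead of a cryptic KeyError later on
    missing = [c for c in requested_columns if c not in dataset_columns]
    if missing:
        raise ValueError(f"Column(s) {missing} not found in the dataset columns: {dataset_columns}")
```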
`README.md`:
* largely expands the document explaining and exemplifying the supported formats
* documents the nuances of t5 and mbart translation - I hope we fix this at the programmatic level in the future
* fixes examples where scores were bad - all examples were verified to work and provide good scores, including the custom files, which were far from easy to figure out. Hopefully now it'll be easier.
* makes the examples quick to complete by running only a short sample - this is important for noticing breakages, e.g. in the eval stage - nobody is going to wait hours for training to complete.
* adds cnn/daily mail dataset
* recovers one preprocessed dataset recommendation from the last s2s incarnation: it is offered for high BLEU scores (the other 3 are either identical or just slightly worse than the preprocessed ones - full porting status: https://github.com/huggingface/transformers/issues/10044)
@patil-suraj, @sgugger | 02-20-2021 04:03:25 | 02-20-2021 04:03:25 | |
transformers | 10,294 | closed | Marian input decoding bug | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Marian
Language I am using the model on (English, Chinese ...): English, German
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm examining Marian models to translate text. I noticed that the `convert_tokens_to_string` method uses `spm_target`, which can be problematic if we want to decode source text.
Here is my script:
```
from transformers import MarianTokenizer, MarianModel
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')
model = MarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
input_text = "I was eating lunch when he saw me"
target_text = "Ich aß gerade zu Mittag, als er mich sah"
input_tokenized = tokenizer(input_text, return_tensors='pt')
with tokenizer.as_target_tokenizer():
    target_tokenized = tokenizer(target_text, return_tensors='pt')
print(tokenizer.decode(input_tokenized.data['input_ids'][0]))
with tokenizer.as_target_tokenizer():
    print(tokenizer.decode(target_tokenized.data['input_ids'][0]))
```
stdout:
```
I was▁eating▁lunch▁when he▁saw me
Ich aß gerade zu Mittag, als er mich sah
```
As you can see the input text is not decoded correctly since `spm_target` is used. A potential fix is to use `current_spm` and let `as_target_tokenizer` context manager decide which spm should be used (similar to text encoding):
```
def convert_tokens_to_string(self, tokens: List[str]) -> str:
    return self.current_spm.DecodePieces(tokens)
```
I can PR the fix if needed.
## Environment info
- `transformers` version: master branch (f6e53e3c2bafb37c861db71a4b28c304403af92b)
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.1 (False)
| 02-20-2021 02:17:13 | 02-20-2021 02:17:13 | Hey @Mehrad0711,
Thanks a lot for the very clean & easy to understand issue!
I can reproduce the error and would be super happy about a PR to fix it! Your fix to let the context manager handle the `spm_target` sounds like the correct solution to me!<|||||>Hi @patrickvonplaten!
Thank you for your feedback. I just submitted a PR fixing this issue.
Thanks ahead for reviewing. |
transformers | 10,293 | closed | [pretrained] model classes aren't checking the arch of the pretrained model it loads | While comparing different models trained on xsum (most of which are Bart) I made a mistake and passed "google/pegasus-xsum" to `BartForConditionalGeneration`
```
BartForConditionalGeneration.from_pretrained("google/pegasus-xsum")
```
I got:
```
Some weights of the model checkpoint at google/pegasus-xsum were not used when initializing BartForConditionalGeneration: ['model.encoder.layer_norm.weight', 'model.encoder.layer_norm.bias', 'model.decoder.layer_norm.weight', 'model.decoder.layer_norm.bias']
- This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at google/pegasus-xsum and are newly initialized: ['model.encoder.embed_positions.weight', 'model.encoder.layernorm_embedding.weight', 'model.encoder.layernorm_embedding.bias', 'model.decoder.embed_positions.weight', 'model.decoder.layernorm_embedding.weight', 'model.decoder.layernorm_embedding.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "./bart-summarize2.py", line 8, in <module>
tokenizer = BartTokenizer.from_pretrained(mname)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
return cls._from_pretrained(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/roberta/tokenization_roberta.py", line 159, in __init__
super().__init__(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
Any reason why the model class doesn't check that it's being fed the wrong architecture? It could detect that and give a corresponding error message, rather than spitting out random errors like the above. I was pretty sure it was a bug in the pegasus model until I noticed that pegasus != Bart.
Thanks.
@LysandreJik
| 02-20-2021 02:03:33 | 02-20-2021 02:03:33 | Ah indeed, that's a good request! There's no reason, we could definitely raise a warning when loading the weights by checking the model type in the configuration against the arch's model type. Do you want to open a PR?<|||||>Why a warning and not an assert? If the code throws a totally unrelated long backtrace, how would a user know to search for an earlier warning?
Do you see a use-case where someone may need to load mismatching arch for the given model?<|||||>After thinking about it, you're right that an error would be better. I can't think of use-cases where that would affect someone's workflow negatively.<|||||>> Do you want to open a PR?
I could, but realistically it might not happen soon. But since it's not a complicated task perhaps asking the community to help? I guess it'd be as simple as:
1. read the config of the downloaded model as soon as the config got downloaded
2. compare `config.arch` with model's arch
3. assert if mismatch<|||||>Hi @LysandreJik Does someone work on that ? I'd like to make my first contribution to the project<|||||>Hi @ankh6, feel free to work on it! The issue is not reserved until a PR is opened with some progress made towards solving the issue.<|||||>And when you solve it, one test can be:
```
python -c 'from transformers import PegasusForConditionalGeneration; PegasusForConditionalGeneration.from_pretrained("patrickvonplaten/t5-tiny-random")'
```
but this one doesn't crash, just spits a lot of warnings.
This one does crash:
```
python -c 'from transformers import BartForConditionalGeneration; BartForConditionalGeneration.from_pretrained("prajjwal1/bert-tiny")'
```
So it'd be a better candidate to go into the test suite.
We want a tiny model so that it runs the test fast.<|||||>@LysandreJik If I understand correctly we should check that the input is in the PRETRAINED_VOCAB_FILES_MAP object (for this issue). Should the assertion occur when we call is_torch_available method, i.e. in src/transformers/models/gpt2/__init__.py, ? <|||||>As soon as you retrieved the config file and you know which model's class is used, so that you have the 2 things to compare.
It definitely shouldn't happen in the specific model files, but inside the common library.
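e.g. something along these lines (just a sketch of the idea - the attribute names and the exact location would need checking against the actual code):
```python
from transformers import AutoConfig

def check_architecture(model_class, pretrained_model_name_or_path):
    # compare the checkpoint's model type against the model class it's loaded into
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path)
    expected = model_class.config_class.model_type
    if config.model_type != expected:
        raise ValueError(
            f"{pretrained_model_name_or_path} is a '{config.model_type}' checkpoint, "
            f"but {model_class.__name__} expects a '{expected}' architecture."
        )
```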
Most likely there should be one check inside one of the super-classes for model's `from_pretrained` and the same for the tokenizer. Since either may have this conflict.<|||||>Hi,
I've made some progress on this issue. I think I've fixed it for instantiating models.
To show whether my approach is fine, shall I submit a PR?
I've essentially added an assert statement in the `from_pretrained` method in the `PretrainedConfig` class. <|||||>That sounds about right, and yes PR please - thank you!<|||||>Added a pull request #10586 |
transformers | 10,292 | closed | [examples s2s] AttributeError: 'MBartTokenizerFast' object has no attribute 'tgt_lang' | After this PR https://github.com/huggingface/transformers/pull/10205 This is still broken for other models:
```
python examples/seq2seq/run_seq2seq.py --model_name_or_path facebook/mbart-large-en-ro --do_train --do_eval --task translation_en_to_ro --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --output_dir /tmp/tst-translation --per_device_train_batch_size=16 --per_device_eval_batch_size=16 --overwrite_output_dir --predict_with_generate --max_train_samples 500 --max_val_samples 500
```
```
Traceback (most recent call last):
File "examples/seq2seq/run_seq2seq.py", line 668, in <module>
main()
File "examples/seq2seq/run_seq2seq.py", line 469, in main
train_dataset = train_dataset.map(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1120, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1091, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "examples/seq2seq/run_seq2seq.py", line 450, in preprocess_function
with tokenizer.as_target_tokenizer():
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/mbart/tokenization_mbart_fast.py", line 193, in as_target_tokenizer
self.set_tgt_lang_special_tokens(self.tgt_lang)
AttributeError: 'MBartTokenizerFast' object has no attribute 'tgt_lang'
```
@patil-suraj, @sgugger | 02-20-2021 01:04:50 | 02-20-2021 01:04:50 | #10287 contains the fix.<|||||>Confirmed that it works, albeit the cl args changed so tested with:
```
PYTHONPATH=src python examples/seq2seq/run_translation.py --model_name_or_path facebook/mbart-large-en-ro --do_train --do_eval --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=2 --per_device_eval_batch_size=2 --overwrite_output_dir --predict_with_generate --source_lang en_XX --target_lang ro_RO --max_val_samples 10 --max_train_samples 10
``` |
transformers | 10,291 | closed | Fix example links in the task summary | # What does this PR do?
This PR fixes (and adds or removes) the links shown in the task summary.
Fixes #10288 | 02-19-2021 22:34:21 | 02-19-2021 22:34:21 | |
transformers | 10,290 | closed | Trainer train continues after resume_from_checkpoint on a checkpoint with early stop | ## Environment info
When continuing training from a checkpoint, `Trainer` does not check whether the checkpoint terminated with `self.control.should_training_stop == True`.
`self.control.should_training_stop == True` holds when:
1. `state.global_step >= state.max_steps`
* training does not resume on `resume_from_checkpoint` due to recovering steps information (`state.global_step`) from checkpoint state 👍
2. The early stopping condition is True
* training resumes, as there is no mechanism to recover the previous early stopping state 👎
* even `early_stopping_patience_counter` is restarted from 0 on `EarlyStoppingCallback` init, irrespective of `resume_from_checkpoint` 👎
### Who can help
@sgugger as issue in Trainer.
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Initialize `Trainer.train` with `resume_from_checkpoint` pointing to a checkpoint that stopped due to early stopping
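A rough reproduction sketch (tiny random model/data; the hyper-parameters are placeholders - the point is only early stopping followed by a resume):
```python
import torch
from transformers import (
    BertConfig, BertForSequenceClassification,
    Trainer, TrainingArguments, EarlyStoppingCallback,
)

class DummyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 16
    def __getitem__(self, i):
        return {"input_ids": torch.tensor([101, 7592, 102]),
                "attention_mask": torch.tensor([1, 1, 1]),
                "labels": torch.tensor(i % 2)}

model = BertForSequenceClassification(
    BertConfig(hidden_size=32, num_hidden_layers=1, num_attention_heads=2, intermediate_size=32)
)
args = TrainingArguments(
    output_dir="out", num_train_epochs=20, evaluation_strategy="epoch", save_steps=2,
    load_best_model_at_end=True, metric_for_best_model="loss",
)
trainer = Trainer(
    model=model, args=args, train_dataset=DummyDataset(), eval_dataset=DummyDataset(),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()  # stops early once patience runs out
# resuming from the early-stopped checkpoint happily keeps training:
trainer.train(resume_from_checkpoint="out/checkpoint-<last>")  # fill in the last checkpoint dir
```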
## Expected behavior
Training should not happen as the checkpoint loaded had stopped due to early stopping.
| 02-19-2021 22:29:06 | 02-19-2021 22:29:06 | Indeed, I can see the problem. I'm not sure there is an easy fix however and I don't have time right now to build a proper callback checkpointing system. Will have to wait a little bit to be fixed!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,289 | closed | Masking issues with GPT2LMHeadModel.generate() | Is this intended behavior: padding a sentence and attention-masking it does not give the exact same generation result as the same sentence unpadded?
Edit: [This notebook](https://colab.research.google.com/drive/1oyFRFigtSNUYwKO1EQPRHfEqke0-F6_N?usp=sharing) demonstrates this, with the newest version available on colab. I realized that I didn't turn sampling off with the example below but the colab one has sampling off.
```
>>> gpt2_model = transformers.GPT2LMHeadModel.from_pretrained("gpt2")
>>> gpt2_model.generate(torch.tensor([[100,200,300]]))
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
tensor([[100, 200, 300, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12,
75, 84, 12, 75, 84, 12]])
>>> gpt2_model.generate(torch.tensor([[100,200,300,50256]]),attention_mask=torch.tensor([[1,1,1,0]]))
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
tensor([[ 100, 200, 300, 50256, 198, 198, 7, 16, 8, 383,
3381, 366, 75, 1, 1724, 262, 4129, 286, 262, 4731]])
``` | 02-19-2021 21:59:24 | 02-19-2021 21:59:24 | Hey @xxbidiao,
For batched generation GPT2 has to be used in quite a special way... -> could you check out [this](https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/2) forum post to see whether this makes sense for you?<|||||>```
import torch,transformers
gpt2_model = transformers.GPT2LMHeadModel.from_pretrained("gpt2")
print(gpt2_model.generate(torch.tensor([[100,200,300]]),do_sample=False))
print(gpt2_model.generate(torch.tensor([[100,200,300,50256]]),attention_mask=torch.tensor([[1,1,1,0]]),do_sample=False))
print(gpt2_model.generate(torch.tensor([[50256,100,200,300]]),attention_mask=torch.tensor([[0,1,1,1]]),do_sample=False))
```
```
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
tensor([[100, 200, 300, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12,
75, 84, 12, 75, 84, 12]])
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
tensor([[ 100, 200, 300, 50256, 198, 198, 7, 16, 8, 383,
3381, 366, 75, 1, 1724, 262, 4129, 286, 262, 4731]])
tensor([[50256, 100, 200, 300, 84, 12, 75, 84, 12, 75,
84, 12, 75, 84, 12, 75, 84, 12, 75, 84]])
```
Looks like it works! Will double check. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,288 | closed | Minor documentation issue | ## Minor issue in the Fine tuning docs
In the "Named Entity Recognition" section of the "Summary of tasks" documentation page there are some bad links.
Here is a link to the section: https://huggingface.co/transformers/task_summary.html#named-entity-recognition
The text in question is:
## Named Entity Recognition
Named Entity Recognition (NER) is the task of classifying tokens according to a class, for example, identifying a token as a person, an organisation or a location. An example of a named entity recognition dataset is the CoNLL-2003 dataset, which is entirely based on that task. If you would like to fine-tune a model on an NER task, you may leverage the run_ner.py (PyTorch), run_pl_ner.py (leveraging pytorch-lightning) or the run_tf_ner.py (TensorFlow) scripts.
## ISSUE:
The links for the following give a 404 error:
run_pl_ner.py, run_tf_ner.py
| 02-19-2021 21:51:10 | 02-19-2021 21:51:10 | Thanks for flagging! Those have not been updated in a while so I made a pass over that file.<|||||>I still see the bad links. Is the change getting pushed/merged later?<|||||>It will only be seen in the [master documentation](https://huggingface.co/transformers/master/) for now. At the next release, it will become visible in the stable documentation.<|||||>Will the new run_ner.py work with PyTorch and TF? PyTorch-Lightening too?
|
transformers | 10,287 | closed | Deprecate prepare_seq2seq_batch | # What does this PR do?
This PR officially deprecates `prepare_seq2seq_batch` to prepare for its removal in Transformers v5. As discussed before, the proper way to prepare data for sequence-to-sequence tasks is to:
- call the tokenizer on the inputs
- call the tokenizer on the targets inside the `as_target_tokenizer` context manager
When only dealing with input texts without targets, just using the tokenizer call works perfectly well.
For `mBART` and `mBART50` tokenizers the source and target language can be specified at init or changed at any time by setting the attributes `.src_lang` and `.tgt_lang`.
Here is a full example showing how to port old code using `prepare_seq2seq_batch` to the new way in the case of an mBART tokenizer (remove the mentions of `src_lang` and `tgt_lang` for other tokenizers):
```
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
batch = tokenizer.prepare_seq2seq_batch(src_texts, tgt_texts, padding=True, truncation=True, src_lang="en_XX", tgt_lang="ro_RO", return_tensors="pt")
```
becomes
```
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro', src_lang="en_XX", tgt_lang="ro_RO")
batch = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
with tokenizer.as_target_tokenizer():
targets = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt")
batch["labels"] = targets["input_ids"]
```
The languages can be changed at any time with
```
tokenizer.src_lang = new_src_code
tokenizer.tgt_lang = new_tgt_code
```
This PR fixes a few things in `MBartTokenizer` and `MBartTokenizerFast` for the new API to work completely and removes all mentions of `prepare_seq2seq_batch` from the documentation and tests (except the test of that method in the common tests). It was already not used anymore in the seq2seq example `run_seq2seq`. | 02-19-2021 20:27:54 | 02-19-2021 20:27:54 | Hi all! Sorry, but this seems to be cleaner: (Some feature request: #14255)
```python
encoded_train_dataset = train_dataset.map(
    lambda batch: tokenizer.prepare_seq2seq_batch(
        batch['text'], batch['summary'], padding='max_length', truncation=True, max_length=256, max_target_length=64
    ),
    batched=True,
    remove_columns=train_dataset.column_names,
)
``` |
transformers | 10,286 | closed | Introduce save_strategy training argument | * Introduce save_strategy training argument
* collapse EvaluationStrategy and LoggingStrategy into a single TimeStrategy enum
* modify tests to use modified enum
# What does this PR do?
1. Introduce a new `save_strategy` argument to decide on the interval between two model saves during training.
2. Introduce a unified enum `TimeStrategy` which is used across `evaluation_strategy`, `logging_strategy` and `save_strategy`.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Discussed during PR for logging_strategy.](https://github.com/huggingface/transformers/pull/10267)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger | 02-19-2021 20:21:49 | 02-19-2021 20:21:49 | Hi @sgugger,
I got some time to push the changes we talked about in [my previous PR](https://github.com/huggingface/transformers/pull/10267).
Do let me know if I missed something.
Thanks!<|||||>Hi @LysandreJik / @sgugger,
Thanks for your inputs! I can think of some better names, _but before that_:
Is deprecating usage of `EvaluationStrategy` and keeping its definition along with the `TimeStrategy` (or whatever the name would be) in `trainer_utils.py` for the time being a good option? Can also throw a FutureWarning when `EvaluationStrategy` is used.<|||||>Yes that would be the preferred option: not use it anymore but still keep it until v5, and each time someone uses it a `FutureWarning` indicating it's deprecated and will be removed in version 5 is thrown.
Let me know if you have other questions!<|||||>Hi @sgugger,
Seems like the latest changes are failing a certain `make modified_only_fixup` test.
I'm not entirely sure where this test is failing.
Given that this test passes on my local machine, could this be due to some test/doc not being updated correctly?
> 2021-02-26T19:45:22.6161202Z Checking/fixing src/transformers/__init__.py src/transformers/integrations.py src/transformers/models/__init__.py src/transformers/models/auto/configuration_auto.py src/transformers/models/auto/modeling_auto.py src/transformers/models/auto/modeling_tf_auto.py src/transformers/trainer_callback.py src/transformers/trainer_tf.py src/transformers/trainer_utils.py src/transformers/training_args.py src/transformers/training_args_tf.py src/transformers/utils/dummy_pt_objects.py src/transformers/utils/dummy_tf_objects.py src/transformers/utils/dummy_tokenizers_objects.py src/transformers/utils/notebook.py tests/test_trainer.py tests/test_trainer_callback.py utils/check_repo.py
> 2021-02-26T19:45:24.8027639Z All done! ✨ 🍰 ✨
> 2021-02-26T19:45:24.8028839Z 18 files left unchanged.
> 2021-02-26T19:45:28.4905837Z tests/test_trainer.py:1060:37: F821 undefined name 'EvaluationStrategy'
> 2021-02-26T19:45:28.5127548Z make: *** [modified_only_fixup] Error 1
> 2021-02-26T19:45:28.5129201Z Makefile:7: recipe for target 'modified_only_fixup' failed<|||||>Oh and for the failing test, you missed an `EvaluationStrategy` toward the end of `tests/test_trainer.py`, that's why you have the error.<|||||>Thanks! Fixed it now.
Although I see some approaches mentioned on other forums, I'm not entirely sure what would be the best approach to print a warning on usage of enum `EvaluationStrategy`.
If not too complex, you can point me towards how to do it.
Otherwise, you can merge this PR :-).<|||||>I'm not finding anything easy to do that, so I think we can merge for now and I'll keep looking. |
transformers | 10,285 | closed | Random Word Replacement Probability | Hi,
It appears that the token masking function replaces tokens with random words 50% of the time instead of the commented 10%.
https://github.com/huggingface/transformers/blob/709c86b5a925f1efe650e24ee8b1f52bdc5a3acb/src/transformers/data/data_collator.py#L381 | 02-19-2021 19:50:33 | 02-19-2021 19:50:33 | Nevermind, didn't see the not for replaced indices so it's 50% of the remaining 20% after masking |
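i.e. the selection logic is roughly this (paraphrasing the collator, not the exact source):
```python
import torch

labels = torch.randint(0, 1000, (2, 8))  # dummy token ids
probability_matrix = torch.full(labels.shape, 0.15)
masked_indices = torch.bernoulli(probability_matrix).bool()

# 80% of the selected tokens -> [MASK]
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
# of the remaining 20%, half (i.e. 10% overall) -> random word, hence the 0.5
indices_random = (
    torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
)
```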
transformers | 10,284 | closed | Patch zero shot distillation script cuda issue | Quick patch to #10244 replacing an accidental deletion of `.cuda()` when using cuda. Causes error with multi-GPU. | 02-19-2021 19:04:13 | 02-19-2021 19:04:13 | |
transformers | 10,283 | closed | Clean TF BART and TF Seq2Seq template | # What does this PR do?
This PR aims to clean up TF BART and the TF Seq2Seq template by adding explicit keyword arguments and typing, and by updating the documentation in the model implementation to make it easier to understand and read.
| 02-19-2021 18:31:25 | 02-19-2021 18:31:25 | |
transformers | 10,282 | closed | [tests] tests/test_trainer_distributed.py intermittent failure | `tests/test_trainer_distributed.py` fails occasionally on multi-gpu github runner CI and as a result doesn't free up the 29500 default distributed port.
This could be caused by an occasional deadlock discussed in testing_utils.py's `_stream_subprocess`. When debugging one such zombie it was stuck in `exec(eval(sys.stdin.readline()))`.
Note that other similar tests under `examples` don't exhibit the same behavior - perhaps it somehow has to do with this being a different script that it runs (this test runs its own file as the distributed script).
The bt of the subsequent failures is long and confusing, as there are several mixed failures, but it's all really one failure: `Address already in use` since the previous distributed run of the same test didn't free up this port.
A quick check should show which process is bound to it:
```
netstat -tulpn | grep :29500
```
The full bt:
```
NCCL_DEBUG=INFO pytest -sv tests/test_trainer_distributed.py
================================================================ test session starts ================================================================
platform linux -- Python 3.7.4, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python
cachedir: .pytest_cache
rootdir: /home/github_actions/actions-runner/_work/transformers/transformers
plugins: xdist-2.2.1, forked-1.3.0
collected 1 item
tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer
Running: /home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python -m torch.distributed.launch --nproc_per_node=2 /home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py --output_dir /tmp/tmp2k265qn5
stderr: Traceback (most recent call last):
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py", line 82, in <module>
stderr: training_args = parser.parse_args_into_dataclasses()[0]
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py", line 180, in parse_args_into_dataclasses
stderr: obj = dtype(**inputs)
stderr: File "<string>", line 61, in __init__
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 491, in __post_init__
stderr: if is_torch_available() and self.device.type != "cuda" and self.fp16:
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper
stderr: return func(*args, **kwargs)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 620, in device
stderr: return self._setup_devices
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1359, in __get__
stderr: cached = self.fget(obj)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper
stderr: return func(*args, **kwargs)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 605, in _setup_devices
stderr: torch.distributed.init_process_group(backend="nccl")
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 436, in init_process_group
stderr: store, rank, world_size = next(rendezvous_iterator)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 179, in _env_rendezvous_handler
stderr: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
stderr: RuntimeError: Address already in use
stderr: Traceback (most recent call last):
stderr: File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
stderr: "__main__", mod_spec)
stderr: File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
stderr: exec(code, run_globals)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
stderr: main()
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
stderr: cmd=cmd)
stderr: subprocess.CalledProcessError: Command '['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-u', '/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--local_rank=1', '--output_dir', '/tmp/tmp2k265qn5']' returned non-zero exit status 1.
stdout: *****************************************
stdout: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
stdout: *****************************************
stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO Bootstrap : Using [0]ens6:10.128.0.66<0>
stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
stdout:
stdout: multi-gpu-ci-runner:18062:18062 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO NET/Socket : Using [0]ens6:10.128.0.66<0>
stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO Using network Socket
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying
stdout:
stdout: multi-gpu-ci-runner:18062:18089 [1] include/socket.h:403 NCCL WARN Connect to 10.128.0.66<52523> failed : Connection refused
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO bootstrap.cc:95 -> 2
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO bootstrap.cc:309 -> 2
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO init.cc:555 -> 2
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO init.cc:840 -> 2
stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO group.cc:73 -> 2 [Async thread]
stderr: Traceback (most recent call last):
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py", line 82, in <module>
stderr: training_args = parser.parse_args_into_dataclasses()[0]
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py", line 180, in parse_args_into_dataclasses
stderr: obj = dtype(**inputs)
stderr: File "<string>", line 61, in __init__
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 491, in __post_init__
stderr: if is_torch_available() and self.device.type != "cuda" and self.fp16:
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper
stderr: return func(*args, **kwargs)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 620, in device
stderr: return self._setup_devices
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1359, in __get__
stderr: cached = self.fget(obj)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper
stderr: return func(*args, **kwargs)
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 605, in _setup_devices
stderr: torch.distributed.init_process_group(backend="nccl")
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
stderr: barrier()
stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
stderr: work = _default_pg.barrier()
stderr: RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
FAILED
===================================================================== FAILURES ======================================================================
________________________________________________________ TestTrainerDistributed.test_trainer ________________________________________________________
self = <tests.test_trainer_distributed.TestTrainerDistributed testMethod=test_trainer>
@require_torch_multi_gpu
def test_trainer(self):
distributed_args = f"""
-m torch.distributed.launch
--nproc_per_node={torch.cuda.device_count()}
{self.test_file_dir}/test_trainer_distributed.py
""".split()
output_dir = self.get_auto_remove_tmp_dir()
args = f"--output_dir {output_dir}".split()
cmd = [sys.executable] + distributed_args + args
> execute_subprocess_async(cmd, env=self.get_env())
tests/test_trainer_distributed.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cmd = ['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-m', 'torch.distributed.launc.../github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--output_dir', ...]
env = {'HOME': '/home/github_actions', 'KMP_DUPLICATE_LIB_OK': 'True', 'KMP_INIT_AT_FORK': 'FALSE', 'LANG': 'C.UTF-8', ...}, stdin = None
timeout = 180, quiet = False, echo = True
def execute_subprocess_async(cmd, env=None, stdin=None, timeout=180, quiet=False, echo=True) -> _RunOutput:
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
_stream_subprocess(cmd, env=env, stdin=stdin, timeout=timeout, quiet=quiet, echo=echo)
)
cmd_str = " ".join(cmd)
if result.returncode > 0:
stderr = "\n".join(result.stderr)
raise RuntimeError(
> f"'{cmd_str}' failed with returncode {result.returncode}\n\n"
f"The combined stderr from workers follows:\n{stderr}"
)
```
A short-term workaround could be to randomize the port, so this test won't trip over its previous zombie.
```
+ from random import randint
+ master_port = 2950 + randint(1, 99)
distributed_args = f"""
-m torch.distributed.launch
--nproc_per_node={torch.cuda.device_count()}
+ --master_port {master_port}
{self.test_file_dir}/test_trainer_distributed.py
""".split()
```
but this is a band-aid and a real solution is needed. It also will be an issue with any other distributed tests that rely on the same default port number.
I will keep on monitoring the issue.
Meanwhile this PR https://github.com/huggingface/transformers/pull/10281 should help with preventing the incremental number of zombies from scheduled runs.
It's difficult to debug w/o being able to reproduce this problem at will. | 02-19-2021 18:22:27 | 02-19-2021 18:22:27 | One other solution - since this is a single node we could use a unique file rather than port for setting up the distributed process group.
That is `init_process_group()` with `init_method="file:///tmp/unique_file"` - but the trainer currently hardcodes the `env://` method so we may need to make it more flexible around that.
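Roughly what I have in mind, as a sketch only (the file path and env-var plumbing here are assumptions, not the actual trainer code):
```python
import os
import torch.distributed as dist

# Sketch: single-node rendezvous keyed on a throwaway file instead of a TCP port,
# so a zombie process from a previous run cannot hold the rendezvous point.
# torch.distributed.launch still provides RANK/WORLD_SIZE via the environment.
dist.init_process_group(
    backend="nccl",
    init_method="file:///tmp/ci_dist_init_example",  # hypothetical path
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
)
```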
Reference: https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group<|||||>since we are switching to docker runs, this becomes moot, as there will be no processes from previous runs. |
transformers | 10,281 | closed | [CI] Kill any run-away pytest processes | As discussed on slack this PR proposes to change github runner to kill any run-away pytest processes before starting a new job.
@LysandreJik | 02-19-2021 17:22:01 | 02-19-2021 17:22:01 | |
transformers | 10,280 | closed | Trainer.train argument resume_from_last_checkpoint | # 🚀 Feature request
`Trainer.train` accepts `resume_from_checkpoint` argument, which requires the user to explicitly provide the checkpoint location to continue training from.
`resume_from_last_checkpoint` can be useful to resume training by picking the latest checkpoint from `output_dir` of the `TrainingArguments` passed.
## Motivation
1. The checkpoint directory is created by the library, so user needs to navigate to the directory to find the value to provide for `resume_from_checkpoint`
2. User may just want to resume from the last valid checkpoint since their training got disrupted previously (a common scenario for someone to want to resume training). All they know is the `output_dir` they provided initially
This motivates providing a `resume_from_last_checkpoint=True` option for the `Trainer.train(...)` call, which will pick the latest checkpoint from `args.output_dir`. FYI, the `get_last_checkpoint` function from `trainer_utils` can be used to do exactly that.
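A rough sketch of the behaviour I have in mind, assuming a `trainer` and `training_args` are already set up:
```python
from transformers.trainer_utils import get_last_checkpoint

# Sketch: resolve the latest checkpoint folder in output_dir and resume from it.
last_checkpoint = get_last_checkpoint(training_args.output_dir)  # None if nothing saved yet
trainer.train(resume_from_checkpoint=last_checkpoint)
```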
## Your contribution
I can raise a PR if it is a useful feature to have! | 02-19-2021 15:53:02 | 02-19-2021 15:53:02 | Instead of adding a new argument, I would use the existing `resume_from_checkpoint` and change its type to bool or str/PathLike. If it's a bool and if it's `True`, we then use `get_last_checkpoint` to get the last checkpoint in `args.output_dir`. Does that sound good to you?<|||||>Yes, SGTM. I have raised [a PR](https://github.com/huggingface/transformers/pull/10334) doing the same. Do let me know if there is any other change required as well!
**PS**: Can you also review my [other PR](https://github.com/huggingface/transformers/pull/10286) introducing `save_strategy` in `TrainingArguments`? This PR is the last one to round-up the `save_strategy`, `evaluation_strategy` and `logging_strategy` enhancements.
Thanks!<|||||>
Is it possible to train while adding a new category to the dataset using this resume_from_checkpoint argument?
|
transformers | 10,279 | closed | Performance of mbart-large-50-many-to-many-mmt on de/fr/it | Hi everybody
I am using ` mbart-large-50-many-to-many-mmt` and I am running into the following problem.
## Environment info
- `transformers` version: 4.4.0.dev0 (installed from source)
- Platform: Linux
- Python version: 3.8.5
- PyTorch version (CPU): 1.7.1
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I am using the `mbart-large-50-many-to-many-mmt` model for translation and it works as expected when translating German to English but when translating to other languages such as French or Italian it seems broken. I am using the same code as highlighted in the model card.
```Python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
input_text = "Der Mars hat einen neuen Besucher, wenn auch einen robotischen: \
Nach einer mehr als 472 Millionen Kilometer langen Reise setzte am Donnerstagabend \
das amerikanische Roboterfahrzeug Perseverance sanft im Marsstaub auf. "
tokenizer.src_lang = "de_DE"
encoded = tokenizer(input_text, return_tensors="pt")
generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0]
#--> '</s>en_XX Mars has a new visitor, but also a robotic one: After a journey of more than 472 million kilometers, the American robotic vehicle Perseverance gently set off in Mars dust on Thursday evening.</s>'
tokenizer.src_lang = "de_DE"
encoded = tokenizer(input_text, return_tensors="pt")
generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0]
#--> '</s>fr_XX On Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars.</s>'
tokenizer.src_lang = "de_DE"
encoded = tokenizer(input_text, return_tensors="pt")
generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0]
#--> '</s>it_IT Mars has a new visitor, anche robotico: After a journey di più di 472 milioni di chilometri, Thursday evening, the American robot vehicle Perseverance si è calmato in the dust of Mars.</s>'
```
Am I doing something wrong with the translation or is the performance on these languages expected to be worse? | 02-19-2021 15:52:24 | 02-19-2021 15:52:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,278 | closed | Improving training time for Marian MT model with the Trainer | ## Environment info
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.9
- PyTorch version (GPU): 1.7.1
- Using GPU in script?: Yes (2 Tesla V100 GPUs with 16160MiB memory)
- CUDA Version: 11.0
- transformers: 4.3.2
- datasets: 1.3.0
### Who can help
@lhoestq @sgugger @sshleifer
## Information
The model I am using is a [MarianMTModel](https://huggingface.co/transformers/model_doc/marian.html#marianmtmodel) to train a machine translation model for Italian to Dutch. In order to do so, I perform the following steps:
- Split the data
1. The dataset originally consists of 2 parts, 1 text file containing Italian sentences and 1 text file containing the corresponding Dutch sentences. (42,940,499 sentences). These text files are combined in a pandas dataframe with a source and target column and is written to a csv file. Based on this [issue](https://github.com/huggingface/datasets/issues/610#issuecomment-691672919), which solved out-of-memory issues, I split the dataframe in 1000 csv chunks.
2. The csv files are loaded with the load_dataset method (which results in a 7.5G csv-train.arrow file) from the datasets library as follows:
```python
train_files = glob.glob(data_folder + 'shards/data_chunk_train_*') # list of the individual csv files
train_dataset = load_dataset('csv', data_files=train_files, split='train', cache_dir=cache_folder, keep_in_memory=True)
```
I use the keep_in_memory=True here to hopefully make things faster during training.
- Tokenization
1. At first, I batch-tokenized all the sentences with the map function. However, this resulted in 16 * 32G cache-files and gave a training time of 2000 hours. So I changed this to use the set_transform method in the latest release from datasets as follows:
``` python
def encode(example):
return tokenizer.prepare_seq2seq_batch(src_texts=example['source'], tgt_texts=example['target'], padding='max_length', max_length=512)
train_dataset.set_transform(encode)
```
I use the max_length here to tokenize every sentence to the same size.
The tokenizer is a MarianTokenizer (where the spm files and vocab are trained with sentencepiece) and is defined as follows:
```python
tokenizer = MarianTokenizer(vocab='tokenizer/vocab.json', source_spm='tokenizer/source.model',
target_spm='tokenizer/target.model', source_lang='it', target_lang='nl', model_max_length=512)
```
- Model
1. A MarianMTModel is configured with the following MarianConfig (same config as the pretrained MarianMT models):
``` python
model = MarianConfig(decoder_layers=6, encoder_layers=6, d_model= 512, decoder_attention_heads=8,
decoder_ffn_dim=2048, decoder_layerdrop=0.0, encoder_attention_heads=8, encoder_ffn_dim=2048,
encoder_layerdrop=0.0, max_position_embeddings=512)
model = MarianMTModel(configuration)
```
- Training
1. After I loaded the data, created the MarianTokenizer and configured the MarianMTmodel I started training with the Trainer. I used the following trainingArguments:
``` python
training_args = TrainingArguments(num_train_epochs=3, per_device_train_batch_size=12,
per_device_eval_batch_size=12, warmup_steps=100, weight_decay=0.01, logging_dir='./logs', logging_steps=5000,
save_steps=10000, disable_tqdm=False, logging_first_step=True, fp16=True, remove_unused_columns = False)
```
I was not able to increase the batch size, as this gave out of memory errors on the GPU. And finally started training:
```python
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=dev_dataset # contains 50,000 samples
)
trainer.train()
```
## Expected behavior
I expect the model to train within a reasonable amount of time (i.e. a couple of days). However, the training process is going to take about 500 hours:
0 %| | 219/5367564 [01:18<531:01:44, 2.81it/s]
I was wondering if this is expected behaviour that it takes that long to train. Could you please give me any suggestions on how to modify this to make it faster? | 02-19-2021 15:12:20 | 02-19-2021 15:12:20 | +1 I found the same problem. The bottleneck seems to be huggingface/datasets. Hence I switched back to use the old customized dataset, which was way more faster.<|||||>Hi ! There's currently an issue in huggingface/datasets that makes iterating through the dataset slow if your dataset is big.
We're working on a fix and we'll do a new release soon to address this :)
I'll ping you when the fix is ready if you want to try it out !
edit: https://github.com/huggingface/datasets/pull/2122 fixed it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@gyin-ai could you share your custom solution? I'm running into the same problem |
transformers | 10,277 | closed | ImportError: cannot import name 'pipeline' from 'transformers' (unknown location) | Hi, I created an env with conda, installed TF, then installed PyTorch, then "pip install git+https://github.com/huggingface/transformers", but when I ran 'python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"', it gave me the ImportError. How can I resolve this? | 02-19-2021 14:47:13 | 02-19-2021 14:47:13 | I'm on MacOS btw.<|||||>Possibly duplicate of https://github.com/huggingface/transformers/issues/9939<|||||>> Possibly duplicate of https://github.com/huggingface/transformers/issues/9939
I have installed TF 2.0 right at the start. Is there a version to update to resolve this error?<|||||>So after install TF 2.0 with conda, I performed pip install --upgrade tensorflow to v2.4.1 and it works now.<|||||>The issue happens again with latest version of tensorflow and transformers.
`>>> import transformers`
`>>> from transformers import pipeline`
`Traceback (most recent call last):`
` File "<stdin>", line 1, in <module>`
`ImportError: cannot import name 'pipeline' from 'transformers' (unknown location)`
`>>> tensorflow.__version__`
`'2.5.0'`
`>>> transformers.__version__`
`'4.7.0'`<|||||>I had the same problem but had installed the wrong transformers version:
**Initial**
`conda update -c conda-forge transformers`
**Before**
> tensorflow.__version__ # '2.3.0'
> transformers.__version__ # '2.1.1'
**Solution**
`conda install -c huggingface transformers `
**After**
> tensorflow.__version__ # '2.3.0'
> transformers.__version__ # '4.11.3'
> torch.__version__ # '1.10.0'
<|||||>It is related to the solution above. I had to solve it by explicitly stating the version. Otherwise, it keeps installing the conda-forge version.
`conda install -c huggingface transformers =4.11.3`<|||||>What is your python files name? Mine was _tokenizers_ and when I changed it to _use_tokenizers_ it works. Maybe using names like "tokenizers", "pipeline" for files not the best idea.<|||||>> What is your python files name? Mine was _tokenizers_ and when I changed it to _use_tokenizers_ it works. Maybe using names like "tokenizers", "pipeline" for files not the best idea.
I made the mistake of calling mine tokenize.py - once changed the code worked. |
transformers | 10,276 | closed | Move the TF NER example | # What does this PR do?
This PR moves the `run_tf_ner.py` example into the legacy folder because it uses the "legacy" way to train a model with the `utils_ner.py` file.
| 02-19-2021 13:29:45 | 02-19-2021 13:29:45 | Nice! What means "same way of training", same way than what?<|||||>Same way as the current `run_ner` script. |
transformers | 10,275 | closed | Fix squad processor for TF | # What does this PR do?
This PR fixes the SQuAD processor that prepares and creates a `tf.data.Dataset` so that it can be used in the `TFTrainer` through the `run_tf_squad.py` example script.
There were two issues:
1. The `token_type_ids` was forced to be `True` in the tokenizer output even if this argument was not part of the tokenizer's property `model_input_names`. Now it always belongs to the created dataset.
2. The `input_processing` method that parses the inputs of a TF model doesn't allow arguments that are not part of the signature, then the extra features have been removed from the created dataset.
# Fixes issue
#10246 | 02-19-2021 12:59:55 | 02-19-2021 12:59:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,274 | closed | Rework the AMP for TF XLNet | # What does this PR do?
This PR reworks the AMP of XLNet to remove some useless casts for better and less confusing AMP compliancy. | 02-19-2021 10:47:20 | 02-19-2021 10:47:20 | Yes, `bfloat16` is only for TPU. Hence, we cannot really test it elsewhere than inside a TPU context. I have added the `bfloat16` condition only if XLNet is run on TPU because we were handling a specific case when the model is run under AMP. |
transformers | 10,273 | closed | ElectraForQuestionAnswering with SQuADHead | # 🚀 Feature request
Implement ElectraForQuestionAnswering as described in the paper. https://arxiv.org/abs/2003.10555
## Motivation
In the original implementation, the authors use question answering module from XLNet rather than simple linear layer. There is a huge gap of performance between these 2 question answering module, especially on Squad2.0 like tasks. I suggest to follow the original implementation and rename the one with simple linear layer to ElectraForQuestionAnsweringSimple.
## Your contribution
My team and I have implemented it using SQuADHead from modeling utils. I can submit a PR and make other necessary changes.
Code example:
```python
from transformers import ElectraModel, ElectraPreTrainedModel
from transformers.modeling_utils import SQuADHead


class ElectraForQuestionAnswering(ElectraPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.start_n_top = config.start_n_top
        self.end_n_top = config.end_n_top
        self.electra = ElectraModel(config)
        self.squad_head = SQuADHead(config)
        self.init_weights()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        head_mask=None,
        start_positions=None,
        end_positions=None,
        is_impossible=None,
        cls_index=None,
        p_mask=None,
        return_dict=None,
    ):
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        transformer_outputs = self.electra(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            head_mask=head_mask,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        return self.squad_head(
            hidden_states=hidden_states,
            start_positions=start_positions,
            end_positions=end_positions,
            cls_index=cls_index,
            is_impossible=is_impossible,
            p_mask=p_mask,
            return_dict=return_dict,
        )
```
| 02-19-2021 05:58:46 | 02-19-2021 05:58:46 | Hello! We would welcome a PR that offers this. Maybe instead of renaming the current QA model to `Simple` (which would break backwards-compatibility), we could add a new model called `ElectraForQuestionAnsweringBeamSearch`? What do you think?<|||||>Agreed. We should use a new name for the model for backward-compatibility. I will submit a PR soon.<|||||>This sounds great, thanks @bkiat1123!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,272 | closed | Summarization of long text with T5 seems to output random memory content | Hello everyone,
I'm trying to summarize a long text (~1800 words) using the T5 model. I set the max_length and min_length parameters as well, and when I do so, the output seems to contain random memory content...
Here is my code:
```python
# len_text = 1755
inputs = tokenizer.encode("summarize: " + proc_text, return_tensors="pt", max_length=(len_text + 1), truncation=True)
outputs = model.generate(
    inputs,
    max_length=round(len_text / 3),  # ~590 words
    min_length=round(len_text / 5),  # ~350 words
    no_repeat_ngram_size=2,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=False,
)
```
And here the output (~150 words only):
the key is to deploy predictive maintenance on assets where it makes sense. a combination of machine learning and data driven analytics can be used to plan, analyze, plan and expand across an enterprise to gain real savings and improvements. to achieve high operational efficiency and availability, ensuring that all assets are performing at peak performance with high availability and the lowest possible maintenance costs, companies are provided with some compelling options to manage their assets. the cost of adopting the wrong strategy can actually introduce failures in themselves. many companies in the process industry and energy sectors are still- - . s-» - nrhn hh gra,[&..._/*—s... '– [“?;e and(**” (---]" & – / _ »: " *',--..,.-...[[__[*-_-"-&-s-–&&_-(-//-n'-,n-d-'rs/d&m––re»â«_ââ-ââ–â? â€[“?.”123467891012—:’...;‘’’–‘‘'&’-«‘’&–——‘–-—–==– ‘‘-’—-e, ‘’l’ — e–’_—_–_&—&/–... ‘o ’ ‘–/— ‘—’.– “ ‘& ‘ rere i–“– (–,–e’,’ (‘—e&,—...—/& (&.&...’/ ‘...– =–./’=- ‘- «– „– «-»–»- (— (-)-‘&'–«–*–: ‘e-aiia–n–o–s–enyen–i—== ‘=&=?&*&# ;&e : r– and engtd&‘ ‘ ‘ee,,&n&y-r&o-in–y–d’; ‘n’e ‘s&
I attached the full text I'm attempting to summarize as well.
Thanks!
[brief.txt](https://github.com/huggingface/transformers/files/6007898/brief.txt)
| 02-19-2021 05:47:48 | 02-19-2021 05:47:48 | Hey @db1981,
could you please post a fully reproducible code snippet in the following format:
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("...")
...
```
so that we can help you better?<|||||>Hi @patrickvonplaten thanks for you reply! Below the full code!
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_str = "t5-base"
model = T5ForConditionalGeneration.from_pretrained(model_str)
tokenizer = T5Tokenizer.from_pretrained(model_str)
full_path = "./brief.txt"
with open(full_path) as file: # Use file to refer to the file object
text = file.read()
proc_text = text.strip().replace("\n","")
len_text = len(proc_text.split())
inputs = tokenizer.encode("summarize: " + proc_text, return_tensors="pt", max_length= (len_text + 1), truncation=True)
outputs = model.generate(
inputs,
max_length=round(len_text/3),
min_length=round(len_text/5),
no_repeat_ngram_size=2,
length_penalty=2.0,
num_beams=4,
early_stopping=False)
final = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("%d words: %s" % (len(final.split()), final))
<|||||>Yeah actually your input text is too long here I think -> T5 was only trained to handle up to 512 tokens (which corresponds to less than 512 words), so T5 will definitely not perform well for > 1500 words<|||||>@patrickvonplaten correct, T5 was originally trained to handle up to 512 tokens. But recently the Transformers library was updated to handle longer texts, using the Longformer approach. I thought that behind the scenes a longer text would have automatically triggered Longformer... I'll now give it a shot with that class directly.
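For reference, a minimal sketch of what I plan to try with a long-input encoder-decoder (the checkpoint name is an assumption on my side, not verified):
```python
from transformers import LEDForConditionalGeneration, LEDTokenizer

# Sketch: LED (Longformer-Encoder-Decoder) accepts far longer inputs than T5's 512 tokens.
led_name = "allenai/led-large-16384-arxiv"  # assumed summarization checkpoint
led_tokenizer = LEDTokenizer.from_pretrained(led_name)
led_model = LEDForConditionalGeneration.from_pretrained(led_name)

led_inputs = led_tokenizer(proc_text, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = led_model.generate(led_inputs["input_ids"], num_beams=4, max_length=512)
print(led_tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```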
Thanks! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,271 | closed | [test] fix func signature | This PR makes a small fix where the func argument cannot be `None` as it's used w/o checking it's `None`.
@sgugger | 02-18-2021 23:23:50 | 02-18-2021 23:23:50 | |
transformers | 10,270 | closed | [ISSUES.md] propose using google colab to reproduce problems | It makes the reproduction process much faster if a user supplies a google colab notebook where we can see the problem.
This PR adds this suggestion to the existing how-to list.
@sgugger, @LysandreJik | 02-18-2021 23:12:27 | 02-18-2021 23:12:27 | |
transformers | 10,269 | closed | Language Modeling Task (GPT2 / CLM) Does Not Generate Line Breaks? | The legacy run_language_modeling.py script produced output that respected line breaks in the train_data_file. The updated run_clm.py script does not. I imagine this is likely due to how the dataset is processed in the new script, but if it is, how do I intervene and fix it?
## Environment info
- general environment: Google Colab
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [x] the official example scripts: run_clm.py | run_language_modeling.py
* [x] my own modified scripts: colab notebooks that use these scripts
The tasks I am working on is:
* [x] my own task or dataset: Tiny Shakespeare (from text file)
## To reproduce
Steps to reproduce the behavior:
1. Download https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
2. python run_clm.py with --train_file set to input.txt
3. Instantiate finetuned GPT2 model and use model.generate to create new sequence
Colab notebooks may be found below:
Original (with legacy run_language_modeling.py):
https://colab.research.google.com/drive/1ieS4TuaFNJhuunaAM9wVmyp-n8Yx9_la?usp=sharing
Updated (with updated run_clm.py):
https://colab.research.google.com/drive/1dqIzv7WLk7sDOmFhLdMDhyKCIEcvw3lB?usp=sharing
## Expected behavior
When using the legacy run_language_modeling.py script, the output is as expected, with the correct line breaks:
<img width="951" alt="Screen Shot 2021-02-18 at 4 54 21 PM" src="https://user-images.githubusercontent.com/23064382/108426683-0986ca00-720a-11eb-9a3b-ae45fbcd7ce7.png">
When running the updated run_clm.py script, line breaks are conspicuously missing:
<img width="1027" alt="Screen Shot 2021-02-18 at 4 54 37 PM" src="https://user-images.githubusercontent.com/23064382/108426696-10154180-720a-11eb-9792-23b88e71c911.png">
Is there a straightforward way to remedy this?
My thanks as always for this wonderful repo, all your hard work, and any assistance you might be able to provide.
| 02-18-2021 21:59:09 | 02-18-2021 21:59:09 | Hi Colin. I ran into this same issue when I switched over to using the datasets library to load my poetry corpus, where line breaks are super important.
I ended up making a slightly modified version of the built-in [text](https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/text/text.py) loader called text_with_linebreaks, changing line 62 to `batch = batch.splitlines(True)` to keep the newlines.
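For anyone who would rather not touch the loader itself, here is a rough sketch of the same idea, building the dataset by hand so the trailing newlines survive (paths are placeholders):
```python
from datasets import Dataset

# Sketch: keep each line's trailing "\n" by splitting with keepends=True,
# then build the dataset directly instead of going through the "text" loader.
with open("input.txt", encoding="utf-8") as f:
    lines = f.read().splitlines(keepends=True)

raw_dataset = Dataset.from_dict({"text": lines})
```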
<|||||>@jncasey Thanks for the rapid reply! I figured the culprit here might be the switch over to huggingface/datasets. How did you end up incorporating this into your workflow? Did you modify other scripts to reference text_with_linebreaks?<|||||>Yes, my training script is a sloppily modified version of the run_clm.py example. I added a new training arg for whether to keep the line breaks, and check for that arg in the section where the script determines which loader to use based on the file extension of the data files. <|||||>Cc @lhoestq to see how we could surface that functionality more easily.<|||||>Maybe let's add a `keep_linebreaks` parameter to the text loader ? What do you think ?
This is already a feature request: https://github.com/huggingface/datasets/issues/870<|||||>Thanks for the rapid replies, and relevant updates. would there be interest then in surfacing this new functionality an extra level to the run_[c]lm.py script? or should we just modify the relevant load_dataset call in that script?<|||||>We will do that as soon as there is a new release of datasets to pin in the requirements! For now changing the `load_dataset` in the script if you have a source install is the best way.<|||||>That seems a fine enough solution to me. Thanks again for the assistance. I'll close the issue for now. |
transformers | 10,268 | closed | [trainer] implement support for full fp16 in evaluation/predict | This PR allows users to use `model.half()` in evaluation/predict, which may or may not deliver results identical to fp32 eval. The outcome depends on how the model was trained and the application. e.g. if I use `--label_smoothing` with t5-small I get `eval loss = nan`, but bleu scores are exactly the same as with fp32.
### Need
Besides users asking for it in the past, the real need that prompted me to implement this is based on this Issue: https://github.com/huggingface/transformers/issues/10161. To explain - DeepSpeed trains in fp16, while keeping master copy of fp32 weights on cpu, which allows fitting a model like t5-11b (45GB in params) onto a 40GB gpu (only 22.5GB in fp16). But then the user wants to eval and deepspeed is of no use here at the moment. So we need to give a way to users to run full fp16 in eval, which is what this PR proposes.
This PR:
* [x] adds `is_in_train` public Trainer attribute which helps to tell whether `evaluation` is running on its own, or called from `train`
* [x] adds `--fp16_full_eval` to enable the full fp16 mode under eval/predict (while `full-fp16` would read better, I picked the name starting with `--fp16_` to align/group well with the other 3 `--fp16_*` args).
* [x] adds the first test that measures gpu mem deltas - let's hope it proves to work across different gpus
The logic is a bit tricky since we must not `model.to(device)` before `model.half()` or otherwise the model loading will OOM, but I hope I was able to keep it simple and not error-prone. Perhaps instead of replaying `place_on_device` logic at the end of `train` in the deepspeed clean up section - it'd be better to re-play the full logic in the `predict_loop`? So that each stage can decide at its beginning how and when to put the model on device.
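For illustration, the ordering constraint boils down to something like this (a sketch, not the actual Trainer code):
```python
# Sketch: cast to fp16 while the weights are still on CPU, then move them,
# so the full fp32 copy never has to fit on the GPU.
model = model.half()
model = model.to(args.device)  # `args.device` assumed to be the usual TrainingArguments device
```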
A few small related fixes:
* [x] fixes `_wrap_model` to do nothing under deepspeed
* [x] fixes `--fp16` help to remove apex-only comment, as it's outdated.
Questions:
* [ ] Should I add a log, saying that half is used at `model.half()` activation
* [ ] I put it inside `prediction_step` which seems to be the right place, it won't run if it's a re-entrant eval-inside-train
* [ ] as the `inputs` are `ints` I don't think we need to switch them to `half()` as well.
@sgugger | 02-18-2021 21:18:30 | 02-18-2021 21:18:30 | |
transformers | 10,267 | closed | Introduce logging_strategy training argument | Introduce logging_strategy training argument
in TrainingArguments and TFTrainingArguments. (#9838)
# What does this PR do?
1. Introduce a `logging_strategy` argument in TrainingArguments.
2. Define a LoggingStrategy enumeration. This is similar to `EvalStrategy`.
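Roughly, the new pieces look like this; a sketch with assumed names that mirrors `EvaluationStrategy`, not the final merged code:
```python
from enum import Enum

class LoggingStrategy(Enum):
    NO = "no"
    STEPS = "steps"
    EPOCH = "epoch"

# Usage sketch: log once per epoch instead of every `logging_steps` steps.
# args = TrainingArguments(output_dir="out", logging_strategy="epoch")
```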
Fixes #9838
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link to issue](https://github.com/huggingface/transformers/issues/9838).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
As changes in trainer: @sgugger | 02-18-2021 20:18:31 | 02-18-2021 20:18:31 | Currently WIP.
Thanks!<|||||>Thanks, yes that worked out!
IMO, defaulting `eval_steps` to `logging_steps` is not a good decision any longer.
With `logging_strategy` introduced, it seems more intuitive to decouple both. In case user chooses `logging_strategy="epoch"`, `logging_steps` is no longer a valid quantity.
What is your take on this?<|||||>> In case user chooses logging_strategy="epoch", logging_steps is no longer a valid quantity.
It will just be ignored in that case, so there is no weird behavior for the user.<|||||>Sure! That makes sense.
You can review the changes and let me know if any changes required.
Thanks<|||||>Yes, something like `TimeInterval` (or `TimeStrategy`) will be a good generic enum.
I can work on this generic enum and `saving_strategy` early next week most probably. Will raise a PR soon enough.
For now I've made the amendments to introduce `LoggingStrategy.NO`.<|||||>Great! We can merge this in the meantime.
Looking forward to your next PR! |
transformers | 10,266 | closed | [trainer] add Trainer methods for metrics logging and saving | This PR introduces:
* [x] `trainer.log_metrics` - to perform consistent formatting for logged metrics
* [x] `trainer.save_metrics` - to save the metrics
This removes a lot of pointless noise from the example scripts and makes them much easier to read and understand. It doesn't take away from a user understanding the example, since these helper methods are just removing formatting and file saving.
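A rough usage sketch of the two helpers described above:
```python
# Sketch: in an example script, after evaluation.
metrics = trainer.evaluate()
trainer.log_metrics("eval", metrics)   # consistent, human-readable formatting
trainer.save_metrics("eval", metrics)  # writes eval_results.json into output_dir
```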
If accepted it should be easy to replicate to other example scripts so that they all produce a consistent output and are all easier to read.
@sgugger | 02-18-2021 19:54:28 | 02-18-2021 19:54:28 | @sgugger, are you ok if we merge this and I will ask Second Good Issue to help with this - I'm not sure I will have time to do this and test all the scripts in the coming days, and since you guys discuss changing this script again, we should probably merge this first.<|||||>I'm fine with that!<|||||>Started an issue here: https://github.com/huggingface/transformers/issues/10337 - This is an easy task so I think First Good Issue might work. Let me know if I should bump it to Second.
|
transformers | 10,265 | closed | Tapas Tokenizer makes DataFrame iterrows() iterator crazy ... | ## Environment info
- `transformers` version: 4.3.2
- Platform: Colab Pro
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 torch-scatter 2.0.5
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Tesla P100
- Using distributed or parallel set-up in script?: no
### Who can help
@n1t0, @LysandreJik
## Information
Model I am using : Tapas
Something very, very strange happens (at least for me, a Computer Science newbie) with the tokenizer when the ingested table has been resampled with the pd.DataFrame.sample() method.
In the following block of code, the rows iterator returns corrupted rows with my table.
I have checked iterrows() outside the Tapas Tokenizer and the rows returned are correct.
But inside the Tokenizer, the rows are sometimes ok and sometimes Cell objects corresponding to the wrong rows!
```python
# Second, replace cell values by Cell objects
for row_index, row in table.iterrows():
    for col_index, cell in enumerate(row):
        table.iloc[row_index, col_index] = Cell(text=cell)
```
The direct result in my case is a crash in the normalize_for_match() method :
AttributeError: 'Cell' object has no attribute 'lower'
which is normal since several rows in the table now are of Cell type and not str.
I cannot see why the rows iterator suddenly returns corrupted data, for both Type and Values.
Thanks
Best regards
Jerome
The problem arises when using:
* [ X] my own modified scripts: I am using the Tapas Tokenizer with shuffled Pandas DataFrame for table.
The tasks I am working on is:
* [ X] my own task or dataset: Total R&D
## To reproduce
Steps to reproduce the behavior:
1. Use a standard Pandas DataFrame read from csv
2. Shuffle this DataFrame by using sample with frac=1
3. Tokenize the DataFrame as table using the Tapas Tokenizer
```python
# Second, replace cell values by Cell objects
for row_index, row in table.iterrows():
    for col_index, cell in enumerate(row):
        table.iloc[row_index, col_index] = Cell(text=cell)
```
## Expected behavior
The iterrows() is returning inconsistent row information for both type and content.
The iterrows() should return consistent row values.
| 02-18-2021 16:54:15 | 02-18-2021 16:54:15 | Hi,
Can you provide the table on which you tried this?
To add numeric value information to the table, each cell in the table is replaced by a `Cell` object. A `Cell` object has 2 attributes: `text` (the original string corresponding to the cell value) and an optional `numeric_value` (which can be a `float_value` or a `date`).
Did you apply `.astype(str)` on your Pandas dataframe before providing it to `TapasTokenizer`? Since this is required before encoding the table.<|||||>Hi Niels,
Indeed, the code changes the DataFrame content to Cell format... but the `iterrows() `returns sometimes a correct row format which is transformed into Cell format... but sometimes a Cell object which is transformed into a Cell (of a Cell) object with text attribute initialized to the original Cell object !! :(
I have applied the `.astype(str)` yeap, before and after the sample() call... just to be sure :)
Here is the table with ; as separator :
Water injected volume (% P.V.);Oil recovery (% I.O.I.P.);Watercut (%)
214.11;61;98
215.23;61;98
216.36;61.1;99
217.49;61.1;99
218.62;61.2;98
219.75;61.2;99
220.88;61.2;98
222.02;61.3;98
223.15;61.3;99
224.28;61.4;98
225.41;61.4;99
226.55;61.4;98
227.67;61.4;99
228.8;61.5;99
229.94;61.5;99
231.07;61.6;98
232.2;61.6;99
233.33;61.6;98
234.47;61.7;99
235.6;61.7;98
236.73;61.8;99
237.86;61.8;99
239;61.8;99
240.11;61.9;99
241.24;61.9;99
242.39;61.9;99
243.51;62;99
244.64;62;99
245.77;62;99
246.9;62.1;99
248.03;62.1;99
How to repeat:
```python
table = pd.read_csv(os.path.join(DATA_PATH, "Table_01.csv"), sep=";").astype(str)
table = table.sample(frac=1.0, random_state=42, replace=False).astype(str)
inputs = tokenizer(table=table, queries=questions, padding='max_length', return_tensors="pt")
```
Good luck :)<|||||>I was able to reproduce it, however when I reset the indices after sampling, it works:
`table = table.sample(frac=1.0, random_state=42, replace=False).reset_index(drop=True).astype(str)`
Will look into why it can't handle without resetting the row indices.
<|||||>Hi Niels,
I have also tried the reset index... and on my side it was crashing just the same. But it was without the drop=True.
This behavior is very strange. I have checked the Pandas documentation and normally sample() should return a DataFrame object... nothing fancy here.
And it does because the iterrows() outside the Tapas Tokenizer works fine :)
So Tapas Tokenizer is doing something on the DataFrame modified by the sample() function :)<|||||>Here's a notebook illustrating the issue, and fixing it:
https://colab.research.google.com/drive/10MbZiMKyEWUGk2Y1fvIj0Y0_lB42NO38?usp=sharing
The reason why you're getting the error is because in the part where each cell of the table is replaced by a Cell object:
https://github.com/huggingface/transformers/blob/97e688bc220514cd5ea072f06b186401c9cfbbd0/src/transformers/models/tapas/tokenization_tapas.py#L2742-L2745, the row indices are used.
This can be fixed by replacing `table` with `table.reset_index(drop=True)` in the first line (or resetting the index of the table before providing it to the tokenizer). Another solution is to replace the final line by `table.iloc[row_index, col_index] = Cell(text=table.iloc[row_index, col_index])`. Will make a small PR to add this.
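In other words, on the user side the workaround described above boils down to this sketch:
```python
# Sketch of the fix: drop the shuffled index before encoding,
# so positional .iloc writes and iterrows() agree again.
table = table.sample(frac=1.0, random_state=42, replace=False).reset_index(drop=True).astype(str)
inputs = tokenizer(table=table, queries=questions, padding="max_length", return_tensors="pt")
```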
Thank you for spotting the error!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,264 | closed | Making TF TransfoXL model compliant with AMP | # What does this PR do?
This PR makes the TF TransfoXL model compliant with AMP. All the slow tests are passing as well for these models.
These two models cannot be XLA compliant for now, as it seems that tf.where cannot be used in XLA if the x and y parameters are None. See the _get_global_attn_indices method which has this case. I have opened [an issue](https://github.com/tensorflow/tensorflow/issues/47211) on the TF repo in order to ask if it is an expected behavior or a bug.
| 02-18-2021 16:04:21 | 02-18-2021 16:04:21 | |
transformers | 10,263 | closed | NER label re-alignment always expects B labelled first sub-words | ## Environment info
- `transformers` version: 4.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- bert, tokenizers, pipelines: @LysandreJik
- trainer, maintained examples: @sgugger
## Information
Model I am using (Bert, XLNet ...): [DistilBERT fine-tuned for conll03](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fine-tune a BERT model for NER/conll03 using the `run_ner.py` example script, all default values
2. Correct the label alignments, see [config.json](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json)
3. Infer using entities that have not been seen at training time, and are composed of multiple word-parts as defined by WordPiece (my assumption as to the cause).
4. Sub-words are labelled but pipeline re-grouping/label alignment relies on perfect sub-word labelling:
E.g. Accenture → A ##cc ##ent ##ure → B-ORG O O O → A (ORG)
E.g. Max Mustermann → Max Must ##erman ##n → B-PER I-PER I-PER O → Max Musterman (PER)
E.g. Elasticsearch → El ##astic ##sea #rch → O O I-MISC O → ##sea (MISC)
## Expected behavior
I would expect that the realignment takes the label from the first word part or the best scoring sub-word part and propagates that label to the entire word, never returning sub-words. The default in `run_ner.py` is to use a padded sub-word label at training as per the BERT paper, but I've not tried setting that to `False` yet as that's not the typical/standard practice.
E.g. Accenture → A ##cc ##ent ##ure → B-ORG O O O → Accenture (ORG)
E.g. Max Mustermann → Max Must ##erman ##n → B-PER I-PER I-PER O → Max Mustermann (PER)
E.g. Elasticsearch → El ##astic ##sea #rch → O O I-MISC O → Elasticsearch (MISC)
I'll add that it seems odd that this business logic is in the `pipeline`. When evaluating on conll03, I assume we are using the sub-words/first word, but this realignment should be considered during evaluation. As-is, I suspect the recall is lower than it should be. | 02-18-2021 15:17:04 | 02-18-2021 15:17:04 | Hello @joshdevins! Indeed, this is a valid issue. The current pipeline outputs tokens that were attributed a class, but ignores the following tokens. For models that were trained with labels on all subwords this works, but using a padded sub-word label like you've done yields unsatisfactory results.
I think we could do better here when specifying `grouped_entities=True` to the NER pipeline, by looking ahead and checking if the tokens following a classified token are subword tokens, in which case they can be grouped alongside the start-of-word token. I think this could be achievable by using offsets in fast tokenizers, as fast tokenizers are necessary for grouped entities anyway.
We can open a Good First Issue for this, or would you like to try your hand at it?<|||||>I think there's a few strategies that can be used to realign labels in the pipeline (I can enumerate these later). However, if we put these strategies in the pipeline only, the [evaluation used in fine-tuning NER with the script](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py#L341-L349) will differ/be more limited since the evaluation currently has just two choices: use the label of the first sub-word only (ignore the other sub-words), or use each of labels on sub-words. It would be best to have the same realignment strategies available in both places.
In addition, the strategy used at training time for evaluation should really be the one that is used in the pipeline (or at least the default). So we might also consider storing the strategy in the config file that the pipeline can later read.
Happy to hear your thoughts. I'm trying to write down all the realignment strategies that make sense so I will update the thread later once I can wrap my head around the options 😆<|||||>Strategies that I can think of for how to label at inference time (+for evaluation):
- If training with padded sub-words/label for first sub-word only, e.g. `Max Mustermann` → `Max` `Must` `##erman` `##n` → `B-PER` `I-PER` `X` `X`
  - Use the label from the first sub-word (default)
- If training with the same label for each sub-word, e.g. `Max Mustermann` → `Max` `Must` `##erman` `##n` → `B-PER` `I-PER` `I-PER` `I-PER`
  - "First": (See above) Use the label from the first sub-word
  - "Max": Use the label with the maximum score across all sub-words
  - "Average": Average the score of each label across each sub-word and take the label with the maximum score (default)
This is a nice example of the latter two, see [Step 4: Evaluation](https://blog.codecentric.de/en/2020/12/ner-with-little-data-transformers-to-the-rescue/)

As a general principle, I would argue that if `grouped_entities=True`, we should never be returning sub-words alone. Either they're part of a word that has a label, or they're not. I honestly still don't understand what the flag `ignore_subwords` is supposed to control 🤷
I would propose two flags:
- `grouped_entities` (boolean) -- note that this implies subword grouping/label realignment (see below)
- `True` will group all words into larger entities, e.g. Max Mustermann -> B-PER I-PER -> "Max Musterman" (PER)
- `False` will leave words separate, e.g. Max Mustermann -> B-PER I-PER -> "Max" (PER), "Mustermann" (PER)
- `subword_label_realignment` (boolean or strategy name)
- `True` will use the default for the way the NER fine-tuning was performed, see default suggestions above
- `False` will leave sub-words alone -- note that this implies that `grouped_entities` should be ignored
- strategy name -- based on the above strategies<|||||>> As a general principle, I would argue that if grouped_entities=True, we should never be returning sub-words alone. Either they're part of a word that has a label, or they're not. I honestly still don't understand what the flag ignore_subwords is supposed to control :shrug:
I definitely agree with that statement, and it seems like the most straightforward way to improve that pipeline. I agree with the two flags you propose. Having finer control over these would be of great utility.
> In addition, the strategy used at training time for evaluation should really be the one that is used in the pipeline (or at least the default). So we might also consider storing the strategy in the config file that the pipeline can later read.
Yes, definitely. These are definitely model-specific as they're reliant on the training, so adding them to the configuration would make things simpler.<|||||>@LysandreJik Sounds good. Unfortunately I don't have time myself to work on this right now but hopefully in the future if someone else doesn't pick this one up.<|||||>I'll put this up as a good first issue to see if a member of the community feels like working on it. Thank you for the discussion and for writing all of this up!<|||||>I like to work on this. @LysandreJik besides @joshdevins's solution is there anything that I should consider? Do you have any suggestions?
I'm thinking to add these two flags [here](https://github.com/huggingface/transformers/blob/39f70a405838bec8a8446150d1d8741688a737a2/src/transformers/pipelines/token_classification.py#L76) and probably change `group_sub_entities` and `group_entities ` functions too.<|||||>Wonderful @elk-cloner! I think it's good to take it step by step, and @joshdevins' proposal already offers a very complete approach to re-alignment.
Yes, adding those two flags to the `__init__` makes sense! An important part of the development of that feature will be to develop tests to ensure that the behavior is the expected one. Please ping both @Narsil and I on the PR so that we can review!<|||||>Thanks @elk-cloner for having a look! Happy to contribute by reviewing PRs, etc. |
transformers | 10,262 | closed | Making TF T5 model compliant with AMP and XLA | # What does this PR do?
This PR makes the TF T5 model compliant with AMP and XLA. All the slow tests are passing as well for the model. | 02-18-2021 15:13:05 | 02-18-2021 15:13:05 | |
transformers | 10,261 | closed | Making TF OpenAI GPT model compliant with AMP and XLA | # What does this PR do?
This PR makes the TF OpenAI GPT model compliant with AMP and XLA. All the slow tests are passing as well for the model. | 02-18-2021 14:13:44 | 02-18-2021 14:13:44 | |
transformers | 10,260 | closed | Making TF MPNet model compliant with XLA | # What does this PR do?
This PR makes the TF MPNet model compliant with XLA. All the slow tests are passing as well for the model.
| 02-18-2021 13:30:23 | 02-18-2021 13:30:23 | |
transformers | 10,259 | closed | Making TF MobileBert model compliant with AMP | # What does this PR do?
This PR makes the TF MobileBert model compliant with AMP. All the slow tests are passing as well for the model.
| 02-18-2021 12:24:21 | 02-18-2021 12:24:21 | |
transformers | 10,258 | closed | Deberta Tokenizer convert_ids_to_tokens() is not giving expected results | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0
- Platform: Colab
- Python version: 3.9
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
I am using Deberta Tokenizer. `convert_ids_to_tokens()` of the tokenizer is not working fine.
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Get the Deberta Tokenizer
```python
from transformers import DebertaTokenizer
deberta_tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
```
2. Encode Some Example Using Tokenizer
```python
example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"
encoded_example = deberta_tokenizer.encode(example)
```
3. Convert Ids to tokens:
```python
deberta_tokenizer.convert_ids_to_tokens(encoded_example)
"""
Output: ['[CLS]', '17250', '314', '716', '16581', '324', '3447', '13', '314', '1043', '281', '2071', '287', '1024', '4835', '64', '29130', '7509', '[SEP]']
"""
```
[Colab Link For Reproducing](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DebertaTokenizerIssue.ipynb)
## Expected behavior
It should return some tokens like this
```
['[CLS]', 'hi', 'i', 'am', 'b', '##had', '##resh', '.', 'i', 'found', 'an', 'issue', 'in', 'de', '##bert', '##a', 'token', '##izer', '[SEP]']
```
It should not just convert each integer id to its string form, which is the current behavior.
#### Tagging SMEs for help:
@n1t0, @LysandreJik | 02-18-2021 12:16:18 | 02-18-2021 12:16:18 | It seems like expected behavior, but something is still not right with this tokenizer.<|||||>Seems like they have not implemented a decoder for the tokenizer. I will have a look at it.<|||||>It might be expected behaviour because it is based on the GPT-2 tokenizer, which also gives similar results.
<|||||>That is not true:
```
from transformers import GPT2Tokenizer
t = GPT2Tokenizer.from_pretrained('gpt2')
encoded = t("Hi I am Bhadresh. I found an issue in Deberta Tokenizer")
t.convert_ids_to_tokens(encoded['input_ids'])
```
['Hi',
'ĠI',
'Ġam',
'ĠBh',
'ad',
'resh',
'.',
'ĠI',
'Ġfound',
'Ġan',
'Ġissue',
'Ġin',
'ĠDe',
'bert',
'a',
'ĠToken',
'izer']<|||||>Yeah, you are right! Something is missing in the implementation, and I can't figure out what.
I tried to convert the SQuAD2 dataset into features using the squad.py preprocessing file.
After conversion, when I decode the input ids, the context comes back like this:
`Who are you?IamBhadresh`
I mean, the spaces in the context are not preserved!
The convert-examples-to-features step uses `convert_ids_to_tokens` internally, and I suspect that is what causes the issue.<|||||>You can convert them back with the following code:
```
from transformers import DebertaTokenizer
t = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"
encoded_example = t.encode(example)
# Map each id back to its surface form, leaving special tokens ([CLS], [SEP], ...) untouched.
[t.gpt2_tokenizer.decode([t.gpt2_tokenizer.sym(id)]) if t.gpt2_tokenizer.sym(id) not in t.all_special_tokens else t.gpt2_tokenizer.sym(id) for id in encoded_example]
```
Output:
```
['[CLS]',
'Hi',
' I',
' am',
' Bh',
'ad',
'resh',
'.',
' I',
' found',
' an',
' issue',
' in',
' De',
'bert',
'a',
' Token',
'izer',
'[SEP]']
```
After some digging into the code, I am actually not sure if I should create a patch for it or not. I think with a patch we can **probably** also remove the method [download_asset](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L224) and refactor the [load_vocab](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L270) method.
I am not sure if this was discussed before but when we create the required files from the `bpe_encoder.bin`, we could probably get rid of the [GPT2Tokenizer](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L301) class in tokenization_deberta.py and the DebertaTokenizer could inherit directly from the GPT2Tokenizer (like the RobertaTokenizer).
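For what it's worth, a rough sketch of what that inheritance could look like; this is only an illustration, it assumes `vocab.json`/`merges.txt` files extracted from `bpe_encoder.bin`, and it is not the actual library implementation:
```python
from transformers import GPT2Tokenizer

class DebertaStyleTokenizer(GPT2Tokenizer):
    # Hypothetical sketch: reuse GPT-2's byte-level BPE but wrap sequences
    # with BERT-style [CLS]/[SEP] special tokens, like RobertaTokenizer does.
    def __init__(self, vocab_file, merges_file, **kwargs):
        kwargs.setdefault("cls_token", "[CLS]")
        kwargs.setdefault("sep_token", "[SEP]")
        kwargs.setdefault("pad_token", "[PAD]")
        kwargs.setdefault("unk_token", "[UNK]")
        super().__init__(vocab_file, merges_file, **kwargs)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        cls, sep = [self.cls_token_id], [self.sep_token_id]
        if token_ids_1 is None:
            return cls + token_ids_0 + sep
        return cls + token_ids_0 + sep + token_ids_1 + sep
```
The real class would also need overrides such as `get_special_tokens_mask` and `create_token_type_ids_from_sequences`, the way `RobertaTokenizer` does.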
I will leave it to @LysandreJik and @BigBird01 to decide what to do with it. <|||||>@BigBird01, would you be open to having the `DebertaTokenizer` inheriting directly from the GPT-2 tokenizer as @cronoik proposes? It would prevent such cases like the one mentioned in this issue from happening.<|||||>Yes. Let’s do it this way.
<|||||>@cronoik do you want to take a stab at it?<|||||>Yes.
@LysandreJik |
transformers | 10,257 | closed | Making TF Lxmert model compliant with AMP | # What does this PR do?
This PR makes the TF Lxmert model compliant with AMP. All the slow tests are passing as well for the model. | 02-18-2021 12:06:38 | 02-18-2021 12:06:38 | |
transformers | 10,256 | closed | [Question]: Register new Tokenizer | Hi there,
I'm in the process of creating a new Transformer model. I have my own codebase and I'm using Transformers as an external library. If I implement a new Tokenizer that inherits from an existing one (say the BERT one) is there any way to "register" my new tokenizer so that Huggingface automatically instantiate it? I would like to support the `AutoTokenizer` API:
```python
tokenizer = AutoTokenizer.from_pretrained("heriot-watt/my_model_name")
```
And I would like `AutoTokenizer` to look in my PYTHONPATH and automatically resolve the `Tokenizer` class registered under the name `my_model_name`. I've seen that Transformers currently uses a hardcoded resolution strategy defined in `configuration_auto.py` and `tokenization_auto.py`. For comparison, AllenNLP uses a nice register annotation to automatically resolve models, dataset readers and so on. What would be the best solution here?
Thanks for your answer,
Alessandro | 02-18-2021 11:02:58 | 02-18-2021 11:02:58 | Hi! `AutoTokenizer` is only used to redirect to the correct tokenizer implementation under the hood, and not to resolve to any tokenizer object. The procedure here would be to create your tokenizer like you want it to be, either by using the `tokenizers` library, by tweaking an existing one or by creating yours from scratch.
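For example, a minimal sketch of training a brand-new tokenizer with the `tokenizers` library could look like the following; the corpus file, vocabulary size, special tokens and output paths are placeholders, not something from this thread:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

from transformers import PreTrainedTokenizerFast

# Train a small byte-pair-encoding tokenizer on a plain-text corpus (placeholder file name).
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=30_000, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["my_corpus.txt"], trainer=trainer)
tokenizer.save("my_tokenizer.json")

# Wrap it so it behaves like any other transformers tokenizer and can be saved/reloaded.
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="my_tokenizer.json",
    unk_token="[UNK]", cls_token="[CLS]", sep_token="[SEP]", pad_token="[PAD]", mask_token="[MASK]",
)
fast_tokenizer.save_pretrained("heriot-watt/my_model_name")
```
Wrapping the trained tokenizer in `PreTrainedTokenizerFast` is what makes it usable with the rest of the library.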
Then, you can open a PR on the repo to have your tokenizer/model added to the available architectures and to the `Auto*` classes, so that others may leverage your checkpoints easily.<|||||>So I take it you're not planning to have automatic module discovery. I see. Anyway, I feel like an equally nice way to solve this is to have a folder on the current path called `heriot-watt/my_model_name`. In it, I keep the config and tokenizer files that belong to the `Tokenizer` I'm inheriting from. Then, in my package's `__init__.py` I had to add the following:
```python
# Imports for the Auto* lookup tables (module paths as of transformers ~4.3; adjust if they move).
# MyModelConfig, MyModel, MyModelTokenizer and MyModelTokenizerFast are this package's own classes.
from transformers.models.auto.configuration_auto import CONFIG_MAPPING, MODEL_NAMES_MAPPING
from transformers.models.auto.modeling_auto import MODEL_MAPPING
from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING
MODEL_MAPPING.update({
MyModelConfig: MyModel
})
CONFIG_MAPPING.update({
"my_model": MyModelConfig
})
TOKENIZER_MAPPING.update({
MyModelConfig: (MyModelTokenizer, MyModelTokenizerFast)
})
MODEL_NAMES_MAPPING.update({
"my_model_name": "MyModel"
})
```
In this way, I'm able to use the `Auto*` API just fine :) <|||||>Thanks for showing us how you do it! That's a very interesting usage of the AutoModels, and definitely something we would be interested in adding. For example via a `transformers.register_auto_model(xxx)` or something along those lines.<|||||>I think the AllenNLP registrable is a very good starting point for this: https://github.com/allenai/allennlp/blob/main/allennlp/common/registrable.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe this is reckless, but I could see value in at least partially inverting this relationship. If my `.save_pretrained()` implementation could drop a hint about what module an implementation resides in, Auto Classes could have the ability to try a dynamic import without needing any registration api, and the `Auto*.from_pretrained()` caller would be relieved of the burden of making sure implementation classes are loaded ahead of time.
I honestly went looking for where this happened in the code multiple times and assumed I just hadn't figured out how it worked yet.<|||||>This is sloppy and hardly thought through, but
```diff
diff --git a/src/transformers/models/auto/tokenization_auto.py b/src/transformers/models/auto/tokenization_auto.py
index f07e366c7..3ad9d1e22 100644
--- a/src/transformers/models/auto/tokenization_auto.py
+++ b/src/transformers/models/auto/tokenization_auto.py
@@ -14,6 +14,7 @@
# limitations under the License.
""" Auto Tokenizer class. """
+import importlib
import json
import os
from collections import OrderedDict
@@ -538,6 +539,10 @@ class AutoTokenizer:
if tokenizer_class is None:
tokenizer_class_candidate = config_tokenizer_class
tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)
+ if tokenizer_class is None:
+ tokenizer_module = tokenizer_config.get("tokenizer_module")
+ tokenizer_module = importlib.import_module(tokenizer_module)
+ tokenizer_class = getattr(tokenizer_module, config_tokenizer_class)
if tokenizer_class is None:
raise ValueError(
```
for example, would allow subclasses that were not officially included with `transformers` to use
`super().__init__(..., tokenizer_module=self.__module__, ...)` in their constructor. That seems to be enough for the setting to save in the tokenizer_config.json file. Then the caller would no longer have to be aware of what imports are necessary for a `.from_pretrained()` call to succeed.<|||||>After 9870093f7b31bf774fe6bdfeed5e08f0d4649b07 I am unsure how to use a third party tokenizer class because `transformers.models.auto.tokenization_auto.tokenizer_class_from_name()` is using
```python
module = importlib.import_module(f".{module_name}", "transformers.models")
```
and trying to load and use anything outside of transformers raises
```
ValueError: attempted relative import beyond top-level package
```
The workaround I have at the moment is adding
```python
transformers.models.auto.tokenization_auto.TOKENIZER_MAPPING_NAMES.update((
("MyModel", ('MyModelTokenizer', 'MyModelTokenizerFast')),
))
sys.modules['transformers.models.MyModel'] = sys.modules[__name__]
```
to replace the `TOKENIZER_MAPPING` patch used in previous versions. But dynamically patching in additional modules seems far more aggressive than updating data structures.
It would have been very convenient here if the module names in TOKENIZER_MAPPING_NAMES had included the "." rather than it being added by `tokenizer_class_from_name()`. |
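For completeness, a rough sketch of the kind of `register_auto_model`-style helper floated earlier in this thread; this is purely hypothetical, not an existing `transformers` API, and it only bundles the same mapping updates shown above (transformers ~4.3 layout):
```python
# Hypothetical helper, not an existing transformers API: register a third-party
# config/model/tokenizer triple with the Auto* lookup tables in one call.
from transformers.models.auto.configuration_auto import CONFIG_MAPPING, MODEL_NAMES_MAPPING
from transformers.models.auto.modeling_auto import MODEL_MAPPING
from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING

def register_auto_model(model_type, config_cls, model_cls, tokenizer_cls, fast_tokenizer_cls=None):
    CONFIG_MAPPING.update({model_type: config_cls})
    MODEL_NAMES_MAPPING.update({model_type: model_cls.__name__})
    MODEL_MAPPING.update({config_cls: model_cls})
    TOKENIZER_MAPPING.update({config_cls: (tokenizer_cls, fast_tokenizer_cls)})
```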
transformers | 10,255 | closed | Addition of on-the-fly loading for MLM training and fix for default pad_to_max_length value for TPU | # What does this PR do?
Fixes #10204, #10024
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @lhoestq @patil-suraj
| 02-18-2021 10:08:48 | 02-18-2021 10:08:48 | Thanks for your PR! We don't want to switch the examples to use on-the-fly tokenization however as in most cases it's actually faster to do it once and for all. Having to do it on-the-fly for a training with huge data is more of a specific use-case. Your PR can be referenced as an example of how to do it in practice but I don't think we will merge it. |
transformers | 10,254 | closed | ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location) | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.121-linuxkit-x86_64-with-debian-10.1
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: <NO>
- Using distributed or parallel set-up in script?: <NO>
### Who can help
Model:
https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (translation)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import os
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_en = "The head of the United Nations says there is no military solution in Syria"
model = MBartForConditionalGeneration.from_pretrained(
"facebook/mbart-large-50-one-to-many-mmt", cache_dir=os.getenv("cache_dir", "model"))
tokenizer = MBart50TokenizerFast.from_pretrained(
"facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")
model_inputs = tokenizer(article_en, return_tensors="pt")
# translate from English to Hindi
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है'
# translate from English to Chinese
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"]
)
decoded = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => '联合国首脑说,叙利亚没有军事解决办法'
print(decoded)
````
ERROR:
```
Traceback (most recent call last):
File "src/translation/run.py", line 7, in <module>
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location)
```
## Expected behavior
no error
| 02-18-2021 07:58:01 | 02-18-2021 07:58:01 | Hi @loretoparisi
Did you install sentencepiece ? The tokenizer needs sentencepiece<|||||>@patil-suraj thanks I did right now
```
root@d2f0e8a5ec76:/app# pip install sentencepiece
Collecting sentencepiece
Downloading https://files.pythonhosted.org/packages/f5/99/e0808cb947ba10f575839c43e8fafc9cc44e4a7a2c8f79c60db48220a577/sentencepiece-0.1.95-cp37-cp37m-manylinux2014_x86_64.whl (1.2MB)
|████████████████████████████████| 1.2MB 507kB/s
Installing collected packages: sentencepiece
Successfully installed sentencepiece-0.1.95
WARNING: You are using pip version 19.3; however, version 21.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
root@d2f0e8a5ec76:/app# python src/translation/run.py
Traceback (most recent call last):
File "src/translation/run.py", line 7, in <module>
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location)
```
Codebase is here: https://github.com/loretoparisi/hf-experiments/blob/master/src/translation/run.py<|||||>Hi @loretoparisi! Could you show the results of `pip list` so we can investigate? Maybe `tokenizers` is missing, that's what's required for the fast tokenizer. Thanks!<|||||>@LysandreJik of course!
```
root@d2f0e8a5ec76:/app# pip list
Package Version
---------------------- ------------
absl-py 0.11.0
appdirs 1.4.4
astunparse 1.6.3
audioread 2.1.9
cached-property 1.5.2
cachetools 4.2.1
certifi 2020.12.5
cffi 1.14.5
chardet 4.0.0
click 7.1.2
cycler 0.10.0
decorator 4.4.2
docopt 0.6.2
filelock 3.0.12
flatbuffers 1.12
gast 0.3.3
google-auth 1.26.1
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
grpcio 1.32.0
h5py 2.10.0
idna 2.10
imageio 2.9.0
importlib-metadata 3.4.0
joblib 1.0.1
Keras 2.4.3
Keras-Preprocessing 1.1.2
kiwisolver 1.3.1
librosa 0.8.0
llvmlite 0.35.0
Markdown 3.3.3
matplotlib 3.3.4
munkres 1.1.4
networkx 2.5
numba 0.52.0
numpy 1.19.5
oauthlib 3.1.0
opt-einsum 3.3.0
packaging 20.9
pandas 1.2.2
Pillow 8.1.0
pip 19.3
pooch 1.3.0
protobuf 3.14.0
pyannote.algorithms 0.8
pyannote.core 4.1
pyannote.parser 0.8
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
pyparsing 2.4.7
python-dateutil 2.8.1
pytz 2021.1
PyWavelets 1.1.1
PyYAML 5.4.1
regex 2020.11.13
requests 2.25.1
requests-oauthlib 1.3.0
resampy 0.2.2
rsa 4.7.1
sacremoses 0.0.43
scikit-image 0.18.1
scikit-learn 0.24.1
scipy 1.6.0
sentencepiece 0.1.95
setuptools 41.4.0
SIDEKIT 1.3.8.5.2
simplejson 3.17.2
six 1.15.0
sortedcollections 2.1.0
sortedcontainers 2.3.0
SoundFile 0.10.3.post1
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
tensorflow 2.4.1
tensorflow-estimator 2.4.0
termcolor 1.1.0
threadpoolctl 2.1.0
tifffile 2021.2.1
tokenizers 0.10.1
torch 1.7.1
torchvision 0.8.2
tqdm 4.56.2
transformers 4.3.2
typing-extensions 3.7.4.3
urllib3 1.26.3
Werkzeug 1.0.1
wheel 0.36.2
wrapt 1.12.1
xarray 0.16.2
zipp 3.4.0
```
Here I can see `tokenizers 0.10.1 `, not sure if that's the right version though.<|||||>Ah, I think I have found the culprit! MBart-50 was only just released on the `master` branch and you seem to be using version v4.3.2, which does not have it yet. Could you install from source and let me know if you still have the issue?<|||||>I installed transformer 4.3.2
Could any let me know how to install it from the source?

<|||||>To install from source clone the repo and run `pip install .` from the root of the repo or run
`pip install git+https://github.com/huggingface/transformers.git`, which will install the master branch.
<|||||>Confirmed it works with master branch install!
```
['联合国首脑说,叙利亚没有军事解决办法']
``` |
transformers | 10,253 | closed | Load custom models | ## Environment info
- `transformers` version: 4.3.2
- Platform: RHEL 7
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0 (GPU)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @LysandreJik
## Information
Model I am using (Bert, XLNet ...): A custom model
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [* ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [*] my own task or dataset: (give details below)
I created a custom model by extending the SqueezeBertPreTrainedModel and added another classification head for multi-task learning. Trained with Trainer and TrainingArguments successfully, and saved the model by calling trainer.save_model(TRAINED_MODEL_PATH). Everything worked fine.
However, when I tried to load the model by calling MyCustomModelClass.from_pretrained(TRAINED_MODEL_PATH, local_files_only=True), an error was thrown:
```
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 424, in get_config_dict
use_auth_token=use_auth_token,
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/file_utils.py", line 1086, in cached_path
local_files_only=local_files_only,
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/file_utils.py", line 1259, in get_from_cache
"Cannot find the requested files in the cached path and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ec2-user/workspaces/compressed_transformers/src/Compressed_transformers/compression/evaluate.py", line 135, in <module>
local_files_only=True,
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 962, in from_pretrained
**kwargs,
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 376, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 436, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for './SqueezeBert/results/best_checkpoint/config.json'. Make sure that:
- './SqueezeBert/results/best_checkpoint/config.json' is a correct model identifier listed on 'https://huggingface.co/models'
- or './SqueezeBert/results/best_checkpoint/config.json' is the correct path to a directory containing a config.json file
```
## To reproduce
Steps to reproduce the behavior:
1. Extend the SqueezeBertPreTrainedModel (maybe other PreTrainedModel classes as well) class and create a model with a dataset
2. Train the model with the dataset and save the model using trainer.save_model(MODEL_DIR)
3. Load the model by calling MyCustomModelClass.from_pretrained(MODEL_DIR)
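For reference, a minimal sketch of the kind of subclass described above; the class name, heads and paths are placeholders for illustration, not the reporter's actual code, and loss computation is omitted:
```python
import torch.nn as nn
from transformers import SqueezeBertModel, SqueezeBertPreTrainedModel

class MyCustomModelClass(SqueezeBertPreTrainedModel):
    # Hypothetical multi-task model: a shared SqueezeBERT encoder with two classification heads.
    def __init__(self, config):
        super().__init__(config)
        self.transformer = SqueezeBertModel(config)
        self.head_a = nn.Linear(config.hidden_size, config.num_labels)
        self.head_b = nn.Linear(config.hidden_size, 2)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        outputs = self.transformer(input_ids, attention_mask=attention_mask)
        pooled = outputs.pooler_output
        return self.head_a(pooled), self.head_b(pooled)

# After training, saving and reloading from a plain local directory should work:
# trainer.save_model("/abs/path/to/best_checkpoint")
# model = MyCustomModelClass.from_pretrained("/abs/path/to/best_checkpoint", local_files_only=True)
```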
## Expected behavior
It shouldn't look for models from the internet or model classes available in the library when AutoModel or AutoConfig is not used. When MyCustomModelClass.from_pretrained(MODEL_DIR) is called, it should be able to look up config.json and load the checkpoint correctly.
| 02-18-2021 02:10:11 | 02-18-2021 02:10:11 | Hello! Could you provide a reproducible code example, for example the extended custom model you created, so that we can take a look?
Also, can you let us know what's in the `./SqueezeBert/results/best_checkpoint/` directory? It's trying to look for a configuration file there but it doesn't find it.<|||||>Thank you @LysandreJik for getting back! I have prepared a Google colab and it just ran fine: https://colab.research.google.com/drive/1SKx0DXHrgVUMFK7sk6jU05o_SnFKrc6k#scrollTo=MOijCowF3dqk
There must be something else in my code (which I can't share). Closing this now. |
transformers | 10,252 | closed | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract not available for tensorflow | @jplu
The above-mentioned model is available for PyTorch but not for TensorFlow. How can I convert a PyTorch checkpoint to TensorFlow for this one? Is it possible to contribute the converted weights?
| 02-17-2021 23:16:51 | 02-17-2021 23:16:51 | Hello!
You can load PyTorch weights into Tensorflow with `TFBertModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract", from_pt=True)`<|||||>That works, thanks! |
transformers | 10,251 | closed | [ci] scheduled job test | please ignore
this time a branch on huggingface and not a fork | 02-17-2021 23:06:38 | 02-17-2021 23:06:38 | well, the job never finished, something or something aborted the workflow - so the test wasn't complete.<|||||>The test was probably too long (>6 hours) and was stopped by CircleCI. This PR that was just merged should help in that regard: https://github.com/huggingface/transformers/pull/10152<|||||>That PR won't help, since in this test I removed both tf jobs - it was just one pt set of jobs per runner. Needed to do it since tf jobs were getting to run first.
But otherwise this is an awesome improvement for TF jobs!
|
transformers | 10,250 | closed | [CI] force scheduled action hub re-run | please ignore
testing only SLOW pt job | 02-17-2021 22:59:37 | 02-17-2021 22:59:37 | |
transformers | 10,249 | closed | [CI] force scheduled action hub re-run | please ignore | 02-17-2021 22:32:34 | 02-17-2021 22:32:34 | |
transformers | 10,248 | closed | [CI] 2 fixes | This PR:
- fixes invalid port
- adds a missing requirements install which led to multiple test failures
@LysandreJik
| 02-17-2021 22:08:35 | 02-17-2021 22:08:35 | |
transformers | 10,247 | closed | [BUG] [Ray-Tune] ValueError: checkpoint not in list | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
### Who can help
Models:
- tensorflow: @jplu
Library:
- ray/raytune: @richardliaw, @amogkam
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I want to do this as a text classification task where I have a sequence and I want to classify it into either one of the 20 labels (all of them numeric).
Whenever I start tuning/Hyperparameter search, it starts running the first trial and logs a bit, then goes to the second trial showing the first one to be "running". This is how the logs look like:-
```
You are using PopulationBasedTraining but you haven't enabled checkpointing. This means your trials will train from scratch everytime they are exploiting new configurations. Consider enabling checkpointing by passing `keep_checkpoints_num=1` as an additional argument to `Trainer.hyperparameter_search`.
== Status ==
Memory usage on this node: 4.3/25.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/tune_transformer_pbt
Number of trials: 1/100 (1 RUNNING)
+-----------------+----------+-------+-----------+-------+----------------+--------------+
| Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs |
|-----------------+----------+-------+-----------+-------+----------------+--------------|
| _inner_0755e982 | RUNNING | | 0.366291 | 4e-05 | 8 | 15 |
+-----------------+----------+-------+-----------+-------+----------------+--------------+
Result for _inner_0755e982:
date: 2021-02-17_15-52-04
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.17737865447998
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.8976
eval_samples_per_second: 56.252
experiment_id: 5d2db84f7e9745a997bfcadedbd7d440
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 39957
time_since_restore: 21.89615297317505
time_this_iter_s: 21.89615297317505
time_total_s: 21.89615297317505
timestamp: 1613577124
timesteps_since_restore: 0
training_iteration: 1
trial_id: 0755e982
== Status ==
Memory usage on this node: 6.3/25.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/tune_transformer_pbt
Number of trials: 2/100 (1 PENDING, 1 RUNNING)
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
| Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration |
|-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------|
| _inner_0755e982 | RUNNING | 172.28.0.2:39957 | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 |
| _inner_07580d98 | PENDING | | 0.376876 | 6e-05 | 8 | 10 | | |
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
Result for _inner_0755e982:
date: 2021-02-17_15-52-04
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.17737865447998
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.8976
eval_samples_per_second: 56.252
experiment_id: 5d2db84f7e9745a997bfcadedbd7d440
experiment_tag: 1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 39957
time_since_restore: 21.89615297317505
time_this_iter_s: 21.89615297317505
time_total_s: 21.89615297317505
timestamp: 1613577124
timesteps_since_restore: 0
training_iteration: 1
trial_id: 0755e982
Result for _inner_07580d98:
date: 2021-02-17_15-52-29
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.1666898727417
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.8883
eval_samples_per_second: 56.327
experiment_id: e5cb4d5b00524454b7f673f971318b30
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 39986
time_since_restore: 21.889320135116577
time_this_iter_s: 21.889320135116577
time_total_s: 21.889320135116577
timestamp: 1613577149
timesteps_since_restore: 0
training_iteration: 1
trial_id: 07580d98
== Status ==
Memory usage on this node: 6.3/25.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/tune_transformer_pbt
Number of trials: 3/100 (1 ERROR, 1 PENDING, 1 RUNNING)
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
| Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration |
|-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------|
| _inner_07580d98 | RUNNING | 172.28.0.2:39986 | 0.376876 | 6e-05 | 8 | 10 | 8.16669 | 1 |
| _inner_15e144f6 | PENDING | | 0.196785 | 2e-08 | 8 | 10 | | |
| _inner_0755e982 | ERROR | | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 |
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
Number of errored trials: 1
+-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| _inner_0755e982 | 1 | /root/ray_results/tune_transformer_pbt/_inner_0755e982_1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-51-42/error.txt |
+-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Result for _inner_07580d98:
date: 2021-02-17_15-52-29
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.1666898727417
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.8883
eval_samples_per_second: 56.327
experiment_id: e5cb4d5b00524454b7f673f971318b30
experiment_tag: 2_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 39986
time_since_restore: 21.889320135116577
time_this_iter_s: 21.889320135116577
time_total_s: 21.889320135116577
timestamp: 1613577149
timesteps_since_restore: 0
training_iteration: 1
trial_id: 07580d98
Result for _inner_15e144f6:
date: 2021-02-17_15-52-53
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.2146635055542
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.917
eval_samples_per_second: 56.094
experiment_id: 2dc02378bb5d4a20a7c6d0228ad81076
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 40016
time_since_restore: 21.961735486984253
time_this_iter_s: 21.961735486984253
time_total_s: 21.961735486984253
timestamp: 1613577173
timesteps_since_restore: 0
training_iteration: 1
trial_id: 15e144f6
== Status ==
Memory usage on this node: 6.2/25.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/tune_transformer_pbt
Number of trials: 4/100 (2 ERROR, 1 PENDING, 1 RUNNING)
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
| Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration |
|-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------|
| _inner_15e144f6 | RUNNING | 172.28.0.2:40016 | 0.196785 | 2e-08 | 8 | 10 | 8.21466 | 1 |
| _inner_247daa04 | PENDING | | 0.49907 | 5e-05 | 8 | 15 | | |
| _inner_0755e982 | ERROR | | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 |
| _inner_07580d98 | ERROR | | 0.376876 | 6e-05 | 8 | 10 | 8.16669 | 1 |
+-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+
Number of errored trials: 2
+-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| _inner_0755e982 | 1 | /root/ray_results/tune_transformer_pbt/_inner_0755e982_1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-51-42/error.txt |
| _inner_07580d98 | 1 | /root/ray_results/tune_transformer_pbt/_inner_07580d98_2_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-52-06/error.txt |
+-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Result for _inner_15e144f6:
date: 2021-02-17_15-52-53
done: false
eval_accuracy: 0.14948453608247422
eval_f1: 0.14948453608247422
eval_loss: 8.2146635055542
eval_precision: 0.14948453608247422
eval_recall: 0.14948453608247422
eval_runtime: 6.917
eval_samples_per_second: 56.094
experiment_id: 2dc02378bb5d4a20a7c6d0228ad81076
experiment_tag: 3_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8
hostname: 0df2f30fd76b
iterations_since_restore: 1
node_ip: 172.28.0.2
objective: 0.14948453608247422
pid: 40016
time_since_restore: 21.961735486984253
time_this_iter_s: 21.961735486984253
time_total_s: 21.961735486984253
timestamp: 1613577173
timesteps_since_restore: 0
training_iteration: 1
trial_id: 15e144f6
```
And this is the error that usually pops up:-
```
2021-02-17 16:22:17,040 ERROR trial_runner.py:616 -- Trial _inner_34e77498: Error processing event.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py", line 586, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 609, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 47, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/worker.py", line 1456, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=473, ip=172.28.0.2)
File "python/ray/_raylet.pyx", line 480, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 167, in train_buffered
result = self.train()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 226, in train
result = self.step()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 366, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 513, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=473, ip=172.28.0.2)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 248, in run
self._entrypoint()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 576, in _trainable_func
output = fn()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 651, in _inner
inner(config, checkpoint_dir=None)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 645, in inner
fn(config, **fn_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 160, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 983, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1062, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1130, in _save_checkpoint
self._rotate_checkpoints(use_mtime=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1460, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1448, in _sorted_checkpoints
best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint)))
ValueError: 'results/run-34e77498/checkpoint-10' is not in list
```
Seems like it is trying to retrieve the best model but there is some sort of bug there (note that I am not checkpointing due to lack of storage capacity).
This is a part of the code that may have a clue as to the reason for this bug:-
```
from ray.tune.suggest.hyperopt import HyperOptSearch
from ray.tune.schedulers import PopulationBasedTraining
from ray.tune import CLIReporter, JupyterNotebookReporter
from ray import tune
import random
pbt = PopulationBasedTraining(
time_attr="training_iteration",
metric="eval_accuracy",
mode="max",
perturbation_interval=10, # every 10 `time_attr` units
# (training_iterations in this case)
hyperparam_mutations={
"weight_decay": tune.uniform(1, 0.0001),
"seed": tune.uniform(1,20000),
"learning_rate": tune.choice([1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8]),
"adafactor": tune.choice(['True','False']),
"adam_beta1": tune.uniform(1.0, 0.0),
"adam_beta2": tune.uniform(1.0, 0),
"adam_epsilon": tune.choice([1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10]),
"max_grad_norm": tune.uniform(1.0, 0),
})
reporter = JupyterNotebookReporter(
overwrite = True,
metric = 'eval_accuracy',
parameter_columns={
"weight_decay": "w_decay",
"learning_rate": "lr",
"per_device_train_batch_size": "train_bs/gpu",
"num_train_epochs": "num_epochs"},
metric_columns=["eval_acc", "eval_loss", "epoch", "training_iteration"])
tune_config = {
"per_device_train_batch_size": 8,
"per_device_eval_batch_size": 16,
"num_train_epochs": tune.choice([10,15])
}
def compute_objective(metrics):
return metrics["eval_accuracy"]
best = trainer.hyperparameter_search(hp_space = lambda _: tune_config,
n_trials=100, compute_objective=compute_objective, direction="maximize", backend='ray', #search_alg=HyperOptSearch(metric='accuracy', mode='max', use_early_stopped_trials=True)
scheduler=pbt, resources_per_trial={"cpu": 3, "gpu": 1}, keep_checkpoints_num=1,
name = "tune_transformer_pbt", progress_reporter=reporter,
search_alg=HyperOptSearch(metric='eval_accuracy', mode='max', use_early_stopped_trials=True),
reuse_actors=True, checkpoint_at_end=True)
```
I can supply more code if requested, but it is more or less tweaked from official examples.
So basically, the first trial keeps running and the second is put to pending, while all the remaining trials end in an error (strangely, not the first one).
I tried waiting till most trials have errored out to see whether it would continue to train the 1st trial but it just terminated giving the list of all trials that couldn't be completed.
> I am not sure where this bug is to be filed - rayproject or Huggingface, so I apologize in advance if I have posted in the wrong place.
| 02-17-2021 21:41:05 | 02-17-2021 21:41:05 | @neel04 can you try a few things:
- What version of Ray are you using? Can you try with the latest Ray (1.2).
- When using the PBT scheduler, it's actually not compatible with Tune search algorithms (see the compatibility matrix here https://docs.ray.io/en/master/tune/api_docs/schedulers.html#summary). Can you remove either HyperOptSearch or PopulationBasedTraining and try again (see the sketch after this list).
- Can you pass in an absolute path to the `output_dir` in `TrainingArguments` instead of a relative one. I think right now it's being set to `./results`.
- And just make sure to run all cells in the notebook from scratch just in case any state is being saved from previous runs.
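A rough sketch of what that stripped-down setup could look like; the values are placeholders, and it assumes `trainer` was built with `TrainingArguments(output_dir="/absolute/path/to/results", ...)`:
```python
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

# PBT only, without a separate search algorithm.
pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="eval_accuracy",
    mode="max",
    perturbation_interval=2,
    hyperparam_mutations={
        "weight_decay": tune.uniform(0.0, 0.3),
        "learning_rate": tune.uniform(1e-5, 5e-5),
    },
)

best_run = trainer.hyperparameter_search(
    hp_space=lambda _: {"num_train_epochs": tune.choice([2, 3])},
    direction="maximize",
    backend="ray",
    n_trials=4,
    scheduler=pbt,                      # no search_alg alongside PBT
    keep_checkpoints_num=1,
    resources_per_trial={"cpu": 3, "gpu": 1},
)
```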
The error you posted is coming Huggingface checkpointing, so a person from HF might be better suited to help out here.
Also, if none of the above works for you, it would help if you could post a small, reproducible example, perhaps with dummy data, that can be run. Thanks!<|||||>- I am using the Latest Ray version: `1.2.0`
- Trying `PBT` alone without HyperOPT yields the same error. I always scrap the kernel after most runs due to this reason only, but it yields the same error on all configurations (12GB RAM or 25, P100 or V100).
- Set all paths to absolute, made no difference.
I have shared a [gist](https://colab.research.google.com/drive/1uuhCac9hTw1dcDnqpHuvZ63rMDTGNWWM?usp=sharing) that successfully reproduces the error with a dummy dataset. You can download the checkpoint `zip` [here in Google Drive](https://drive.google.com/drive/folders/1z8OgKtOxPlEDQV9905CyqyuaUEBx5FxK?usp=sharing).
It seems strange to me why `hyperparameter_search` in general is so buggy and complex :( as compared to solutions using native libraries. <|||||>@neel04 thanks a lot for the posting this issue and the easy to run code. There is a bug with the checkpoint directories that is causing this error. This branch has a fix: https://github.com/amogkam/transformers/tree/tune-fix. Can you try it out and let me know if it is working for you. Thanks again!<|||||>@amogkam Thanx a lot for the branch. I am trying to use it but I am facing 2 problems:-
1. When constructing the `training_args` variable, It is not allowing me to use `adafactor` optimizer. Is this an expected limitation of this branch?
2. I am getting this error still :( that prevents all trials to run except the first one:-
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py", line 586, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 609, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 47, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/worker.py", line 1456, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=1384, ip=172.28.0.2)
File "python/ray/_raylet.pyx", line 480, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 167, in train_buffered
result = self.train()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 226, in train
result = self.step()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 366, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 513, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=1384, ip=172.28.0.2)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 248, in run
self._entrypoint()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 576, in _trainable_func
output = fn()
File "/content/transformers/src/transformers/integrations.py", line 164, in _objective
trainer.train(model_path=model_path, trial=trial)
File "/content/transformers/src/transformers/trainer.py", line 757, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-12-cd510628f360>", line 10, in __getitem__
TypeError: new(): invalid data type 'str'
```
Pretty huge tracebacks, but at least the previous error is gone which is an improvement.
I will try to see if I can reproduce in the gist
**EDIT:-** scrap all that above. Right now, I am just focusing on the [gist](https://colab.research.google.com/drive/1uuhCac9hTw1dcDnqpHuvZ63rMDTGNWWM?usp=sharing) where I am still getting the Valueerror with a list. can you confirm that you can reproduce the error again?<|||||>@neel04 ah when installing from my fork you also have to specify the branch: `pip install git+https://github.com/amogkam/transformers.git@tune-fix`. The transformers version should be `4.4.0.dev0`. The gist works for me, and I'm not seeing the other 2 issues that you posted. Can you confirm that this branch works for you?<|||||>Yep, that fixes it :blush: Thanks for your help! one last thing - is it normal to see a single trial take a _very_ long time while having all the other trials paused?<|||||>@amogkam is this bug still present in 1.4.1? I am running into same problem when I set number of trails to more than 10. |
transformers | 10,246 | closed | TensorFlow Question-Answering example fails to run (cardinality error) | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.15.0-111-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased or roberta-base
The problem arises when using:
* [ ] the official example scripts: question-answering (run_tf_squad.py)
Error message:
```
Instructions for updating:
back_prop=False is deprecated. Consider using tf.stop_gradient instead.
Instead of:
results = tf.map_fn(fn, elems, back_prop=False)
Use:
results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems))
87599it [01:00, 1437.03it/s]
10570it [00:11, 958.83it/s]
convert squad examples to features: 2%|_ | 1697/87599 [00:13<10:43, 133.40it/s][WARNING|squad.py:118] 2021-02-17 22:20:03,736 >> Could not find answer: 'municipal building and' vs. 'a municipal building'
convert squad examples to features: 50%|_____ | 43393/87599 [05:04<05:04, 145.24it/s][WARNING|squad.py:118] 2021-02-17 22:24:55,103 >> Could not find answer: 'message stick,' vs. 'a message stick'
convert squad examples to features: 100%|__________| 87599/87599 [10:10<00:00, 143.59it/s]
add example index and unique id: 100%|__________| 87599/87599 [00:00<00:00, 784165.53it/s]
convert squad examples to features: 100%|__________| 10570/10570 [01:14<00:00, 140.99it/s]
add example index and unique id: 100%|__________| 10570/10570 [00:00<00:00, 510000.04it/s]
[WARNING|integrations.py:60] 2021-02-17 22:31:16,214 >> Using the `WAND_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
[INFO|trainer_tf.py:125] 2021-02-17 22:31:16,214 >> To use comet_ml logging, run `pip/conda install comet_ml` see https://www.comet.ml/docs/python-sdk/huggingface/
Traceback (most recent call last):
File "run_tf_squad.py", line 256, in <module>
main()
File "run_tf_squad.py", line 250, in main
trainer.train()
File "/home/transformers/src/transformers/trainer_tf.py", line 457, in train
train_ds = self.get_train_tfdataset()
File "/home/transformers/src/transformers/trainer_tf.py", line 141, in get_train_tfdataset
self.num_train_examples = self.train_dataset.cardinality().numpy()
AttributeError: '_AssertCardinalityDataset' object has no attribute 'cardinality'
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQUaD v1
## To reproduce
1. Use the latest master from huggingface/transformers
2. Go to examples/question-answering
3. Run WANDB_DISABLED=true python run_tf_squad.py --model_name_or_path roberta-base --output_dir model --max_seq_length 384 --num_train_epochs 2 --per_device_train_batch_size 8 --per_device_eval_batch_size 16 --do_train --do_eval --logging_dir logs --logging_steps 10 --learning_rate 3e-5 --no_cuda=True --doc_stride 128
Could you take a look @sgugger? | 02-17-2021 20:40:46 | 02-17-2021 20:40:46 | cc @jplu since it seems to come from the `TFTrainer`.<|||||>Hello!
Since transformers 4.2.0 you need to have TensorFlow 2.3 at least.<|||||>@jplu With TensorFlow 2.3 and transformers 4.4.0.dev0, I'm getting the error below:
[INFO|trainer_tf.py:522] 2021-02-18 19:18:25,103 >> ***** Running training *****
[INFO|trainer_tf.py:523] 2021-02-18 19:18:25,103 >> Num examples = 87599
[INFO|trainer_tf.py:525] 2021-02-18 19:18:25,104 >> Num Epochs = 2
[INFO|trainer_tf.py:526] 2021-02-18 19:18:25,104 >> Instantaneous batch size per device = 8
[INFO|trainer_tf.py:528] 2021-02-18 19:18:25,105 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer_tf.py:530] 2021-02-18 19:18:25,105 >> Gradient Accumulation steps = 1
[INFO|trainer_tf.py:531] 2021-02-18 19:18:25,105 >> Steps per epoch = 10950
[INFO|trainer_tf.py:532] 2021-02-18 19:18:25,105 >> Total optimization steps = 21900
2021-02-18 19:18:25.182464: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'
start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 6179, 793, 21, 2708, 77, 79, 21, 39504, 8358, 25, 10, 33799, 116, 2, 2, 767, 7, 5, 6256, 45756, 34527, 24292, 9, 957, 6, 2708, 21, 5, 1
354, 9, 6130, 3889, 1488, 757, 8, 6130, 7896, 4, 3224, 2708, 18, 33694, 6, 7896, 56, 57, 36175, 8, 21, 444, 3319, 11, 107, 4, 2708, 21, 576, 7, 544, 25, 10, 39504, 8358, 33799, 11, 5, 9660, 11, 7007, 77, 79, 21, 130, 107, 793, 6, 203, 101, 11029, 362, 9581, 7, 5, 12765, 3281, 24618, 25, 2673, 11, 5, 3470, 36
209, 4, 993, 6256, 45756, 34527, 2349, 194, 14, 23, 5, 86, 9, 69, 5673, 1001, 27534, 7, 3351, 6, 2708, 21, 316, 2383, 1570, 107, 793, 6, 8, 37, 21, 16984, 107, 793, 6, 53, 215, 2349, 32, 31298, 4, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'feature_index': 0, 'qas_id': '570c2b046b8089140040fba5'}, {'start_positions': 73, 'end_positions': 76,
'cls_index': 0, 'p_mask': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'is_impossible': False}).
Traceback (most recent call last):
File "/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 833, in generator_py_func
flattened_values = nest.flatten_up_to(output_types, values)
File "/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py", line 396, in flatten_up_to
assert_shallow_structure(shallow_tree, input_tree)
File "/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py", line 324, in assert_shallow_structure
check_types=check_types)
File "/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py", line 311, in assert_shallow_structure
% (len(input_tree), len(shallow_tree)))
ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.
<|||||>This is because there is an issue in https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py .
We will look into it asap and will let you know here once done. Sorry for the inconvenience.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, I am getting the same error.
It says "AttributeError: 'Dataset' object has no attribute 'cardinality'" when I train it. Does anyone know how I should address this issue? <|||||>> Hello, I am getting the same error.
>
> It says "AttributeError: 'Dataset' object has no attribute 'cardinality'" when I train it. Does anyone know how I should address this issue?
@jeehunkang Could you find a solution to this? |
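For anyone who ends up here on an older TensorFlow: `tf.data.Dataset.cardinality()` only exists from TF 2.3 on, which is what the `AttributeError` above is really saying. A small version-tolerant sketch (assuming a finite `tf.data.Dataset`, which is what `TFTrainer` expects):

```python
import tensorflow as tf

def num_examples(dataset: tf.data.Dataset) -> int:
    """Return the number of examples in a finite tf.data.Dataset on TF 2.2 as well as TF 2.3+."""
    if hasattr(dataset, "cardinality"):          # TF >= 2.3
        n = int(dataset.cardinality().numpy())
    else:                                        # older TF exposes the same info here
        n = int(tf.data.experimental.cardinality(dataset).numpy())
    if n < 0:  # -1 = infinite cardinality, -2 = unknown cardinality
        raise ValueError("The training dataset must have a known, finite cardinality")
    return n
```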
transformers | 10,245 | closed | `compute_metrics` show better results than `generate` because target data leaks | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-5.8.18-050818-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
## Information
Model I am using:
- t5
The problem arises when using:
- my own modified scripts: (give details below)
The tasks I am working on is:
- my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. train model with `compute_metrics` function to monitor metrics
2. use `generate` to predict text with trained model
## Expected behavior
I expect the metrics of `compute_metrics` to be equal to my generated text.
## More information
While training, I used `compute_metrics` to calculate the metric on my validation set every X steps. I was surprised to see that after training my model did not perform as expected using the `generate` function provided by huggingface.
After some digging through the code I think I understand what the problem is. [`compute_metrics`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1665) takes as input `preds`, which is [a collection of `logits`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1635) from [`prediction_step`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1630) which internally [calls `model` with the inputs and targets of the model](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1733).
This means that the target text leaks into `preds.predictions` because `mode.forward` used the targets as [input for the decoder](https://github.com/huggingface/transformers/blob/1cd16512dc8060aa8c2419664f9cb83813ade4d5/src/transformers/models/t5/modeling_t5.py#L1331). This makes the metrics of `compute_metrics` seem much better than they really are.
In my opinion the target data should not be used to create `preds.predictions`. Maybe the `generate` function is a better fit.
 | 02-17-2021 20:17:50 | 02-17-2021 20:17:50 | Pinging @patrickvonplaten and @sgugger <|||||>Did you use the flag `--predict_with_generate`? It's there just for this: predicting using the `generate` method and the labels are then not passed (except to compute the loss).<|||||>Thank you for the hint. I followed [this example](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) and used [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments), which does not have the `predict_with_generate` option, instead of [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments).
Maybe it's just me, but I think the `predict_with_generate` option should be described more visibly. I found it only in [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments) and in none of the examples. Also, none of the examples in the documentation use `Seq2SeqTrainingArguments`.
If you don't think the documentation should be updated you can close this issue since my confusion has been resolved. Thank you.
<|||||>Yes the documentation is missing a seq2seq example. This is because we have been working on the design recently. The most up-to-date example you should use as reference is the [run_seq2seq script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
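For later readers, a minimal sketch of the setup discussed above — evaluating with `generate` so the labels never feed the decoder when computing metrics. The dataset variables and the metric body are placeholders, not working code:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def compute_metrics(eval_pred):
    # With predict_with_generate=True, predictions are generated token ids,
    # not teacher-forced logits, so the targets cannot leak into them.
    pred_ids, label_ids = eval_pred
    pred_texts = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    # score pred_texts against the decoded label_ids here (ROUGE, BLEU, ...)
    return {}

args = Seq2SeqTrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    predict_with_generate=True,  # evaluate with model.generate()
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # assumed to be prepared elsewhere
    eval_dataset=eval_dataset,    # assumed to be prepared elsewhere
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```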
transformers | 10,244 | closed | Script for distilling zero-shot classifier to more efficient student | This PR introduces a script that provides a way to improve the speed and memory performance of a zero-shot classifier by training a more efficient student model from the zero-shot teacher's predictions over an unlabeled dataset.
For a given sequence, the zero-shot classification pipeline requires each possible label to be fed through the large NLI model separately. This requirement slows results considerably, particularly for tasks with a large number of classes `K`.
Given (1) an unlabeled corpus and (2) a set of candidate class names, this script allows a user to train a standard classification head with `K` output dimensions. The script generates a softmax distribution for the provided data & class names, and a student classifier is then fine-tuned on these proxy labels. The resulting student model can be used for classifying novel text instances over these `K` classes with an order-of-magnitude boost in inference speed in addition to decreased memory usage.
A teacher NLI model can be distilled to a student model by running `distill_classifier.py` like so:
```
python distill_classifier.py \
--data_file unlabeled_data.txt \
--class_names_file class_names.txt \
--output_dir ./distilled_model
```
A number of other args are provided as well, such as `--teacher_name_or_path` and `--student_name_or_path` for specifying the pre-trained teacher & student models to be used (by default `roberta-large-mnli` and `distilbert-base-uncased`) and `--hypothesis_template` for customizing the [hypothesis template](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline.__call__) used by the teacher zero-shot model. The training is implemented via `Trainer`, so any [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) can be specified as well.
The resulting model can then be used trivially in a text classification pipeline or in any other way:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model = AutoModelForSequenceClassification.from_pretrained("./distilled_model")
tokenizer = AutoTokenizer.from_pretrained("./distilled_model")
distilled_classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
```
See the included [README.md](https://github.com/joeddav/transformers/blob/zero-shot-distillation/examples/research_projects/zero-shot-distillation/README.md) for more details and examples.
Soon I'll introduce a similar script for self-training an NLI model, boosting the model's performance after training on only unlabeled data; the resulting model can then be distilled with this script like any NLI model.
**Update**: I also just added a link to a working [colab notebook demo](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing). | 02-17-2021 20:12:36 | 02-17-2021 20:12:36 | @LysandreJik cool thanks for the feedback.
@sgugger Thanks, I added `fp16` for the teacher predictions. It will also now throw an error if someone tries to run it w/ distributed or TPUs and I added a note in the readme about that as well. It _can_ do multi-gpu though and will do so automatically if multiple GPUs are available on the machine, it just can't do multi-node.<|||||>Yes I meant distributed multi-GPU. I did see it will use all GPUs available on the machine however :-) |
transformers | 10,243 | closed | [trainer] refactor place_model_on_device logic, add deepspeed | This PR:
* refactors 3 places of `place_model_on_device` logic - into one public attribute with the same name as the `TrainingArguments.place_model_on_device` attribute
* adds deepspeed to that logic (it was missing in 2 places)
@sgugger | 02-17-2021 19:52:38 | 02-17-2021 19:52:38 | |
transformers | 10,242 | closed | Upgrade transformers from 3.5.0 to 4.3.2 instance error | Hi guys. I tried to update the transformers module from version 3.5.0 to version 4.3.2.
After this upgrade, the code that previously worked now has some problems.
This is my code:
```
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("amazon/bort")
model = TFAutoModel.from_pretrained("amazon/bort", config=config)
bert_main_layer = model.bert
encoder, pooler = bert_main_layer(input_ids_in, attention_mask=input_masks_in)
X = tf.keras.layers.Dropout(config.hidden_dropout_prob)(pooler)
X = tf.keras.layers.Dense(
constants.CLASSES,
kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=config.initializer_range),
activation="softmax"
)(X)
model = tf.keras.Model(
inputs=[input_ids_in, input_masks_in],
outputs=[X]
)
```
In particular, when I instantiate the class `bert_model` (`bert_main_layer(input_ids_in, attention_mask=input_masks_in)`), with the 3.5.0 version it returns two tensors:
```
<tf.Tensor 'bert/encoder/layer_._3/output/LayerNorm/batchnorm/add_1:0' shape=(None, 333, 1024) dtype=float32>
<tf.Tensor 'bert/pooler/dense/Tanh:0' shape=(None, 1024) dtype=float32>
```
With the version 4.3.2 it returns two strings:
```
last_hidden_state
pooler_output
```
The consequence is that now I have this exception on the Dense layer:
```
ValueError: Input 0 of layer dense is incompatible with the layer: : expected min_ndim=2, found ndim=0. Full shape received: []
```
Since I haven't found any documentation or guidance, can you please help me? What am I doing wrong?
Thank you | 02-17-2021 18:28:47 | 02-17-2021 18:28:47 | Hi, thanks for opening an issue. The breaking changes from version v3 to v4 are heavily documented: https://huggingface.co/transformers/migration.html#migrating-from-transformers-v3-x-to-v4-x
Your particular issue is [bullet number 4](https://huggingface.co/transformers/migration.html#switching-the-return-dict-argument-to-true-by-default).<|||||>For our reference, when trying to look for breaking changes between v3 and v4, where did you look? If we can improve visibility for those, it would be great. Maybe "Migration" isn't the best identifier? Are there some specific keywords you used?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
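To make the pointer above concrete, a short sketch of the v4-style fix (illustrative only; it keeps `amazon/bort` from the original snippet and assumes the usual Keras `Input` placeholders):

```python
import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("amazon/bort")
input_ids_in = tf.keras.Input(shape=(None,), dtype=tf.int32)
input_masks_in = tf.keras.Input(shape=(None,), dtype=tf.int32)

# Option 1: keep the new dict-like output (default in v4) and index it by name
outputs = model.bert(input_ids_in, attention_mask=input_masks_in)
encoder, pooler = outputs.last_hidden_state, outputs.pooler_output

# Option 2: explicitly ask for the old tuple behaviour
encoder, pooler = model.bert(input_ids_in, attention_mask=input_masks_in, return_dict=False)
```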
transformers | 10,241 | closed | [Trainer] doc update | Trainer doc update:
* [x] port the instructions to use the new `run_seq2seq.py` script
* [x] add clarifications to using DeepSpeed in the notebook
@sgugger | 02-17-2021 18:11:42 | 02-17-2021 18:11:42 | |
transformers | 10,240 | closed | CUDA memory error on increasing the number of generations | I am getting CUDA memory error when generating text with num_return_sequences set to more than 100 for a self-trained gpt2 model. This is not expected since after every generation there should be nothing left in the GPU.
 | 02-17-2021 17:47:13 | 02-17-2021 17:47:13 | By setting `num_return_sequences` you're creating bigger batches, so it is expected to hit an OOM if you ask for too many returned sequences.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
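A sketch of one way to keep memory bounded when many samples are needed — a hypothetical helper that simply chunks the requested sequences (it assumes a sampling-based `generate` call):

```python
import torch

def generate_in_chunks(model, input_ids, total_sequences, chunk_size=20, **kwargs):
    """Generate `total_sequences` continuations in smaller batches to bound GPU memory."""
    outputs = []
    with torch.no_grad():
        for _ in range(0, total_sequences, chunk_size):
            n = min(chunk_size, total_sequences - len(outputs))
            out = model.generate(input_ids, do_sample=True, num_return_sequences=n, **kwargs)
            outputs.extend(out.cpu())  # move results off the GPU right away
    return outputs
```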
transformers | 10,239 | closed | Question about (no_decay = ['bias', 'LayerNorm.weight']) in BERT(Transformer-based) | Hi, I have a Question about the BERT model code.
I saw "no_decay = ['bias', 'LayerNorm.weight']" in the BERT code (specifically, in the optimizer setup). It seemed reasonable; however, does this actually give better performance, or is it just there to speed up calculations? | 02-17-2021 16:41:14 | 02-17-2021 16:41:14 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
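For reference, the pattern being asked about looks roughly like the sketch below. The usual rationale is that excluding biases and LayerNorm weights from weight decay is about optimization/regularization behaviour, not speed (decaying those few parameters doesn't act as useful regularization and can hurt):

```python
from torch.optim import AdamW
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,  # no decay for biases and LayerNorm weights
    },
]
optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5)
```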
transformers | 10,238 | closed | ConvBert not compatible with torch v1.6 | ConvBERT uses statements like `torch.multiply` which did not exist in pytorch v1.6 => ConvBERT is not compatible with v1.6 (cc @abhishekkrthakur).
This can easily be checked when running:
`pytest tests/test_modeling_convbert.py`
@LysandreJik, @sgugger - It would be great to test all the different pytorch versions in a slow test I think. | 02-17-2021 16:18:50 | 02-17-2021 16:18:50 | Testing all versions from torch v1.3.0+ is indeed on the roadmap, I expect ~1 month out alongside all the other tests improvements.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
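For context, `torch.multiply` is just a NumPy-style alias that appeared in torch 1.7; the equivalent call that also works on 1.6 and earlier is `torch.mul`:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)

# torch.multiply(a, b) only exists from torch 1.7 on; torch.mul is the portable spelling.
out = torch.mul(a, b)
```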
transformers | 10,237 | closed | TransCoder | # What does this PR do?
Adds TransCoder https://github.com/facebookresearch/TransCoder | 02-17-2021 15:55:44 | 02-17-2021 15:55:44 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,236 | closed | Add m2m100 | # What does this PR do?
Adds the M2M100 model
https://github.com/pytorch/fairseq/tree/master/examples/m2m_100
Fixes #8054 | 02-17-2021 15:13:39 | 02-17-2021 15:13:39 | Sure, Patrick !<|||||>I’ve addressed all the review comments, and all the slow/fast tests are now passing.
I didn’t add fast tokenizer because `M2M100` is `sentencepiece` based tokenizer, but it uses `sentencepiece` for just tokenizing and then uses a vocab file to convert the tokens to ids and ids to tokens. So our current `SpmConverter` doesn’t work for this. I’ll try to add fast tokenizer in a follow-up PR.
Merging!<|||||>> I’ve addressed all the review comments, and all the slow/fast tests are now passing.
>
> I didn’t add fast tokenizer because `M2M100` is `sentencepiece` based tokenizer, but it uses `sentencepiece` for just tokenizing and then uses a vocab file to convert the tokens to ids and ids to tokens. So our current `SpmConverter` doesn’t work for this. I’ll try to add fast tokenizer in a follow-up PR.
>
> Merging!
Hey, I was wondering if there's any progress on a Fast Tokenizer for M2M or if any help can be needed?
Thanks :) |
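For anyone landing here for basic usage of the merged model rather than the tokenizer discussion, a short sketch following the model's documented API (checkpoint name as published on the hub):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"
encoded = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
# force the first generated token to be the target-language id
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```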
transformers | 10,235 | closed | [file_utils] do not gobble certain kinds of requests.ConnectionError | Backport from https://github.com/huggingface/huggingface_hub/pull/14/commits/34b7b70d07ab1c9fc2f7da603d47cb344e256af6
might close (or at the very least provide more transparency into) #8690, #10067, and others | 02-17-2021 15:08:00 | 02-17-2021 15:08:00 | @sgugger Definitely on my radar at some point.
For now though it's useful for me to have a more experimental codebase where the API can change/break :)<|||||>> LGTM but I don't have sufficient `requests` knowledge to be sure this catches all exceptions that we want to catch
we're in the same boat, sailor<|||||>In order to validate your strategy, I took the list of every exception in the public API of requests: https://2.python-requests.org/en/master/api/#exceptions
I built the inheritance tree between exceptions: https://requests.readthedocs.io/en/master/_modules/requests/exceptions/
Here's a readable version:
```
IOError
 +-- RequestException
      +-- HTTPError
      +-- ConnectionError
      |    +-- ProxyError
      |    +-- SSLError
      |    +-- ConnectTimeout (also inherits Timeout)
      +-- Timeout
      |    +-- ConnectTimeout (also inherits ConnectionError)
      |    +-- ReadTimeout
      +-- URLRequired
      +-- TooManyRedirects
      +-- MissingSchema (also inherits ValueError)
      +-- InvalidSchema (also inherits ValueError)
      +-- InvalidURL (also inherits ValueError)
      |    +-- InvalidProxyURL
      +-- InvalidHeader (also inherits ValueError)
      +-- ChunkedEncodingError
      +-- ContentDecodingError (also inherits urllib3.exceptions.HTTPError)
      +-- StreamConsumedError (also inherits TypeError)
      +-- RetryError
      +-- UnrewindableBodyError
```
Multiple inheritance is most likely for backwards-compatibility when changing exception types. For example, if you want to raise the more accurate `ContentDecodingError` instead of a generic `urllib3.exceptions.HTTPError`, making `ContentDecodingError` inherit `urllib3.exceptions.HTTPError` ensures you don't break the code of users who used to catch `urllib3.exceptions.HTTPError`.
I assume you want to tell apart situations where it's worth retrying from situations where it isn't worth retrying because there's a configuration issue that won't solve itself by retrying.
Based on the above, on the documentation, and on a quick search in the source code of requests, what you're doing looks correct to me, modulo my suggestion on the style.<|||||>Applied most of your comments @aaugustin, merging after internal review from @julien-c<|||||>👍 |
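For readers skimming the thread, a condensed sketch of the idea being validated above — illustrative only, not the exact `file_utils` code. Note that `SSLError` and `ProxyError` are subclasses of `ConnectionError`, so they have to be caught (and re-raised) first:

```python
import requests

def head_call(url, timeout=10):
    try:
        return requests.head(url, allow_redirects=True, timeout=timeout)
    except (requests.exceptions.SSLError, requests.exceptions.ProxyError):
        # configuration problems: don't gobble these, let the user see them
        raise
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
        # plausibly transient (offline, DNS hiccup): caller can fall back to a local cache
        return None
```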
transformers | 10,234 | open | Request to add Switch Transformer | Google has come up with yet another transformer: https://arxiv.org/pdf/2101.03961.pdf | 02-17-2021 14:57:41 | 02-17-2021 14:57:41 | Google released the source code for transformer-based mixture-of-experts (the switch architecture): https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/moe.py
According to https://www.infoq.com/news/2021/02/google-trillion-parameter-ai/ the model weights are not available yet. |
transformers | 10,233 | closed | Making TF Longformer-like models compliant with AMP | # What does this PR do?
This PR makes the TF Longformer-like models compliant with AMP. All the slow tests are passing as well for these models.
These two models cannot be XLA compliant for now, as it seems that `tf.where` cannot be used in XLA if the `x` and `y` parameters are `None`. See the `_get_global_attn_indices` method which has this case. I have opened [an issue](https://github.com/tensorflow/tensorflow/issues/47211) on the TF repo in order to ask if it is an expected behavior or a bug.
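For context, a minimal sketch of the `tf.where` behaviour in question — the condition-only form returns a tensor of indices whose shape depends on the data, which is what XLA cannot compile:

```python
import tensorflow as tf

cond = tf.constant([True, False, True])

# Three-argument form: output shape equals the input shape, XLA-friendly.
tf.where(cond, tf.ones(3), tf.zeros(3))   # -> [1., 0., 1.]

# Condition-only form (x=None, y=None): returns the indices of the True entries,
# so the output shape depends on the values -> data-dependent, not XLA-compilable.
tf.where(cond)                            # -> [[0], [2]]
```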
| 02-17-2021 14:34:54 | 02-17-2021 14:34:54 | |
transformers | 10,232 | open | Multilabel Sequence Classification in trainer | # 🚀 Feature request
We need to be able to use the trainer for multilabel classification problems.
## Motivation
Right now we create our models in the old fashioned way, with a sigmoid layer at the end so we can do multilabel. However if we could use the trainer directly, we wouldn't need to maintain different training scripts. Are there any plans for adding this to the trainer?
## Your contribution
I could try to help, but I don't even know where to start.
Thank you very much for reading.
| 02-17-2021 13:02:34 | 02-17-2021 13:02:34 | Hi @LysandreJik,
If no one is working on this can I start on this feature?
I imagine this will not be that difficult and should be possible by using the sigmoid instead of the softmax, where we're calculating a probability between 0-1 for each class which will be encapsulated in a class similar to [ModelName]ForSequenceClassification. <|||||>Hi @vimarshc, I believe @abhishekkrthakur is working on this in https://github.com/huggingface/transformers/pull/11012<|||||>Thanks for the update!
Shall try to make myself useful for some other issue. Haha. <|||||>#11012 is merged and 4.6.0 is released, is this feature already there?<|||||>I believe there is now multi-label classification within the models, and changing a model configuration argument (`config.problem_type = "multi_label_classification"`) should enable that out of the box. Have you tried it out?<|||||>I did but I didn't have enough time to try it in depth, I had some problems with the labels format. However I did not find any documentation or examples in the docs.<|||||>This seems to work but I found a weird problem that might be interesting to solve.
When you binarize the labels, if you are training with pytorch it will throw an error because `BCEWithLogitsLoss` expects floats and not ints. This is counterintuitive for me, I had to cast my labels to floats and then it worked.
Also, there is not a lot of information about any of this in the documentation, and the notebook for multi-label sequence classification still uses the old training loops instead of the trainer.<|||||>Indeed, it would be nice to put this in the documentation, it's sparse on this subject right now.<|||||>Sorry for the flood but, is there a way to instantiate a multilabel model on a pipeline? I'm trying it but I don't think it is working, the prediction probabilities always sum up to 1.<|||||>I believe this is being taken care of in https://github.com/huggingface/transformers/pull/8328<|||||>Indeed, it looks really good. Thank you<|||||>Is the multilabel pipeline implemented in the last release `4.9.2` @LysandreJik ? If yes, are there examples on how to use it?<|||||>See the docs for the text classification pipeline [here](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.TextClassificationPipeline) (see the `function_to_apply` argument)
It is currently on the `master` branch and will be in the next release (v4.10.0)<|||||>Great thanks!
However, I don't see documentation specific to multilabel classification (I understand that choosing `function_to_apply='sigmoid'` makes it work, though). Will the results of the `pipeline` be prettified accordingly?
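Pulling the thread together, a minimal sketch of the multi-label setup discussed above (v4.6+; model and checkpoint names are illustrative). Note the float multi-hot labels, which is the dtype issue mentioned earlier:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,
    problem_type="multi_label_classification",  # switches the loss to BCEWithLogitsLoss
)

enc = tokenizer(["an example sentence"], return_tensors="pt")
# Multi-hot labels must be floats, otherwise BCEWithLogitsLoss raises a dtype error.
labels = torch.tensor([[1, 0, 1, 0]], dtype=torch.float)
outputs = model(**enc, labels=labels)
print(outputs.loss)
```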
transformers | 10,231 | closed | Wav2Vec2 finetune | # 🚀 Feature request
Can you please share code on how to fine-tune `transformers.Wav2Vec2ForCTC`, or maybe on how to give labels to the model in order to get the loss?
## Motivation
## Your contribution
| 02-17-2021 12:42:45 | 02-17-2021 12:42:45 | Patrick is working on it, see #10145 |
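Once #10145 landed (v4.4+), the fine-tuning forward pass looks roughly like this — a sketch with dummy audio, and the checkpoint name is just an example:

```python
import numpy as np
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.random.randn(16000).astype(np.float32)  # stand-in for 1 s of 16 kHz audio
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with processor.as_target_processor():
    labels = processor("HELLO WORLD", return_tensors="pt").input_ids

outputs = model(inputs.input_values, labels=labels)
outputs.loss.backward()  # CTC loss, train as usual
```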
transformers | 10,230 | closed | Making TF GPT2 compliant with XLA and AMP | # What does this PR do?
This PR makes the TF GPT2 model compliant with XLA and AMP. All the slow tests are passing as well. | 02-17-2021 10:04:03 | 02-17-2021 10:04:03 | |
transformers | 10,229 | closed | Introduce warmup_ratio training argument | Introduce warmup_ratio training argument in both
TrainingArguments and TFTrainingArguments classes (#6673)
# What does this PR do?
This PR will add a new argument `warmup_ratio` to both `TrainingArguments` and `TFTrainingArguments` classes. This can be used to specify the ratio of total training steps for which linear warmup will happen.
This is especially convenient when the user wants to play around with the `num_train_epochs` or `max_steps` arguments while keeping the ratio of warmup steps a constant.
Fixes #6673
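A quick sketch of the intended usage and how the effective number of warmup steps is derived (illustrative values; the exact derivation lives in the Trainer):

```python
import math
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", max_steps=10_000, warmup_ratio=0.1)

# roughly what the Trainer computes internally:
warmup_steps = args.warmup_steps if args.warmup_steps > 0 else math.ceil(args.max_steps * args.warmup_ratio)
print(warmup_steps)  # 1000
```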
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link to the issue raised](https://github.com/huggingface/transformers/issues/6673).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Since modifications in trainer: @sgugger | 02-17-2021 09:43:43 | 02-17-2021 09:43:43 | As per the current implementation, any non-zero value given for `warmup_steps` will override any effect of `warmup_ratio`. It made sense to me to give higher precedence to `warmup_steps`, as it seems to be the more inconvenient of the two arguments to provide from the user's perspective. Please let me know if this default behaviour should be changed.
PS: this is my first PR, so feel free to correct me. I will be happy to accommodate 😄 <|||||>Thanks for the review! I've incorporated the comments.
Do let me know if anything else needs to be addressed.<|||||>Thanks for your comments!
Agreed, the code looks much more readable now.
Do let me know if there can be any more improvement.
Thanks.<|||||>It would indeed be better in `TrainingArguments.__post_init__`: the rationale for that is that when instantiating an object with wrong values, we want the error to be raised as soon as possible and as close as possible to the source for easy debugging.
In this case, the problem should appear at the line that parses the `TrainingArguments` or when they are created.<|||||>Thanks! Taken care of it.<|||||>Thanks for adding this functionality! |
transformers | 10,228 | closed | Converting original T5 to be used in Transformers | I want to use original T5 checkpoint in Transformers library. I found multiple answers referring to `convert_t5_original_tf_checkpoint_to_pytorch.py` which does not seem to exist. Any other way? Or where can I find a (currently working) version of that file? | 02-17-2021 09:33:15 | 02-17-2021 09:33:15 | Hi,
this file exists, it can be found here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py
<|||||>Thank you! I tried the script and it misses a `config.json` file. Where can I find this?<|||||>The config.json should be part of the original T5 files, which can be found [here](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints).
However, I wonder why you want to convert the original checkpoints yourself, because this has already been done by the authors of HuggingFace. You can find all T5 checkpoints on the [hub](https://huggingface.co/models?search=google/t5). <|||||>Because I finetuned them on TPU which is not possible in Transformers yet (at least not in TF) and I want to use Transformers for prediction.<|||||>...I think you linked this issue as location for original T5 files<|||||>Apologies, updated the URL. The `config.json` file should look something like [this](https://huggingface.co/google/t5-large-ssm-nq/blob/main/config.json), containg all the hyperparameter values. When you fine-tuned T5 on TPUs, do you have a configuration available?<|||||>Thanks! (you are a lifesaver by the way with these response times :)). I finetuned using the original repo which uses Mesh Tensorflow and it exports checkpoints in the same format as the original published checkpoints. And there is no `config.json` file, not even in the original published checkpoints you linked. For future reference: you can look at the files by going to this url: https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/small if you have a google cloud account.<|||||>I see that they store configurations in .gin files, like this one: https://console.cloud.google.com/storage/browser/_details/t5-data/pretrained_models/small/operative_config.gin
When opening this on my laptop in Notepad, this looks like this:
```
import t5.models.mesh_transformer
import t5.data.sentencepiece_vocabulary
import mesh_tensorflow.optimize
import mesh_tensorflow.transformer.dataset
import mesh_tensorflow.transformer.learning_rate_schedules
import mesh_tensorflow.transformer.t2t_vocabulary
import mesh_tensorflow.transformer.transformer_layers
import mesh_tensorflow.transformer.utils
# Macros:
# ==============================================================================
d_ff = 2048
d_kv = 64
d_model = 512
dropout_rate = 0.1
inputs_length = 512
mean_noise_span_length = 3.0
MIXTURE_NAME = 'all_mix'
noise_density = 0.15
num_heads = 8
num_layers = 6
targets_length = 512
init_checkpoint = "gs://t5-data/pretrained_models/small/model.ckpt-1000000"
tokens_per_batch = 1048576
# Parameters for AdafactorOptimizer:
# ==============================================================================
AdafactorOptimizer.beta1 = 0.0
AdafactorOptimizer.clipping_threshold = 1.0
AdafactorOptimizer.decay_rate = None
AdafactorOptimizer.epsilon1 = 1e-30
AdafactorOptimizer.epsilon2 = 0.001
AdafactorOptimizer.factored = True
AdafactorOptimizer.min_dim_size_to_factor = 128
AdafactorOptimizer.multiply_by_parameter_scale = True
# Parameters for Bitransformer:
# ==============================================================================
Bitransformer.shared_embedding = True
# Parameters for denoise:
# ==============================================================================
denoise.inputs_fn = @preprocessors.noise_span_to_unique_sentinel
denoise.noise_density = %noise_density
denoise.noise_mask_fn = @preprocessors.random_spans_noise_mask
denoise.targets_fn = @preprocessors.nonnoise_span_to_unique_sentinel
# Parameters for decoder/DenseReluDense:
# ==============================================================================
decoder/DenseReluDense.dropout_rate = %dropout_rate
decoder/DenseReluDense.hidden_size = %d_ff
# Parameters for encoder/DenseReluDense:
# ==============================================================================
encoder/DenseReluDense.dropout_rate = %dropout_rate
encoder/DenseReluDense.hidden_size = %d_ff
# Parameters for decoder/EncDecAttention:
# ==============================================================================
# None.
# Parameters for get_sentencepiece_model_path:
# ==============================================================================
get_sentencepiece_model_path.mixture_or_task_name = %MIXTURE_NAME
# Parameters for get_variable_dtype:
# ==============================================================================
get_variable_dtype.activation_dtype = 'bfloat16'
# Parameters for decoder/LayerStack:
# ==============================================================================
decoder/LayerStack.dropout_rate = %dropout_rate
decoder/LayerStack.norm_epsilon = 1e-06
# Parameters for encoder/LayerStack:
# ==============================================================================
encoder/LayerStack.dropout_rate = %dropout_rate
encoder/LayerStack.norm_epsilon = 1e-06
# Parameters for learning_rate_schedule_noam:
# ==============================================================================
learning_rate_schedule_noam.linear_decay_fraction = 0.1
learning_rate_schedule_noam.multiplier = 1.0
learning_rate_schedule_noam.offset = 0
learning_rate_schedule_noam.warmup_steps = 10000
# Parameters for make_bitransformer:
# ==============================================================================
make_bitransformer.decoder_name = 'decoder'
make_bitransformer.encoder_name = 'encoder'
# Parameters for decoder/make_layer_stack:
# ==============================================================================
decoder/make_layer_stack.block_scope = True
decoder/make_layer_stack.layers = \
[@mesh_tensorflow.transformer.transformer_layers.SelfAttention,
@mesh_tensorflow.transformer.transformer_layers.EncDecAttention,
@mesh_tensorflow.transformer.transformer_layers.DenseReluDense]
decoder/make_layer_stack.num_layers = %num_layers
# Parameters for encoder/make_layer_stack:
# ==============================================================================
encoder/make_layer_stack.block_scope = True
encoder/make_layer_stack.layers = \
[@mesh_tensorflow.transformer.transformer_layers.SelfAttention,
@mesh_tensorflow.transformer.transformer_layers.DenseReluDense]
encoder/make_layer_stack.num_layers = %num_layers
# Parameters for mesh_train_dataset_fn:
# ==============================================================================
mesh_train_dataset_fn.mixture_or_task_name = %MIXTURE_NAME
# Parameters for noise_span_to_unique_sentinel:
# ==============================================================================
# None.
# Parameters for nonnoise_span_to_unique_sentinel:
# ==============================================================================
# None.
# Parameters for pack_dataset:
# ==============================================================================
# Parameters for pack_or_pad:
# ==============================================================================
# None.
# Parameters for random_spans_helper:
# ==============================================================================
random_spans_helper.extra_tokens_per_span_inputs = 1
random_spans_helper.extra_tokens_per_span_targets = 1
random_spans_helper.inputs_length = %inputs_length
random_spans_helper.mean_noise_span_length = %mean_noise_span_length
random_spans_helper.noise_density = %noise_density
# Parameters for targets_length/random_spans_helper:
# ==============================================================================
targets_length/random_spans_helper.extra_tokens_per_span_inputs = 1
targets_length/random_spans_helper.extra_tokens_per_span_targets = 1
targets_length/random_spans_helper.inputs_length = %inputs_length
targets_length/random_spans_helper.mean_noise_span_length = %mean_noise_span_length
targets_length/random_spans_helper.noise_density = %noise_density
# Parameters for random_spans_noise_mask:
# ==============================================================================
random_spans_noise_mask.mean_noise_span_length = %mean_noise_span_length
# Parameters for targets_length/random_spans_targets_length:
# ==============================================================================
# None.
# Parameters for random_spans_tokens_length:
# ==============================================================================
# None.
# Parameters for rate_num_examples:
# ==============================================================================
rate_num_examples.maximum = 1000000.0
rate_num_examples.scale = 1.0
rate_num_examples.temperature = 1.0
# Parameters for rate_unsupervised:
# ==============================================================================
rate_unsupervised.value = 710000.0
# Parameters for reduce_concat_tokens:
# ==============================================================================
reduce_concat_tokens.batch_size = 128
reduce_concat_tokens.feature_key = 'targets'
# Parameters for run:
# ==============================================================================
run.autostack = True
run.batch_size = ('tokens_per_batch', %tokens_per_batch)
run.dataset_split = 'train'
run.ensemble_inputs = None
run.eval_checkpoint_step = None
run.eval_dataset_fn = None
run.eval_summary_dir = None
run.export_path = ''
run.iterations_per_loop = 100
run.keep_checkpoint_max = None
run.layout_rules = \
'ensemble:ensemble,batch:batch,d_ff:model,heads:model,vocab:model,experts:batch'
run.learning_rate_schedule = @learning_rate_schedules.learning_rate_schedule_noam
run.mesh_shape = @mesh_tensorflow.transformer.utils.tpu_mesh_shape()
run.mode = 'train'
run.init_checkpoint = %init_checkpoint
run.model_type = 'bitransformer'
run.optimizer = @optimize.AdafactorOptimizer
run.perplexity_eval_steps = 10
run.predict_fn = None
run.save_checkpoints_steps = 2400
run.sequence_length = {'inputs': %inputs_length, 'targets': %targets_length}
run.train_dataset_fn = \
@t5.models.mesh_transformer.mesh_train_dataset_fn
run.train_steps = 1000000000
run.variable_filter = None
run.vocabulary = \
@t5.data.sentencepiece_vocabulary.SentencePieceVocabulary()
# Parameters for select_random_chunk:
# ==============================================================================
select_random_chunk.feature_key = 'targets'
select_random_chunk.max_length = 65536
# Parameters for decoder/SelfAttention:
# ==============================================================================
decoder/SelfAttention.attention_kwargs = None
decoder/SelfAttention.dropout_rate = %dropout_rate
decoder/SelfAttention.key_value_size = %d_kv
decoder/SelfAttention.num_heads = %num_heads
decoder/SelfAttention.num_memory_heads = 0
decoder/SelfAttention.relative_attention_num_buckets = 32
decoder/SelfAttention.relative_attention_type = 'bias_shared'
decoder/SelfAttention.shared_kv = False
# Parameters for encoder/SelfAttention:
# ==============================================================================
encoder/SelfAttention.attention_kwargs = None
encoder/SelfAttention.dropout_rate = %dropout_rate
encoder/SelfAttention.key_value_size = %d_kv
encoder/SelfAttention.num_heads = %num_heads
encoder/SelfAttention.num_memory_heads = 0
encoder/SelfAttention.relative_attention_num_buckets = 32
encoder/SelfAttention.relative_attention_type = 'bias_shared'
encoder/SelfAttention.shared_kv = False
# Parameters for SentencePieceVocabulary:
# ==============================================================================
SentencePieceVocabulary.extra_ids = 100
SentencePieceVocabulary.sentencepiece_model_file = \
@t5.models.mesh_transformer.get_sentencepiece_model_path()
# Parameters for serialize_num_microbatches:
# ==============================================================================
serialize_num_microbatches.tokens_per_microbatch_per_replica = 8192
# Parameters for split_tokens:
# ==============================================================================
split_tokens.feature_key = 'targets'
split_tokens.max_tokens_per_segment = @preprocessors.random_spans_tokens_length()
split_tokens.min_tokens_per_segment = None
# Parameters for tpu_estimator_model_fn:
# ==============================================================================
tpu_estimator_model_fn.init_checkpoint = %init_checkpoint
tpu_estimator_model_fn.outer_batch_size = 1
tpu_estimator_model_fn.tpu_summaries = False
# Parameters for tpu_mesh_shape:
# ==============================================================================
tpu_mesh_shape.ensemble_parallelism = None
tpu_mesh_shape.model_parallelism = 1
tpu_mesh_shape.tpu_topology = '8x8'
# Parameters for decoder/Unitransformer:
# ==============================================================================
decoder/Unitransformer.d_model = %d_model
decoder/Unitransformer.ensemble = None
decoder/Unitransformer.input_full_attention = False
decoder/Unitransformer.label_smoothing = 0.0
decoder/Unitransformer.loss_denominator = None
decoder/Unitransformer.loss_fn = None
decoder/Unitransformer.loss_on_targets_only = False
decoder/Unitransformer.max_length = 512
decoder/Unitransformer.positional_embedding = False
decoder/Unitransformer.shared_embedding_and_softmax_weights = True
decoder/Unitransformer.vocab_divisor = 128
decoder/Unitransformer.z_loss = 0.0001
decoder/Unitransformer.loss_denominator = 233472
# Parameters for encoder/Unitransformer:
# ==============================================================================
encoder/Unitransformer.d_model = %d_model
encoder/Unitransformer.ensemble = None
encoder/Unitransformer.input_full_attention = False
encoder/Unitransformer.label_smoothing = 0.0
encoder/Unitransformer.loss_denominator = None
encoder/Unitransformer.loss_fn = None
encoder/Unitransformer.loss_on_targets_only = False
encoder/Unitransformer.max_length = 512
encoder/Unitransformer.positional_embedding = False
encoder/Unitransformer.shared_embedding_and_softmax_weights = True
encoder/Unitransformer.vocab_divisor = 128
encoder/Unitransformer.z_loss = 0.0001
# Parameters for unsupervised:
# ==============================================================================
unsupervised.preprocessors = \
[@preprocessors.select_random_chunk,
@preprocessors.reduce_concat_tokens,
@preprocessors.split_tokens,
@preprocessors.denoise]
```
=> the relevant part here seems to be only the model hyperparameters:
```
d_ff = 2048
d_kv = 64
d_model = 512
dropout_rate = 0.1
inputs_length = 512
mean_noise_span_length = 3.0
MIXTURE_NAME = 'all_mix'
noise_density = 0.15
num_heads = 8
num_layers = 6
targets_length = 512
init_checkpoint = "gs://t5-data/pretrained_models/small/model.ckpt-1000000"
tokens_per_batch = 1048576
```
So maybe you can create a config.json based on those?
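Following up on that, a sketch of turning those gin values into a usable config for the conversion script. The vocab size is my assumption (the standard 32,000-piece T5 sentencepiece model plus 100 extra ids, padded to a multiple of 128 per the `vocab_divisor` above):

```python
from transformers import T5Config

# values read off the operative_config.gin pasted above (t5-small)
config = T5Config(
    d_model=512,
    d_ff=2048,
    d_kv=64,
    num_layers=6,
    num_heads=8,
    dropout_rate=0.1,
    vocab_size=32128,  # assumption, see note above
)
config.save_pretrained("t5/small")  # writes t5/small/config.json for the conversion script
```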
> Thanks! (you are a lifesaver by the way with these response times :)).
And happy to hear this :) you're welcome<|||||>...actually the link you sent for the example config file proved to be extremely useful! Starting from there I've found all related files. Here is everything (including the config file) for T5 Small: https://huggingface.co/t5-small. Also an example workflow for future reference:
```
mkdir t5
gsutil -m cp -r gs://t5-data/pretrained_models/small t5
python ~/transformers/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path t5/small/model.ckpt-1000000 --pytorch_dump_path t5-small-pt --config_file t5/small_config.json
``` |
transformers | 10,227 | closed | Showing individual token and corresponding score during beam search | ## Who can help
@patrickvonplaten
## Information
Hello,
I am using beam search with a pre-trained T5 model for summarization. I would like to visualize the beam search process by showing the tokens with the highest scores, and eventually the chosen beam like this diagram:

(Taken from https://huggingface.co/blog/how-to-generate)
**I am unsure how I can show the tokens and their corresponding scores.**
I followed the discussion https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094 and https://github.com/huggingface/transformers/pull/9150.
Following the docs, upon calling `generate`, I have set `return_dict_in_generate=True`, `output_scores=True`
```
generated_outputs = model_t5summary.generate(
input_ids=input_ids.to(device),
attention_mask=features['attention_mask'].to(device),
max_length=input_ids.shape[-1] + 2,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
output_attentions=True,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=3,
num_beams=5,
)
```
Now I have an instance of `BeamSearchEncoderDecoderOutput`.
If I understand the docs (https://huggingface.co/transformers/master/internal/generation_utils.html#generate-outputs) correctly, `scores` will provide me with what I want but I am unsure on how to use the `scores`.
Any help/pointers from the community would be greatly appreciated, thank you 🙏 | 02-17-2021 08:17:19 | 02-17-2021 08:17:19 | Hey @monmanuela,
Thanks for checking out the post! We try to keep the repository for github issues and kindly ask you to post these kinds of questions on the [forum](https://discuss.huggingface.co/). Feel free to tag me there (@patrickvonplaten) :-)<|||||>@patrickvonplaten thanks for your quick reply! Posted a topic on the forum, closing this issue. |
transformers | 10,226 | closed | Trainer.train() is stuck | Hi,
I'm training roberta-base using the HF Trainer, but it gets stuck right at the start. Here's my code -
```
train_dataset[0]
{'input_ids': tensor([ 0, 100, 657, ..., 1, 1, 1]),
'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]),
'labels': tensor(0)}
val_dataset[0]
{'input_ids': tensor([ 0, 11094, 14, ..., 1, 1, 1]),
'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]),
'labels': tensor(0)}
## simple test
model(train_dataset[:2]['input_ids'], attention_mask = train_dataset[:2]['attention_mask'], labels=train_dataset[:2]['labels'])
SequenceClassifierOutput(loss=tensor(0.6995, grad_fn=<NllLossBackward>), logits=tensor([[ 0.0438, -0.1893],
[ 0.0530, -0.1786]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
train_args = transformers.TrainingArguments(
output_dir='test_1',
overwrite_output_dir=True,
evaluation_strategy="epoch",
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=3e-5,
weight_decay=0.01,
num_train_epochs=2,
load_best_model_at_end=True,
)
trainer = transformers.Trainer(
model=model,
args=train_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tok,
)
trainer.train()
```
I checked the memory consumption and it stays stuck at -
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:62:00.0 Off | 0 |
| N/A 49C P0 60W / 300W | 1756MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:8A:00.0 Off | 0 |
| N/A 50C P0 61W / 300W | 1376MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
Please let me know how to proceed further. | 02-17-2021 08:07:28 | 02-17-2021 08:07:28 | Hi,
For training-related issues, it might be better to ask your question on the [forum](https://discuss.huggingface.co/).
The authors of HuggingFace (and community members) are happy to help you there!
|
transformers | 10,225 | closed | [Trainer] memory tracker metrics | This PR introduced memory usage metrics in Trainer:
* [x] adds `TrainerMemoryTracker` (pytorch only, no-op for tf), which records deltas of the first gpu and cpu of the main process - and records them for `init|train|eval|test` stages - if there is no gpu it reports cpu only.
* [x] adds `--skip_memory_metrics` to disable this new behavior - i.e. by default it'll print the memory metrics
* [x] adds `trainer.metrics_format` which will intelligently reformat the metrics to do the right thing - this is only for logger - moves manual rounding from the scripts into that helper method.
* [x] formats GFlops as GF number, so ` 2285698228224.0`, which is very unreadable and now it will be a nice `2128GF` (similar to `100MB`)
* [x] as a sample changes `run_seq2seq.py` to use `trainer.metrics_format` - can replicate to other scripts in another PR.
* [x] changes the metrics logger in `run_seq2seq.py` to align data, so that it's easy to read the relative numbers e.g. allocated plus peak memory should be in the same column to make a quick read of the situation.
* [x] adds a new file_utils helper function `is_torch_cuda_available` to detect no gpu setups in one call.
* [x] adds a test
* [x] consistently use the strange `train/eval/test` trio - it's very confusing - but at least it's consistent - I proposed to fix this `examples`-wide in https://github.com/huggingface/transformers/issues/10165
Request: I beg you to allow me to restore the original refactored metrics dump logic in `run_seq2seq.py` - the current repetition doesn't help the readability and it's just dumping a dict - nothing ML/NLP specific here, there is nothing to understand there IMHO. and then it'd be easy to replicate this to other examples. Thanks. This is the original (and will need to add to it a few formatting entries I added in this PR):
https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/examples/legacy/seq2seq/finetune_trainer.py#L132-L145
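(For reference, the gist of that dump logic is just something along these lines - a simplified sketch, not the linked code verbatim:)
```python
import json
import os

def log_and_save_metrics(split, metrics, output_dir):
    # print the metrics sorted and aligned, then dump the raw dict as json
    print(f"***** {split} metrics *****")
    for key in sorted(metrics.keys()):
        print(f"  {key} = {metrics[key]}")
    with open(os.path.join(output_dir, f"{split}_results.json"), "w") as f:
        json.dump(metrics, f, indent=4)
```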
A picture is worth a thousand words:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16-en-ro-pre-processed --source_prefix "translate English to Romanian: "
```
gives:
```
02/16/2021 17:06:39 - INFO - __main__ - ***** train metrics *****
02/16/2021 17:06:39 - INFO - __main__ - epoch = 1.0
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_alloc_delta = 2MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_alloc_delta = 230MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - total_flos = 2128GF
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_alloc_delta = 55MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_alloc_delta = 692MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_peaked_delta = 661MB
02/16/2021 17:06:39 - INFO - __main__ - train_runtime = 2.3114
02/16/2021 17:06:39 - INFO - __main__ - train_samples = 100
02/16/2021 17:06:39 - INFO - __main__ - train_samples_per_second = 3.028
02/16/2021 17:06:43 - INFO - __main__ - ***** val metrics *****
02/16/2021 17:13:05 - INFO - __main__ - epoch = 1.0
02/16/2021 17:13:05 - INFO - __main__ - eval_bleu = 24.6502
02/16/2021 17:13:05 - INFO - __main__ - eval_gen_len = 32.9
02/16/2021 17:13:05 - INFO - __main__ - eval_loss = 3.7533
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_peaked_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_peaked_delta = 510MB
02/16/2021 17:13:05 - INFO - __main__ - eval_runtime = 3.9266
02/16/2021 17:13:05 - INFO - __main__ - eval_samples = 100
02/16/2021 17:13:05 - INFO - __main__ - eval_samples_per_second = 25.467
02/16/2021 17:06:48 - INFO - __main__ - ***** test metrics *****
02/16/2021 17:06:48 - INFO - __main__ - test_bleu = 27.146
02/16/2021 17:06:48 - INFO - __main__ - test_gen_len = 41.37
02/16/2021 17:06:48 - INFO - __main__ - test_loss = 3.6682
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_peaked_delta = 645MB
02/16/2021 17:06:48 - INFO - __main__ - test_runtime = 5.1136
02/16/2021 17:06:48 - INFO - __main__ - test_samples = 100
02/16/2021 17:06:48 - INFO - __main__ - test_samples_per_second = 19.556
```
To understand the memory reports:
- `alloc_delta` - is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated
- `peaked_delta` - is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative - this is the mysterious cause of OOM, since normally it doesn't register when everything fits into the memory.
- so when you look at the metrics of any stage you add up `alloc_delta` + `peaked_delta` and you know how much memory was needed to complete that stage. But the two numbers need to be separate.
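Concretely, the two deltas map onto the `torch.cuda` counters roughly like this (a simplified sketch, not the actual `TrainerMemoryTracker` code):
```python
import torch

torch.cuda.reset_peak_memory_stats()
mem_at_start = torch.cuda.memory_allocated()

# ... run the stage: train / evaluate / predict ...

mem_at_end = torch.cuda.memory_allocated()
alloc_delta = mem_at_end - mem_at_start  # can be negative if the stage released memory
peaked_delta = max(torch.cuda.max_memory_allocated() - mem_at_end, 0)  # extra memory used and then freed
```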
We can change the names if you'd like, but if we do, let's make sure that allocated/used shows up before peaked when alphabetically sorted - as they should be read in that order.
Also it would be useful to have them of the same length so it's less noisy vertically. I was thinking perhaps to add `m` to `alloc`? Then it becomes perfect:
```
test_mem_cpu_malloc_delta = 0MB
test_mem_cpu_peaked_delta = 0MB
```
Logic behind `init`:
- since Trainer's `__init__` can consume a lot of memory, it's important that we trace it too, but since any of the stages can be skipped, I basically push it into the metrics of whichever stage gets to update metrics first, so it gets tacked on to that group of metrics. In the above example it happens to be `train`.
Logic behind nested calls:
- since eval calls may be intermixed with train calls, we can't handle nested invocations because `torch.cuda.max_memory_allocated` is a single counter, so if it gets reset by a nested eval call, train will report incorrect info. One day pytorch will fix this issue: https://github.com/pytorch/pytorch/issues/16266 and then it will be possible to be re-entrant, for now we will only track the outer level `train` / `evaluation` / `predict` functions.
After this addition we can already profile/detect regressions for specific training stages. But this doesn't give us the full picture as there other allocations outside of the trainer - i.e. in user's code. It's a start.
Down the road I may code a different version, based on pynvml, which gives somewhat different numbers, and has its own complications. But it gives you the exact gpu memory usage, so you know exactly how much memory is used or left. PyTorch only reports its internal allocations on the other hand.
@patrickvonplaten, this feature should give us already a partial way to track memory regression. So this could be the low hanging fruit you and I were discussing.
It also should be possible to extend the tracker to use TF, but I don't know anything about TF.
@sgugger, @patil-suraj, @LysandreJik, @patrickvonplaten | 02-17-2021 01:32:36 | 02-17-2021 01:32:36 | > Thanks for adding this functionality! One general comment I have is on the type of the `stage` argument. Since it has only four possible values from what I can see, it would be better to create an enum for those (to avoid typos and have auto-complete in an IDE).
Oh, let me make it absolutely automatic with `inspect` so it won't need a `stage` argument at all.
And I will collapse the two calls into one in all but `__init__`, so it'll be less noisy.
<|||||>So, the API has been simplified to remove the need for naming the stages in the caller, tests added.
I'm sure we will think of further improvements down the road, please let me know if this is good for the first iteration.
I'm not sure if anybody else wants to review before we merge this. |
transformers | 10,224 | closed | No module named 'tasks' | ## Environment info
- `transformers` version: 4.3.2
- Platform: 5.10.8-200.fc33.x86_64
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (GPU)
- Tensorflow version (GPU?): 2.4.1 (GPU)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
@sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): allenai/scibert_scivocab_uncased
The problem arises when using:
* [X ] the official example scripts: (give details below)
I'm using the old NER script, since the model I'm using doesn't support Fast Tokenizers.
https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py
However, I get an error when trying to import "tasks.py". I did find a previous GitHub issue for this, which recommended downloading said file... but it no longer exists.
The tasks I am working on is:
* [X ] my own task or dataset: (give details below)
I'm using the bc2gm-corpus dataset.
## To reproduce
Steps to reproduce the behaviour:
1. Download the script from the provided github link.
2. Run it with any real model name, a real directory for data (doesn't need to include data), and an output directory.
I used the following command: python3 run_ner.py --model_name_or_path allenai/scibert_scivocab_uncased --data_dir bc2gm-corpus/conll --output_dir ./output
```
  File "run_ner.py", line 323, in <module>
    main()
  File "run_ner.py", line 122, in main
    module = import_module("tasks")
  File "/home/dkaplan/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tasks'
```
## Expected behavior
I expect the NER script to run, finetuning a model on the provided dataset.
| 02-16-2021 23:34:00 | 02-16-2021 23:34:00 | I think you did not clone the repository properly or are not running the command from the folder `examples/legacy/token-classification`, since that folder does have a task.py file.<|||||>Ah, I was searching for "tasks.py" instead. Just user error, thanks for the fast reply. |
transformers | 10,223 | closed | Slow Multi-GPU DDP training with run_clm.py and GPT2 | ## Environment info
- `transformers` version: 4.3.0
- Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (using official `run_clm.py`)
- Using distributed or parallel set-up in script?: using DDP with `run_clm.py`
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- trainer, maintained examples: @sgugger
## Information
Model I am using (Bert, XLNet ...): gpt2-medium
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Generate some dummy text data (3000 examples) and save a csv:
```python
import pandas as pd
text = " ".join(100*["Here is some very long text."])
text = 3000*[text]
pd.Series(text).to_frame("text").to_csv("data_temp.csv",index=False)
```
2. Run official [`run_clm.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) from examples using 3 gpus and DDP:
```shell
CUDA_VISIBLE_DEVICES="1,2,3" python -m torch.distributed.launch --nproc_per_node 3 run_clm.py \
--model_name_or_path gpt2-medium \
--do_train \
--output_dir /proj/semafor/kirill/tmp \
--per_device_train_batch_size 4 \
--block_size 128 \
--train_file data_temp.csv \
--fp16
```
3. Using 3 GeForce RTX 2080Ti with 11Gbs, tqdm says it should approximately take 1 hour: `48/4101 [00:38<53:51, 1.25it/s`. The memory in each GPU is maxed out: ` 10782MiB / 11019MiB `
4. Now, if I just run the same script on a single GPU:
```shell
CUDA_VISIBLE_DEVICES="3" python run_clm.py \
--model_name_or_path gpt2-medium \
--do_train \
--output_dir /proj/semafor/kirill/tmp \
--per_device_train_batch_size 4 \
--block_size 128 \
--train_file data_temp.csv \
--fp16
```
It's actually a little faster: `260/12303 [00:57<44:02, 4.56it/s` and the GPU memory is not maxed out: `9448MiB / 11019MiB`
I can actually double the `--per_device_train_batch_size` from `4 -> 8` and get it down to under 30 mins per epoch: `138/6153 [00:36<26:30, 3.78it/s`
## Expected behavior
So I expected that:
- DDP training on 3 GPUs would be faster than a single GPU (it's actually a little slower).
- If I can load a batch of 8 on a device in single-GPU mode, then it should work in multi-GPU mode as well (it doesn't, I get an OOM error). | 02-16-2021 23:09:12 | 02-16-2021 23:09:12 | The answers to those two points are not necessarily yes.
- DDP training is only faster if you have NVLinks between your GPUs, otherwise the slow communication between them can slow down training.
- DDP training takes more space on the GPU than single-process training since there is some gradient caching.
Both issues come from PyTorch and not us, the only thing we can check on our side is if there is something in our script that would introduce a CPU-bottleneck, but I doubt this is the reason here (all tokenization happens before the training so there is nothing I could think of there).
You should also try regular `DataParallel` which does not have the memory problem IIRC, but I don't remember the comparison in terms of speed. I think @stas00 may have more insight there.<|||||>Thanks. I’ll give DP a try.<|||||>What @sgugger said,
please see this excellent benchmark: https://github.com/huggingface/transformers/issues/9371#issuecomment-768656711 You can see that a single GPU beats DDP over 2 gpus there if it's not NVLink-connected. I haven't tried it on 3 gpus though. Surely it should be somewhat faster at least.
Did you check you're feeding the gpus fast enough? - i.e. check their utilization %; if they are under 90% then you probably have an issue with loading - add more dataloader workers.
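With the Trainer that's a single knob, e.g. (just a sketch - pick a worker count that matches your CPU cores; the same field is also exposed as the `--dataloader_num_workers` flag of `run_clm.py`):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tmp",
    dataloader_num_workers=8,  # example value - more workers help keep the GPUs fed
)
```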
Also please consider using DeepSpeed ZeRO-DP, which should be even faster. https://huggingface.co/blog/zero-deepspeed-fairscale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,222 | open | the change from single mask to multi mask support for pytorch | # What does this PR do?
A draft PR for the Feature request to change from single mask to multi mask support for the fill mask pipeline.
As discussed, this is only a draft PR to discuss the changes that need to be made to the output format to jointly support multiple and single masks in one pipeline call. The PR implements the change for PyTorch; code has not been pushed yet for when the keyword argument is called.
The pipeline tests are expected to fail since the output format changed.
#10158
Example code that tests this feature is below.
```
import json
from transformers import pipeline
unmasker = pipeline('fill-mask', model='bert-base-uncased')
t = unmasker("hi [MASK] morning I'm a [MASK] model.")
print(json.dumps(t, indent=4))
```
@LysandreJik | 02-16-2021 22:06:26 | 02-16-2021 22:06:26 | @Narsil @LysandreJik How do you suggest we go about with the targets param? At the moment, targets can either be a list of strings or a string. In case of multiple masks, there are 2 ways to go about with it.
1. Provide a way for the user to define targets for each mask.
2. One single target list that can be uniformly applied across all the positions.
The first method would be best implemented by expecting a dict as argument in the keyword param. Something like
{ "0" : "str or list of strings" , "2" : "str or list of strings" ... }
This way the user can decide to skip explicitly defining candidate keywords in some of the mask positions if needed ( skipped mask 1 in the example above).
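Purely as an illustration of that shape (nothing below is implemented yet - it is a hypothetical call):
```python
# Hypothetical API for option 1: per-mask targets keyed by mask position.
unmasker(
    "hi [MASK] morning I'm a [MASK] model.",
    targets={"0": ["good", "this"], "1": "fashion"},  # mask index -> candidate(s); positions may be omitted
)
```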
<|||||>Tough question indeed regarding targets! Switching to a dict sounds a bit non intuitive to me, but I don't see any other choice. I guess eventually the API would be the following:
Given a single input string, with a single mask:
- A candidate as a string returns the candidate score for the mask
- A candidate list of strings returns the candidate scores for the mask
Given a single input string, with multiple masks:
- A candidate as a string returns the candidate scores for all masks
- A candidate list of strings returns the candidate scores for all masks, on all candidates
- A candidate dict of strings returns the candidate scores for the masks which are concerned by the dictionary keys. Their candidates is the dictionary value linked to that dictionary key.
- A candidate dict of list of strings returns the candidate scores for the masks which are concerned by the dictionary keys. Their candidates are the dictionary values linked to that dictionary key.
Then there are also lists of input strings, with single masks, and lists of input strings, with multiple masks. This results in a very large amount of possibilities, with different returns, which sounds overwhelming. I'm not too sure that's the best way to handle the issue, I'll give it a bit more thought.<|||||>@LysandreJik I had a question. From what I can understand, one can only define a single set of targets at the moment irrespective of how many input texts are given right? For both the case of a single input text and multiple input texts for even the base case of a single mask, we can only define a single target or a list of targets that applies across them all right? Essentially, it is a many to one relation for the input texts to the target. If that is the case, targets functionality is currently not designed in a useful manner right?<|||||>Hi, sorry for getting back to you so late on this. I agree with you that we can improve the `targets`. I'm pinging @joeddav as he's the author of the PR that added them.
@joeddav your input on this PR would be more than welcome! Thank you.<|||||>Personally, I think the simplest solution would be best: only support `targets` in the single-mask case. If `targets` is passed and there are multiple mask tokens, raise a `ValueError`. It's a pretty narrow use case to need to pass a string with multiple masked tokens while also needing to evaluate possible target tokens for each. In my opinion, that's a complicated and rare use case and we don't need to muddle pipelines code by attempting to support it. It can always be accomplished by using the core modules instead of a pipeline.<|||||>@joeddav That does make sense to me! The objective of a pipeline should only be to accommodate for some quick use test cases. Making it cumbersome misses the point altogether. @LysandreJik What do you think? <|||||>Yes, I agree with @joeddav as well!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> [...] If you think this still needs to be addressed please comment on this thread.
This feature would have many applications and would enable comparison of MLMs in gloze tests beyond the restricted setting of targeting words in the intersection of the vocabularies of the models to be compared. There are some open questions how `top_k` predictions should be made, see issue #3609, so I think it would be good to wait a few more weeks to give everybody time to read the linked paper and discuss ideas.<|||||>@jowagner Just to clarify it for others who might be following, the paper you are referring to is this one https://arxiv.org/abs/2002.03079 right?<|||||>> @jowagner Just to clarify it for others who might be following, the paper you are referring to is this one https://arxiv.org/abs/2002.03079 right?
Yes. I hope to read it soon and get a more clear picture what is needed here. I tend to think that producing `top_k` predictions for multiple masked tokens is outside the scope of the BERT model and really needs an extra model on top of it, e.g. a model that predicts a ranked list of best crystallisation points and can then be used to perform a beam search, fixing on subword unit at a time and producing a k-best list of best crystallisation processes.
<|||||>@jowagner I have a doubt in that case coming back to the basics of BERT. when some of the words are masked and a prediction is to be made on multiple masks during pre-training step in BERT, does BERT not face the same issue? Or are the masks predicted one mask at a time in each training sentence fed to BERT?<|||||>Looking at Devlin et al 2018 again, I don't see the pre-training objective stated but certainly they try to push as much probability mass as possible to the one completion attested in the training data. BERT is trained to get the top prediction right. Good secondary predictions for individual tokens are only a by-product. Nothing pushes the model to make the k-th predictions consistent across multiple masked subword units for k > 1.
Yes, making predictions can be expected to be harder when there are multiple masked subword units but that also happens in pre-training and BERT therefore learns to do this. Maybe BERT does this in steps, crystallising only a few decisions in each layer. A way to find out would be to fix the BERT layers, add MLM heads to each layer, tune these heads and then see how the predictions (and probabilities) change from layer to layer. (This would make a nice paper, or maybe somebody has done this already.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Do we have a final verdict yet on the approach to be followed? @mitramir55 had suggested a code proposal I believe in #3609 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik Shall i replace this with the implementation suggested earlier in #3609 and raise a PR? Though I dont quite think we have discussed on what scoring would be ideal for the beam search used to sort the predictions.
<|||||>@Narsil had good insights about your previous implementation - @Narsil could you let us know what you think of the solution proposed here https://github.com/huggingface/transformers/issues/3609#issuecomment-854005760?<|||||>The design in https://github.com/huggingface/transformers/issues/3609#issuecomment-854005760 seems very interesting !
Main comments:
- I would be curious to see proof (and it would probably need to become a test) that doing `n` inferences instead of 1 will produce better results (because it should be close to the real joint probabilities) - that's the main interest of this proposed approach.
- I think it should output the same tokens as fill-mask pipeline in the degenerate case (when there's only 1 mask).
I don't think it's correct right now (see below what I tried)
- Because we iteratively do `topk` for each `mask` it's a bit of an exponential if I understand correctly. I would probably add some kind of cleanup to limit the number of "beams" to topk (I may have overlooked but it seems to be currently missing)
- the proposed code could probably be refactored a bit for clarity and avoid integer indexing and deep nesting.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
import random
def predict_seqs_dict(sequence, model, tokenizer, top_k=5, order="right-to-left"):
ids_main = tokenizer.encode(sequence, return_tensors="pt", add_special_tokens=False)
ids_ = ids_main.detach().clone()
position = torch.where(ids_main == tokenizer.mask_token_id)
positions_list = position[1].numpy().tolist()
if order == "left-to-right":
positions_list.reverse()
elif order == "random":
random.shuffle(positions_list)
# print(positions_list)
predictions_ids = {}
predictions_detokenized_sents = {}
for i in range(len(positions_list)):
predictions_ids[i] = []
predictions_detokenized_sents[i] = []
# if it was the first prediction,
# just go on and predict the first predictions
if i == 0:
model_logits = model(ids_main)["logits"][0][positions_list[0]]
top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()
for j in range(len(top_k_tokens)):
# print(j)
ids_t_ = ids_.detach().clone()
ids_t_[0][positions_list[0]] = top_k_tokens[j]
predictions_ids[i].append(ids_t_)
pred = tokenizer.decode(ids_t_[0])
predictions_detokenized_sents[i].append(pred)
# append the sentences and ids of this masked token
# if we already have some predictions, go on and fill the rest of the masks
# by continuing the previous predictions
if i != 0:
for pred_ids in predictions_ids[i - 1]:
# get the logits
model_logits = model(pred_ids)["logits"][0][positions_list[i]]
# get the top 5 of this prediction and masked token
top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()
for top_id in top_k_tokens:
ids_t_i = pred_ids.detach().clone()
ids_t_i[0][positions_list[i]] = top_id
pred = tokenizer.decode(ids_t_i[0])
# append the sentences and ids of this masked token
predictions_ids[i].append(ids_t_i)
predictions_detokenized_sents[i].append(pred)
return predictions_detokenized_sents
sequence = "This is some super neat [MASK] !"
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
pipe = pipeline(task="fill-mask", tokenizer=tokenizer, model=model)
print(predict_seqs_dict(sequence, model, tokenizer))
print(pipe(sequence))
```<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Yes, we need more time or help from somebody with time to review the discussion and make recommendations.
My thoughts re-reading a few comments, including some of my own:
- Producing k-best predictions for multiple masked tokens requires choices, i.e. a model, separate from the underlying transformer model. This is where the PR is stalled. A quick way forward would be to support only `k=1` when there are multiple masked tokens for the time being. For `k=1`, it is undisputed that the prediction should be the transformer's top prediction for each token.
- This PR/feature does not directly allow comparison of cloze test predictions of models with different vocabularies. Users would have to probe with continuous sequences of masked tokens of varying length and somehow decide between the candidate predictions.<|||||>After reading this thread and skimming through #14716, I must confess I still a little unsure how the scores for multi-masked prompts are computed. Based on my understanding, for a prompt with k-masks, it seems like you want to do a beam search over over the Cartesian product `mask_1_targets x mask_2_targets x ... x mask_k_targets` and return the top-n most likely tuples maximizing `P(mask_1=token_i_k, mask_2=token_i_2, ... m_k=token_i_k)`, i.e.:
```
{
T_1=[(token_1_1, ..., token_1_k), score_t_1],
T_2=[(token_2_1, ..., token_2_k), score_t_2],
...
T_n=[(token_n_1, ..., token_n_k), score_t_n]
}
```
Is this accurate? Perhaps you could try to clarify the design intent and limitations of the current API in the documentation somewhere. If you intend to eventually support computing the joint probability, I think would be beneficial to provide a way for consumers to supply a set of per-mask targets and configure the beam search parameters, e.g. beam width. Thanks!<|||||>> After reading this thread and skimming through #14716, I must confess I still a little unsure how the scores for multi-masked prompts are computed. Based on my understanding, for a prompt with k-masks, it seems like you want to do a beam search over over the Cartesian product `mask_1_targets x mask_2_targets x ... x mask_k_targets` and return the top-n most likely tuples maximizing `P(mask_1=token_i_k, mask_2=token_i_2, ... m_k=token_i_k)`, i.e.:
>
> ```
> {
> T_1=[(token_1_1, ..., token_1_k), score_t_1],
> T_2=[(token_2_1, ..., token_2_k), score_t_2],
> ...
> T_n=[(token_n_1, ..., token_n_k), score_t_n]
> }
> ```
>
> Is this accurate?
Actually no, this was the intent of *this* PR which never got merged. Instead of trying to make an educated guess about mask combinations, https://github.com/huggingface/transformers/pull/14716 added what seems the most appropriate, which is what the model really answers: various tokens at each mask location, without ANY information about correlations.
This is how the model is built, and as such, we return it raw.
```python
from transformers import pipeline
pipe = pipeline(model="bert-base-uncased")
print(pipe("This is a [MASK] and a [MASK]", top_k=3))
```
```
[[{'score': 0.5048776268959045,
'sequence': '[CLS] this is a. and a [MASK] [SEP]',
'token': 1012,
'token_str': '.'},
{'score': 0.07435218244791031,
'sequence': '[CLS] this is a ; and a [MASK] [SEP]',
'token': 1025,
'token_str': ';'},
{'score': 0.05109349265694618,
'sequence': '[CLS] this is a, and a [MASK] [SEP]',
'token': 1010,
'token_str': ','}],
[{'score': 0.8665121793746948,
'sequence': '[CLS] this is a [MASK] and a. [SEP]',
'token': 1012,
'token_str': '.'},
{'score': 0.05160374939441681,
'sequence': '[CLS] this is a [MASK] and a | [SEP]',
'token': 1064,
'token_str': '|'},
{'score': 0.046446096152067184,
'sequence': '[CLS] this is a [MASK] and a ; [SEP]',
'token': 1025,
'token_str': ';'}]]
```
You are then free to do all the complex attempts to make the suggestions combined. But we don't attempt to hide it since, the model really doesn't model that.
<|||||>I appreciate this implementation for the support of multiple [MASK] tokens in the input.
However, I cannot figure out why the pipeline output is kept nested only in those cases. It forces me to do some additional coding to make it unnested.
Is there any specific reason for this?
https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/fill_mask.py#L142-L144<|||||>> Is there any specific reason for this?
Backward compatibility, the first pipeline wasn't built with that option in mind making it harder to support multi mask seamlessly like you would expect. The removal of such quirks might happen in 5.0 though. We know it's not convenient as it is, but breaking user code is even less convenient.
|
transformers | 10,221 | closed | T5 relative attention bias: Discrepancy to original implementation | ### Who can help
@patrickvonplaten
## Information
Model I am using: T5
In the huggingface TF T5 implementation, the relative attention bias only seems to be applied to the first layer of the stack. If I understand the original implementation correctly, though, it is applied to all layers there.
HF: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py#L570
Mesh: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L263 | 02-16-2021 20:54:44 | 02-16-2021 20:54:44 | The implementation of T5 is based on the original implementation which can be found [here](https://github.com/google-research/text-to-text-transfer-transformer).
The implementation you are referring to is a general Transformer implementation from Tensorflow Mesh. It seems like this repo does not implement T5, but other models like the Funnel Transformer and the Evolved Transformer.
<|||||>But the T5 repo actually calls the Tensorflow Mesh implementation internally. AFAIK there is no standalone t5 implementation (apart from the HF one).<|||||>Ok I see. In that case I'll leave it on to Patrick to help you. <|||||>Hey @maurice-g, we compute it ones and then forward it to all the follow-up layers since the result will be the same for all layers. 2 month ago, I made sure that our implementation is exactly the same as the original implementation - see those tests: https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/tests/test_modeling_t5.py#L734 and https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/tests/test_modeling_tf_t5.py#L492
So I'm confident that the model behaves as expected, but in case you have a reproducible code snippet showcasing differences between the 2 implementations, I'm more than happy to take a look :-)<|||||>You're right, missed that they were returned and forwarded to the other layers. Sorry for that.<|||||>No worries! Thanks for checking in-detail. I think it's always a very good practice to check things in-detail or more often than not you will find subtle bugs in Transformers that will help us improve the code :-) |
transformers | 10,220 | closed | fix deprecated reference `tokenizer.max_len` in glue.py | This is to fix deprecated reference to `tokenizer.max_len` with `tokenizer.model_max_length` - similar to [issue 8739](https://github.com/huggingface/transformers/issues/8739) and [PR 8604](https://github.com/huggingface/transformers/pull/8604).
See an error example [in Colab here](https://colab.research.google.com/gist/poedator/f8776349e5c625ce287fc6fcd312fa1e/tokenizer-max_len-error-in-transformers_glue.ipynb). It causes `AttributeError: 'BertTokenizer' object has no attribute 'max_len'`
The error happens when `glue_convert_examples_to_features()` is called without the `max_length` parameter specified. In that case [line 119](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L119) with the wrong reference gets called. This simple fix should do it.
| 02-16-2021 20:14:20 | 02-16-2021 20:14:20 | |
transformers | 10,219 | closed | [trainer] fix ignored columns logger | This PR fixes a confusing log entry that says:
```
The following columns in the evaluation set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: .
```
when everything is in order.
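In other words, the message should only be emitted when there actually are ignored columns - roughly like this (an illustrative sketch, not the exact diff):
```python
import logging

logger = logging.getLogger(__name__)

def maybe_log_ignored_columns(ignored_columns, model_name, split="evaluation"):
    # Skip logging entirely when there is nothing to report - the empty case
    # is what used to produce the confusing entry above.
    if len(ignored_columns) == 0:
        return
    logger.info(
        f"The following columns in the {split} set don't have a corresponding argument "
        f"in `{model_name}.forward` and have been ignored: {', '.join(ignored_columns)}."
    )
```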
@sgugger | 02-16-2021 19:04:48 | 02-16-2021 19:04:48 | |
transformers | 10,218 | closed | discrepancy between the Huggingface T5Tokenizer and the original T5tokenizer | ## Environment info
- `Transformers` version: 4.3.2
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): -
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
Model I am using: T5Tokenizer. I adapted the code of run_mlm.py [1] to use it with the T5 tokenizer; when I run the code I am getting
```
This tokenizer does not have a mask token which is necessary for masked language modeling. "
ValueError: This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.
```
I checked the error and this is because tokenizer.mask_token is None for T5Tokenizer. Checking the T5 paper, they use masked language modeling with their seq2seq objective as the pretraining objective, so they must have trained a mask token as the paper says. Could you give me some insight into why the mask token does not exist in the huggingface implementation of T5Tokenizer, and how I can correct this to be able to run the run_mlm code? thank you
[1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
## To reproduce
```
tokenizer = AutoTokenizer.from_pretrained("t5-small")
print(tokenizer.mask_token)  # => this is None
```
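For completeness, the sentinel tokens do seem to be present on the same tokenizer - it is really only `mask_token` that is unset:
```python
print(tokenizer.additional_special_tokens[:3])  # e.g. ['<extra_id_0>', '<extra_id_1>', '<extra_id_2>']
```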
## Expected behavior
The masked token as per T5 paper should exist in T5Tokenizer. | 02-16-2021 18:37:34 | 02-16-2021 18:37:34 | Hi,
T5 is an encoder-decoder Transformer. The `run_mlm.py` script can only be used for encoder-only models, such as BERT, RoBERTa, DeBERTa, etc.
Besides this, T5 does not use the regular [MASK] token as BERT. Rather than masked language modeling, T5 is pre-trained on "unsupervised denoising training". This is explained [here](https://huggingface.co/transformers/model_doc/t5.html#training).<|||||>Hi @NielsRogge thanks, but I have checked the paper of T5 and this seems to be a unique token:
"We consider two strategies to achieve this: First, instead of replacing each corrupted token with a mask token, we replace
the entirety of each consecutive span of corrupted tokens with a unique mask token."
As for using run_mlm.py script, I do not think T5 model can be an issue as if we could add T5ForConditionalGeneration it could work to me in run_mlm.py out of the box.
Is there any place I could look see how to create datasets the way you mentioned to do T5 pretraining with huggingface codes? thanks
<|||||>Hi
@patrickvonplaten, @patil-suraj could you give me some advice how to do T5 pretraining with denoising objective? thanks <|||||>Here is an old issue on this subject: https://github.com/huggingface/transformers/issues/5079
Also @NielsRogge is correct - T5 replaces each span of tokens with a unique mask token -> the so-called sentinel tokens.
Currently, there is sadly no script showcasing pertaining for T5. Maybe you have some luck when asking this question on the [forum](https://discuss.huggingface.co/)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi @NielsRogge thanks, but I have checked the paper of T5 and this seems to be a unique token:
>
> "We consider two strategies to achieve this: First, instead of replacing each corrupted token with a mask token, we replace
> the entirety of each consecutive span of corrupted tokens with a unique mask token."
>
> As for using run_mlm.py script, I do not think T5 model can be an issue as if we could add T5ForConditionalGeneration it could work to me in run_mlm.py out of the box.
>
> Is there any place I could look see how to create datasets the way you mentioned to do T5 pretraining with huggingface codes? thanks
Hi, did you successfully run the Hugging Face T5 pretraining? Can you give me some advice?
|
transformers | 10,217 | closed | Fix add_token_positions in custom datasets tutorial | Discussed in #10210. The example `add_token_positions` function incorrectly converts `answers[i]['answer_end']` to its corresponding tokenized index rather than `answers[i]['answer_end'] - 1`. | 02-16-2021 17:12:46 | 02-16-2021 17:12:46 | |
transformers | 10,216 | closed | Making TF Funnel compliant with AMP | # What does this PR do?
This PR makes the TF Funnel model compliant with AMP. All the slow tests are passing as well.
| 02-16-2021 16:59:16 | 02-16-2021 16:59:16 | |
transformers | 10,215 | closed | Factor out methods | With PyTorch's DataParallel, it is not possible to simply iterate over parameters in order to find the `nn.Module`'s dtype or device.
Some efforts were made to catch the error (`StopIteration`) in most cases, but some were forgotten. This PR factors the try/except into a method, which is applied everywhere instead.
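Roughly, the factored-out helper looks like this (an illustrative sketch, not the exact code added):
```python
import torch

def parameter_dtype(module: torch.nn.Module) -> torch.dtype:
    # Single place that knows how to handle the DataParallel edge case.
    try:
        return next(module.parameters()).dtype
    except StopIteration:
        # Under nn.DataParallel the replica may not expose parameters directly;
        # fall back to the first tensor attribute found on the module.
        for value in vars(module).values():
            if isinstance(value, torch.Tensor):
                return value.dtype
        return torch.get_default_dtype()
```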
Closes #10214 | 02-16-2021 16:34:33 | 02-16-2021 16:34:33 | @sgugger @patrickvonplaten verified this fixed the issue in #10214 |
transformers | 10,214 | closed | StopIteration Error when running beam search for squad 2.0 | I'm using `huggingface/transformers-pytorch-gpu:4.3.0` on Ubuntu DGX1 server with 8 V100 GPUs.
`NVIDIA-SMI 418.126.02 Driver Version: 418.126.02 CUDA Version: 10.1`
When running the step in `examples/question_answering/README.md` for beam search for squad 2.0
```
python run_qa_beam_search.py \
--model_name_or_path xlnet-large-cased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=2 \
--per_device_train_batch_size=2 \
--save_steps 5000
```
Error
```
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/xlnet/modeling_xlnet.py", line 1978, in forward
start_logits = self.start_logits(hidden_states, p_mask=p_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 1241, in forward
if next(self.parameters()).dtype == torch.float16:
StopIteration
``` | 02-16-2021 16:10:05 | 02-16-2021 16:10:05 | Indeed! I can reproduce, will fix.<|||||>Can you tell me if https://github.com/huggingface/transformers/pull/10215 fixes it? You can try by installing the following:
```
pip install git+https://github.com/huggingface/transformers@parameter-device-dtype
```<|||||>Thanks a lot for the quick fix. I'm running it right now. I will post whether the training is done in ca. 7 hours.<|||||>> Can you tell me if #10215 fixes it? You can try by installing the following:
>
> ```
> pip install git+https://github.com/huggingface/transformers@parameter-device-dtype
> ```
Hi, the issue is fixed. Thanks a lot. |
transformers | 10,213 | closed | Store FLOS as floats to avoid overflow. | # What does this PR do?
As pointed out in #10212, storing `total_flos` as ints can result in overflow errors: with Python ints there is no risk, but in distributed training we use torch.int64 to gather all FLOS across processes, which can trigger that error.
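To illustrate the failure mode (a standalone sketch, not the PR's diff):
```python
import torch

total_flos = 2 ** 63  # a FLO count that no longer fits into a signed 64-bit integer
try:
    torch.tensor(total_flos, dtype=torch.int64)
except (RuntimeError, OverflowError) as e:
    print(e)  # "Overflow when unpacking long"
print(torch.tensor(float(total_flos)))  # as a float it can be gathered fine, at reduced precision
```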
Fixes #10212 | 02-16-2021 15:56:17 | 02-16-2021 15:56:17 | |
transformers | 10,212 | closed | RuntimeError: Overflow when unpacking long | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: linux
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.0a0. (gpu)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed, 4 nodes with 4 GPU's each
Models:
- albert, bert, xlm: @LysandreJik
running language modelling on a large dataset of 335 million token sequences
Library:
- trainer: @sgugger
- Fairscale
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I am running run_mlm with just small changes as my datasets are already tokenized
2. Getting an error while saving checkpoint
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
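For context, a minimal sketch of the same failure mode outside of `Trainer` (the FLO count below is an assumed value, chosen only to sit just past the int64 maximum):

```python
import torch

# `torch.tensor` infers a long (int64) dtype for Python ints, so any value that
# does not fit into int64 fails while the tensor is being built, i.e. before
# the `.cuda()` call seen in `distributed_broadcast_scalars`.
total_flos = 2 ** 63  # assumed value, one past the int64 maximum of 2**63 - 1
try:
    torch.tensor([total_flos])
except RuntimeError as err:
    print(err)  # Overflow when unpacking long
```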
## Expected behavior
The checkpoint should be saved without errors. The overflow occurs only at some checkpoints, seemingly at random.
| 02-16-2021 14:17:14 | 02-16-2021 14:17:14 | This comes from `_total_flos` being stored as a long and overflowing in a big training run. Will fix this by storing it as a float (hoping for a PR by the end of today). |
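A rough sketch of the direction described in that comment, not the actual patch: casting the scalar to a float before tensorizing means the broadcast no longer has to fit the value into int64 (the variable names mirror `distributed_broadcast_scalars`, but the snippet itself is illustrative only):

```python
import torch

# Illustrative only: broadcast the FLO count as a floating-point scalar so it
# no longer needs to fit into int64. Values this large lose some precision in
# float32, which is typically acceptable for a rough FLO counter.
total_flos = 2 ** 63  # same assumed oversized value as above
scalars = [float(total_flos)]              # cast before building the tensor
tensorized_scalar = torch.tensor(scalars)  # default float32 tensor, no overflow
print(tensorized_scalar.sum().item())
```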