repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 14,539 | closed | Two bugs in AdamW | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.13.0.dev0
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@thomwolf and @stas00 should be able to help based on `git blame`
## Information
There are two bugs in the implementation of AdamW.
Here's the current code https://github.com/manuelciosici/transformers/blob/04683c0659aacf31a1e1df8aa2e6cf7b447a6f12/src/transformers/optimization.py#L324-L371
### Weight decay bug
Look at lines 369-370. The weight decay is multiplied with `p.data` which no longer corresponds to `theta_{t-1}` since `p.data` was modified in line 369. Below is a picture of Algorithm 2 from the [original Adamw paper](https://openreview.net/forum?id=Bkg6RiCqY7) that shows on line 12 that the weight decay should be multiplied with the previous step's parameters (i.e., `theta_{t-1}`).
(image: Algorithm 2 from the AdamW paper, showing on line 12 that the decoupled weight decay is applied to theta_{t-1})
From what I can tell, this is a regression since the original AdamW implementation in `transformers` applied weight decay properly. Here's the commit that introduces the bug https://github.com/HuggingFace/transformers/commit/ec07cf5a660926833d6f5208b58730e4af8d1178#diff-40c6163602943c11431f1ec360299a7646bb436c691a646b9f54b2284f556ce0
For confirmation that weight decay is currently buggy, see the original AdamW implementation, where, [on line 74](https://github.com/loshchil/AdamW-and-SGDW/blob/0ae5185b3c655e45cc249adaca4457cd881874cc/UPDATETORCHFILES/adam.lua#L74), the weight decay is multiplied with the old parameters as opposed to the new parameters that are calculated [on line 71](https://github.com/loshchil/AdamW-and-SGDW/blob/0ae5185b3c655e45cc249adaca4457cd881874cc/UPDATETORCHFILES/adam.lua#L71).
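For readers skimming this, here is a minimal sketch of the ordering Algorithm 2 prescribes (illustrative only, not the actual `transformers` code): the decay term has to be computed from the parameters as they were *before* the Adam update.
```python
import torch

def decoupled_weight_decay_step(p, exp_avg, denom, step_size, lr, weight_decay):
    # Both terms are computed from theta_{t-1} (the current p.data) and only then
    # applied together, so the decay never sees the already-updated parameters.
    adam_update = step_size * exp_avg / denom
    decay = lr * weight_decay * p.data
    p.data.add_(-(adam_update + decay))
```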
### Denominator computation bug
The second bug appears in the computation of the denominator corresponding to line 10 in Algorithm 2 above. In the current code (see link in the `Information` section), on line 351, the denominator excludes the division by `math.sqrt(bias_correction2)`. On line 357, division by `math.sqrt(bias_correction2)` appears, but, by this time, `eps` has already been added to `denom`, making the division not equivalent to line 10 in Algorithm 2.
From what I can tell, this bug was also introduced as part of commit https://github.com/HuggingFace/transformers/commit/ec07cf5a660926833d6f5208b58730e4af8d1178#diff-40c6163602943c11431f1ec360299a7646bb436c691a646b9f54b2284f556ce0. The previous line `update = next_m / (next_v.sqrt() + group['e'])` was correct.
For confirmation that the denominator is not properly calculated, see the original AdamW implementation, where, [on line 64](https://github.com/loshchil/AdamW-and-SGDW/blob/0ae5185b3c655e45cc249adaca4457cd881874cc/UPDATETORCHFILES/adam.lua#L64) the denominator is computed.
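To make the difference concrete, a small sketch with made-up numbers (none of this is the library code):
```python
import math
import torch

exp_avg_sq = torch.tensor([0.04])   # v_t, the running average of squared gradients
eps, bias_correction2 = 1e-6, 0.1   # illustrative values only

# Line 10 of Algorithm 2: eps is added after the bias correction.
denom_paper = exp_avg_sq.sqrt() / math.sqrt(bias_correction2) + eps

# Current code path: eps is added first, and the division by sqrt(bias_correction2)
# only happens later through the step size, so eps gets scaled along with it.
denom_code = (exp_avg_sq.sqrt() + eps) / math.sqrt(bias_correction2)

print(denom_paper, denom_code)      # close, but not identical
```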
## To reproduce
Steps to reproduce the behavior:
1. Checkout the branch at https://github.com/manuelciosici/transformers/tree/reveal_broken_adamw:
2. Run the unit tests in `tests/test_optimization.py`
3. Tests `test_compare_adamw_no_weight_decay` and `test_compare_adamw_with_weight_decay` should fail (see the attached [failed_tests.txt](https://github.com/huggingface/transformers/files/7609907/failed_tests.txt))
## Expected behavior
The two implementations of AdamW should match their parameter updates.
<!-- A clear and concise description of what you would expect to happen. -->
## Proposed fix
Checkout the branch at https://github.com/manuelciosici/transformers/tree/fix_adamw . It contains both the unit tests above and a fix for both bugs mentioned above.
I can make a PR once we agree on the two bugs and the fix. | 11-26-2021 16:50:56 | 11-26-2021 16:50:56 | Thank you for submitting this bug report and the investigation, @manuelciosici
> Look at lines 369-370. The weight decay is multiplied with `p.data` which no longer corresponds to `theta_{t-1}` since `p.data` was modified in line 369.
You must have meant line 359 in the sentence above.
Your investigation looks correct on both accounts, @manuelciosici. I was able to follow through your helpful notes.
I suspect the denominator buglet was an optimization since epsilon is tiny and it's there only to avoid a division by zero. The missing part of the denominator is `eps*(sqrt(bias_correction2)-1)`. Since you can choose a slightly different epsilon w/o breaking the algorithm then I believe this missing part is practically irrelevant. Please correct me if I'm wrong. If the current code remains unchanged we should definitely add a comment that eps1+eps_2 = eps3 is still an eps.
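A quick numeric sanity check of that expression, with made-up values (none of this comes from the codebase):
```python
import math

eps, bias_correction2, v = 1e-6, 0.1, 0.04  # illustrative values
denom_code = math.sqrt(v) + eps                                  # what the code builds as `denom`
denom_paper = math.sqrt(v) + eps * math.sqrt(bias_correction2)   # what line 10 implies once
                                                                 # sqrt(bias_correction2) is folded
                                                                 # into the step size
print(denom_paper - denom_code)                  # == eps * (sqrt(bias_correction2) - 1)
print(eps * (math.sqrt(bias_correction2) - 1))   # same tiny number, bounded by eps
```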
The decay part applied to t instead of t-1 does appear to be significant.
Since I wasn't the one involved in writing this code (I only did a small adjustment) I will let @thomwolf and perhaps @LysandreJik and @sgugger confirm.
p.s. I did see references where the choice of epsilon was important.
<|||||>I was not the one who made the adjustments, which may have been made on purpose for some reason.
I don't think the current behavior should be changed (even if different from the original paper) as it might break all reported results on all our examples, and this implementation of AdamW has worked quite well on all our tasks. Furthermore, PyTorch now has an implementation of AdamW, so people should use that one for a "bug-free" version.<|||||>@manuelciosici, if you could indulge my curiosity - what was the impetus for checking the AdamW implementation?
I'm just trying to understand the actual impact of this different implementation on the training stability/convergence/etc.
Thank you.<|||||>@stas00 I was reading it as a reference implementation while trying to understand `deepspeed`'s CPU AdamW implementation.
One thing to note is that magnitude of both bugs is a function of AdamW's hyper-parameters (i.e., it is influenced by learning rate, epsilon, and weight decay). For example, for [prompt tuning](https://aclanthology.org/2021.emnlp-main.243/) where learning rates can be as high as `0.3`, the effect of buggy weight decay will be more pronounced.
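A rough back-of-the-envelope illustration of that point (all numbers assumed): since the buggy decay uses `theta_t = theta_{t-1} - lr * adam_step`, the per-step error relative to correct decoupled decay is about `weight_decay * lr**2 * |adam_step_direction|`, i.e. quadratic in the learning rate.
```python
weight_decay, step_direction = 0.01, 1.0   # assumed magnitudes, purely for illustration
for lr in (1e-3, 0.3):
    print(lr, weight_decay * lr**2 * step_direction)
# 0.001 -> 1e-08 vs 0.3 -> 9e-04: the same bug is ~90,000x larger at lr=0.3
```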
@sgugger I understand the concerns that fixing the optimizer will lead to misalignment with existing examples and documentation. However, ignoring the bugs is not good either. Since opening the issue, I found that [I was not the first one to discover the weight decay issue](https://discuss.huggingface.co/t/adamw-implementation/8426). I expect that, if the code stays as is, the two bugs will be rediscovered periodically.
An alternative to ignoring the bugs would be for `transformers` to deprecate its AdamW implementation with a removal target of, say `transformers>=5.0.0` (or `6.0.0` if a longer sunset is necessary) and add a comment in the AdamW implementation explaining the two bugs. This way, current examples & documentation can continue to work as expected, while users migrate to `torch`'s AdamW. How does this sound?<|||||>Yes, I agree with your last suggestion @manuelciosici and I think this is the right way to go. Deprecation with a removal of v5.0.0 sounds about right, and then the `Trainer` can have an additional `TrainingArguments` that one can use to already use the right implementation of AdamW from PyTorch instead of our class.
Are you interested in making a PR for this, @manuelciosici?<|||||>Additionally to @sgugger's notes: the updated AdamW API should include a new arg like `disable_deprecate_warning=False` - so that by default the deprecation is printed but the user should be able to shut it off if they want to continue using this version.
> then the Trainer can have an additional TrainingArguments that one can use to already use the right implementation of AdamW from PyTorch instead of our class.
The question is whether we switch HF Trainer to use torch's implementation by default or not.
Also, if we are rewriting the optimizer API, perhaps we can add a generic `--optim` flag which could support various optimizers. I'm proposing that since I'm going to suggest shortly for HF Trainer to support BNB https://github.com/facebookresearch/bitsandbytes which saves 3/4 of optim memory and so far tested to work great.
So we can have:
* `--optim adamw_hf`
* `--optim adamw_torch`
* `--optim adamw_bnb`
* `--optim some_other`
<|||||>@sgugger Yes. I can write a PR deprecating AdamW, including @stas00 's suggestions.
@stas00 BNB sounds exciting. How should we split the work into PRs? I can also help with BNB. I think that could be fun.<|||||>We don't need to worry about BNB here, I was just suggesting to add a generic `--optim` HF Trainer arg, rather than for example `--use-torch-adamw`, which opens up opportunities for new optimizers to be supported.
Adding BNB to transformers is a bit intricate since it calls for an embedding layernorm which we currently don't have. I will open an issue where we can discuss the details. That additional layernorm proved to be essential for the stability of the gpt-104B training we are working on at BigScience.
<|||||>The plan is not to add any new optimizer to the Transformers library. It is a library of models, not optimizers, and no one in the team has the bandwidth to support other optimizers. We are deprecating the few we have. Adding support for optimizers implemented in other libraries is completely fine however.
Adding an `--optim` argument is fine, though the default of the learning rate might not be suitable for any optimizer added, so we might have to be careful with the options accepted.
> The question is whether we switch HF Trainer to use torch's implementation by default or not.
Given the fact it is breaking, the Trainer should stay with the current optimizer for now, and we can either switch in v5
or when someone has checked all examples and seen comparable results, whichever comes first.<|||||>Apologies for not being clear. I was not proposing to add a new optimizer, but to add integration for a new optimizer. i.e. we will not need to support it. It's just that it's not just importing it, but requires some tweaks on our side. I will make a separate issue about it.
> Given the fact is is breaking, the Trainer should stay with the current optimizer for now
OK, so the default remains the current version.
Here is the updated spec then:
So with HF Trainer:
1. default is current `--optim adamw_hf`, but prints deprecation warning which includes info on how to enable torch's version
2. --optim adamw_torch - switched to torch.AdamW
With AdamW class itself
1. the default is to print deprecation warning, unless `no_deprecation_warning=True` is passed.
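For what it's worth, a minimal sketch of what point 1 could look like (only the `no_deprecation_warning` name comes from the spec above, everything else is assumed):
```python
import warnings
from torch.optim import Optimizer

class AdamW(Optimizer):
    def __init__(self, params, lr=1e-3, weight_decay=0.0, no_deprecation_warning=False):
        if not no_deprecation_warning:
            warnings.warn(
                "This implementation of AdamW is deprecated and will be removed in a future "
                "version. Use torch.optim.AdamW instead, or pass no_deprecation_warning=True "
                "to silence this warning.",
                FutureWarning,
            )
        super().__init__(params, {"lr": lr, "weight_decay": weight_decay})
```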
Sylvain, please confirm that this is the correct spec before @manuelciosici starts on it. Thank you.<|||||>Thanks for the summary @stas00, this looks great to me!<|||||>@stas00 Thank you. I work on this during the weekend.<|||||>The NVIDIA engineers have been profiling a few things and torch's AdamW is faster than ours (apparently apex's is even faster), so I will add this to the performance docs once I'm able to benchmark this when your PR is ready, @manuelciosici
https://github.com/huggingface/transformers/pull/14708
<|||||>It appears that `apex.optimizers.FusedAdam` is even faster. So we can plug that one in as well.<|||||>This implementation of AdamW, although slower, seems to give me better performance than the pytorch one in terms of acc and F1. I'm not sure if I'm the only one with this result, but if this is the case for multiple people, deprecating it could be a shame.
<|||||>The key to understand is that it's not implementing AdamW, but a slightly different algorithm.
Users expect exact algorithm implementation out of the box and if it's not exact it should be named differently.
Perhaps `AdamWHF`? |
transformers | 14,538 | closed | cannot import name 'DataCollatorForSeq2Seq' from 'transformers' | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.10.0
### Who can help
@sgugger
## Models:
- encoder-decoder models
## To reproduce
Steps to reproduce the behavior:
`git clone https://github.com/huggingface/transformers.git`
`cd transformers`
`pip install -e .`
(screenshots: the `ImportError: cannot import name 'DataCollatorForSeq2Seq' from 'transformers'` traceback raised in the notebook)
The class DataCollatorForSeq2Seq is in data_collator.py. I don't know why this error occurs. Could you help me to solve it?
| 11-26-2021 16:43:17 | 11-26-2021 16:43:17 | After restarting jupyter, the problem is solved. |
transformers | 14,537 | closed | Is the attention_mask in BertSelfAttention applied correctly? | https://github.com/huggingface/transformers/blob/69511cdcaec8c1c7f0d7f378964eca0ce74ed5a8/src/transformers/models/bert/modeling_bert.py#L325-L327
Relevant Models:
- BERT: @LysandreJik
I was just working on adjusting Bert to my custom architecture, and when editing the BertSelfAttention module, I have noticed a very strange couple of lines (see the linked code). Shouldn't the masking be applied multiplicatively instead of additively? :thinking:
I'm happy to be proven wrong and learn a new thing, but it seemed worth bringing up. | 11-26-2021 13:59:11 | 11-26-2021 13:59:11 | These issues may be of help: https://github.com/huggingface/transformers/issues/13555 https://github.com/huggingface/transformers/issues/542
Apparently it would still help quite a bit to have comments explaining what's happening - would you like to try your hand at it?<|||||>Oooh, yeah, didn't realize it was right before softmax, I had to solve this issue before...
I found the code that converts the binary mask into the log-scale mask as well.
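For anyone else reading along, a toy version of what that conversion does (made-up tensors, not the actual library code):
```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0]])              # 1 = real token, 0 = padding
extended_mask = (1.0 - attention_mask.float()) * -10000.0  # 0 for real tokens, -10000 for padding

scores = torch.randn(1, 4)                                 # fake attention scores for one query
probs = torch.softmax(scores + extended_mask, dim=-1)      # additive mask applied *before* softmax
print(probs)                                               # the padded position gets ~0 weight
```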
Sure, I will update it and post a pr during the weekend. :)<|||||>hmm this piece of code is also present in multiple models I'm sure, all of the models using masked attention could use documenting, correct? @LysandreJik <|||||>Yes, if you could add 1-2 lines of doc to the models concerned, that would be of great help! Thanks, @avolny!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,536 | closed | Add CodeParrot 🦜 codebase | # Add CodeParrot 🦜 codebase
This PR adds the CodeParrot 🦜 codebase to `examples/research_projects/`. The folder `scripts/` includes files for the following steps:
- data preprocessing
- model initialization
- training with accelerate
- validation loss
- evaluate on HumanEval benchmark
In addition the README gives an overview and highlights the results. The requirements file fixes the dependencies.
cc @LysandreJik @thomwolf | 11-26-2021 13:26:10 | 11-26-2021 13:26:10 | > * For each step (preprocessing, training, evaluation etc), I think it would be useful to have a CLI with some default values. Currently, there are some hard-coded variables / configs that require the end-user to read the source code to understand what's going on.
The reason I went without a CLI is that most scripts are quite lean and a CLI would decrease readability. E.g. the initialisation script would be significantly longer with a CLI. Due to the required compute scale I also expect this to be run by "advanced" users. This reminded me that the training script has to be executed from the accelerate CLI so I'll add a remark about this in any case.
What do you think @LysandreJik? |
transformers | 14,535 | closed | [flax] unfreeze initial cache in gpt models | # What does this PR do?
Fix flax `generate` for GPT models when the initial `seq_len` is 1.
The issue is the init_cache method of flax GPT2 returns the cache as a `FrozenDict`, but the model's forward returns cache as a `dict`.
It works with seq_len > 1 because, when seq_len > 1, we call the body fun outside of the while loop -> body calls forward -> which returns cache as a `dict`.
Then we iterate over `body_fn` using `lax.while_loop`, and it works as the type signature of `cache` is similar.
It breaks for seq_len = 1 because, when it's 1, we directly call `body_fn` with `lax.while_loop`, so here the initial type of cache is `FrozenDict` but the forward in `body_fn` returns `dict`, which raises this error
```body_fun output and input must have same type structure, got PyTreeDe...```
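A minimal sketch of the idea behind the fix (the exact calls here are assumed for illustration, not copied from the PR):
```python
from flax.core.frozen_dict import unfreeze
from transformers import FlaxGPT2LMHeadModel

model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")
# init_cache hands back a FrozenDict, while the forward pass returns the cache as a plain
# dict; unfreezing up front keeps the pytree structure identical across while_loop iterations.
past_key_values = unfreeze(model.init_cache(batch_size=1, max_length=20))
```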
cc @Narsil | 11-26-2021 12:27:22 | 11-26-2021 12:27:22 | |
transformers | 14,534 | closed | Fixes | Final updates to quicktour. | 11-26-2021 09:35:01 | 11-26-2021 09:35:01 | |
transformers | 14,533 | closed | Quicktour updates | Tiny fixes for the quicktour cc @cfregly @philschmid | 11-26-2021 09:08:35 | 11-26-2021 09:08:35 | This ensures that the `!` are inserted when converted to notebooks, and will only install TensorFlow in TF envs, PyTorch in PT envs, and both in envs that do both |
transformers | 14,532 | closed | Difference in the length of positional embeddings produces different results | Hi, I am currently experimenting with how the length of dialogue histories in one input affects the performance of dialogue models using multi-session chat data. While I am working on **BlenderbotSmallForConditionalGeneration** from Huggingface's transformers with the checkpoint "blenderbot_small-90M", I encountered results which are not understandable for me.
Since I want to feed long inputs (e.g. 1024, 2048, 4096...), I expanded the positional embedding matrix of the encoder, since it is initialized with the size (512, 512). I copied the first 512 embeddings and appended them repeatedly to make the embedding matrix the size I want. On the other hand, I truncated the position embedding matrix of the decoder to (128, 512), since the max target length is 128.
```python
from transformers import BlenderbotSmallForConditionalGeneration
from transformers.models.blenderbot.modeling_blenderbot import BlenderbotLearnedPositionalEmbedding
from torch import nn
import torch

# checkpoint as described above
model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")

src_max_len = SRC_MAX_LEN  # 4096, 2048, 1024...
trg_max_len = 128

def reset_position_embeddings():
    # Expand encoder position embedding.
    encoder_weights = model.model.encoder.embed_positions.weight.data
    model.model.encoder.embed_positions = BlenderbotLearnedPositionalEmbedding(src_max_len, model.config.d_model)
    num_repeats = src_max_len // model.config.max_position_embeddings
    new_encoder_weights = encoder_weights.repeat(num_repeats, 1)
    with torch.no_grad():
        model.model.encoder.embed_positions.weight = nn.Parameter(new_encoder_weights)
    assert torch.equal(model.model.encoder.embed_positions.weight.data, encoder_weights.repeat(num_repeats, 1))
    model.config.max_length = src_max_len
    model.config.max_position_embeddings = src_max_len

    # Truncate decoder position embedding.
    decoder_weights = model.model.decoder.embed_positions.weight.data
    model.model.decoder.embed_positions = BlenderbotLearnedPositionalEmbedding(trg_max_len, model.config.d_model)
    with torch.no_grad():
        model.model.decoder.embed_positions.weight = nn.Parameter(decoder_weights[:trg_max_len, :])
    assert torch.equal(model.model.decoder.embed_positions.weight, decoder_weights[:trg_max_len])
```
After modifying the model, I trained it with different lengths of source data. Since the maximum length of the source inputs is shorter than 2048 and the target responses are the same, the results from the 4096 and 2048 versions should be identical, even if there is a difference in the size of the position embeddings. However, the results were different.
(image: the differing training results of the two runs)
This is odd since I checked all other variables, including the model parameters except the expanded parts of the position embeddings, the preprocessed data itself, the order of batches, etc. The reproducibility was guaranteed when I tested other data and models, but the only difference is the size of position embeddings.
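To rule out the embedding table itself, here is a small sanity check one can run (toy sizes, unrelated to the real model) showing that rows beyond the indices actually used never influence the output:
```python
import torch
from torch import nn

torch.manual_seed(0)
small = nn.Embedding(512, 16)          # original-size position table
large = nn.Embedding(1024, 16)         # expanded table
with torch.no_grad():
    large.weight[:512] = small.weight  # copy the shared rows

positions = torch.arange(64)           # an input much shorter than 512
print(torch.equal(small(positions), large(positions)))  # True
```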
I thought although the max length of the embedding matrix is different, the inputs are the same and this should not affect the results. Did I understand correctly, or there is something I am missing? | 11-26-2021 04:34:27 | 11-26-2021 04:34:27 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,531 | closed | Deepspeed and T5-11B for multitask training | Carrying on my conversation here @stas00
https://github.com/huggingface/transformers/issues/9996#issuecomment-968348129
Used the run_translation.py and now my loss is 0.0 :( . This is probably doomed to fail
```
{'loss': 7.2639, 'learning_rate': 0.001, 'epoch': 0.02}
3%|ββββ | 612/24128 [42:13<26:09:12, 4.00s/it]{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.04}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.06}
8%|βββββββββββββ | 1999/24128 [2:15:09<24:43:54, 4.02s/it][2021-11-25 22:01:13,181] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=1995, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-25 22:01:13,181] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=7.902960485741644
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.08}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.1}
```
Script
```
export BS=8;
export PYTHONPATH=../../../src
export USE_TF=0
deepspeed --num_gpus=4 ./run_translation.py \
--model_name_or_path t5-11b \
--output_dir /local/nlp/temp/poetryT5-11B_new \
--evaluation_strategy=epoch \
--do_train \
--train_file /home/tuhin.chakr/gpt3/poetrynew/train.json \
--save_strategy=epoch \
--label_smoothing 0.1 \
--learning_rate 1e-3 \
--adafactor \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 64 \
--num_train_epochs 1 \
--per_device_train_batch_size $BS \
--per_device_eval_batch_size $BS \
--source_lang en \
--target_lang en \
--deepspeed /home/tuhin.chakr/gpt3/transformers/tests/deepspeed/ds_config_zero2.json \
--fp16
```
Data format
```
{"translation": {"en1": "Write a poetic sentence about 'people'", "en2": "In this age what people used to call."}}
{"translation": {"en1": "Write a poetic sentence about 'tale'", "en2": "Where evening is empty, an unfinished tale."}}
{"translation": {"en1": "Write a poetic sentence that ends in a word which rhymes with 'planes'", "en2": "Now the blood freezes in the veins."}}
{"translation": {"en1": "Write a poetic sentence about 'Weighs his spread' and ending in 'behold'", "en2": "Weighs his spread wings, at leasure to behold."}}
{"translation": {"en1": "Write a poetic sentence about 'lips'", "en2": "Her dry lips were tightly closed up."}}
```
```
def preprocess_function(examples):
    inputs = [ex["en1"] for ex in examples["translation"]]
    targets = [ex["en2"] for ex in examples["translation"]]
    inputs = [prefix + inp for inp in inputs]
    model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True)
    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True)
    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
    # padding in the loss.
    if padding == "max_length" and data_args.ignore_pad_token_for_loss:
        labels["input_ids"] = [
            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
        ]
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
| 11-26-2021 03:45:55 | 11-26-2021 03:45:55 | I have a feeling that the issue is not in using deepspeed but somewhere else in your setup.
Let's remove deepspeed for a moment from the equation and try your setup with a single gpu setup with `t5-large` or even `t5-small` - make it work first so that it produces what you expect albeit with a lower quality.
Once this is working you can then progress to a higher model size and eventually you'd just plug deepspeed to work with t5-11b.
It'll also make your debug process much easier since it takes forever to even load t5-11b.
Always start small and simple, then progress to bigger and slightly more complex, and then big and complex.
<|||||>@stas00 thanks I tried with t5-small with and without deepspeed and the loss was non zero it was in the range of 3.6 and was slowly decreasing. I removed the label smoothing before training
```
t5-small with deepspeed / t5-small without deepspeed
{'loss': 3.6752, 'learning_rate': 0.001, 'epoch': 0.02}
{'loss': 3.4976, 'learning_rate': 0.001, 'epoch': 0.04}
{'loss': 3.4253, 'learning_rate': 0.001, 'epoch': 0.06}
8%|ββββββββββββββ | 1999/24128 [08:14<1:25:00, 4.34it/s]
[2021-11-26 10:02:46,946] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=5, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 10:02:46,964] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=133.5718740278581
{'loss': 3.3788, 'learning_rate': 0.001, 'epoch': 0.08}
{'loss': 3.3362, 'learning_rate': 0.001, 'epoch': 0.1}
{'loss': 3.3234, 'learning_rate': 0.001, 'epoch': 0.12}
{'loss': 3.303, 'learning_rate': 0.001, 'epoch': 0.15}
17%|βββββββββββββββββββββββββββ | 3999/24128 [16:20<1:17:30, 4.33it/s]
[2021-11-26 10:10:53,519] [INFO] [logging.py:69:log_dist] [Rank 0] step=4000, skipped=8, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 10:10:53,566] [INFO] [timer.py:181:stop] 0/4000, SamplesPerSec=134.3619251306713
{'loss': 3.2785, 'learning_rate': 0.001, 'epoch': 0.17}
{'loss': 3.2497, 'learning_rate': 0.001, 'epoch': 0.19}
{'loss': 3.238, 'learning_rate': 0.001, 'epoch': 0.21}
{'loss': 3.225, 'learning_rate': 0.001, 'epoch': 0.23}
25%|βββββββββββββββββββββββββββββββββββββββββ | 5999/24128 [24:09<59:07, 5.11it/s]
[2021-11-26 10:18:42,146] [INFO] [logging.py:69:log_dist] [Rank 0] step=6000, skipped=12, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 10:18:42,209] [INFO] [timer.py:181:stop] 0/6000, SamplesPerSec=136.2225860449825
{'loss': 3.2199, 'learning_rate': 0.001, 'epoch': 0.25}
{'loss': 3.2117, 'learning_rate': 0.001, 'epoch': 0.27}
{'loss': 3.1959, 'learning_rate': 0.001, 'epoch': 0.29}
{'loss': 3.179, 'learning_rate': 0.001, 'epoch': 0.31}
33%|βββββββββββββββββββββββββββββββββββββββββββββββββββββ | 7999/24128 [32:08<1:02:08, 4.33it/s]
[2021-11-26 10:26:40,925] [INFO] [logging.py:69:log_dist] [Rank 0] step=8000, skipped=14, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 10:26:40,956] [INFO] [timer.py:181:stop] 0/8000, SamplesPerSec=136.46790814424403
{'loss': 3.1771, 'learning_rate': 0.001, 'epoch': 0.33}
```
I started doing T5-11B with deepspeed
```
{'loss': 6.2645, 'learning_rate': 0.001, 'epoch': 0.02}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.04}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.06}
8%|βββββββββββββ | 1999/24128 [1:52:09<20:27:23, 3.33s/it][2021-11-26 03:07:16,494] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=1995, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 03:07:16,494] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=9.526738918021234
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.08}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.1}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.12}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.15}
17%|ββββββββββββββββββββββββββ | 3999/24128 [3:43:02<18:36:58, 3.33s/it][2021-11-26 04:58:10,385] [INFO] [logging.py:69:log_dist] [Rank 0] step=4000, skipped=3995, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 04:58:10,386] [INFO] [timer.py:181:stop] 0/4000, SamplesPerSec=9.581176077667344
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.17}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.19}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.21}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.23}
25%|βββββββββββββββββββββββββββββββββββββββ | 5999/24128 [5:33:57<16:42:03, 3.32s/it][2021-11-26 06:49:04,614] [INFO] [logging.py:69:log_dist] [Rank 0] step=6000, skipped=5995, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 06:49:04,614] [INFO] [timer.py:181:stop] 0/6000, SamplesPerSec=9.599231332866195
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.25}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.27}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.29}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.31}
33%|ββββββββββββββββββββββββββββββββββββββββββββββββββββ | 7999/24128 [7:24:52<14:51:53, 3.32s/it][2021-11-26 08:40:00,444] [INFO] [logging.py:69:log_dist] [Rank 0] step=8000, skipped=7995, lr=[0.001, 0.001], mom=[0.0, 0.0]
[2021-11-26 08:40:00,445] [INFO] [timer.py:181:stop] 0/8000, SamplesPerSec=9.607671816549383
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.33}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.35}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.37}
```<|||||>t5-large with deepspeed back to zero lr
```
Time to load utils op: 0.0009222030639648438 seconds
[INFO|trainer.py:1196] 2021-11-26 13:25:43,185 >> ***** Running training *****
[INFO|trainer.py:1197] 2021-11-26 13:25:43,185 >> Num examples = 772073
[INFO|trainer.py:1198] 2021-11-26 13:25:43,185 >> Num Epochs = 1
[INFO|trainer.py:1199] 2021-11-26 13:25:43,185 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1200] 2021-11-26 13:25:43,186 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1201] 2021-11-26 13:25:43,186 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1202] 2021-11-26 13:25:43,186 >> Total optimization steps = 24128
2%|ββββ | 500/24128 [03:08<2:47:02, 2.36it/s][WARNING|trainer_pt_utils.py:803] 2021-11-26 13:28:52,075 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:28:52,075 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:28:52,076 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:28:52,076 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 0.0729, 'learning_rate': 0, 'epoch': 0.02}
4%|βββββββ | 1000/24128 [06:12<2:20:51, 2.74it/s][WARNING|trainer_pt_utils.py:803] 2021-11-26 13:31:56,044 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:31:56,044 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:31:56,045 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 13:31:56,045 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 0.0765, 'learning_rate': 0, 'epoch': 0.04}
```<|||||>So first good to see that you have the non-DS setup working.
I don't understand this in your log 2 comments up. Is it with or without DS?
> t5-small with deepspeed / t5-small without deepspeed
Re: last comment:
As the warning says, the optimizer hasn't started running, so it doesn't have an LR yet, and just returns 0.
So we need to figure out why the optimizer isn't running.
For example you can edit the ds config file to remove the optimizer section and it will use Transformers' AdamW instead of the DS's one.
Meanwhile could you help me to reproduce the issue on my side? Could you perhaps make a tarball that I could run with the data and your customizations? So that I could run the same setup as you do<|||||>I also run a sanity check with this and verified that in general things work correctly:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 1e-3 --logging_first_step --logging_steps 2 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --max_train_samples 500 --deepspeed tests/deepspeed/ds_config_zero2.json
[...]
{'loss': 2.9952, 'learning_rate': 0.0, 'epoch': 0.06}
{'loss': 2.7144, 'learning_rate': 0.001, 'epoch': 0.12}
{'loss': 2.809, 'learning_rate': 0.001, 'epoch': 0.25}
{'loss': 2.4788, 'learning_rate': 0.001, 'epoch': 0.38}
{'loss': 2.2926, 'learning_rate': 0.001, 'epoch': 0.5}
```
So something is different about your setup.<|||||>You are trying with t5-small in your sanity check . t5-small works for me too with deepseed as well
as without deepspeed. It gives me loss zero for t5-11b with same code. I also am using adafactor instead of adam since I am trying to reproduce the same hyperparameters as T0pp<|||||>Additionally, I have noticed you're using `--adafactor`, which until recently didn't have a way to access LR as it's an internal state. Some months back I added a hack to have it extract the LR, but it's not great.
So it's very likely this could be related as well. e.g. try to use the default ds_config w/ optimizer and remove `--adafactor` and see if things are different?
<|||||>> You are trying with t5-small in your sanity check . t5-small works for me too with deepseed as well as without deepspeed. It gives me loss zero for t5-11b with same code. I also am using adafactor instead of adam since I am trying to reproduce the same hyperparameters as T0pp
Understood.
As I explained earlier, for some reason the optimizer isn't **stepping** in your t5-11b example.
So we need to figure out why that is.
You can also try the larger ones first - t5-base, t5-large
<|||||>I removed adafactor. This is for t5-large
my config
```
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.9, 0.999],
"eps": 1e-06,
"weight_decay": 0.0
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 0
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2.000000e+08,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2.000000e+08,
"contiguous_gradients": true
},
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 8,
"gradient_clipping": 1.0,
"steps_per_print": 2.000000e+03,
"wall_clock_breakdown": false
}
```
```
export BS=8;
PYTHONPATH=../../../src
USE_TF=0
deepspeed --num_gpus=4 ./run_translation.py \
--model_name_or_path t5-large \
--output_dir /local/nlp/temp/poetryT5-11B_new \
--evaluation_strategy=epoch \
--do_train \
--train_file /home/tuhin.chakr/gpt3/poetrynew/train.json \
--validation_file /home/tuhin.chakr/gpt3/poetrynew/val.json \
--save_strategy=epoch \
--learning_rate 1e-3 \
--adam_eps 1e-06 \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 64 \
--num_train_epochs 1 \
--per_device_train_batch_size $BS \
--per_device_eval_batch_size $BS \
--source_lang en_XX \
--target_lang en_XX \
--deepspeed /home/tuhin.chakr/gpt3/transformers/tests/deepspeed/ds_config_zero2.json \
--fp16
~
```
```
Time to load utils op: 0.002509593963623047 seconds
[INFO|trainer.py:1196] 2021-11-26 22:54:54,098 >> ***** Running training *****
[INFO|trainer.py:1197] 2021-11-26 22:54:54,098 >> Num examples = 772073
[INFO|trainer.py:1198] 2021-11-26 22:54:54,098 >> Num Epochs = 1
[INFO|trainer.py:1199] 2021-11-26 22:54:54,098 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1200] 2021-11-26 22:54:54,098 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1201] 2021-11-26 22:54:54,098 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1202] 2021-11-26 22:54:54,098 >> Total optimization steps = 24128
2%|ββββ | 500/24128 [03:10<2:36:00, 2.52it/s][WARNING|trainer_pt_utils.py:803] 2021-11-26 22:58:04,534 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 22:58:04,534 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 22:58:04,534 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 22:58:04,534 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 0.0729, 'learning_rate': 0, 'epoch': 0.02}
4%|βββββββ | 1000/24128 [06:19<2:26:00, 2.64it/s][WARNING|trainer_pt_utils.py:803] 2021-11-26 23:01:13,601 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:01:13,601 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:01:13,601 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:01:13,601 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 0.0765, 'learning_rate': 0, 'epoch': 0.04}
6%|ββββββββββ | 1500/24128 [09:29<2:22:56, 2.64it/s][WARNING|trainer_pt_utils.py:803] 2021-11-26 23:04:23,358 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:04:23,358 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:04:23,358 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 23:04:23,358 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 0.0185, 'learning_rate': 0, 'epoch': 0.06}
```<|||||>In general try to use `auto` values in the ds config file, so that you don't have to sync them manually. HF Trainer will set them up correctly for you on the fly.
But I don't see any fault with your config.
And you haven't tried t5-base, t5-large, t5-3b to see if they work and it's an issue specifically with t5-11b.
Can you please send me a sample of your data you train with - if it's not for a public eye, let me know. It'd be easier to experiment directly rather than ask you to do this and that all the time.
And I suppose you have a custom code - best to send me a tarball of the whole thing (custom script+data), so that I don't have to spend time sorting it out. Thanks.
p.s. I don't actually have access to A100 at the moment, but I hope to sort it out on a smaller gpu.<|||||>I can't share it publicly on this thread but I emailed you the zip file containing code and data
I emailed at your email id mentioned here https://stasosphere.com/<|||||>missing your custom `run_translation.py`<|||||>I made changes already in the code run_translation.py
Check for this function and you will know
```
def preprocess_function(examples):
inputs = [ex["en1"] for ex in examples["translation"]]
targets = [ex["en2"] for ex in examples["translation"]]
```<|||||>Could you please try after applying this patch to deepspeed:
```
# patch.txt
diff --git a/deepspeed/runtime/zero/stage2.py b/deepspeed/runtime/zero/stage2.py
index b995e4d..8df4997 100755
--- a/deepspeed/runtime/zero/stage2.py
+++ b/deepspeed/runtime/zero/stage2.py
@@ -1622,6 +1622,14 @@ class FP16_DeepSpeedZeroOptimizer(object):
prev_scale = self.loss_scale
self._update_scale(self.overflow)
if self.overflow:
+
+ if dist.get_rank() == 0:
+ logger.info(
+ "[deepscale] OVERFLOW! Rank {} Skipping step. Attempted loss scale: {}, "
+ "reducing to {}".format(dist.get_rank(),
+ prev_scale,
+ self.loss_scale))
+
see_memory_usage('After overflow before clearing gradients')
self.zero_grad()
if self.cpu_offload:
```
```
git clone https://github.com/microsoft/DeepSpeed
cd DeepSpeed
git apply patch.txt
pip install -e .
```
[patch.txt](https://github.com/huggingface/transformers/files/7611262/patch.txt)
This should now tell if you OVERFLOW happens and that's why it skips the `step`
PR: https://github.com/microsoft/DeepSpeed/pull/1593
<|||||>Does this solve the issue ? I think for t5-large I was getting 0 LR however for T5-11b loss was zero. I am just trying to understand here<|||||>```
[2021-11-27 00:36:54,800] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 98/24128 [00:40<2:40:10, 2.50it/s][2021-11-27 00:36:55,194] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 99/24128 [00:40<2:39:27, 2.51it/s][2021-11-27 00:36:55,588] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 100/24128 [00:40<2:39:03, 2.52it/s][2021-11-27 00:36:55,985] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 101/24128 [00:41<2:39:00, 2.52it/s][2021-11-27 00:36:56,390] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 102/24128 [00:41<2:39:53, 2.50it/s][2021-11-27 00:36:56,789] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 103/24128 [00:41<2:39:50, 2.51it/s][2021-11-27 00:36:57,210] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 104/24128 [00:42<2:42:25, 2.47it/s][2021-11-27 00:36:57,613] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 105/24128 [00:42<2:42:12, 2.47it/s][2021-11-27 00:36:58,024] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 106/24128 [00:43<2:42:48, 2.46it/s][2021-11-27 00:36:58,424] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 107/24128 [00:43<2:42:01, 2.47it/s][2021-11-27 00:36:58,826] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 108/24128 [00:44<2:41:43, 2.48it/s][2021-11-27 00:36:59,219] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 109/24128 [00:44<2:40:21, 2.50it/s][2021-11-27 00:36:59,621] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 110/24128 [00:44<2:40:35, 2.49it/s][2021-11-27 00:37:00,014] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 111/24128 [00:45<2:39:33, 2.51it/s][2021-11-27 00:37:00,407] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 112/24128 [00:45<2:38:53, 2.52it/s][2021-11-27 00:37:00,805] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 113/24128 [00:46<2:38:58, 2.52it/s][2021-11-27 00:37:01,200] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 114/24128 [00:46<2:38:45, 2.52it/s][2021-11-27 00:37:01,596] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 115/24128 [00:46<2:38:37, 2.52it/s][2021-11-27 00:37:01,998] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 116/24128 [00:47<2:39:18, 2.51it/s][2021-11-27 00:37:02,421] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 117/24128 [00:47<2:42:16, 2.47it/s][2021-11-27 00:37:02,814] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 118/24128 [00:48<2:40:49, 2.49it/s][2021-11-27 00:37:03,210] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 119/24128 [00:48<2:40:06, 2.50it/s][2021-11-27 00:37:03,616] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|β | 120/24128 [00:48<2:40:49, 2.49it/s][2021-11-27 00:37:04,008] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 121/24128 [00:49<2:39:36, 2.51it/s][2021-11-27 00:37:04,404] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 122/24128 [00:49<2:39:15, 2.51it/s][2021-11-27 00:37:04,797] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 123/24128 [00:50<2:38:34, 2.52it/s][2021-11-27 00:37:05,193] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 124/24128 [00:50<2:38:33, 2.52it/s][2021-11-27 00:37:05,591] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 125/24128 [00:50<2:38:42, 2.52it/s][2021-11-27 00:37:05,989] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 126/24128 [00:51<2:38:51, 2.52it/s][2021-11-27 00:37:06,386] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 127/24128 [00:51<2:38:52, 2.52it/s][2021-11-27 00:37:06,782] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
1%|β | 128/24128 [00:51<2:38:41, 2.52it/s][2021-11-27 00:37:07,177] [INFO] [stage2.py:1628:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
```
Yes so I guess OVERFLOW is happening <|||||>No, it's not solving the issue - I just added a diagnostic logging. It was already in `zero3.py` - so I just ported it to `zero2.py` - I will submit a PR to Deepspeed.
So why does it start with loss scale: 1, e.g. when I run with t5-small I get:
(Also added `--logging_steps 2` to the cmd args so you don't have to wait for long to see the logs)
```
0%| | 0/96510 [00:00<?, ?it/s][2021-11-26 21:18:19,660] [INFO] [stage2.py:1627:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536
0%| | 1/96510 [00:00<10:57:06, 2.45it/s][2021-11-26 21:18:19,753] [INFO] [stage2.py:1627:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 21:18:19,754 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 4.9414, 'learning_rate': 0, 'epoch': 0.0}
0%| | 2/96510 [00:00<10:57:06, 2.45it/s][2021-11-26 21:18:19,848] [INFO] [stage2.py:1627:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
0%| | 3/96510 [00:00<4:41:53, 5.71it/s][2021-11-26 21:18:19,940] [INFO] [stage2.py:1627:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
[WARNING|trainer_pt_utils.py:803] 2021-11-26 21:18:19,941 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 5.4297, 'learning_rate': 0, 'epoch': 0.0}
0%|
```
In the ds config file:
```
"initial_scale_power": 16,
```
which is 2**16, hence you can see that its first step on my t5-small setup is:
```
Attempted loss scale: 65536, reducing to 65536
```
well, it's actually a minor bug, but ignore it, as the next one does the right thing:
```
[2021-11-26 21:18:19,753] [INFO] [stage2.py:1627:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
```
but in your case it appears that `"initial_scale_power": 0,` which is 2**0, but you pasted your config and it's 16.
need to figure out how it jumped to:
```
Attempted loss scale: 1, reducing to 1
```
instead of starting with 2**16.
so it gets an overflow and it's already at loss scale 1, so it can't go anywhere from here.<|||||>I can reproduce your issue with `t5-large`, so this is good as now I should be able to sort it out or at least communicate the problem to the Deepspeed team.
```
OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
```
I hope to have time tomorrow to debug this. <|||||>zero3 does the right thing, starting with `65536`, but it too goes down to 1. it just skips one degree down per step in a different fashion.
if you want to experiment before I get a chance, the next step is for you to try `t5-large` w/o deepspeed as you don't need it.
And it fails too:
```
export BS=8; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py --model_name_or_path t5-large --output_dir output_dir --evaluation_strategy=epoch --do_train --train_file ../poetrynew/train.json --validation_file ../poetrynew/val.json --save_strategy=epoch --learning_rate 1e-3 --adam_eps 1e-06 --overwrite_output_dir --max_source_length 64 --max_target_length 64 --num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --source_lang en_XX --target_lang en_XX --fp16 --logging_steps 2
[...]
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 0.001, 'epoch': 0.0}
```
**So your issue is not with deepspeed, but either your code or `transformers`.**
(I just left the deepspeed launcher, but it's not running deepspeed)
<|||||>OK, your issue is `--fp16`. t5 and most other models trained in bf16 have huge issues with fp16. (Search Issues if you're curious)
bf16 has a much larger dynamic range than fp16, and models trained in the former often overflow on the first step in fp16. e.g. mt5 overflows even on a small model on the first step.
Removing `--fp16` (and disabling it in deepspeed if you use the latter) fixes the problem.
But you want speed of course, so here is what you can do next:
1. a workaround for overflow to continue using `--fp16`: https://github.com/huggingface/transformers/pull/10956 - works for some people
2. a WIP `--bf16` PR (since you're on A100) https://github.com/huggingface/transformers/pull/13207
3. finetune in fp32 - much slower on pre-Amphere cards, but pytorch allows you to enable TF32 on Amphere - so you should have speed somewhat closer to fp16 while using the normal fp32 mode.
for 3. make sure you use torch>=1.10 and enable:
```
torch.backends.cuda.matmul.allow_tf32 = True
```
https://pytorch.org/docs/master/notes/numerical_accuracy.html#tensorfloat-32-tf32-on-nvidia-ampere-devices
I recommend you try 3 first, then 2, and then 1.<|||||>And DS has recently added bf16 for
https://www.deepspeed.ai/docs/config-json/#bfloat16-options
```
"bfloat16": {
"enabled": true
}
```
so that's option 4 to try with deepspeed - just replace the float16 section with the above one and don't use `--fp16`.
I think it only works with z2.<|||||>Stas you are amazing and I appreciate all the help and fast turnaround. I am just trying to understand if I use OPTION 3 (fp32) won't it give me OOM eventually? I just wanted to let you know my entire research questions tests on the ability to finetune T5-11B so unless that works t5-large/small/3B doesn't really help me
Just to be sure and consistent I have 4 A100 GPUs, so if you can tell me what would be the best way for me to use T5-11B. I am trying to reproduce (https://arxiv.org/abs/2110.08207) and honestly its been a bit difficult for me to get to train T5-11B .:(<|||||>I got t5-large to work with fp32 but ofcourse got OOM with batch size 1 fp32 T5-11B zero2. Appreciate any help here<|||||>Option 4 gave me this
```
File "./run_translation.py", line 622, in <module>
main()
File "./run_translation.py", line 539, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1317, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1857, in training_step
loss = self.compute_loss(model, inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1889, in compute_loss
outputs = model(**inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/engine.py", line 1599, in forward
loss = self.module(*inputs, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1574, in forward
encoder_outputs = self.encoder(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1004, in forward
layer_outputs = layer_module(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 639, in forward
self_attention_outputs = self.layer[0](
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward
attention_output = self.SelfAttention(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 472, in forward
query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/functional.py", line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: expected scalar type Float but found BFloat16
```<|||||>The first step is to make things work w/o overflow, the second step is dealing with memory.
As bf16 is all new it will take some time to fully sort it out. You can try solution (1) as well - it might just work.
So your fp32 OOM was w/ or w/o deepspeed?
fp32 takes about the same amount of memory as fp16 mixed precision, because the latter still allocates 4 bytes for master weights per param. So the latter saves some memory in some places, but uses more memory in others. fp16 amp is really about up to 5x speed up, not saving memory.
Here are the next things to try:
**Experiment A**. Try deepspeed with both fp16 and bf16 disabled and stage2 (your current setup) and add on top of `run_translation.py` add:
```
import torch
torch.backends.cuda.matmul.allow_tf32 = True
```
how does that fair?
**Experiment B**. Same as A, but use stage 3 in the config file, and ensure your cpu offload is enabled - the default config file from the docs will do.
I of course assume you're also using torch==1.10 and some fairly recent cuda - at least cuda=11.3
----------
re: bf16-support in deepspeed I haven't tried it myself yet as it was literally just added. I will give it a try.
<|||||>Additionally, I know you're trying to use Adafactor, but if nothing else works right away and you're in a hurry one other things to consider is using https://github.com/facebookresearch/bitsandbytes 8-bit AdamW optimizer. It will save you 6 out of 8 bytes per param. This is a huge memory saving, hence the suggestion.
Here is the saving breakdown:
- fp32: from 16 (8+4+4) to 10 (2+4+4) bytes per param
- fp16 or bf16 mixed precision: from 18 (8+4+4+2) to 12 (2+4+4+2) bytes per param
We are testing it (BNB) out right now at BigScience and so far it tracks the normal AdamW performance quality-wise.
The main issue with BNB is that it needs an Embed norm, which transformers models don't have at the moment. So we need to discuss this.
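For reference, swapping in the 8-bit optimizer in a standalone script looks roughly like the sketch below. This is only an illustration - the exact bitsandbytes API may differ between versions, and for the `Trainer` you would pass it via `optimizers=(optimizer, None)`:
```python
import bitsandbytes as bnb
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# 8-bit Adam keeps the optimizer states in int8, saving ~6 of the usual 8 bytes per param
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.01)
```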
<|||||>Turns out with zero3 and fp32 it works. I was training it and it went OOM after 25% of training, so I reduced the batch size to 12 from 16. If it still fails I will fall back to 8. The time it's taking is definitely longer, but at least it's working.
```
Time to load utils op: 0.0010249614715576172 seconds
[INFO|trainer.py:1196] 2021-11-27 22:54:13,786 >> ***** Running training *****
[INFO|trainer.py:1197] 2021-11-27 22:54:13,786 >> Num examples = 772073
[INFO|trainer.py:1198] 2021-11-27 22:54:13,786 >> Num Epochs = 1
[INFO|trainer.py:1199] 2021-11-27 22:54:13,786 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1200] 2021-11-27 22:54:13,786 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:1201] 2021-11-27 22:54:13,786 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1202] 2021-11-27 22:54:13,786 >> Total optimization steps = 12064
2%|βββ | 182/12064 [23:29<24:13:05, 7.34s/it]
{'loss': 3.2285, 'learning_rate': 0.001, 'epoch': 0.04}
{'loss': 3.0005, 'learning_rate': 0.001, 'epoch': 0.08}
{'loss': 2.8807, 'learning_rate': 0.001, 'epoch': 0.12}
17%|ββββββββββββββββββββββββββ | 1999/12064 [4:02:17<20:06:29, 7.19s/it][2021-11-28 02:56:38,748] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=0, lr=[0.001], mom=[[0.9, 0.999]]
[2021-11-28 02:56:38,749] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=8.819555807358741
{'loss': 2.7952, 'learning_rate': 0.001, 'epoch': 0.17}
{'loss': 2.7062, 'learning_rate': 0.001, 'epoch': 0.21}
{'loss': 2.6237, 'learning_rate': 0.001, 'epoch': 0.25}
25%|ββββββββββββββββββββββββββββββββββββββββ | 3010/12064 [6:04:11<18:12:50, 7.24s/it]Traceback (most recent call last):
File "./run_translation.py", line 621, in <module>
main()
File "./run_translation.py", line 538, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1317, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1857, in training_step
loss = self.compute_loss(model, inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1889, in compute_loss
outputs = model(**inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/engine.py", line 1599, in forward
loss = self.module(*inputs, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1611, in forward
decoder_outputs = self.decoder(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1004, in forward
layer_outputs = layer_module(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 665, in forward
cross_attention_outputs = self.layer[1](
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 580, in forward
attention_output = self.EncDecAttention(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 518, in forward
attn_output = self.o(attn_output)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl
result = hook(self, input)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/stage3.py", line 1476, in _pre_forward_module_hook
self.pre_sub_module_forward_function(module)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/stage3.py", line 1588, in pre_sub_module_forward_function
self.param_coordinator.fetch_sub_module(sub_module)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/stage3.py", line 448, in fetch_sub_module
self._all_gather(partitioned_params, async_op=False)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/stage3.py", line 525, in _all_gather
handles = partitioned_params[0].all_gather(
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/partition_parameters.py", line 595, in all_gather
return self._all_gather(param_list, async_op=async_op, hierarchy=hierarchy)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/partition_parameters.py", line 704, in _all_gather
ret_value = self._allgather_params_coalesced(all_gather_list, hierarchy)
File "/home/tuhin.chakr/DeepSpeed/deepspeed/runtime/zero/partition_parameters.py", line 936, in _allgather_params_coalesced
flat_tensor = torch.empty(tensor_size,
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 2; 39.59 GiB total capacity; 35.71 GiB already allocated; 56.94 MiB free; 36.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```<|||||>That's progress.
Additionally have you made sure that you have ` torch.backends.cuda.matmul.allow_tf32 = True` and you're using torch==1.10 and some fairly recent cuda - at least cuda=11.3? You haven't confirmed that.<|||||>Yes
Done and yes confirming
torch.backends.cuda.matmul.allow_tf32 = True
torch==1.10
cuda=11.3<|||||>watch this PR https://github.com/microsoft/DeepSpeed/pull/1453
as soon as it's merged you can have the speed back with bf16/Zero3 under Deepspeed on Ampere.
I guess you can already try it if you are in need.<|||||>I can confirm I could train and evaluate using fp32 and zero3. It does take me 28 hours even after using 4 GPUs
Is there any way to make this faster? I am not entirely sure I understand your last comment - what should I change at my end to enable the PR?<|||||>Thank you for the confirmation, @tuhinjubcse, that it works, just not very fast.
To be faster you want bf16-support, which is a work in progress.
The plan is as follows:
1. complete and merge: https://github.com/huggingface/transformers/pull/13207 (mostly done, just tweaking docs)
2. complete and merge: https://github.com/microsoft/DeepSpeed/pull/1453 (promised to be done soon - I have no control there)
3. meanwhile I will start working on integrating 1 and 2 here: https://github.com/huggingface/transformers/pull/14569 - but I'm blocked by 2.
Once 3 is done, or at least once I have it working, you should be able to use bf16 w/ the Deepspeed/HF integration.
I will let you know once this happens.
<|||||>Many many thanks<|||||>One thing I have been noticing is that my performance when using run_translation (which indirectly uses the trainer) is significantly lower. In my earlier code, where I did not use a trainer, my perplexity loss was much better than what I am getting now. Are there any trainer-specific hyperparameters which I am missing?
Are there any hyperparameters that I might be missing? This was my training code prior to deepspeed. You can see the train function:
https://github.com/tuhinjubcse/tuhinjubcse.github.io/blob/master/fine_tune_lm.py
<|||||>But you're not using --fp16 now, which sometimes makes a huge difference, so it's not the code base that is different. And your original finetune script was using trainer, the script was rewritten but it's the same trainer.
That is not to say there is surely no regression here. We have been talking about adding speed regression tests, but so far we haven't gone beyond talking about it.
Once deepspeed releases bfloat16 support I think you should be back at a fast speed.
I will start working on the integration now, against the Deepspeed PR, now that we have completed --bf16 in transformers.
So perhaps you will have something to experiment with shortly. I will keep you posted.
<|||||>No before finetune_trainer I was using something else as you can see in the link above
As an experiment I was trying model.parallelize with T5-3B just to see what happens without deepspeed, and honestly it's surprising that the evaluation loss is lower for T5-3B with model.parallelize compared to T5-11B with deepspeed.
I would expect since T5-11B is a bigger model it should give better performance anyway
I will put a comparative result of T5-3B using model.parallelize and deep speed. I am wondering if there is performance degradation with deepspeed<|||||>Thank you for clarifying that you were talking about the pre-finetune_trainer. I assumed that it was `finetune_trainer` based on the name of your script, but I haven't read it as it's too long.
OK, so for you to understand what Deepspeed ZeRO does conceptually - it shards the tensors over multiple gpus and then at the point of calculation (forward/backward) it restores the tensors to their normal unsharded state, so the model doesn't see anything different - it has no idea ZeRO even exists. i.e. Deepspeed ZeRO itself doesn't change anything and can't make any difference to the math, and thus you should be getting an identical numerical output w/ or w/o ZeRO.
Now, it's possible that you are using a different optimizer or lr scheduler - when you're using Deepspeed it lets you use your own or provides its own - and they aren't identical most of the time. And you could be mismatching on whatever other hparams are involved. So when comparing such things you need to make sure you are comparing oranges to oranges.
Besides Deepspeed you have `transformers` which also changes over time and it could also have regressions.
Now, bigger models typically give better performance but usually they take much longer to get to the same range of loss.
Now that you understand these things, let's focus on how you could get better results faster, since we want the same thing.
<|||||>Deepspeed/bf16/zero2 should work with https://github.com/huggingface/transformers/pull/14569
Please let me know if you run into any problems if you choose to try that branch. Please follow up directly in that PR's comments if you run into issues.
To use bf16 you need 2 things:
1. you just need to add `--bf16` to your command line
2. use a new config file that enables bf16. The PR includes a sample config file `tests/deepspeed/ds_config_zero2_bf16.json`.
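For reference, the relevant part of such a config file is roughly the snippet below. This is only a sketch - the exact key name has varied between DeepSpeed versions (`bf16` vs `bfloat16`), so the sample file shipped with the PR is the authoritative reference:
```json
{
  "bf16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
```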
On the deepspeed side I'm not sure if you just need deepspeed@master, or you actually need this branch: https://github.com/microsoft/DeepSpeed/pull/1453 - I was testing with the latter.
zero3 doesn't seem to be ready on the deepspeed side. But it's all ready on the transformers side.
p.s. remember zero2 doesn't shard params, so it will be more memory demanding.
p.p.s. I think I need to do some tweaks to t5 models as well to save more memory for bf16 - I will have a look in the next few days.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> och
For future reference, I could help with LR logging from `adafactor` as I have successfully monitored it and used it for research. However, I think I was using a setting where the LR is not automatically inferred by the optimizer, which is what the Google folks actually do when optimising this. However, I only trained `T5-base` variants so far... Soon to try `XXL` :)<|||||>> Stas you are amazing and I appreciate all the help and fast turnaround. I am just trying to understand if I use OPTION 3 (fp32) won't it give me OOM eventually? I just wanted to let you know my entire research question rests on the ability to finetune T5-11B so unless that works t5-large/small/3B doesn't really help me
>
> Just to be sure and consistent I have 4 A100 GPUs, so if you can tell me what would be the best way for me to use T5-11B. I am trying to reproduce (https://arxiv.org/abs/2110.08207) and honestly its been a bit difficult for me to get to train T5-11B .:(
Are these 40GB or 80 GB A100s?<|||||>I'm running into this exact same issue except with bf16 and llama 13b+ combo.
Turning off bf16 fixes it, but I then can't fit 65b onto my GPUs. Any idea why bf16 is causing problems?<|||||>> I'm running into this exact same issue except with bf16 and llama 13b+ combo.
>
> Turning off bf16 fixes it, but I then can't fit 65b onto my GPUs. Any idea why bf16 is causing problems?
I also hit the same error with a ds_stage2/bf16 setup and the baichuan13b model. Regarding `Turning off bf16 fixes it` - does that mean using fp32, or using fp16?<|||||>I ran into issues with fp16 as well, so I used fp32. |
transformers | 14,530 | closed | Logits warper for batch generation | # π Feature request
Is there a way to modify `logits_warper`, so that it will apply different parameters (`top_p`, `top_k`, `temperature`) for each item in a batch instead of applying the same parameters for all items in the batch:
https://github.com/huggingface/transformers/blob/f25a9332e8d091398ce96c462e02a467943c8eb9/src/transformers/generation_utils.py#L1562
## Motivation
It will be very useful for inference because it will increase the throughput. Currently, you cannot really use #7552 for inference because the same parameters apply to the entire batch.
## Your contribution
Can help with implementing this feature and reviewing the PR. | 11-25-2021 20:57:45 | 11-25-2021 20:57:45 | Hey @bnurbekov,
Sorry for replying so late. We recently allowed passing customized logits warpers to `generate()`. Could you maybe build a custom warper for your purpose this way? A rough sketch follows below.
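Not an official recipe, but a rough sketch of what such a custom processor could look like for a per-sequence temperature (it assumes plain sampling; with `num_beams` or `num_return_sequences > 1` the batch dimension gets repeated and the temperatures would need to be expanded accordingly):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, LogitsProcessor, LogitsProcessorList

class PerSequenceTemperature(LogitsProcessor):
    """Divides the scores of each batch row by its own temperature."""

    def __init__(self, temperatures):
        self.temperatures = torch.tensor(temperatures, dtype=torch.float).unsqueeze(-1)

    def __call__(self, input_ids, scores):
        return scores / self.temperatures.to(scores.device)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)

batch = tokenizer(["Hello, my name is", "The weather today is"], return_tensors="pt", padding=True)
output_ids = model.generate(
    batch["input_ids"],
    attention_mask=batch["attention_mask"],
    do_sample=True,
    max_new_tokens=20,
    logits_processor=LogitsProcessorList([PerSequenceTemperature([0.7, 1.3])]),
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```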
https://github.com/huggingface/transformers/pull/14779<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,529 | closed | added save_directories for _psave_pretrained_pt and _tf, changed model to tf_model and pt_model, enable the notebook to run cleanly from top to bottom without error | @LysandreJik @philschmid
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-25-2021 20:04:22 | 11-25-2021 20:04:22 | |
transformers | 14,528 | closed | trainer process bar can't move while training | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.12.5
- Platform:pytorch
- Python version:3.8
- PyTorch version (GPU?):1.12.0-cuda11.3
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:no
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):XLNET
The problem arises when using:
* [ ] the official example scripts: (give details below)
I use the official notebook on how to fine-tune a token classification model.
* [ ] my own modified scripts: (give details below)
My code is almost the same as the official notebook.
link:https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb#scrollTo=h9lcmFol-_Le
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
token classification
* [ ] my own task or dataset: (give details below)
I used my own dataset, and I use the `datasets` library to load it.
When training starts, the progress bar just stops at 2/66672 steps, while training itself seems to continue: after a while validation begins, but the validation progress bar never shows up and the training bar does not move.
I tested my code in Colab and there is no such problem there. I reinstalled all packages to make sure every package version is correct, but it didn't help. Could anyone help me?
The metric I use is seqeval.
<img width="388" alt="WX20211126-001218@2x" src="https://user-images.githubusercontent.com/37979232/143475094-f21b7600-3446-4721-8156-9e210c00494f.png">
## To reproduce
Steps to reproduce the behavior:
1. Run my code.
Here is my code snippet:
```python
model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
f"{model_name}-finetuned-NER",
fp16=True,
evaluation_strategy = "steps",
learning_rate=2e-5,
gradient_accumulation_steps= 8,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=False,
logging_steps=100
)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The metric I use is seqeval.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I hope the bar can run normally
| 11-25-2021 16:26:33 | 11-25-2021 16:26:33 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,527 | closed | Fix a slow test. | # What does this PR do?
Fixes the test
```bash
RUN_SLOW=1 RUN_PIPELINE_TESTS=1 pytest -sv tests/test_pipelines_audio_classification.py::AudioClassificationPipelineTests::test_large_model_pt
```
Fails on `torch==1.10.0+cu113`. Given the small differences of results I assume this is of no consequence.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-25-2021 15:32:35 | 11-25-2021 15:32:35 |
transformers | 14,526 | closed | Rename ImageGPT | # What does this PR do?
This PR renames `ImageGPTForCausalLM` to `ImageGPTForCausalImageModeling`.
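For anyone updating their code, loading would then look like this (a minimal sketch; the checkpoint name is just the existing `openai/imagegpt-small`):
```python
from transformers import ImageGPTForCausalImageModeling

model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")
```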
It's a long name. I know. | 11-25-2021 15:16:31 | 11-25-2021 15:16:31 | |
transformers | 14,525 | closed | fix #14524 (IndexError when mask prob is too low) | # What does this PR do?
This fixes the issue described in #14524. This assumes that users are okay with at least some masking if they set the probability to something > 0. I assume this is an okay fix because for time masking `min_masks=2`.
## Who can review?
@patrickvonplaten
| 11-25-2021 12:55:12 | 11-25-2021 12:55:12 | I've had a deeper look into the actual masking methods in `fairseq` and `transformers`. I think the documentation is vague, and at odds with the description in https://arxiv.org/abs/2006.11477:
> During fine-tuning we apply a masking strategy to the feature encoder outputs similar to SpecAugment [41]: we randomly choose a number of starting time steps for which a span of ten subsequent time-steps is replaced with a mask embedding; spans may overlap and we use the same masked time step embedding as during pre-training. We also mask channels by choosing a number of channels as starting indices and then expand each one to cover the subsequent 64 channels. Spans may overlap and the selected channel spans are set to zero value.
Let's take the probabilities of fine-tuning on full 960h of librispeech in Table 6: `0.05` for masking in the time axis and `0.0016` for masking in the channel/feature axis. Let's also assume these probabilities to mean _the independent probability for each vector to be the start of a masked span of length n_.
Let's assume a 3 second audio clip, for the `facebook/wav2vec2-base` model this would be a tensor of shape `[150 (time), 768 (features)]`.
In the time axis, we expect `0.05 * 150 = 7.5` vectors to be the start of a mask, for a maximum of `7.5 * length of 10 = 75` vectors to be masked if there is no overlap, or in other words, `75/150 = 50%` of the sequence length.
In the feature axis, we expect `0.0016 * 768 = 1.22` vectors to be the start of a mask, for a maximum of `1.22 * length of 64 = 78` vectors to be masked if no overlap, or in other words, `78/768 ~= 10%` of the feature length.
Let's now take a look at the documentation of `compute_mask_indices` in `fairseq`:
> mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by number of timesteps divided by length of mask span to mask approximately this percentage of all elements. however due to overlaps, the actual number will be smaller (unless no_overlap is True)
and the documentation in transformers:
> Propability of each feature vector along the time axis to be chosen as the start of the vector span to be masked. Approximately ``mask_time_prob * sequence_length // mask_time_length`` feature vectors will be masked along the time axis. This is only relevant if ``apply_spec_augment is True``.
If we look in the config file for fine-tuning in fairseq:
https://github.com/pytorch/fairseq/blob/2380a6e46675ca17bdf22be06bc7c6d138736e59/examples/wav2vec/config/finetuning/base_960h.yaml#L51-L52
we see that they actually use the percentage of the whole sequence which should be masked (50% and 10%) instead of the probabilities quoted in the article.
But, the documentation is not correct, as `0.50 * 150 // 10=7` and `0.1 * 768 // 64 = 1.0` is the number of vectors which start a span, and not the percentage of spanned elements (according to `fairseq`), nor the number of features which will be masked (according to `transformers`).
So what do the `mask_probs` need to be in `transformers` to replicate the behavior in `fairseq`?
for feature axis:
```python
import torch as t
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
batch_size = 10
sequence_length = 768
mask_prob = 0.10
mask_length = 64
mask = _compute_mask_indices(
shape=(batch_size, sequence_length),
mask_prob=mask_prob, # or even lower
mask_length=mask_length,
)
mask = t.from_numpy(mask)
print("number of masked vectors in each batch dimension")
num_masks = t.sum(mask, dim=1)
print(num_masks)
# prints tensor([64, 64, 64, 64, 64, 64, 64, 64, 64, 64])
```
for time axis:
```python
import torch as t
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
batch_size = 10
sequence_length = 150
mask_prob = 0.5
mask_length = 10
mask = _compute_mask_indices(
shape=(batch_size, sequence_length),
mask_prob=mask_prob, # or even lower
mask_length=mask_length,
)
mask = t.from_numpy(mask)
print("number of masked vectors in each batch dimension")
num_masks = t.sum(mask, dim=1)
print(num_masks)
# prints tensor([72, 76, 66, 73, 70, 59, 65, 63, 68, 75])
```
# what about this PR
First, there should still be a bugfix for when `mask_prob` is too low. Incidentally, this bug also exists in `fairseq`!
```python
import torch as t
from fairseq.data.data_utils import compute_mask_indices
batch_size = 10
sequence_length = 768
mask_prob = 0.0001
mask_length = 64
mask = compute_mask_indices(
shape=(batch_size, sequence_length),
padding_mask=None,
mask_prob=mask_prob, # or even lower
mask_length=mask_length,
)
mask = t.from_numpy(mask)
```
returns
```
Traceback (most recent call last):
File "/home/nik/workspace/phd/repo/transformers/playground_fairseq.py", line 12, in <module>
mask = compute_mask_indices(
File "/home/nik/workspace/phd/repo/transformers/venv/lib/python3.8/site-packages/fairseq/data/data_utils.py", line 424, in compute_mask_indices
lengths[0] = min(mask_length, sz - 1)
IndexError: index 0 is out of bounds for axis 0 with size 0
```
However, instead we should probably warn users, or raise an error, that their chosen `mask_prob` is too low to ever insert a mask.
Moreover, we should take one of two options:
* fix `mask_prob` to mean the independent probability for each vector to be the start of a masked span of length n
* fix `mask_prob` to mean the percentage of the sequence which will be masked.
If we can agree on one of these two options, I can update this PR.
<|||||>Well, I decided to go for option 1 because:
* It requires fewer changes to the code
* it matches the wav2vec2 article description, which I expect people to read sooner than the fairseq code
* it's easier to reason about a `mask_length` if `mask_prob` means option 1.
* we don't need to change the configuration name in the code, preventing API breakage (assuming `mask_prob` would otherwise have to be renamed to something like `mask_percentage`).
* the default value of `mask_time_prob` now matches the settings for finetuning on `960h` in the article.
As for the failing test, I'm not familiar with the other models, so I don't know if it's acceptable to change their implementation of `_comptute_mask_indices` as well.
Also, as the default value for `mask_time_prob` is `0.05`, why is `mask_feature_prob` defaulted to `0` instead of `0.0016` or `0.0008`? <|||||>Well, that was obviously not as easy as just fix-copy. If the changes in this PR are acceptable I can change the configs for each model, and look into fixing their tests as well...<|||||>Hey @nikvaessen,
Thanks for the PR and for diving into the complexity of this function. It is indeed not very easy to grasp what's going on there. And you are right, there are two slightly different interpretations of the `mask_prob` value.
In the paper, `mask_prob` is used to state the probability that **each** token is the start index of the mask span. In the code however, `mask_prob` is rather used as the upper bound to the overall percentage of possible masked vectors.
Note that this function is also used in `pretraining` and it has been shown that Wav2Vec2 pretraining is extremely sensitive to how those mask_time_indices are generated. This PR sadly includes too many changes for a fix for a low mask probability.
Could we instead maybe just simply do the following:
- a) either add a `min_mask` function argument to `_compute_mask_indices` to prevent this error
- b) I'm fine with computing/catching when the probability is too low and then simply setting the whole mask to 0
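Purely as an illustration of options (a)/(b) - the variable names and structure below are illustrative, not the actual implementation in `_compute_mask_indices`:
```python
import numpy as np

def compute_num_masked_span(mask_prob, sequence_length, mask_length, min_masks=0):
    """How many spans to place in one row; a too-low probability simply yields
    zero spans (option b), and min_masks enforces a floor (option a)."""
    epsilon = np.random.rand(1).item()
    num_masked_span = int(mask_prob * sequence_length / mask_length + epsilon)
    return max(num_masked_span, min_masks)

print(compute_num_masked_span(0.0012, 500, 10))  # usually 0 -> caller returns an all-zero mask
```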
Given that this is such a sensible function, it would be great if we could in a first step just make very minor changes to the function to solve the issue. Let me know what you think! Otherwise, I'm also happy to open a small first PR for this<|||||>I've reverted my changes. This PR now updates the documentation so it's (hopefully) more clear how `mask_prob` should be interpreted. The IndexError described in the linked issue should now be fixed by returning the zero mask when `max_num_masked_span` is `0`. This is possible because we only compute epsilon once. It might be more elegant to compute epsilon separately for every batch dimension, but as you indicated that pre-training is very sensitive to the masks I thought it better to leave it as is.<|||||>Hey @nikvaessen,
Great thanks a lot for your PR, it looks very clean now. Also thanks a lot for improving the docs - that's super useful for the community!<|||||>@anton-l - can you give it a second look and if ok for you merge the PR? :-) Ok to merge on my side |
transformers | 14,524 | closed | Computation of mask indices in Wav2vec2Model fails with low probabilities | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.2
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.10
### Who can help
@patrickvonplaten
## Information
I'm trying to reproduce fine-tuning with Wav2vec2 on Librispeech, however using feature mask probability 0.0012 as in the paper makes the code crash at some point (after ~3_000 steps).
## To reproduce
```
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
mask = _compute_mask_indices(
shape=(10, 500),
mask_prob=0.0012, # or even lower
mask_length=10,
)
print(mask)
```
raises
```
Traceback (most recent call last):
File "/home/nik/workspace/phd/repo/w2v2-mt-learning/playground/buggy_mask.py", line 3, in <module>
mask = _compute_mask_indices(
File "/home/nik/workspace/phd/repo/w2v2-mt-learning/.venv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 201, in _compute_mask_indices
dummy_mask_idx = spec_aug_mask_idx[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
```
Note that passing `min_masks=1` prevents this issue as well.
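For completeness, the workaround applied to the snippet above (assuming the `min_masks` argument as exposed in the current source):
```python
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices

mask = _compute_mask_indices(
    shape=(10, 500),
    mask_prob=0.0012,
    mask_length=10,
    min_masks=1,  # guarantees at least one span per row, so the empty-index path is never hit
)
print(mask.sum(axis=-1))
```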
## Expected behavior
If the probability is so low that no features are masked, the method shouldn't raise an `IndexError`.
| 11-25-2021 12:46:25 | 11-25-2021 12:46:25 | |
transformers | 14,523 | closed | Examples for speech recognition trainings from scratch | # π Feature request
Fine-tuning is rather straightforward, but it looks to me as if running a training from scratch isn't. I am rather new to 🤗, but from what I've learned so far it's rather tricky to find out how to start a new `Speech2Text` training (for example).
We got [`run_wav2vec2_pretraining_no_trainer.py`](https://github.com/huggingface/transformers/blob/d1fd64e7aa40d6a3c69cb21f7fd411a2a3141e04/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py) in order to train a new `Wav2Vec2` model from scratch but I wonder why this is (explicitly) not using the `Trainer` API? Is there any particular reason?
## Motivation
After running into out-of-memory issues during `Wav2Vec2` trainings I figured it would be better to use a smaller model for this purpose. Since training an end-to-end model using `Wav2Vec2` requires multiple stages I thought it would be better to start with a simple `Speech2Text` transformer model and continue from there. However, up until now I am unable to properly run a training. For some reason the word-error-rate is basically 0% from the start only to get worse over time to a point where the model is not predicting anything anymore. I have no explanation for this but you can take a look at the code that (in a sense) brought me here.
<details>
<summary>Code (click to expand)</summary>
```python
import json
import os
from dataclasses import dataclass
from functools import partial
from typing import List, Dict, Union
import torch
import tqdm
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import IterableDataset
from transformers import TrainingArguments, Trainer, trainer_utils, Speech2TextTokenizer, Speech2TextFeatureExtractor, \
Speech2TextProcessor, Speech2TextConfig, Speech2TextModel, Speech2TextForConditionalGeneration, Seq2SeqTrainer, \
IntervalStrategy, EarlyStoppingCallback, Seq2SeqTrainingArguments
import sentencepiece as spm
import tensorflow as tf
from .speech.bin.hf_train import get_dataset, get_preprocessor
from .speech.data.speech_dataset import SpeechRecognitionDatasets
from .speech import bin as binaries
from .speech.lab.training.metrics import error_rate
import numpy as np
class Speech2TextTFDataset(IterableDataset):
def __init__(self, processor: Speech2TextProcessor, text_preprocessor, dataset: tf.data.Dataset, num_samples: int = None):
self.processor = processor
self.text_preprocessor = text_preprocessor
self.dataset = dataset
self.num_samples = num_samples
def __len__(self):
if self.num_samples is None:
raise RuntimeError("Number of samples is unknown.")
return self.num_samples
def __getitem__(self, item):
raise NotImplementedError
def __iter__(self):
for example in self.dataset:
inputs = example["inputs"]
targets = example["targets"].numpy()[0].decode()
targets = self.text_preprocessor.preprocess(targets)
sampling_rate = self.processor.feature_extractor.sampling_rate
# Extract features & target labels
audio_features = self.processor.feature_extractor(inputs, sampling_rate=sampling_rate)["input_features"][0]
labels = self.processor.tokenizer.encode(targets)
size, _ = audio_features.shape
attention_mask = torch.ones(size)
yield dict(inputs=audio_features, targets=labels, attention_mask=attention_mask)
@classmethod
def get_split(cls, processor, text_preprocessor, datasets: SpeechRecognitionDatasets, split: str, max_samples=None):
dataset = datasets.get(split, load_noise=False)
if split == "train":
dataset = dataset.repeat()
if max_samples is not None:
dataset = dataset.take(max_samples)
num_samples = datasets.get_num_speech_samples(split)
return cls(processor, text_preprocessor, dataset, num_samples=num_samples)
@dataclass
class Speech2TextCollator:
def __init__(self, processor: Speech2TextProcessor):
self.processor = processor
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
inputs = [torch.Tensor(f["inputs"]) for f in features]
targets = [torch.Tensor(f["targets"]) for f in features]
# Create batches
inputs_batch = pad_sequence(inputs, batch_first=True)
targets_batch = pad_sequence(targets, batch_first=True).long()
attention_mask = pad_sequence([f["attention_mask"] for f in features], batch_first=True).long()
return dict(
input_features=inputs_batch,
# decoder_input_ids=targets_batch,
attention_mask=attention_mask,
labels=targets_batch
)
def compute_metrics(processor: Speech2TextProcessor, pred):
# pred_logits = pred.predictions
pred_ids = np.argmax(pred.predictions[0], axis=-1)
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = error_rate(targets=label_str, predictions=pred_str, tokens="words")
cer = error_rate(targets=label_str, predictions=pred_str, tokens="characters")
return {"wer": wer, "cer": cer}
def get_sentence_piece_model(sentence_generator, text_preprocessor, overwrite=False):
model_prefix = "/tmp/en"
vocab_file = model_prefix + ".json"
spm_file = model_prefix + ".model"
if os.path.exists(vocab_file) and os.path.exists(spm_file) and not overwrite:
return vocab_file, spm_file
text_fp = "/tmp/spm.txt"
with open(text_fp, "w") as f:
for sentence in sentence_generator():
text = sentence.strip()
text = text_preprocessor.preprocess(text)
f.write(text)
f.write("\n")
spm.SentencePieceTrainer.Train(
input=text_fp,
vocab_size=1000,
model_prefix=model_prefix,
user_defined_symbols=["<mask>"],
# hard_vocab_limit=False,
)
processor = spm.SentencePieceProcessor()
processor.Load(model_file=model_prefix + ".model")
vocab_file = model_prefix + ".json"
spm_file = model_prefix + ".model"
# noinspection PyUnresolvedReferences
vocab = {processor.id_to_piece(piece_id): piece_id for piece_id in range(processor.get_piece_size())}
with open(vocab_file, "w") as f:
json.dump(vocab, f, indent=2)
return vocab_file, spm_file
def main():
# TODO Paths!
local_raw_root = "/data/-asr/corpora/raw"
local_shards_root = "/data/-asr/corpora/sharded"
# TODO Paths!
remote_raw_root = "/mariana/asr/raw"
remote_shards_root = "/mariana/asr/corpora/sharded"
remote_converted_root = "/mariana/asr/corpora/converted"
remote_vocabs_root = "/mariana/asr/vocabularies/masking"
tf.config.set_visible_devices([], "GPU")
out_dir = "/data/-asr/models/huggingface/dev"
log_dir = os.path.join(out_dir, "logs")
config_fp = os.path.join(os.path.dirname(binaries.__file__), "configs/data/en/timit.yml")
early_stopping_patience = 5
print(f"data config: {config_fp}")
with tf.device("cpu"):
asr_datasets = get_dataset(
config_fp=config_fp,
local_raw_root=local_raw_root,
local_shards_root=local_shards_root,
remote_raw_root=remote_raw_root,
remote_shards_root=remote_shards_root,
remote_converted_root=remote_converted_root,
)
vocab_config_fp = os.path.join(
os.path.dirname(binaries.__file__), f"configs/vocabulary/{asr_datasets.language}.yml"
)
text_preprocessor = get_preprocessor(
remote_vocabs_root=remote_vocabs_root, vocab_config_fp=vocab_config_fp, asr_datasets=asr_datasets
)
sampling_rate = 16_000
max_vocab_samples = 100000
def sentence_generator():
for i, example in tqdm.tqdm(enumerate(asr_datasets.get("train", load_noise=False)), total=max_vocab_samples):
if i >= max_vocab_samples:
break
targets = example["targets"].numpy()[0].decode()
yield text_preprocessor.preprocess(targets)
vocab_file, spm_file = get_sentence_piece_model(
sentence_generator=sentence_generator, text_preprocessor=text_preprocessor, overwrite=False
)
tokenizer = Speech2TextTokenizer(vocab_file=vocab_file, spm_file=spm_file)
feature_extractor = Speech2TextFeatureExtractor(sampling_rate=sampling_rate)
processor = Speech2TextProcessor(
feature_extractor=feature_extractor,
tokenizer=tokenizer
)
save_and_eval_steps = 1
training_args = Seq2SeqTrainingArguments(
output_dir=out_dir,
evaluation_strategy=IntervalStrategy("steps"),
save_steps=save_and_eval_steps,
eval_steps=save_and_eval_steps,
num_train_epochs=3,
per_device_train_batch_size=16,
per_device_eval_batch_size=64,
warmup_steps=500,
weight_decay=0.01,
logging_dir=log_dir,
group_by_length=True,
# label_smoothing_factor=1,
load_best_model_at_end=True,
save_total_limit=2,
)
# Create the model
config = Speech2TextConfig(
return_dict=True,
sampling_rate=sampling_rate,
vocab_size=tokenizer.vocab_size,
pad_token_id=processor.tokenizer.pad_token_id,
bos_token_id=processor.tokenizer.bos_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
decoder_start_token_id=processor.tokenizer.bos_token_id,
)
model = Speech2TextForConditionalGeneration(config)
# model = Speech2TextModel(config)
model.train()
train_dataset = Speech2TextTFDataset.get_split(
processor=processor, text_preprocessor=text_preprocessor, datasets=asr_datasets, split="train"
)
eval_dataset = Speech2TextTFDataset.get_split(
processor=processor, text_preprocessor=text_preprocessor, datasets=asr_datasets, split="dev", max_samples=3
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=Speech2TextCollator(processor=processor),
compute_metrics=partial(compute_metrics, processor),
callbacks=[EarlyStoppingCallback(early_stopping_patience=early_stopping_patience)],
)
last_checkpoint = trainer_utils.get_last_checkpoint(out_dir)
trainer.train(resume_from_checkpoint=last_checkpoint)
print("All done.")
if __name__ == '__main__':
main()
```
</details> | 11-25-2021 12:11:25 | 11-25-2021 12:11:25 | @patrickvonplaten @anton-l unfortunately I didn't get an answer to [my post](https://discuss.huggingface.co/t/need-help-training-speech2text-from-scratch/12306) in the π€ forum yet. I don't know if you would mind to take a look and maybe leave some advice on this topic. Thanks for any help.<|||||>Hey @stefan-falk,
Sorry to only reply now & thanks for pinging me again.
In general training transformers-speech models from scratch is really difficult and I would strongly recommend to leverage pretrained checkpoints for an encoder-decoder setup.
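To give a concrete picture of that suggestion, here is a rough sketch of warm-starting such an encoder-decoder from pretrained checkpoints (assuming a Wav2Vec2 encoder and a BERT decoder; this is an illustration, not the upcoming example script):
```python
from transformers import AutoTokenizer, SpeechEncoderDecoderModel, Wav2Vec2FeatureExtractor

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base", "bert-base-uncased"
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# the decoder needs to know how to start and how to pad
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```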
Do you need to pretrain a model from scratch? What is your target task exactly?
I will finish a encoder-decoder example script this week which shows how to leverage pretrained speech and text checkpoints for ASR and will try to have a colab version as well so that it's easy to follow this tutorial. I think this could help a lot - hopefully I'll be done by this week :-)<|||||>Hi @patrickvonplaten !
No worries :)
Well, the reason why I'd want to train a model from scratch is because I would like to do that on custom (and non-public) datasets in different languages as well. Wav2Vec is a nice-to-have at this point. Right now I'd just be happy to be able to train any sensible model on a new dataset. In the end the goal is to use this model on mobile devices.<|||||>Hey @stefan-falk,
I see! I think even if the model should work well **only** on custom (non-public) datasets, it would still make sense to leverage general pre-trained checkpoints. I'll try to have a working encoder-decoder example by the end of the week :-)<|||||>Yeah, that's surely correct :)
It would be great to get an example for this! Please be so kind and ping me once it's available! :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten Hi! Are there any news on the encoder-decoder example? :) <|||||>We have an example here now: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#sequence-to-sequence<|||||>@patrickvonplaten Thanks, but this seems to be an example for fine-tuning and not training from scratch.
What I am looking for is a hands on tutorial/example that shows how I can e.g. train a `Speech2Text` model from scratch.
The code I posted originally (see above) is running (without crashing) but looking at tensorboard I am rather convinced that there are still some issues.
It's not clear to me if I have to use `model = Speech2TextForConditionalGeneration(config)` or `model = Speech2TextModel(config)`.
Can I use the `Trainer` or `Seq2SeqTrainer`?
Am I batching correctly:
```python
@dataclass
class Speech2TextCollator:
def __init__(self, processor: Speech2TextProcessor):
self.processor = processor
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
inputs = [torch.Tensor(f["inputs"]) for f in features]
targets = [torch.Tensor(f["targets"]) for f in features]
# Create batches
inputs_batch = pad_sequence(inputs, batch_first=True)
targets_batch = pad_sequence(targets, batch_first=True).long()
attention_mask = pad_sequence([f["attention_mask"] for f in features], batch_first=True).long()
return dict(input_features=inputs_batch, attention_mask=attention_mask, labels=targets_batch)
```
and so on.
It would be great to have an example that guides one through details like this.
If I run the code I wrote what I get is something like this:

<|||||>I see - sorry we don't have any examples on how to train encoder-decoder from scratch yet for ASR. I also don't think it's a good idea given how well it works to leverage pretrained speech and text checkpoints<|||||>@patrickvonplaten Okay, I see. The issue here is just that I am now reliant on the availability of pre-trained models in all the languages I want to support. For example, `facebok/wav2vec2-base` was only trained on English which probably does not help for languages like Chinese. Going for he cross-language model is also not an option due to its size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>There are also the XLS-R checkpoints which have been pretrained on over 128 languages :-) https://huggingface.co/models?other=xls_r_pretrained<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,522 | closed | Speeding up the models inference by OpenVINO through accurate quantization via NNCF | # π Feature request
Add quantization support via [NNCF](https://github.com/openvinotoolkit/nncf) for fast inference on OpenVINO.
## Motivation
Using NNCF can speed up models by up to 4x via INT8 quantization. Look at the results [here](https://github.com/openvinotoolkit/nncf#nlp-huggingface-transformers-powered-models).
## Your contribution
NNCF team already created a patch for your repo that enables this feature. Please, take a look at https://github.com/openvinotoolkit/nncf/tree/develop/third_party_integration/huggingface_transformers
| 11-25-2021 10:23:50 | 11-25-2021 10:23:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,521 | closed | GPT model `generate()` function not correctly skipping the padding tokens indicated by `attention_mask` | According to #7552, the padding tokens will be skipped when calculating the `positional_id` during `generate()`, if the corresponding positions are masked out in `attention_mask`. If I understand this correctly, this would mean that the appearance of padding tokens does not matter as long as they are not attended to. However, I found that it is not exactly the case - am I missing something here?
----------------------------------------------------------------
Check the following code for reproduction:
```
import torch
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
# note that input_str_1 and input_str_2 only differs in number & postion of eos tokens
input_str_1 = "# in a kilometer race , a beats b by 48 meters or 12 seconds . what time does a take to complete the race ? n0 = 48.0 n1 = 12.0\nleg = n0 / n1\n<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>"
input_str_2 = "# in a kilometer race , a beats b by 48 meters or 12 seconds . what time does a take to complete the race ? n0 = 48.0 n1 = 12.0\n<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>leg = n0 / n1\n"
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token
gradient_ckpt = True
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", pad_token_id=tokenizer.eos_token_id, gradient_checkpointing=gradient_ckpt, use_cache=not gradient_ckpt)
def test_generate(input_str: str):
input_ids = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
attention_mask = torch.where(input_ids == tokenizer.eos_token_id, torch.zeros_like(input_ids), torch.ones_like(input_ids)).to(model.device)
output_ids = model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=30, num_return_sequences=1)
output_str = tokenizer.decode(output_ids[0], skip_special_tokens=False, clean_up_tokenization_spaces=False)
print(f"##################\n{output_str}\n##################")
test_generate(input_str_1)
test_generate(input_str_2)
``` | 11-25-2021 09:05:34 | 11-25-2021 09:05:34 | Maybe of interest to @patrickvonplaten @Narsil <|||||>Update: I changed my experiment code from right padding to left padding and the performance is greatly improved. If the `generate()` function truly skips the padding tokens, this should not have happened. <|||||>I just checked, and the attention_mask is correctly sent back to the model `Gpt_neo` so if anything it seems that the model would be the culprit.
Looking at the code, the `position_ids` are correctly skipped : https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L688
Then you can check that the attention_mask adds a very large negative number : https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L200
I am not familiar enough with the internals to know if that's enough, but it definitely seems to be doing what it should.
I even tried a much smaller example:
```python
input_str_1 = "This is a test of<|endoftext|>"
input_str_2 = "This is a test<|endoftext|> of"
```
Now I checked that the ids are actually correct ( which is not necessarily the case with extra spaces etc..)
`[ 1212, 318, 257, 1332, 286, 50256]`
`[ 1212, 318, 257, 1332, 50256, 286]`
And then both generate exactly the same thing.
Is there a possibility that the issue comes from slightly twisted `input_ids` in your script ?<|||||>Hi @Narsil, thanks a lot for the reply!
Yeah, I can see those code as well and it seems to be doing the correct thing but the results I am getting suggests otherwise. It is possible, however, related to how GPT-NEO handles those positional ids internally.
With the smaller example here, though the generated sequences are the same, the logits are actually different, which is why it exhibits the incorrect behavior in longer sequences. Here is the code to reproduce:
```
import torch
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
# note that input_str_3 and input_str_4 only differs in number & postion of eos tokens
input_str_3 = "This is a test of<|endoftext|>"
input_str_4 = "This is a test<|endoftext|> of"
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token
gradient_ckpt = True
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", pad_token_id=tokenizer.eos_token_id, gradient_checkpointing=gradient_ckpt, use_cache=not gradient_ckpt)
def check_first_token_prob(input_str: str):
input_ids = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
attention_mask = torch.where(input_ids == tokenizer.eos_token_id, torch.zeros_like(input_ids), torch.ones_like(input_ids)).to(model.device)
outputs = model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=30, num_return_sequences=1,
output_scores=True, return_dict_in_generate=True)
print(f"##################\n{outputs['scores'][-1][0]}\n##################")
return outputs['scores'][-1][0]
print(sum(check_first_token_prob(input_str_3) - check_first_token_prob(input_str_4)))
```
The output I got is:
```
##################
tensor([-15.1894, -12.5526, -13.0819, ..., -19.2879, -14.2211, -12.7208])
##################
##################
tensor([-15.1894, -12.5526, -13.0818, ..., -19.2878, -14.2211, -12.7208])
##################
tensor(-0.8249)
```
The output scores only differs in a very small amount since the sequence is short and the position of the padding token is only off-by-one, but it's still different.<|||||>Tagging @patil-suraj, if you have more information on how the `attention_mask` works and if that behavior is in line with what it should do ?
Just for reference, I also checked outputs, and indeed there's variance (even more than in you post, I get:
```
----------------------------------------
tensor([[[ -8.1140, -5.9630, -8.3320, ..., -18.4336, -13.0972, -8.0018],
[ -9.3932, -7.8721, -12.6465, ..., -17.8364, -15.9489, -11.9218],
[ -7.0515, -6.0169, -8.5999, ..., -15.7377, -12.0931, -8.7372],
[ -6.9112, -10.0014, -12.7149, ..., -20.2539, -17.8208, -11.0143],
[-10.9951, -8.5840, -10.7879, ..., -13.4873, -12.2152, -9.3264],
[ -6.2603, -3.7231, -7.3898, ..., -11.6948, -10.7496, -7.6801]]])
----------------------------------------
tensor([[[ -8.1140, -5.9630, -8.3320, ..., -18.4336, -13.0972, -8.0018],
[ -9.3932, -7.8721, -12.6465, ..., -17.8364, -15.9489, -11.9218],
[ -7.0515, -6.0169, -8.5999, ..., -15.7377, -12.0931, -8.7372],
[ -6.9112, -10.0014, -12.7149, ..., -20.2539, -17.8208, -11.0143],
[ -7.6365, -7.4540, -13.7994, ..., -17.4893, -16.3242, -12.3888],
[-10.9951, -8.5840, -10.7879, ..., -13.4873, -12.2152, -9.3264]]]) # Here particularly different
----------------------------------------<|||||>Jumping in the conversation here to maybe solve some problems.
One thing to remember is that `generate()` will **always** auto-regressively sample from the last token. This means that if the last token is a padding token than it will sample from it which is **always** incorrect.
This means one should never look at the output of the padding token, *i.e.* in @Narsil example:
```
----------------------------------------
tensor([[[ -8.1140, -5.9630, -8.3320, ..., -18.4336, -13.0972, -8.0018],
[ -9.3932, -7.8721, -12.6465, ..., -17.8364, -15.9489, -11.9218],
[ -7.0515, -6.0169, -8.5999, ..., -15.7377, -12.0931, -8.7372],
[ -6.9112, -10.0014, -12.7149, ..., -20.2539, -17.8208, -11.0143],
[-10.9951, -8.5840, -10.7879, ..., -13.4873, -12.2152, -9.3264],
[ -6.2603, -3.7231, -7.3898, ..., -11.6948, -10.7496, -7.6801]]])
----------------------------------------
tensor([[[ -8.1140, -5.9630, -8.3320, ..., -18.4336, -13.0972, -8.0018],
[ -9.3932, -7.8721, -12.6465, ..., -17.8364, -15.9489, -11.9218],
[ -7.0515, -6.0169, -8.5999, ..., -15.7377, -12.0931, -8.7372],
[ -6.9112, -10.0014, -12.7149, ..., -20.2539, -17.8208, -11.0143],
[ -7.6365, -7.4540, -13.7994, ..., -17.4893, -16.3242, -12.3888],
[-10.9951, -8.5840, -10.7879, ..., -13.4873, -12.2152, -9.3264]]]) # Here particularly different
----------------------------------------
```
this means that the last row of the first logits and the previous to last row of the second logits are useless (they correspond to padding tokens). What we should instead compare here is the previous to last row of the first logits to the last row of the second logits (both corresponding to the output logits of `"of"`) - which are identical. This shows that the position ids are correctly shifted.
Now as a conclusion for padded inputs to GPT-like models one should **always** use ``padding=left`` because otherwise the model will necessarly have to sample from a padding token which is wrong (maybe we should put a warning for this actually somewhere - @Narsil what do you think about adding a warning (in pseudo code):
```
if model is not encoder decoder and any of last token is padding token -> then throw a warning that the user should probably use padding=left
```
<|||||>@patrickvonplaten thanks a lot for the clarification! It confirms what I found in the experiments -- right padding for the GPT-like model is incorrect and leads to performance degradation.
However, I do think the problem for not correctly skipping the padding tokens still exists in general. if sampling from the padding token will lead to incorrect results, then in the following examples, the logits for the generated tokens should be the same since the last token is not padding token anymore:
```
input_str_3 = "This is a test of<|endoftext|> some"
input_str_4 = "This is a test<|endoftext|> of some"
```
However, the output I've been getting is:
```
##################
tensor([-15.8802, -16.3779, -15.6428, ..., -21.8622, -17.9515, -14.6956])
##################
##################
tensor([-15.8802, -16.3779, -15.6428, ..., -21.8622, -17.9514, -14.6956])
##################
tensor(0.6359)
```
Notice that they look the same, but when doing subtraction and summation, we can see they are of different values.
In principle, if the padding tokens are correctly skipped everywhere, then it would not matter even if I have input like this:
```
input_str_3 = "This is a test of<|endoftext|><|endoftext|><|endoftext|> some"
input_str_4 = "This<|endoftext|> is a test<|endoftext|> of some"
```
Or am I understanding it incorrectly?
The full code snippet I used to generate the output is pasted below:
```
import torch
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
# note that input_str_3 and input_str_4 only differs in number & postion of eos tokens
input_str_3 = "This is a test of<|endoftext|> some"
input_str_4 = "This is a test<|endoftext|> of some"
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token
gradient_ckpt = True
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", pad_token_id=tokenizer.eos_token_id, gradient_checkpointing=gradient_ckpt, use_cache=not gradient_ckpt)
def check_first_token_prob(input_str: str):
input_ids = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
attention_mask = torch.where(input_ids == tokenizer.eos_token_id, torch.zeros_like(input_ids), torch.ones_like(input_ids)).to(model.device)
outputs = model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=30, num_return_sequences=1,
output_scores=True, return_dict_in_generate=True, do_sample=False)
print(f"##################\n{outputs['scores'][-1][0]}\n##################")
return outputs['scores'][-1][0]
print(sum(check_first_token_prob(input_str_3) - check_first_token_prob(input_str_4)))
```
<|||||>Hey @niansong1996,
I think your understanding is very much correct here. If I understand your example
```
##################
tensor([-15.8802, -16.3779, -15.6428, ..., -21.8622, -17.9515, -14.6956])
##################
##################
tensor([-15.8802, -16.3779, -15.6428, ..., -21.8622, -17.9514, -14.6956])
##################
tensor(0.6359)
```
you are seeing (very) small differences in the output logits that shouldn't be there.
I'm quite sure that this is because masked tokens are not **perfectly** masked but just increase by a large negative number (-10.000) to not have any issues with float16. Now this is amplified in GPT2 for two reasons:
1) GPT2 uses a causal mask by default with -10,000 and then in the token is also masked it **adds** -10,000 again instead of replacing it with just -10,000. E.g. see those lines: https://github.com/huggingface/transformers/blob/39cb6f58e645c90efbcc13593b0d3bf37db2e566/src/transformers/models/gpt2/modeling_gpt2.py#L188
2) GPT2 has been seen to produce very large logits (e.g.: https://github.com/huggingface/transformers/pull/2303#issuecomment-587375740) which means that small differences in the padding, *e.g.* using -10,000 and -20,000 instead of -inf before the softmax can actually make a significant difference.
Now taking this into account for your example:
```
input_str_3 = "This is a test of<|endoftext|> some"
input_str_4 = "This is a test<|endoftext|> of some"
```
It means the following for `input_str_3`, `"of"` attends to `"<|endoftext|>"` just with a padding penalty of -10,000 (padding mask) while for `"input_str_4"`, `"of"` attends to `"<|endoftext|>"` just with a padding penalty of -20,000 (padding mask + causal mask). Even though -10,000 and -20,000 both essentially mean the softmax is zero, those differences can up in GPT2 (especially since it tends to have extreme values).
I think you're reasoning is 100% correct and think those small differences on what values are used for padding could be the explanation - you could maybe try to replace all `-10,000` with `-torch.inf` to see if the problem persists <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I found this issue extremely helpful for my experiment. I was wondering why pretrained decoder-only LM's are failing to generate anything with <code>tokenizer.add_special_tokens({'pad_token': '[PAD]'});model.resize_token_embeddings(len(tokenizer)</code>. This issue pretty much explains why my implementation failed so badly on generation task. Again, I really appreciate =] |
transformers | 14,520 | closed | [CI] clear `~/.cache/torch_extensions` between builds | This PR is trying to address CI failures with pt-nightly. https://github.com/huggingface/transformers/runs/4280926354?check_suite_focus=true
`~/.cache/torch_extensions/` currently uses a single hardcoded path to install all custom cuda extensions and so when it was built with pt-1.8 but then attempted to be used with pt-nightly (pt-1.11-to-be), the following happens:
```
ImportError: /github/home/.cache/torch_extensions/py38_cu111/cpu_adam/cpu_adam.so: undefined symbol: curandCreateGenerator
```
pt-1.10 has improved the situation by adding a prefix: `~/.cache/torch_extensions/py38_cu113` which makes the builds not shared between different cuda and python versions, but it missed the crucial pt-version in that prefix. I reported the issue here:
https://github.com/pytorch/pytorch/issues/68905
And of course ideally all the builds should be installed into the virtual python environment and not have a global shared dir.
This PR tries to address the issue by wiping out `~/.cache/torch_extensions/` completely when CI starts.
This of course means `deepspeed` will rebuild the extensions on every CI run, but this is actually a good thing, because then we really test the right version of it. It does it really fast so it shouldn't introduce a large overhead.
@LysandreJik | 11-25-2021 02:30:29 | 11-25-2021 02:30:29 | |
transformers | 14,519 | closed | Make the "Can't load <file> for <model_name>" error more user-friendly | # What does this PR do?
This makes the "wrong path" error more explicit and adaptive to avoid issues like https://github.com/huggingface/transformers/issues/14479 where the error message didn't specify that tokenizer files were not in the model repo in the first place.
- [ ] TODO: rewrite the errors for all modules in the same way (feature extractor, model weights,...)
| 11-24-2021 20:21:52 | 11-24-2021 20:21:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,518 | closed | Wav2vec2 finetuned model's strange truncated predictions | ### What is your question?
I'm getting strange truncation of prediction at different steps of training. Please help to understand what is the issue?
At the first steps of training like 800-1600 (2-3 epochs) I'm getting predictions of valid length and words count but with low accuracy (which is ok at the first steps), After steps > ~8000 things begin getting strange - accuracy of word prediction getting better, WER respectfully getting lower but an overall sentences' lengths getting truncated to the right side of an utterances. For example:
Target:
DΙrbΙndin caxΔ±r-konyak kombinatΔ± ΙrazisindΙ yanΔΔ±n qeydΙ alΔ±nΔ±b. HadisΙ axΕam saatlarΔ±nda baΕ verib. Δ°lkin mΙlumata gΓΆrΙ, insidentΙ spirt mΙhlulunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
Prediction @ 400 step (length is correct, WER 60+)
dΙrbΙndin Γ§axΔ±r kona kombinantΔ± erazisindΙ yanΔΔ±n qeydΙ alΔ±nΔ±b harisi axΕam satlarΔ±nda baΕ verb ilki mΙlumata gΓΆrΙ insidentΙs birt mΙxlunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
Prediction @ 800 step (length is correct, WER 50+)
dΙrbΙndin Γ§axΔ±rkonakombinanta ΙrazisindΙ yanΔΔ±n qeydΙ alΔ±nΔ±b hadisΙ axΕamsaatlarΔ±nda baΕ verib ilki mΙlumata gΓΆrΙ insidentΙs birt mΙhlullunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
Prediction @ 1600 step (length getting truncated, words joining each other, WER 40+)
dΙrbΙdinΓ§Δ±ki ΙazisdΙ ynΔqdΔ±nΔ± hadiΕΔ±a veiklumagΓΆrΙ insidentspirt mΙlun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
Prediction @ > 20000 step (around 30 to 100 epochs, almost no changes in WER, sentence completely truncated to the right part, WER keep around 16-27 depending on audio quality)
ndΙyaninsidentΙspirtmΙluunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
insidntΙ spirt mΙhlulunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
insidentΙ spΓΌrt mΙhlulunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
nsientΙ spirt mΙhlulunun yerΙ daΔΔ±lmasΔ± sΙbΙb olub
Code
[](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
Exactly the same code but with different epoch param (num_train_epochs 30 to 100)...
What have you tried?
Training data: 30 hours of labeled data, single spoken person per clip, around 5 to 10 words per clip
Looks like after training a model tries to fit prediction into the dataset's training data length? Can it skew this way?
I've used https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 to train very similar to Turkish language. It differs only for a few characters in alphabet so I used exactly the same params for the first training. Then removed return_attention_mask but nothing changed at all. Then I tried to fine-tune Turkish finetuned model from tutorial itself from Patrick's hub repo - got the same results.
What's your environment?
fairseq Version (e.g., 1.0 or main): current master branch
PyTorch Version (e.g., 1.0): the one which comes with Python 3.8
OS (e.g., Linux): Linux
How you installed fairseq (pip, source): clone and installed
Python version: 3.8
CUDA/cuDNN version: 10.2
GPU models and configuration: 1 x V100S (32 GB)
| 11-24-2021 15:37:05 | 11-24-2021 15:37:05 | Hi @BakuDev! Indeed, it looks like the model overfits to the length of sentences in the training data. Try augmenting the training dataset by randomly concatenating 2 or more clips together to roughly match the length of your validation data, or add some long examples from CommonVoice.<|||||>> Hi @BakuDev! Indeed, it looks like the model overfits to the length of sentences in the training data. Try augmenting the training dataset by randomly concatenating 2 or more clips together to roughly match the length of your validation data, or add some long examples from CommonVoice.
<|||||>> Hi @BakuDev! Indeed, it looks like the model overfits to the length of sentences in the training data. Try augmenting the training dataset by randomly concatenating 2 or more clips together to roughly match the length of your validation data, or add some long examples from CommonVoice.
Forgot to mention that I get [PAD] after each predicted symbol. Maybe this can be related to described issue as well? Example:
[PAD]i[PAD]n[PAD]s[PAD]i[PAD]d[PAD]e[PAD]n[PAD]t[PAD]Ι[PAD] [PAD]s[PAD]pΓΌ[PAD]r[PAD]t[PAD] m[PAD]Ι[PAD]h[PAD]lul[PAD]un[PAD]un[PAD] [PAD]y[PAD]e[PAD]r[PAD]Ι [PAD]d[PAD]a[PAD]Δ[PAD]Δ±[PAD]l[PAD]m[PAD]as[PAD]Δ±[PAD] [PAD]s[PAD]Ι[PAD]bΙ[PAD]b[PAD] olub[PAD]
<|||||>@BakuDev in Wav2Vec2's CTC decoder the `<pad>` is also used as a special *blank token* (see the **Alignment** section about it in this article: https://distill.pub/2017/ctc/). <|||||>@anton-l ΠΡ Π½Π΅ ΡΠΎΠ³Π»Π°ΡΠΈΠ»ΠΈΡΡ Π±Ρ ΠΏΠΎΠΌΠΎΡΡ Ρ ΡΡΠΎΠΉ ΠΏΡΠΎΠ±Π»Π΅ΠΌΠΎΠΉ Π½Π° ΠΏΠ»Π°ΡΠ½ΠΎΠΉ ΠΎΡΠ½ΠΎΠ²Π΅ Π΅ΡΠ»ΠΈ Π²ΠΎΠ·ΠΌΠΎΠΆΠ½ΠΎ? ΠΡΠ»ΠΈ Π½Π°ΠΏΠΈΡΠΈΡΠ΅ ΠΊΡΠ΄Π° ΠΏΠΈΡΠ°ΡΡ Π² ΠΠ‘, ΡΠ²ΡΠΆΡΡΡ Π΄Π»Ρ ΠΎΠ±ΡΡΠΆΠ΄Π΅Π½ΠΈΡ.<|||||>@BakuDev feel free to ask any specific questions about training strategies (i.e. not necessarily about the `transformers` library) on our [forums](https://discuss.huggingface.co/) or on [Discord](https://hf.co/join/discord), someone from the community will gladly help you out :)
We also have a paid support program which you might be interested in: https://huggingface.co/support |
transformers | 14,517 | closed | Update versions.yml format | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-24-2021 15:25:52 | 11-24-2021 15:25:52 | |
transformers | 14,516 | closed | Fix typo in toctree | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-24-2021 14:50:48 | 11-24-2021 14:50:48 | No worries :) |
transformers | 14,515 | closed | Fix feature extraction utils import | null | 11-24-2021 13:59:12 | 11-24-2021 13:59:12 | |
transformers | 14,514 | closed | LayoutLMv2FeatureExtractor now supports non-English languages when applying Tesseract OCR. | # What does this PR do?
This PR adds an additional `ocr_lang` argument to the \_\_init\_\_ method of LayoutLMv2FeatureExtractor which specifies which Teserract model to use when applying Tesseract OCR.
Fixes #14511
@NielsRogge | 11-24-2021 12:24:40 | 11-24-2021 12:24:40 | Can you verify the slow tests (which are not run by the CI) are passing as well?
i.e. `RUN_SLOW=yes pytest tests/test_feature_extraction_layoutlmv2.py` and `RUN_SLOW=yes pytest tests/test_feature_processor_layoutlmv2.py `<|||||>I assume you meant `RUN_SLOW=yes pytest tests/test_feature_extraction_layoutlmv2.py` and `RUN_SLOW=yes pytest tests/test_processor_layoutlmv2.py`?
I am developing on Windows, therefore my options when installing tesseract are limited to available installer versions. After moving from v5.0.0 to v4.1.0, which is the closest to v4.1.1, the version used to get bboxes in the `test_feature_extraction_layoutlmv2.py`, all of these tests ran successfully (There are no slow ones here).
As for `test_processor_layoutlmv2.py`, `test_processor_case1` failed for me, even when staying in the master branch. I was unable to reproduce the environment used to develop these tests on my computer. The reason is probably the different Tesseract version or model. The error comes from this line: https://github.com/huggingface/transformers/blob/3772af49ceba348f2c9c5bbbb7f7c12e35d2a6eb/tests/test_processor_layoutlmv2.py#L210 |
transformers | 14,513 | closed | β Define tokenizer from `tokenizers` as a `PreTrainedTokenizer` | Hi there,
I defined a simple whitespace tokenizer using the `tokenizers` library and I would like to integrate it with the transformers ecosystem. As an example, I would like to be able to use it with the `DataCollatorWithPadding`. Is there a way to easily (i.e., non-hacky) integrate tokenizers from `tokenizers` library and the `PreTrainedTokenizer` class?
For reference, please find below the code for the whitespace tokenizer.
Thanks a lot in advance for your help.
Best,
Pietro
```python
class WordTokenizer: # <- Maybe subclassing here?
def __init__(self, max_vocab_size=30_000, unk_token="[UNK]", pad_token="[PAD]"):
self.max_vocab_size = max_vocab_size
self.unk_token = unk_token
self.pad_token = pad_token
self.tokenizer, self.trainer = self._build_tokenizer()
os.environ["TOKENIZERS_PARALLELISM"] = "true"
def _build_tokenizer(self):
tokenizer = Tokenizer(WordLevel(unk_token=self.unk_token))
tokenizer.normalizer = BertNormalizer()
tokenizer.pre_tokenizer = Sequence([Digits(), Punctuation(), WhitespaceSplit()])
trainer = WordLevelTrainer(vocab_size=self.max_vocab_size, special_tokens=[self.pad_token, self.unk_token])
return tokenizer, trainer
def __call__(self, text_column, batch):
return {"input_ids": [enc.ids for enc in self.tokenizer.encode_batch(batch[text_column])]}
@staticmethod
def _batch_iterator(hf_dataset, batch_size, text_column):
for i in range(0, len(hf_dataset), batch_size):
yield hf_dataset[i : i + batch_size][text_column]
def fit(self, hf_dataset, batch_size=1_000, text_column="text"):
self.tokenizer.train_from_iterator(
self._batch_iterator(hf_dataset, batch_size, text_column), trainer=self.trainer, length=len(hf_dataset)
)
self.vocab_size = self.tokenizer.get_vocab_size()
``` | 11-24-2021 11:22:22 | 11-24-2021 11:22:22 | Hello, have you taken a look at the following documentation? https://huggingface.co/transformers/fast_tokenizers.html
It showcases how to handle tokenizers from the tokenizer library within `transformers`. Let me know if it helps!<|||||>Hi @LysandreJik,
Thanks a lot for your swift reply.
That's exactly what I was looking for. It's a shame I did not get it before asking (I even tried to write my own way of subclassing `PreTrainedTokenizer` π !).
Once again, really thanks a lot for your help!
Best,
Pietro<|||||>Hi @LysandreJik,
Just as feedback, running the example in the doc, I noticed that the special tokens are not directly transferred from the `Tokenizer` to the `PreTrainedTokenizerFast` (e.g., `unk_token`, `pad_token`).
I hope this can be useful.
Best,
Pietro<|||||>Thanks for the heads-up, pinging @SaulLu and @sgugger for knowledge<|||||>Thanks for the feedback @pietrolesci ! :hugs:
It makes me think that maybe we should explain this point in the documentation shared by LysandreJik because indeed `PreTrainedTokenizer` has no way to automatically know which tokens of the tokenizer correspond to the `unk_token`, `cls_token` etc.
But if you ever see an automatic way to do it, I'd be really happy to discuss it!<|||||>Hi @SaulLu,
I agree with you that it's non-trivial to do that. I can share my **big** hack below. For context, I want to define a simple WhiteSpace tokenizer. My hack is manually creating a `special_token_map` on the original tokenizer. The challenge is that even the underlying tokenizer does not store the named special tokens (apart from the `unk_token` which is available in `tokenizer.model`).
I hope this helps.
Best,
Pietro
```python
class WordTokenizer(PreTrainedTokenizerFast):
def __init__(self, **kwargs):
self._tokenizer, self._trainer = self._build_tokenizer(**kwargs)
os.environ["TOKENIZERS_PARALLELISM"] = "true"
def _build_tokenizer(self, **kwargs):
pad_token = kwargs.get("pad_token", "[PAD]")
unk_token = kwargs.get("unk_token", "[UNK]")
max_vocab_size = kwargs.get("max_vocab_size", 50_000)
tokenizer = Tokenizer(WordLevel(unk_token=unk_token))
tokenizer.normalizer = BertNormalizer()
tokenizer.pre_tokenizer = Sequence([Digits(), Punctuation(), WhitespaceSplit()])
trainer = WordLevelTrainer(
vocab_size=max_vocab_size,
special_tokens=[pad_token, unk_token],
)
tokenizer.special_tokens_map = {"pad_token": pad_token, "unk_token": unk_token}
return tokenizer, trainer
@staticmethod
def _batch_iterator(hf_dataset, batch_size, text_column):
for i in range(0, len(hf_dataset), batch_size):
yield hf_dataset[i : i + batch_size][text_column]
def fit(self, hf_dataset, batch_size=1_000, text_column="text"):
self._tokenizer.train_from_iterator(
self._batch_iterator(hf_dataset, batch_size, text_column),
trainer=self._trainer,
length=len(hf_dataset),
)
super().__init__(tokenizer_object=self._tokenizer)
setattr(self, "model_input_names", ["input_ids"])
for k, v in self._tokenizer.special_tokens_map.items():
setattr(self, k, v)
```<|||||>Thank you very much for your answer @pietrolesci. I'm glad to read your solution, it's always very interesting to see how you use the libraries and what difficulties you're facing! |
transformers | 14,512 | closed | Doc new front github actions | New doc builder.
Before merging, will only need to change the branch name to which the script pushes. | 11-24-2021 10:55:32 | 11-24-2021 10:55:32 | Doesn't have to be part of this PR. making a note here that:
needs to be updated with each new build/release
https://github.com/huggingface/transformers/blob/765f6a4f1728050e487ba93d51512672ef3d6c4d/docs/source/versions.yml#L1-L5 just as this list is updated currently
https://github.com/huggingface/transformers/blob/c6c075544d95940b086c2aaae46e114a0e3b9ab2/.circleci/deploy.sh#L81-L84 |
transformers | 14,511 | closed | LayoutXLMProcessor applies the english Tesseract model | # π Feature request
LayoutXLMProcessor.\_\_call\_\_ should support a language argument for Tesseract OCR
## Motivation
LayoutXLM is a multilingual version of the successful LayoutLMv2 model. The main reason to use it over LayoutLMV2 is to handle different languages, yet the current API does not allow specifying the language to be used in apply_tesseract.
## Your contribution
I could submit a PR but I am not that familiar with the Transformers library to suggest the best place to add the lang argument.
| 11-24-2021 10:39:34 | 11-24-2021 10:39:34 | Hi,
Great point, thanks for the feature request.
It would be rather straightforward: one should just add a lang parameter to [this line](https://github.com/huggingface/transformers/blob/956a483173e77ebf655ca9636a5f7b6ef010b307/src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py#L54).
Can you open a PR for this? |
transformers | 14,510 | closed | bidirectional training for GPT2 | # π Feature request
May I ask, if there is possible to fine-tune GPT2 in bidirectional training and bidirectional training together?
Like the UniLM: https://github.com/microsoft/unilm/tree/master/unilm-v1
## Motivation
GPT2 is widely used in text generation in unidirectional training (left-to-right). Could we use GPT2 to process context in bidirectional, while generating text in unidirectional.
Thanks for any feedback and really appreciate for any tips. | 11-24-2021 10:14:36 | 11-24-2021 10:14:36 | Hi,
I've answered your question on the [forum](https://discuss.huggingface.co/t/fine-tuning-gpt2-for-bidirectional-training/12155/4).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,509 | closed | Converted TF model cannot generate line breaks anymore | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- TensorFlow: @Rocketknight1
Library:
- Text generation: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
GPT-2
The problem arises when using:
* [X] the official example scripts: (give details below)
When using the command [transformers-cli convert](https://huggingface.co/transformers/converting_tensorflow_models.html)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
My dataset is structured like
```
[WP]Input data[RESPONSE]Output data
Output 1
Output 2
Output 3
...
<|endoftext|>
```
## To reproduce
Steps to reproduce the behavior:
1. I converted my Tensorflow model to Pytorch by running `transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json`
2. The resulting "pytorch_model.bin" is completely unable to generate line breaks, my specific application needs them to operate correctly or the output will not make any sense at all.
3. For simplicity sake, let's say the Tensorflow model output looks like this:
```
123
456
789
```
The converted "pytorch_model.bin" output looks like:
`123456789`
There's no way to untangle this mess, not even manually, fixes or suggestions are most appreciated.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The Pytorch converted model output should contain line breaks just like the Tensorflow source model.
Expected output:
```
123
456
789
```
Incorrect output:
`123456789` | 11-24-2021 09:47:26 | 11-24-2021 09:47:26 | Hi, this is an interesting issue, and I suspect it doesn't have much to do with either TF or PyTorch. The reason is that models like GPT2 output integer token IDs, and if you're getting the same text either way then it likely means your models are outputting the same tokens. The problem is therefore probably that your newline characters aren't getting printed properly. I suggest comparing both models to confirm they're outputting the same token IDs, and if so, following the code that decodes and prints those IDs as text to identify where the issue has arisen.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @MaxGodTier,
How did you train your model? <|||||>@patrickvonplaten
My model has been trained with https://github.com/minimaxir/gpt-2-simple
I create a huge text file (ie. dataset.txt) structured like this:
```
[WP]Input data X[RESPONSE]Output data X
Output X1
Output X2
...
<|endoftext|>
[WP]Input data Y[RESPONSE]Output data Y
Output Y1
Output Y2
...
<|endoftext|>
```
Then I preencode and compress the dataset like this:
```
import gpt_2_simple as gpt2
import os
import requests
file_name = "dataset.txt"
gpt2.encode_dataset(file_name)
```
Now I have "dataset.npz", I download a pretrained model:
```
model_name = "355M"
if not os.path.isdir(os.path.join("models", model_name)):
print(f"Downloading {model_name} model...")
gpt2.download_gpt2(model_name=model_name) # model is saved into current directory under /models/355M/
```
Then finetune it:
```
file_name = "dataset.npz"
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
file_name,
model_name=model_name,
steps=250000) # steps is max number of training steps
gpt2.generate(sess)
```
After training, the Tensorflow model can generate line breaks, but if I convert it to Pytorch with `transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json` the resulting "pytorch_model.bin" is completely unable to generate line breaks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for bumping this, the problem still persists.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @MaxGodTier,
Sorry to be so incredibly slow here. I sadly won't find the time to look further into this. Especially since the model was trained with a library that is different to `transformers`. |
transformers | 14,508 | closed | add cache_dir for tokenizer verification loading | When loading a pretrained tokenizer, a verification is done to ensure
that the actual tokenizer class matches the class it was called from.
If the tokenizer is absent, its config file is loaded from the repo.
However, the cache_dir for downloading is not provided, which leads to
ignoring of the user-specified cache_dir, storing files in several
places and and may result in incorrect warnings when the default
cache_dir is unreachsble.
See Issue #14138 for more details.
This commit fixes that.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #14138 Propagates user-specified cache_dir to tokenizer download during the tokenizer class check
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@n1t0, @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-24-2021 09:17:05 | 11-24-2021 09:17:05 | |
transformers | 14,507 | closed | Distributed Training with Triplet Loss and DistilRoberta encoder | I wasn't sure if this issue is suitable as a Bug Report, so I copied the template only, so that its readable
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Linux
- Python version: 3.6.2
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): --
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
Hello @LysandreJik - I'm using 'distilroberta-base', so I wasn't sure whom I should ping, thanks for help! I was also using Trainer library, so I allowed myself to tag @sgugger
## Information
Model I am using: DistilRoberta
The problem arises when using:
* distributed training along with triplet loss training scheme
The tasks I am working on is:
* Semantic Search using Triplet Loss and Transformer Encoder
Note: when using single GPU without Distributed training, the training normally progresses and converges, the model is just training very slowly, hence I'm trying to implement distributed training
There were two errors I experienced, using Trainer API:
First, related to flagging in autograd one variable ready two times:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons:
1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 99 with name backbone.encoder.encoder.layer.5.output.LayerNorm.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
```
Second, related to in place modification of tensor:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [4, 447]] is at version 3; expected version 2 instead.
Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
backtrace:
File "[...]/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 132, in forward
token_type_embeddings = self.token_type_embeddings(token_type_ids)
File "[...]/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "[...]/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "[...]/lib/python3.6/site-packages/torch/nn/functional.py", line 2043, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
```
## To reproduce
Currently I was able to reproduce the second error outside of Trainer API, still working on the first one (sorry for some hacky parts, like dataset definition and iteration, I hope it will still be understandable - just wanted to create something reproducible quickly):
```python
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.multiprocessing as mp
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from transformers import AutoConfig, AutoModel
def run_training(rank):
torch.autograd.set_detect_anomaly(True)
torch.distributed.init_process_group(backend="nccl", rank=rank)
device = torch.device('cuda', rank)
data = [{'sent1': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)},
'sent2': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)},
'sent3': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)}
}]*8
train_sampler = DistributedSampler(
data,
num_replicas=torch.cuda.device_count(),
rank=rank,
seed=44,
)
dll = DataLoader(
data,
batch_size=1,
sampler=train_sampler,
drop_last=False,
num_workers=2,
pin_memory=True,
)
config = AutoConfig.from_pretrained('distilroberta-base')
model = AutoModel.from_pretrained('distilroberta-base', config=config, add_pooling_layer=False)
model.to(device)
model.gradient_checkpointing_enable()
model = nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank, find_unused_parameters=False)
optimizer = torch.optim.Adam(model.parameters())
metric = lambda x,y: 1.0 - F.cosine_similarity(x, y)
criterion = nn.TripletMarginWithDistanceLoss(distance_function=metric, margin=0.2, reduction='none')
for n, b in enumerate(dll):
print(n)
model.zero_grad()
sent1 = {k:v.squeeze(0) for k,v in b['sent1'].items()}
sent2 = {k:v.squeeze(0) for k,v in b['sent2'].items()}
sent3 = {k:v.squeeze(0) for k,v in b['sent3'].items()}
emb1 = model(**sent1)[0][:, 0, :]
emb2 = model(**sent2)[0][:, 0, :]
emb3 = model(**sent3)[0][:, 0, :]
losses = criterion(emb1, emb2, emb3)
loss = losses.mean()
loss.backward()
optimizer.step()
print('Model device: {}, loss device: {}, loss: {}'.format(model.device, loss.device, loss))
def main():
world_size = torch.cuda.device_count()
os.environ["MASTER_PORT"] = '1234'
os.environ["MASTER_ADDR"] = '127.0.0.1'
os.environ["WORLD_SIZE"] = str(world_size)
mp.spawn(run_training,
nprocs=world_size,
join=True)
if __name__ == "__main__":
main()
```
## Expected behavior
Training without errors
| 11-24-2021 08:49:50 | 11-24-2021 08:49:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,506 | closed | RuntimeError: The expanded size of the tensor (20) must match the existing size (101) at non-singleton dimension 0. Target sizes: [20]. Tensor sizes: [101] | My goal is to be able to pass 100 (or 300) tokens (or some other number) as input and get back at most 16 additional tokens (or some other number I can control).
Code is strait forward:
```
vocab_size = model.config.vocab_size
input_ids = torch.randint(vocab_size, (1, 300), dtype=torch.long, device=device)
model.generate(
input_ids,
language_ids=language_ids,
do_sample=False,
num_beams=num_beams,
max_length=max_length,
repetition_penalty=1.6,
pad_token_id=tokenizer.eos_token_id,
num_return_sequences=num_sequences
)
```
Can you elaborate on the semantics of what "max_length" refers to?
Per the documentation:
https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig
**Parameters for sequence generation**
> max_length (int, optional, defaults to 20) β Maximum length that will be used by default in the generate method of the model.
If I am doing something wrong, please point me to how to control the behavior in the correct way π
Side note:
I also played with the model config (so many tests and various messages):
```
model.config.max_length = 16 # other numbers were tried as well
```
Depending on the setup I could also get a message like this (same `generate` call as above):
> Input length of input_ids is 300, but ``max_length`` is set to 20.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length`` | 11-24-2021 07:45:13 | 11-24-2021 07:45:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,505 | closed | FillMaskPipeline assumes model return dict but return_dict is not set | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Linux
- Python version: 3.6.13
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@Narsil
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert, Roberta, Deberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I came across this error
`'tuple' object does not support item assignment`
when using `nlpaug` package ([link](https://github.com/makcedward/nlpaug)) for data augmentation. The code snippet is
```Python
import nlpaug.augmenter.word as naw
text = 'The chef cook the meal.'
aug = naw.ContextualWordEmbsAug(
model_path=model_path,
model_type='roberta',
action="insert")
augmented_text = aug.augment(text)
```
It is easy to locate this bug by checking `transformers.pipelines.fill_mask.py` [line 90](https://github.com/huggingface/transformers/blob/956a483173e77ebf655ca9636a5f7b6ef010b307/src/transformers/pipelines/fill_mask.py#L90):
```Python
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
model_outputs["input_ids"] = model_inputs["input_ids"]
return model_outputs
```
and [line 184](https://github.com/huggingface/transformers/blob/956a483173e77ebf655ca9636a5f7b6ef010b307/src/transformers/pipelines/fill_mask.py#L184):
```Python
def _sanitize_parameters(self, top_k=None, targets=None):
postprocess_params = {}
if targets is not None:
target_ids = self.get_target_ids(targets, top_k)
postprocess_params["target_ids"] = target_ids
if top_k is not None:
postprocess_params["top_k"] = top_k
if self.tokenizer.mask_token_id is None:
raise PipelineException(
"fill-mask", self.model.base_model_prefix, "The tokenizer does not define a `mask_token`."
)
return {}, {}, postprocess_params
```
~~The `_forward` assumes `model_outputs` is a `dict` but models like Bert, Roberta assume `return_dict=None` by default; the `_sanitize_parameters` returns empty `forward_params` so `model_inputs` does not contain `{'return_dict': True}`, thus causing the error.~~
Later on I realized the issue was that my checkpoint from `model_path` had set `return_dict=False` , so I modified the description as follows:
The `_forward` assumes `model_outputs` is a `dict` but the loaded checkpoint from `model_path` may have set `return_dict=False` and output a tuple to cause the error. The `_sanitize_parameters` returns empty `forward_params` so user cannot configure this in the code, but have to modify the `return_dict` property of the checkpoint.
## Expected behavior
Please do either
- Explicitly set `self.model(**model_inputs, return_dict=True)` in `_forward` (as @Narsil suggests).
- Allow `_sanitize_parameters` to accept `**kwargs`, allow `_forward` to accept `**forward_params` and pass them to model, so that the user can modify the behavior of the loaded model.
- Provide other suggestions.
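For illustration, the first option above would be a one-line change to the `_forward` shown earlier (just a sketch of the idea, not a final patch):
```Python
def _forward(self, model_inputs):
    # force dict-style outputs so the item assignment below works even when the
    # checkpoint's config was saved with `return_dict=False`
    model_outputs = self.model(**model_inputs, return_dict=True)
    model_outputs["input_ids"] = model_inputs["input_ids"]
    return model_outputs
```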
| 11-24-2021 07:29:12 | 11-24-2021 07:29:12 | Ok, `_forward` assumes that the model will return a `BaseModelOutput`, which are Transformers-specific dicts that are necessary to support all backends (PT, TF, flax).
There won't be support for tuples since it's harder and less readable to use.
The pipelines do need to change what gets passed on to `postprocess`, since information from `preprocess` might still be important (here `input_ids`, for instance).
In addition, the pipeline can handle batching and auto padding, which is impossible to do without knowing the tensor names (since the padding value depends on the type of tensor).
AFAIK there is no downside to using `BaseModelOutput`. In general, `return_dict` does not have to be set, since it's the default of the library.
I am not sure where this information is lost in your script since you share only a very small amount of code.
Could you share a reproducible example so I can probably provide a better workaround?<|||||>> Ok, `_forward` assumes that the model will return a `BaseModelOutput`, which are Transformers-specific dicts that are necessary to support all backends (PT, TF, flax).
>
> There won't be support for tuples since it's harder and less readable to use. The pipelines do need to change what gets passed on to `postprocess`, since information from `preprocess` might still be important (here `input_ids`, for instance). In addition, the pipeline can handle batching and auto padding, which is impossible to do without knowing the tensor names (since the padding value depends on the type of tensor).
>
> AFAIK there is no downside to using `BaseModelOutput`. In general, `return_dict` does not have to be set, since it's the default of the library.
>
> I am not sure where this information is lost in your script since you share only a very small amount of code. Could you share a reproducible example so I can probably provide a better workaround?
Thanks for your reply, and sorry for wasting your time.
I realized the issue was on my side: `return_dict` was set to `False` in the provided checkpoint at `model_path`. The default of the library is indeed to return a dictionary.
I have modified the issue description to include this.<|||||>I think forcing `return_dict` to be `True` has no downsides and would prevent issues where a model has `return_dict` set to `False` before being loaded in the pipeline. I believe we would be open to an explicit setting of `return_dict` to `True` in the pipelines.<|||||>@LysandreJik do all models in all frameworks support the kwarg?
If it's not an overridable option, we could definitely change every call to force `return_dict=True`
```python
self.model(**model_inputs)
```
to
```python
self.model(**model_inputs, return_dict=True)
```
Would that work? |
transformers | 14,504 | closed | Add a new gradient regularization feature | # 🚀 Feature request
When we fine-tune a large-scale pre-trained model on a downstream task (especially with low-resource data), it can be helpful to randomly discard some of the gradients, as introduced in this [paper](https://aclanthology.org/2021.emnlp-main.749.pdf).
Therefore, I wonder if I could add this simple feature to transformers?
## Motivation
Add a simple regularization feature to address the mismatch between large models and small training data.
## Your contribution
I implemented the feature by adding the following code to the `step()` function of the optimizer.
```python
# inside the optimizer's `step()`, right after the parameter gradient `grad` is read
# (needs `from torch.distributions import Bernoulli`; `self.reserve_p` is the keep probability)
grad_mask = Bernoulli(grad.new_full(size=grad.size(), fill_value=self.reserve_p))
grad *= grad_mask.sample() / self.reserve_p
```
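For reference, here is a minimal sketch of the same masking applied in a custom PyTorch training loop instead of inside the optimizer (the helper name and the `reserve_p` value are only illustrative):
```python
import torch
from torch.distributions import Bernoulli

def drop_gradients(model: torch.nn.Module, reserve_p: float = 0.3) -> None:
    """Keep each gradient entry with probability `reserve_p` and rescale the survivors."""
    for param in model.parameters():
        if param.grad is None:
            continue
        mask = Bernoulli(param.grad.new_full(param.grad.size(), reserve_p)).sample()
        param.grad.mul_(mask).div_(reserve_p)

# usage in a standard loop: loss.backward(); drop_gradients(model, reserve_p=0.3); optimizer.step()
```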
| 11-24-2021 06:24:31 | 11-24-2021 06:24:31 | Maybe of interest to @sgugger or @Rocketknight1 :)<|||||>It seems like an interesting paper, but there are **a lot** of papers with new training recipes like this, and I don't think we can implement them all until they become very famous and widely-used. Also, this particular method should be relatively easy for users to implement in their custom training loops or TF `train_step` code, so I think we probably would not be able to accept this PR right now (although we definitely appreciate the enthusiasm!). @sgugger do you agree?<|||||> I agree this is out of scope of the `Trainer` for now, unless it becomes a widely used technique. In the meantime, using our [`Accelerate`](https://github.com/huggingface/accelerate) library lets the user write their custom training loop where they can implement this feature easily :-)<|||||>Thanks so much :) |
transformers | 14,503 | closed | Add GPTJForQuestionAnswering | # What does this PR do?
- Add `GPTJForQuestionAnswering` class for GPTJ upstream and QA downstream task
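For reviewers, a rough usage sketch of the new head (the checkpoint name is only illustrative; like the other `*ForQuestionAnswering` heads it returns `start_logits`/`end_logits`, and the QA head here is untrained until fine-tuned):
```python
import torch
from transformers import AutoTokenizer, GPTJForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")  # illustrative (large) checkpoint
model = GPTJForQuestionAnswering.from_pretrained("EleutherAI/gpt-j-6B")

question, context = "Who wrote Hamlet?", "Hamlet was written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())  # predicted answer span start
end = int(outputs.end_logits.argmax())      # predicted answer span end
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```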
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @LysandreJik
@sgugger, @patil-suraj
| 11-23-2021 22:20:48 | 11-23-2021 22:20:48 | Hi, @sgugger and @patrickvonplaten again.
Like the https://github.com/huggingface/transformers/pull/13290 PR I made before, I added a question answering class for GPT-J and opened this PR for it.
If you have time, I'd appreciate your review and feedback on it.<|||||>This seems good to me - @patil-suraj, do you mind taking a look at this?<|||||>Looks good to me as well! @patil-suraj - could you also take a quick look? :-)<|||||>Thanks again for your PR! |
transformers | 14,502 | closed | DeBERTa-v3 does not preserve spaces before/after additional special tokens in convert_tokens_to_string output | ## Environment info
- `transformers` version: 4.12.5
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): `microsoft/deberta-v3-small`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Initialize a DeBERTa-v3 tokenizer with `additional_special_tokens`.
2. Tokenize some text with `tokenize` that contains one or more of those special tokens.
3. Attempt to convert the tokens to a string with `convert_tokens_to_string`
4. DeBERTa-v3 does not include a space before/after the special token in the resulting string. BERT (and earlier versions of DeBERTa) do.
```python
from transformers import AutoTokenizer, AutoModel
special_tokens = ["<SPECIAL>"]
text = "some text with an additional special token <SPECIAL>"
# BERT
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", additional_special_tokens=special_tokens)
print(tokenizer.convert_tokens_to_string(tokenizer.tokenize(text)))
# => some text with an additional special token <SPECIAL>
# DeBERTa (original)
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base", additional_special_tokens=special_tokens)
print(tokenizer.convert_tokens_to_string(tokenizer.tokenize(text)))
# => some text with an additional special token <SPECIAL>
# DeBERTa (v3)
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small", additional_special_tokens=special_tokens)
print(tokenizer.convert_tokens_to_string(tokenizer.tokenize(text)))
# => some text with an additional special token<SPECIAL>
```
## Expected behavior
I expect that spaces before/after any special tokens added with `additional_special_tokens` will be preserved when calling `tokenizer.convert_tokens_to_string(tokenizer.tokenize(text))`.
| 11-23-2021 17:21:32 | 11-23-2021 17:21:32 | Sorry for the delay in answering this, pinging @SaulLu so she can take a look when she has the time :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik @SaulLu This still happens on the latest version of Transformers and with the latest version of DeBERTa-v3, so I am commenting to keep it open.<|||||>Thank you very much for the detailed issue!
Indeed, you have put your finger on an inconsistency: what is happening is that the slow and fast tokenizers of DeBERTa (original) do not behave in the same way:
```python
tokenizer = AutoTokenizer.from_pretrained(
"microsoft/deberta-base",
additional_special_tokens=special_tokens,
use_fast=False
)
print(f"Output with {type(tokenizer)}:\n", tokenizer.convert_tokens_to_string(tokenizer.tokenize(text)))
# => Output with <class 'transformers.models.deberta.tokenization_deberta.DebertaTokenizer'>:
# some text with an additional special token<SPECIAL>
# DeBERTa (original) fast
tokenizer = AutoTokenizer.from_pretrained(
"microsoft/deberta-base",
additional_special_tokens=special_tokens,
use_fast=True
)
print(f"Output with {type(tokenizer)}:\n", tokenizer.convert_tokens_to_string(tokenizer.tokenize(text)))
# => Output with <class 'transformers.models.deberta.tokenization_deberta_fast.DebertaTokenizerFast'>:
# some text with an additional special token <SPECIAL>
```
As a result, the issue seems more linked to the workflow of the slow tokenizers.
However, finding the right way to fix the problem is less obvious because:
- `convert_tokens_to_string` is used in `_decode` of `PreTrainedTokenizer` (the base class of all slow tokenizers)
- `DebertaTokenizer` (original) inherits from `GPT2Tokenizer` where `convert_tokens_to_string` is defined
- `DebertaV2Tokenizer` uses a different strategy than `GPT2Tokenizer` to implement `convert_tokens_to_string`
To get a broader view of the problem, could you share with us what your use case is for this command (what do you want to see with it? Is it manual work? In production?)?<|||||>Thanks for the detailed response @SaulLu!
I have a task where I need to add special tokens to the text to introduce some structure. A common use case of this is the "marker tokens" used in named relation extraction. A simplified example is:
```python
text = "<ORG> Apple </ORG> is looking at buying <GPE> U.K. </GPE> startup for <MONEY> $1 billion </MONEY>"
```
Ideally, we could add all these tokens as `additional_special_tokens` so they don't get split. Indeed, it works fine with BERT and the original DeBERTa, so I was curious as to why it doesn't work with DeBERTa V3.<|||||>Thank you very much for your answer! Very interesting use case!
And in particular, why in this use case do you need to use `tokenizer.convert_tokens_to_string(tokenizer.tokenize(text))`?
For DeBERTa (original and V3), I guess the `tokenizer.decode(tokenizer.encode(text))` command should give the result you were expecting initially. :blush: <|||||>Ahhh, `tokenizer.decode(tokenizer.encode(text))` does work! And it works for BERT as well.
There was no specific reason to use `convert_tokens_to_string`, I just thought that would be the correct method to use! Thanks for the tip with `tokenizer.decode(tokenizer.encode(text))`<|||||>Actually, I now remember why I wanted to use `convert_tokens_to_string`. Consider an autoregressive decoder generating some output token by token, where that output may include some special tokens. I would like to recover a string from that output which maintains the expected spaces around the special tokens. Here is a simplified example:
```python
special_tokens = ["<DISEASE>", "</DISEASE>", "<DRUG>", "</DRUG>"]
text = "<DISEASE> Anaphylaxis </DISEASE> to <DRUG> cisplatin </DRUG> is an infrequent life-threatening complication"
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small", additional_special_tokens=special_tokens)
# Tokenize the text to mimic what a decoder would have generated, token-by-token
decoder_output = tokenizer.tokenize(text)
print(decoder_output)
# => ['<DISEASE>', '▁Ana', 'phyl', 'axis', '</DISEASE>', '▁to', '<DRUG>', '▁cisplatin', '</DRUG>', '▁is', '▁an', '▁infrequent', '▁life', '-', 'threatening', '▁complication']
# Try to go backwards
print(tokenizer.convert_tokens_to_string(decoder_output))
# => <DISEASE> Anaphylaxis</DISEASE> to<DRUG> cisplatin</DRUG> is an infrequent life-threatening complication
```
Which doesn't produce the correct spacing. I can solve that using the `decode(encode())` strategy
```python
print(tokenizer.decode(tokenizer.encode(tokenizer.convert_tokens_to_string(decoder_output), add_special_tokens=False)))
# => <DISEASE> Anaphylaxis </DISEASE> to <DRUG> cisplatin </DRUG> is an infrequent life-threatening complication
```
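Wrapped up, the same workaround as a small helper (the function name is just illustrative):
```python
def detokenize(tokenizer, tokens):
    """Rebuild a string from tokens while keeping spaces around added special tokens."""
    text = tokenizer.convert_tokens_to_string(tokens)
    return tokenizer.decode(tokenizer.encode(text, add_special_tokens=False))

print(detokenize(tokenizer, decoder_output))
# => <DISEASE> Anaphylaxis </DISEASE> to <DRUG> cisplatin </DRUG> is an infrequent life-threatening complication
```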
I guess the only downside is that you have to call 3 (!) `tokenizer` methods to get the job done (`decode`, `encode` and `convert_tokens_to_string`).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,501 | closed | [TAPAS] Tiny fix | # What does this PR do?
This PR includes a tiny fix for TAPAS. Namely, when `config.select_one_column` is set to `False`, the model should not recompute the token logits.
Relevant to #13393 | 11-23-2021 16:30:23 | 11-23-2021 16:30:23 | Can you include a test to ensure the behavior doesn't happen in the future?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,500 | closed | (TF) InternalError: Multiple CPU devices | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-5.4.0-90-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.3.4 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
@patrickvonplaten @Rocketknight1
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. instantiate `TFAutoModelForSequenceClassification.from_pretrained("allenai/longformer-base-4096")`
2. compile the model under `tf.distribute.experimental.MultiWorkerMirroredStrategy` strategy
3. run `fit()`
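For completeness, a rough sketch of this setup (the dataset, optimizer and loss below are placeholders, not my exact script):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("allenai/longformer-base-4096")
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(train_dataset, epochs=1)  # `train_dataset` is a placeholder tf.data.Dataset of (features, labels)
```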
```
Traceback (most recent call last):
File "training.py", line 20, in <module>
classifier.fit()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/training.py", line 27, in fit
self._fit()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/_mixins.py", line 50, in _fit
self._fit_model()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/_mixins.py", line 430, in _fit_model
class_weight=self.class_weight_
File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/opt/miniconda/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InternalError: Multiple CPU devices [/job:worker/replica:0/task:0/device:GPU:7,/job:worker/replica:0/task:0/device:CPU:0,/job:localhost/replica:0/task:0/device:CPU:0] [Op:__inference_train_function_800368]
```
## Expected behavior
I would expect the model to begin training; however, it errors with the traceback above. I have tested this same script with `TFDistilBertForSequenceClassification` and the model successfully trains, so I'm kinda stuck here.
EDIT: This isn't running locally on my machine, but in Azure on the [A100 VM series](https://docs.microsoft.com/en-us/azure/virtual-machines/nda100-v4-series)
| 11-23-2021 16:20:57 | 11-23-2021 16:20:57 | This is unusual - is this issue unique to Hugging Face models? Does it still occur with a basic Keras model? <|||||>Hi @Rocketknight1! I haven't tried with basic Keras models. I can try that if you think it would help, however, `TFDistilBertForSequenceClassification` trained successfully, so I suspect it is something specific to `Longformer`<|||||>So I traced it to [here](https://github.com/tensorflow/tensorflow/blob/b1b76e24284d55156ad4e12fb847bfbc637b9a20/tensorflow/compiler/jit/device_util.cc#L173-L176). Must be some issue with the JIT compilation. I'll test some stuff out.<|||||>Still testing stuff, but I didn't pay close attention when I ran the cli tool to get system info. I'm actually using a docker container on the A100 VMs that is based on Debian + NVIDIA (cudnn 8/ cuda 11.2).
The `tensorflow` version is actually 2.7.*
The `torch` version is actually 1.10.0<|||||>[this](https://github.com/tensorflow/tensorflow/issues/53192) seems suspiciously related as they're having JIT issues on the A100. I think in my case though, TF isn't sure which device to use.<|||||>I was able to train the model successfully in the end by disabling XLA JIT compilation with `ENV TF_XLA_FLAGS="--tf_xla_auto_jit=-1"`<|||||>Ok, so it works for a single epoch. When running more than 1 epoch, the first epoch succeeds and upon beginning the second epoch, we get this (first epoch logging included here for completeness).
```
1/232 [..............................] - ETA: 35:39:11 - loss: 0.5014 - Accuracy: 0.8750
2/232 [..............................] - ETA: 4:16 - loss: 0.4386 - Accuracy: 0.8125
3/232 [..............................] - ETA: 4:02 - loss: 0.3492 - Accuracy: 0.6250
4/232 [..............................] - ETA: 3:51 - loss: 0.3182 - Accuracy: 0.6250
5/232 [..............................] - ETA: 3:46 - loss: 0.2899 - Accuracy: 0.6750
6/232 [..............................] - ETA: 3:42 - loss: 0.2548 - Accuracy: 0.7083
7/232 [..............................] - ETA: 3:38 - loss: 0.2454 - Accuracy: 0.6786
8/232 [>.............................] - ETA: 3:37 - loss: 0.2363 - Accuracy: 0.6719
9/232 [>.............................] - ETA: 3:37 - loss: 0.2329 - Accuracy: 0.6389
10/232 [>.............................] - ETA: 3:36 - loss: 0.2262 - Accuracy: 0.6375
11/232 [>.............................] - ETA: 3:34 - loss: 0.2139 - Accuracy: 0.6705
12/232 [>.............................] - ETA: 3:33 - loss: 0.2021 - Accuracy: 0.6875
13/232 [>.............................] - ETA: 3:32 - loss: 0.1965 - Accuracy: 0.6923
14/232 [>.............................] - ETA: 3:30 - loss: 0.1912 - Accuracy: 0.7054
15/232 [>.............................] - ETA: 3:30 - loss: 0.1894 - Accuracy: 0.7000
16/232 [=>............................] - ETA: 3:29 - loss: 0.1802 - Accuracy: 0.7188
17/232 [=>............................] - ETA: 3:27 - loss: 0.1793 - Accuracy: 0.7132
18/232 [=>............................] - ETA: 3:25 - loss: 0.1753 - Accuracy: 0.7153
19/232 [=>............................] - ETA: 3:24 - loss: 0.1798 - Accuracy: 0.6974
20/232 [=>............................] - ETA: 3:22 - loss: 0.1771 - Accuracy: 0.7063
21/232 [=>............................] - ETA: 3:21 - loss: 0.1741 - Accuracy: 0.7083
22/232 [=>............................] - ETA: 3:20 - loss: 0.1751 - Accuracy: 0.6989
23/232 [=>............................] - ETA: 3:19 - loss: 0.1753 - Accuracy: 0.6902
24/232 [==>...........................] - ETA: 3:17 - loss: 0.1702 - Accuracy: 0.7031
25/232 [==>...........................] - ETA: 3:16 - loss: 0.1677 - Accuracy: 0.7050
26/232 [==>...........................] - ETA: 3:15 - loss: 0.1656 - Accuracy: 0.7067
27/232 [==>...........................] - ETA: 3:14 - loss: 0.1619 - Accuracy: 0.7130
28/232 [==>...........................] - ETA: 3:12 - loss: 0.1616 - Accuracy: 0.7098
29/232 [==>...........................] - ETA: 3:11 - loss: 0.1619 - Accuracy: 0.7069
30/232 [==>...........................] - ETA: 3:10 - loss: 0.1644 - Accuracy: 0.6958
31/232 [===>..........................] - ETA: 3:09 - loss: 0.1616 - Accuracy: 0.7016
32/232 [===>..........................] - ETA: 3:08 - loss: 0.1605 - Accuracy: 0.7031
33/232 [===>..........................] - ETA: 3:08 - loss: 0.1612 - Accuracy: 0.7008
34/232 [===>..........................] - ETA: 3:06 - loss: 0.1588 - Accuracy: 0.7059
35/232 [===>..........................] - ETA: 3:06 - loss: 0.1563 - Accuracy: 0.7143
36/232 [===>..........................] - ETA: 3:05 - loss: 0.1550 - Accuracy: 0.7153
37/232 [===>..........................] - ETA: 3:04 - loss: 0.1549 - Accuracy: 0.7128
38/232 [===>..........................] - ETA: 3:03 - loss: 0.1535 - Accuracy: 0.7138
39/232 [====>.........................] - ETA: 3:02 - loss: 0.1527 - Accuracy: 0.7147
40/232 [====>.........................] - ETA: 3:01 - loss: 0.1533 - Accuracy: 0.7125
41/232 [====>.........................] - ETA: 2:59 - loss: 0.1500 - Accuracy: 0.7195
42/232 [====>.........................] - ETA: 2:59 - loss: 0.1524 - Accuracy: 0.7113
43/232 [====>.........................] - ETA: 2:58 - loss: 0.1523 - Accuracy: 0.7122
44/232 [====>.........................] - ETA: 2:57 - loss: 0.1526 - Accuracy: 0.7102
45/232 [====>.........................] - ETA: 2:56 - loss: 0.1528 - Accuracy: 0.7083
46/232 [====>.........................] - ETA: 2:55 - loss: 0.1509 - Accuracy: 0.7147
47/232 [=====>........................] - ETA: 2:54 - loss: 0.1496 - Accuracy: 0.7181
48/232 [=====>........................] - ETA: 2:53 - loss: 0.1507 - Accuracy: 0.7135
49/232 [=====>........................] - ETA: 2:52 - loss: 0.1507 - Accuracy: 0.7117
50/232 [=====>........................] - ETA: 2:51 - loss: 0.1508 - Accuracy: 0.7100
51/232 [=====>........................] - ETA: 2:50 - loss: 0.1508 - Accuracy: 0.7083
52/232 [=====>........................] - ETA: 2:49 - loss: 0.1522 - Accuracy: 0.7019
53/232 [=====>........................] - ETA: 2:48 - loss: 0.1504 - Accuracy: 0.7075
54/232 [=====>........................] - ETA: 2:47 - loss: 0.1500 - Accuracy: 0.7083
55/232 [======>.......................] - ETA: 2:46 - loss: 0.1494 - Accuracy: 0.7091
56/232 [======>.......................] - ETA: 2:45 - loss: 0.1487 - Accuracy: 0.7098
57/232 [======>.......................] - ETA: 2:44 - loss: 0.1491 - Accuracy: 0.7061
58/232 [======>.......................] - ETA: 2:44 - loss: 0.1485 - Accuracy: 0.7069
59/232 [======>.......................] - ETA: 2:43 - loss: 0.1473 - Accuracy: 0.7097
60/232 [======>.......................] - ETA: 2:42 - loss: 0.1467 - Accuracy: 0.7104
61/232 [======>.......................] - ETA: 2:41 - loss: 0.1458 - Accuracy: 0.7111
62/232 [=======>......................] - ETA: 2:40 - loss: 0.1447 - Accuracy: 0.7137
63/232 [=======>......................] - ETA: 2:39 - loss: 0.1449 - Accuracy: 0.7123
64/232 [=======>......................] - ETA: 2:38 - loss: 0.1446 - Accuracy: 0.7129
65/232 [=======>......................] - ETA: 2:37 - loss: 0.1470 - Accuracy: 0.7038
66/232 [=======>......................] - ETA: 2:36 - loss: 0.1468 - Accuracy: 0.7027
67/232 [=======>......................] - ETA: 2:35 - loss: 0.1472 - Accuracy: 0.6996
68/232 [=======>......................] - ETA: 2:34 - loss: 0.1472 - Accuracy: 0.6967
69/232 [=======>......................] - ETA: 2:33 - loss: 0.1481 - Accuracy: 0.6957
70/232 [========>.....................] - ETA: 2:32 - loss: 0.1482 - Accuracy: 0.7000
71/232 [========>.....................] - ETA: 2:31 - loss: 0.1488 - Accuracy: 0.6972
72/232 [========>.....................] - ETA: 2:30 - loss: 0.1481 - Accuracy: 0.6997
73/232 [========>.....................] - ETA: 2:29 - loss: 0.1485 - Accuracy: 0.6969
74/232 [========>.....................] - ETA: 2:28 - loss: 0.1496 - Accuracy: 0.6926
75/232 [========>.....................] - ETA: 2:28 - loss: 0.1515 - Accuracy: 0.6867
76/232 [========>.....................] - ETA: 2:26 - loss: 0.1505 - Accuracy: 0.6891
77/232 [========>.....................] - ETA: 2:25 - loss: 0.1511 - Accuracy: 0.6867
78/232 [=========>....................] - ETA: 2:24 - loss: 0.1506 - Accuracy: 0.6875
79/232 [=========>....................] - ETA: 2:23 - loss: 0.1513 - Accuracy: 0.6835
80/232 [=========>....................] - ETA: 2:22 - loss: 0.1516 - Accuracy: 0.6812
81/232 [=========>....................] - ETA: 2:21 - loss: 0.1514 - Accuracy: 0.6806
82/232 [=========>....................] - ETA: 2:20 - loss: 0.1513 - Accuracy: 0.6799
83/232 [=========>....................] - ETA: 2:20 - loss: 0.1507 - Accuracy: 0.6822
84/232 [=========>....................] - ETA: 2:19 - loss: 0.1512 - Accuracy: 0.6786
85/232 [=========>....................] - ETA: 2:18 - loss: 0.1515 - Accuracy: 0.6750
86/232 [==========>...................] - ETA: 2:17 - loss: 0.1512 - Accuracy: 0.6744
87/232 [==========>...................] - ETA: 2:16 - loss: 0.1509 - Accuracy: 0.6753
88/232 [==========>...................] - ETA: 2:15 - loss: 0.1506 - Accuracy: 0.6776
89/232 [==========>...................] - ETA: 2:14 - loss: 0.1505 - Accuracy: 0.6784
90/232 [==========>...................] - ETA: 2:13 - loss: 0.1501 - Accuracy: 0.6806
91/232 [==========>...................] - ETA: 2:12 - loss: 0.1496 - Accuracy: 0.6827
92/232 [==========>...................] - ETA: 2:11 - loss: 0.1501 - Accuracy: 0.6793
93/232 [===========>..................] - ETA: 2:10 - loss: 0.1495 - Accuracy: 0.6815
94/232 [===========>..................] - ETA: 2:09 - loss: 0.1497 - Accuracy: 0.6795
95/232 [===========>..................] - ETA: 2:08 - loss: 0.1493 - Accuracy: 0.6803
96/232 [===========>..................] - ETA: 2:07 - loss: 0.1489 - Accuracy: 0.6810
97/232 [===========>..................] - ETA: 2:06 - loss: 0.1494 - Accuracy: 0.6791
98/232 [===========>..................] - ETA: 2:05 - loss: 0.1486 - Accuracy: 0.6811
99/232 [===========>..................] - ETA: 2:04 - loss: 0.1478 - Accuracy: 0.6831
100/232 [===========>..................] - ETA: 2:03 - loss: 0.1479 - Accuracy: 0.6825
101/232 [============>.................] - ETA: 2:02 - loss: 0.1483 - Accuracy: 0.6807
102/232 [============>.................] - ETA: 2:01 - loss: 0.1496 - Accuracy: 0.6765
103/232 [============>.................] - ETA: 2:01 - loss: 0.1492 - Accuracy: 0.6772
104/232 [============>.................] - ETA: 2:00 - loss: 0.1493 - Accuracy: 0.6767
105/232 [============>.................] - ETA: 1:59 - loss: 0.1497 - Accuracy: 0.6750
106/232 [============>.................] - ETA: 1:58 - loss: 0.1493 - Accuracy: 0.6757
107/232 [============>.................] - ETA: 1:57 - loss: 0.1503 - Accuracy: 0.6717
108/232 [============>.................] - ETA: 1:56 - loss: 0.1496 - Accuracy: 0.6736
109/232 [=============>................] - ETA: 1:55 - loss: 0.1496 - Accuracy: 0.6732
110/232 [=============>................] - ETA: 1:54 - loss: 0.1500 - Accuracy: 0.6716
111/232 [=============>................] - ETA: 1:53 - loss: 0.1496 - Accuracy: 0.6723
112/232 [=============>................] - ETA: 1:52 - loss: 0.1496 - Accuracy: 0.6719
113/232 [=============>................] - ETA: 1:51 - loss: 0.1496 - Accuracy: 0.6715
114/232 [=============>................] - ETA: 1:50 - loss: 0.1490 - Accuracy: 0.6732
115/232 [=============>................] - ETA: 1:49 - loss: 0.1490 - Accuracy: 0.6728
116/232 [==============>...............] - ETA: 1:48 - loss: 0.1486 - Accuracy: 0.6746
117/232 [==============>...............] - ETA: 1:47 - loss: 0.1484 - Accuracy: 0.6752
118/232 [==============>...............] - ETA: 1:47 - loss: 0.1480 - Accuracy: 0.6769
119/232 [==============>...............] - ETA: 1:46 - loss: 0.1472 - Accuracy: 0.6796
120/232 [==============>...............] - ETA: 1:45 - loss: 0.1469 - Accuracy: 0.6802
121/232 [==============>...............] - ETA: 1:44 - loss: 0.1471 - Accuracy: 0.6787
122/232 [==============>...............] - ETA: 1:43 - loss: 0.1468 - Accuracy: 0.6793
123/232 [==============>...............] - ETA: 1:42 - loss: 0.1462 - Accuracy: 0.6809
124/232 [===============>..............] - ETA: 1:41 - loss: 0.1459 - Accuracy: 0.6815
125/232 [===============>..............] - ETA: 1:40 - loss: 0.1456 - Accuracy: 0.6820
126/232 [===============>..............] - ETA: 1:39 - loss: 0.1450 - Accuracy: 0.6835
127/232 [===============>..............] - ETA: 1:38 - loss: 0.1452 - Accuracy: 0.6831
128/232 [===============>..............] - ETA: 1:37 - loss: 0.1450 - Accuracy: 0.6836
129/232 [===============>..............] - ETA: 1:36 - loss: 0.1454 - Accuracy: 0.6822
130/232 [===============>..............] - ETA: 1:35 - loss: 0.1452 - Accuracy: 0.6827
131/232 [===============>..............] - ETA: 1:34 - loss: 0.1449 - Accuracy: 0.6832
132/232 [================>.............] - ETA: 1:33 - loss: 0.1450 - Accuracy: 0.6828
133/232 [================>.............] - ETA: 1:32 - loss: 0.1448 - Accuracy: 0.6833
134/232 [================>.............] - ETA: 1:32 - loss: 0.1448 - Accuracy: 0.6828
135/232 [================>.............] - ETA: 1:31 - loss: 0.1446 - Accuracy: 0.6833
136/232 [================>.............] - ETA: 1:30 - loss: 0.1440 - Accuracy: 0.6847
137/232 [================>.............] - ETA: 1:29 - loss: 0.1435 - Accuracy: 0.6861
138/232 [================>.............] - ETA: 1:28 - loss: 0.1436 - Accuracy: 0.6857
139/232 [================>.............] - ETA: 1:27 - loss: 0.1433 - Accuracy: 0.6862
140/232 [=================>............] - ETA: 1:26 - loss: 0.1432 - Accuracy: 0.6866
141/232 [=================>............] - ETA: 1:25 - loss: 0.1427 - Accuracy: 0.6879
142/232 [=================>............] - ETA: 1:24 - loss: 0.1425 - Accuracy: 0.6884
143/232 [=================>............] - ETA: 1:23 - loss: 0.1423 - Accuracy: 0.6888
144/232 [=================>............] - ETA: 1:22 - loss: 0.1419 - Accuracy: 0.6901
145/232 [=================>............] - ETA: 1:21 - loss: 0.1419 - Accuracy: 0.6897
146/232 [=================>............] - ETA: 1:20 - loss: 0.1423 - Accuracy: 0.6884
147/232 [==================>...........] - ETA: 1:19 - loss: 0.1418 - Accuracy: 0.6896
148/232 [==================>...........] - ETA: 1:18 - loss: 0.1422 - Accuracy: 0.6883
149/232 [==================>...........] - ETA: 1:17 - loss: 0.1420 - Accuracy: 0.6888
150/232 [==================>...........] - ETA: 1:16 - loss: 0.1423 - Accuracy: 0.6875
151/232 [==================>...........] - ETA: 1:15 - loss: 0.1430 - Accuracy: 0.6846
152/232 [==================>...........] - ETA: 1:15 - loss: 0.1427 - Accuracy: 0.6859
153/232 [==================>...........] - ETA: 1:14 - loss: 0.1431 - Accuracy: 0.6838
154/232 [==================>...........] - ETA: 1:13 - loss: 0.1427 - Accuracy: 0.6851
155/232 [===================>..........] - ETA: 1:12 - loss: 0.1427 - Accuracy: 0.6847
156/232 [===================>..........] - ETA: 1:11 - loss: 0.1429 - Accuracy: 0.6835
157/232 [===================>..........] - ETA: 1:10 - loss: 0.1430 - Accuracy: 0.6823
158/232 [===================>..........] - ETA: 1:09 - loss: 0.1429 - Accuracy: 0.6835
159/232 [===================>..........] - ETA: 1:08 - loss: 0.1428 - Accuracy: 0.6832
160/232 [===================>..........] - ETA: 1:07 - loss: 0.1427 - Accuracy: 0.6836
161/232 [===================>..........] - ETA: 1:06 - loss: 0.1426 - Accuracy: 0.6840
162/232 [===================>..........] - ETA: 1:05 - loss: 0.1425 - Accuracy: 0.6844
163/232 [====================>.........] - ETA: 1:04 - loss: 0.1426 - Accuracy: 0.6840
164/232 [====================>.........] - ETA: 1:03 - loss: 0.1423 - Accuracy: 0.6845
165/232 [====================>.........] - ETA: 1:02 - loss: 0.1427 - Accuracy: 0.6826
166/232 [====================>.........] - ETA: 1:01 - loss: 0.1425 - Accuracy: 0.6830
167/232 [====================>.........] - ETA: 1:00 - loss: 0.1420 - Accuracy: 0.6849
168/232 [====================>.........] - ETA: 59s - loss: 0.1420 - Accuracy: 0.6845
169/232 [====================>.........] - ETA: 59s - loss: 0.1428 - Accuracy: 0.6812
170/232 [====================>.........] - ETA: 58s - loss: 0.1433 - Accuracy: 0.6794
171/232 [=====================>........] - ETA: 57s - loss: 0.1429 - Accuracy: 0.6806
172/232 [=====================>........] - ETA: 56s - loss: 0.1428 - Accuracy: 0.6802
173/232 [=====================>........] - ETA: 55s - loss: 0.1427 - Accuracy: 0.6806
174/232 [=====================>........] - ETA: 54s - loss: 0.1433 - Accuracy: 0.6782
175/232 [=====================>........] - ETA: 53s - loss: 0.1430 - Accuracy: 0.6793
176/232 [=====================>........] - ETA: 52s - loss: 0.1429 - Accuracy: 0.6790
177/232 [=====================>........] - ETA: 51s - loss: 0.1430 - Accuracy: 0.6787
178/232 [======================>.......] - ETA: 50s - loss: 0.1429 - Accuracy: 0.6791
179/232 [======================>.......] - ETA: 49s - loss: 0.1429 - Accuracy: 0.6788
180/232 [======================>.......] - ETA: 48s - loss: 0.1429 - Accuracy: 0.6785
181/232 [======================>.......] - ETA: 47s - loss: 0.1427 - Accuracy: 0.6789
182/232 [======================>.......] - ETA: 46s - loss: 0.1426 - Accuracy: 0.6793
183/232 [======================>.......] - ETA: 45s - loss: 0.1426 - Accuracy: 0.6790
184/232 [======================>.......] - ETA: 44s - loss: 0.1428 - Accuracy: 0.6780
185/232 [======================>.......] - ETA: 44s - loss: 0.1429 - Accuracy: 0.6777
186/232 [=======================>......] - ETA: 43s - loss: 0.1432 - Accuracy: 0.6761
187/232 [=======================>......] - ETA: 42s - loss: 0.1431 - Accuracy: 0.6765
188/232 [=======================>......] - ETA: 41s - loss: 0.1434 - Accuracy: 0.6749
189/232 [=======================>......] - ETA: 40s - loss: 0.1431 - Accuracy: 0.6759
190/232 [=======================>......] - ETA: 39s - loss: 0.1431 - Accuracy: 0.6757
191/232 [=======================>......] - ETA: 38s - loss: 0.1429 - Accuracy: 0.6767
192/232 [=======================>......] - ETA: 37s - loss: 0.1426 - Accuracy: 0.6777
193/232 [=======================>......] - ETA: 36s - loss: 0.1423 - Accuracy: 0.6794
194/232 [========================>.....] - ETA: 35s - loss: 0.1419 - Accuracy: 0.6811
195/232 [========================>.....] - ETA: 34s - loss: 0.1419 - Accuracy: 0.6808
196/232 [========================>.....] - ETA: 33s - loss: 0.1418 - Accuracy: 0.6811
197/232 [========================>.....] - ETA: 32s - loss: 0.1414 - Accuracy: 0.6821
198/232 [========================>.....] - ETA: 31s - loss: 0.1413 - Accuracy: 0.6824
199/232 [========================>.....] - ETA: 30s - loss: 0.1412 - Accuracy: 0.6828
200/232 [========================>.....] - ETA: 30s - loss: 0.1413 - Accuracy: 0.6825
201/232 [========================>.....] - ETA: 29s - loss: 0.1416 - Accuracy: 0.6816
202/232 [=========================>....] - ETA: 28s - loss: 0.1415 - Accuracy: 0.6819
203/232 [=========================>....] - ETA: 27s - loss: 0.1414 - Accuracy: 0.6823
204/232 [=========================>....] - ETA: 26s - loss: 0.1411 - Accuracy: 0.6832
205/232 [=========================>....] - ETA: 25s - loss: 0.1409 - Accuracy: 0.6835
206/232 [=========================>....] - ETA: 24s - loss: 0.1414 - Accuracy: 0.6820
207/232 [=========================>....] - ETA: 23s - loss: 0.1415 - Accuracy: 0.6818
208/232 [=========================>....] - ETA: 22s - loss: 0.1414 - Accuracy: 0.6821
209/232 [==========================>...] - ETA: 21s - loss: 0.1409 - Accuracy: 0.6836
210/232 [==========================>...] - ETA: 20s - loss: 0.1407 - Accuracy: 0.6839
211/232 [==========================>...] - ETA: 19s - loss: 0.1405 - Accuracy: 0.6848
212/232 [==========================>...] - ETA: 18s - loss: 0.1400 - Accuracy: 0.6863
213/232 [==========================>...] - ETA: 17s - loss: 0.1400 - Accuracy: 0.6860
214/232 [==========================>...] - ETA: 16s - loss: 0.1401 - Accuracy: 0.6857
215/232 [==========================>...] - ETA: 15s - loss: 0.1403 - Accuracy: 0.6849
216/232 [==========================>...] - ETA: 14s - loss: 0.1400 - Accuracy: 0.6858
217/232 [===========================>..] - ETA: 14s - loss: 0.1404 - Accuracy: 0.6843
218/232 [===========================>..] - ETA: 13s - loss: 0.1402 - Accuracy: 0.6846
219/232 [===========================>..] - ETA: 12s - loss: 0.1403 - Accuracy: 0.6844
220/232 [===========================>..] - ETA: 11s - loss: 0.1403 - Accuracy: 0.6841
221/232 [===========================>..] - ETA: 10s - loss: 0.1403 - Accuracy: 0.6844
222/232 [===========================>..] - ETA: 9s - loss: 0.1407 - Accuracy: 0.6824
223/232 [===========================>..] - ETA: 8s - loss: 0.1406 - Accuracy: 0.6822
224/232 [===========================>..] - ETA: 7s - loss: 0.1404 - Accuracy: 0.6830
225/232 [============================>.] - ETA: 6s - loss: 0.1404 - Accuracy: 0.6828
226/232 [============================>.] - ETA: 5s - loss: 0.1404 - Accuracy: 0.6825
227/232 [============================>.] - ETA: 4s - loss: 0.1404 - Accuracy: 0.6823
228/232 [============================>.] - ETA: 3s - loss: 0.1405 - Accuracy: 0.6815
229/232 [============================>.] - ETA: 2s - loss: 0.1406 - Accuracy: 0.6807
230/232 [============================>.] - ETA: 1s - loss: 0.1408 - Accuracy: 0.6799
231/232 [============================>.] - ETA: 0s - loss: 0.1409 - Accuracy: 0.6802
232/232 [==============================] - ETA: 0s - loss: 0.1408 - Accuracy: 0.68002021-12-08 03:38:06.835804: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:766] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Did not find a shardable source, walked to a node which is not a dataset: name: "FlatMapDataset/_9"
op: "FlatMapDataset"
input: "PrefetchDataset/_8"
attr {
key: "Targuments"
value {
list {
}
}
}
attr {
key: "_cardinality"
value {
i: -2
}
}
attr {
key: "f"
value {
func {
name: "__inference_Dataset_flat_map_slice_batch_indices_819036"
}
}
}
attr {
key: "metadata"
value {
s: "\n\021FlatMapDataset:53"
}
}
attr {
key: "output_shapes"
value {
list {
shape {
dim {
size: -1
}
}
}
}
}
attr {
key: "output_types"
value {
list {
type: DT_INT64
}
}
}
. Consider either turning off auto-sharding or switching the auto_shard_policy to DATA to shard this dataset. You can do this by creating a new `tf.data.Options()` object then setting `options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA` before applying the options object to the dataset via `dataset.with_options(options)`.
2021-12-08 03:38:06.891410: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.893979: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.907358: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.910977: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.924507: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.927559: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.939629: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:06.941576: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.108656: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.111741: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.125587: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.128333: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.129748: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.143568: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.146326: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.159566: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.162239: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.163579: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.177174: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.179804: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.193266: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.195900: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.197273: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.210907: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.213517: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.226549: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.229243: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.230606: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.244240: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.246853: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.259984: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.262666: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.264019: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.277687: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.280270: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.293432: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.296074: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.297418: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.310926: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.313478: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.326484: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.329043: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.330331: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.344024: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.346536: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.359540: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.362145: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:38:07.363468: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 03:39:50.853125: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 19176/53881 nodes to float16 precision using 992 cast(s) to float16 (excluding Const and Variable casts)
2021-12-08 03:40:12.677238: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 0/50369 nodes to float16 precision using 0 cast(s) to float16 (excluding Const and Variable casts)
232/232 [==============================] - 924s 2s/step - loss: 0.1408 - Accuracy: 0.6800 - val_loss: 0.9255 - val_Accuracy: 0.6700
Epoch 2/5
9 root error(s) found.
(0) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_7/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Cast_1/_775]]
(1) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_5/tf_longformer_for_sequence_classification/longformer/embeddings/Cumsum/_524]]
(2) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_4/tf_longformer_for_sequence_classification/longformer/Greater/_658]]
(3) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_3/tf_longformer_for_sequence_classification/longformer/Abs/_797]]
(4) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_2/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Tile_2/_2065]]
(5) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_1/tf_longformer_for_sequence_classification/longformer/Any/_741]]
(6) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[div_no_nan_1/ReadVariableOp_2/_12654]]
(7) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[cond/output/_50/_393]]
(8) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_817844]
Errors may have originated from an input operation.
Input Source operations connected to node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2:
In[0] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2/start:
In[1] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Max (defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1208)
In[2] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2/delta:
Operation defined at: (most recent call last)
>>> File "/opt/miniconda/lib/python3.7/threading.py", line 890, in _bootstrap
>>> self._bootstrap_inner()
>>>
>>> File "/opt/miniconda/lib/python3.7/threading.py", line 926, in _bootstrap_inner
>>> self.run()
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/training.py", line 860, in run_step
>>> outputs = model.train_step(data)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 796, in train_step
>>> y_pred = self(x, training=True)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 2424, in call
>>> outputs = self.longformer(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1733, in call
>>> encoder_outputs = self.encoder(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1550, in call
>>> for idx, layer_module in enumerate(self.layer):
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1555, in call
>>> layer_outputs = layer_module(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1513, in call
>>> attention_outputs = self.attention(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1485, in call
>>> self_outputs = self.self_attention(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 778, in call
>>> (
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1214, in _get_global_attn_indices
>>> is_local_index_global_attn = tf.range(max_num_global_attn_indices) < tf.expand_dims(
>>>
Function call stack:
train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function
[2021-12-08T03:40:38.420250] The experiment failed. Finalizing run...
Cleaning up all outstanding Run operations, waiting 900.0 seconds
2 items cleaning up...
Cleanup took 0.1139984130859375 seconds
Traceback (most recent call last):
File "training.py", line 20, in <module>
classifier.fit()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/training.py", line 27, in fit
self._fit()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/_mixins.py", line 50, in _fit
self._fit_model()
File "/opt/miniconda/lib/python3.7/site-packages/pml/training/_mixins.py", line 418, in _fit_model
class_weight=self.class_weight_
File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/opt/miniconda/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 9 root error(s) found.
(0) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_7/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Cast_1/_775]]
(1) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_5/tf_longformer_for_sequence_classification/longformer/embeddings/Cumsum/_524]]
(2) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_4/tf_longformer_for_sequence_classification/longformer/Greater/_658]]
(3) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_3/tf_longformer_for_sequence_classification/longformer/Abs/_797]]
(4) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_2/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Tile_2/_2065]]
(5) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[replica_1/tf_longformer_for_sequence_classification/longformer/Any/_741]]
(6) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[div_no_nan_1/ReadVariableOp_2/_12654]]
(7) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
[[cond/output/_50/_393]]
(8) INVALID_ARGUMENT: Requires start <= limit when delta > 0: 0/-2147483648
[[node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2
(defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1214)
]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_817844]
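For reference, the failing op above is the `tf.range(max_num_global_attn_indices)` call in `_get_global_attn_indices` (`modeling_tf_longformer.py:1214`), and its `limit` input is the `reduce_max` op defined at `modeling_tf_longformer.py:1208` (see `In[1] ... /self/Max` in the log). The limit value `-2147483648` is `INT32_MIN`, which is what `tf.reduce_max` returns when it reduces over an empty int32 tensor, so one plausible explanation is that one replica (here `replica_6`) received an empty batch slice, leaving no global-attention counts to reduce over. The snippet below is only a minimal sketch of that suspected failure mode — the tensor name is illustrative and not taken from the model code:

```python
import tensorflow as tf

# Hypothetical reproduction of the suspected failure mode (illustrative names):
# an empty per-row count of global-attention tokens on one replica ...
num_global_attn_tokens = tf.constant([], dtype=tf.int32)

# ... makes tf.reduce_max return the int32 identity for "max", i.e. INT32_MIN.
max_num_global_attn_indices = tf.reduce_max(num_global_attn_tokens)
print(int(max_num_global_attn_indices))  # -2147483648

# tf.range with that value as its limit raises the same error as in the log:
# "Requires start <= limit when delta > 0: 0/-2147483648"
try:
    tf.range(max_num_global_attn_indices)
except tf.errors.InvalidArgumentError as e:
    print(e)
```

If that is indeed what is happening, batching with `drop_remainder=True` (or padding the dataset so every replica gets a non-empty slice) should make the error disappear, which would be a quick way to confirm the hypothesis.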
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1485, in call
>>> self_outputs = self.self_attention(
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 64, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/engine/base_layer.py", line 1083, in __call__
>>> outputs = call_fn(inputs, *args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 92, in error_handler
>>> return fn(*args, **kwargs)
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 778, in call
>>> (
>>>
>>> File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py", line 1214, in _get_global_attn_indices
>>> is_local_index_global_attn = tf.range(max_num_global_attn_indices) < tf.expand_dims(
>>>
Input Source operations connected to node replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2:
In[0] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2/start:
In[1] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/Max (defined at /opt/miniconda/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1208)
In[2] replica_6/tf_longformer_for_sequence_classification/longformer/encoder/layer_._11/attention/self/range_2/delta:
Function call stack:
train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function -> train_function
[2021-12-08T03:40:38.708621] Finished context manager injector with Exception.
```<|||||>Any thoughts @Rocketknight1? I was able to get it to train successfully for 2 epochs if I limit the visible devices to 2 GPUs; the relevant logs are below.
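For context, here is a minimal sketch of one way to restrict TensorFlow to two GPUs before building the distribution strategy (assuming `tf.config.set_visible_devices`; exporting `CUDA_VISIBLE_DEVICES=0,1` before launch is equivalent). It is illustrative only, not necessarily the exact code used in the run below:

```
import tensorflow as tf

# Expose only the first two physical GPUs to this process.
# This must run before any GPU is initialized (i.e. before the model or strategy is created).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[:2], "GPU")

# MirroredStrategy then replicates only across the two visible devices.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
```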
```
[5 rows x 5 columns]
2021-12-08 21:15:29.463814: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-08 21:15:30.377640: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-12-08 21:15:30.377729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 38414 MB memory: -> device: 0, name: A100-SXM4-40GB, pci bus id: 0001:00:00.0, compute capability: 8.0
2021-12-08 21:15:30.380280: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-12-08 21:15:30.380309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 38414 MB memory: -> device: 1, name: A100-SXM4-40GB, pci bus id: 0002:00:00.0, compute capability: 8.0
Downloading: 100%|ββββββββββ| 694/694 [00:00<00:00, 566kB/s]
Downloading: 100%|ββββββββββ| 729M/729M [00:10<00:00, 72.5MB/s]
2021-12-08 21:15:42.678981: I tensorflow/stream_executor/cuda/cuda_blas.cc:1774] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
Some layers from the model checkpoint at allenai/longformer-base-4096 were not used when initializing TFLongformerForSequenceClassification: ['lm_head']
- This IS expected if you are initializing TFLongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFLongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFLongformerForSequenceClassification were not initialized from the model checkpoint at allenai/longformer-base-4096 and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2021-12-08 21:15:47.914719: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:766] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Did not find a shardable source, walked to a node which is not a dataset: name: "FlatMapDataset/_9"
op: "FlatMapDataset"
input: "PrefetchDataset/_8"
attr {
key: "Targuments"
value {
list {
}
}
}
attr {
key: "_cardinality"
value {
i: -2
}
}
attr {
key: "f"
value {
func {
name: "__inference_Dataset_flat_map_slice_batch_indices_59118"
}
}
}
attr {
key: "metadata"
value {
s: "\n\020FlatMapDataset:4"
}
}
attr {
key: "output_shapes"
value {
list {
shape {
dim {
size: -1
}
}
}
}
}
attr {
key: "output_types"
value {
list {
type: DT_INT64
}
}
}
. Consider either turning off auto-sharding or switching the auto_shard_policy to DATA to shard this dataset. You can do this by creating a new `tf.data.Options()` object then setting `options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA` before applying the options object to the dataset via `dataset.with_options(options)`.
2021-12-08 21:15:47.967533: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.969043: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.978416: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.980483: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.985689: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.987135: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.991900: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.993490: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.997905: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:47.999031: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.038998: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.040759: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.048322: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.049956: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.050807: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.056971: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.058550: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.064085: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.065707: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:15:48.066561: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
Epoch 1/2
WARNING:tensorflow:Efficient allreduce is not supported for 3 IndexedSlices
WARNING:tensorflow:Efficient allreduce is not supported for 3 IndexedSlices
2021-12-08 21:17:31.182628: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 10772/50331 nodes to float16 precision using 1486 cast(s) to float16 (excluding Const and Variable casts)
2021-12-08 21:17:49.083818: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 0/42321 nodes to float16 precision using 0 cast(s) to float16 (excluding Const and Variable casts)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO Bootstrap : Using eth0:10.0.0.5<0>
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO NET/IB : Using [0]mlx5_ib4:1/IB [1]mlx5_ib2:1/IB [2]mlx5_ib0:1/IB [3]mlx5_ib7:1/IB [4]mlx5_ib5:1/IB [5]mlx5_ib3:1/IB [6]mlx5_ib1:1/IB [7]mlx5_ib6:1/IB ; OOB eth0:10.0.0.5<0>
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1172 [1] NCCL INFO Using network IB
NCCL version 2.8.3+cudaCUDA_MAJOR.CUDA_MINOR
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO NCCL_IB_TIMEOUT set by environment to 16.
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO NCCL_TOPO_FILE set by environment to /opt/microsoft/ndv4-topo.xml
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO NCCL_TOPO_FILE set by environment to /opt/microsoft/ndv4-topo.xml
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Loading unnamed topology
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Loading unnamed topology
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO === System : maxWidth 252.0 totalWidth 252.0 ===
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO CPU/0 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - PCI/FFFFFF010
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - GPU/100000 (0)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + NVL[252.0] - NVS/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10100000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - GPU/200000 (1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + NVL[252.0] - NVS/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10200000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO CPU/1 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - PCI/FFFFFF020
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10300000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10400000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO CPU/2 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - PCI/FFFFFF030
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10500000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10600000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO CPU/3 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - PCI/FFFFFF040
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10700000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO + PCI[24.0] - NIC/10800000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO ==========================================
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO GPU/100000 :GPU/100000 (0/5000.000000/LOC) GPU/200000 (2/252.000000/NVL) CPU/0 (2/24.000000/PHB) CPU/1 (3/24.000000/SYS) CPU/2 (3/24.000000/SYS) CPU/3 (3/24.000000/SYS)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO GPU/200000 :GPU/100000 (2/252.000000/NVL) GPU/200000 (0/5000.000000/LOC) CPU/0 (2/24.000000/PHB) CPU/1 (3/24.000000/SYS) CPU/2 (3/24.000000/SYS) CPU/3 (3/24.000000/SYS)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Pattern 4, crossNic 0, nChannels 12, speed 21.000000/21.000000, type NVL/PIX, sameChannels 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 6 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 7 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 8 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 9 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 10 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 11 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Pattern 3, crossNic 0, nChannels 12, speed 21.000000/21.000000, type NVL/PIX, sameChannels 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 6 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 7 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 8 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 9 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 10 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 11 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Pattern 3, crossNic 0, nChannels 12, speed 42.000000/42.000000, type NVL/PIX, sameChannels 0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 6 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 7 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 8 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 9 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 10 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 11 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Attribute coll of node net not found
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO === System : maxWidth 252.0 totalWidth 252.0 ===
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO CPU/0 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - PCI/FFFFFF010
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - GPU/100000 (0)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + NVL[252.0] - NVS/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10100000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - GPU/200000 (1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + NVL[252.0] - NVS/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10200000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO CPU/1 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - PCI/FFFFFF020
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10300000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10400000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO CPU/2 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/3
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - PCI/FFFFFF030
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10500000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10600000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO CPU/3 (1/2/-1)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + SYS[5000.0] - CPU/2
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - PCI/FFFFFF040
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10700000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO + PCI[24.0] - NIC/10800000
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO ==========================================
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO GPU/100000 :GPU/100000 (0/5000.000000/LOC) GPU/200000 (2/252.000000/NVL) CPU/0 (2/24.000000/PHB) CPU/1 (3/24.000000/SYS) CPU/2 (3/24.000000/SYS) CPU/3 (3/24.000000/SYS)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO GPU/200000 :GPU/100000 (2/252.000000/NVL) GPU/200000 (0/5000.000000/LOC) CPU/0 (2/24.000000/PHB) CPU/1 (3/24.000000/SYS) CPU/2 (3/24.000000/SYS) CPU/3 (3/24.000000/SYS)
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 12, speed 21.000000/21.000000, type NVL/PIX, sameChannels 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 6 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 7 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 8 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 9 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 10 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 11 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 12, speed 21.000000/21.000000, type NVL/PIX, sameChannels 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 6 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 7 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 8 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 9 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 10 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 11 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 12, speed 42.000000/42.000000, type NVL/PIX, sameChannels 0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 0 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 1 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 2 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 3 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 4 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 5 : GPU/0 GPU/1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 6 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 7 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 8 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 9 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 10 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 11 : GPU/1 GPU/0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 0 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 12 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 1 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 13 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 2 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 14 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 3 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 15 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 4 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 16 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 5 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 17 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 6 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 18 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 7 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 19 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 8 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 20 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 9 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [2] -1/-1/-1->1->0 [3] -1/-1/-1->1->0 [4] -1/-1/-1->1->0 [5] -1/-1/-1->1->0 [6] -1/-1/-1->1->0 [7] -1/-1/-1->1->0 [8] -1/-1/-1->1->0 [9] -1/-1/-1->1->0 [10] -1/-1/-1->1->0 [11] -1/-1/-1->1->0 [12] -1/-1/-1->1->0 [13] -1/-1/-1->1->0 [14] -1/-1/-1->1->0 [15] -1/-1/-1->1->0 [16] -1/-1/-1->1->0 [17] -1/-1/-1->1->0 [18] -1/-1/-1->1->0 [19] -1/-1/-1->1->0 [20] -1/-1/-1->1->0 [21] -1/-1/-1->1->0 [22] -1/-1/-1->1->0 [23] -1/-1/-1->1->0
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 21 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 10 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 22 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 11 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Setting affinity for GPU 1 to ffff,0000ffff
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Tree 23 : -1 -> 0 -> 1/-1/-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 00/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 01/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 02/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 03/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 04/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 05/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 06/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 07/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 08/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 09/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 10/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 11/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 12/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 13/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 14/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 15/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 16/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 17/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 18/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 19/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 20/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 21/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 22/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 23/24 : 0 1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 [4] 1/-1/-1->0->-1 [5] 1/-1/-1->0->-1 [6] 1/-1/-1->0->-1 [7] 1/-1/-1->0->-1 [8] 1/-1/-1->0->-1 [9] 1/-1/-1->0->-1 [10] 1/-1/-1->0->-1 [11] 1/-1/-1->0->-1 [12] 1/-1/-1->0->-1 [13] 1/-1/-1->0->-1 [14] 1/-1/-1->0->-1 [15] 1/-1/-1->0->-1 [16] 1/-1/-1->0->-1 [17] 1/-1/-1->0->-1 [18] 1/-1/-1->0->-1 [19] 1/-1/-1->0->-1 [20] 1/-1/-1->0->-1 [21] 1/-1/-1->0->-1 [22] 1/-1/-1->0->-1 [23] 1/-1/-1->0->-1
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Setting affinity for GPU 0 to ffff,0000ffff
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 01 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 01 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 02 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 02 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 03 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 03 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 04 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 04 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 05 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 05 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 06 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 06 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 07 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 07 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 08 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 08 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 09 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 09 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 10 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 10 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 11 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 11 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 12 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 12 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 13 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 13 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 14 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 14 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 15 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 15 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 16 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 16 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 17 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 17 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 18 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 18 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 19 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 19 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 20 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 20 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 21 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 21 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 22 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 22 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Channel 23 : 0[100000] -> 1[200000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Channel 23 : 1[200000] -> 0[100000] via P2P/direct pointer/read
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Connected all rings
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO Connected all trees
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Connected all rings
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO Connected all trees
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO 24 coll channels, 32 p2p channels, 32 p2p channels per peer
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO 24 coll channels, 32 p2p channels, 32 p2p channels per peer
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1337 [0] NCCL INFO comm 0x7fca28002e00 rank 0 nranks 2 cudaDev 0 busId 100000 - Init COMPLETE
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1338 [1] NCCL INFO comm 0x7fc494002dd0 rank 1 nranks 2 cudaDev 1 busId 200000 - Init COMPLETE
0bfd6a2c57ad4d419fa42cd7f69f5160000001:160:1326 [0] NCCL INFO Launch mode Group/CGMD
1/232 [..............................] - ETA: 8:43:48 - loss: 0.5153 - Accuracy: 0.8750
2/232 [..............................] - ETA: 6:34 - loss: 0.4137 - Accuracy: 0.8125
3/232 [..............................] - ETA: 6:37 - loss: 0.3161 - Accuracy: 0.6250
4/232 [..............................] - ETA: 6:35 - loss: 0.2721 - Accuracy: 0.6250
5/232 [..............................] - ETA: 6:38 - loss: 0.2351 - Accuracy: 0.6750
6/232 [..............................] - ETA: 6:31 - loss: 0.2066 - Accuracy: 0.7083
7/232 [..............................] - ETA: 6:28 - loss: 0.1930 - Accuracy: 0.6786
8/232 [>.............................] - ETA: 6:26 - loss: 0.1812 - Accuracy: 0.6719
9/232 [>.............................] - ETA: 6:26 - loss: 0.1753 - Accuracy: 0.6389
10/232 [>.............................] - ETA: 6:23 - loss: 0.1670 - Accuracy: 0.6375
11/232 [>.............................] - ETA: 6:20 - loss: 0.1559 - Accuracy: 0.6705
12/232 [>.............................] - ETA: 6:18 - loss: 0.1467 - Accuracy: 0.6875
13/232 [>.............................] - ETA: 6:16 - loss: 0.1405 - Accuracy: 0.6923
14/232 [>.............................] - ETA: 6:16 - loss: 0.1339 - Accuracy: 0.7054
15/232 [>.............................] - ETA: 6:14 - loss: 0.1317 - Accuracy: 0.7000
16/232 [=>............................] - ETA: 6:11 - loss: 0.1250 - Accuracy: 0.7188
17/232 [=>............................] - ETA: 6:10 - loss: 0.1233 - Accuracy: 0.7132
18/232 [=>............................] - ETA: 6:07 - loss: 0.1199 - Accuracy: 0.7153
19/232 [=>............................] - ETA: 6:05 - loss: 0.1210 - Accuracy: 0.6974
20/232 [=>............................] - ETA: 6:04 - loss: 0.1190 - Accuracy: 0.7063
21/232 [=>............................] - ETA: 6:01 - loss: 0.1176 - Accuracy: 0.7083
22/232 [=>............................] - ETA: 5:59 - loss: 0.1176 - Accuracy: 0.6989
23/232 [=>............................] - ETA: 5:57 - loss: 0.1179 - Accuracy: 0.6902
24/232 [==>...........................] - ETA: 5:56 - loss: 0.1141 - Accuracy: 0.7031
25/232 [==>...........................] - ETA: 5:54 - loss: 0.1122 - Accuracy: 0.7050
26/232 [==>...........................] - ETA: 5:52 - loss: 0.1104 - Accuracy: 0.7067
27/232 [==>...........................] - ETA: 5:49 - loss: 0.1092 - Accuracy: 0.7130
28/232 [==>...........................] - ETA: 5:48 - loss: 0.1089 - Accuracy: 0.7098
29/232 [==>...........................] - ETA: 5:46 - loss: 0.1085 - Accuracy: 0.7069
30/232 [==>...........................] - ETA: 5:44 - loss: 0.1098 - Accuracy: 0.6958
31/232 [===>..........................] - ETA: 5:42 - loss: 0.1080 - Accuracy: 0.7016
32/232 [===>..........................] - ETA: 5:40 - loss: 0.1069 - Accuracy: 0.7031
33/232 [===>..........................] - ETA: 5:38 - loss: 0.1065 - Accuracy: 0.7008
34/232 [===>..........................] - ETA: 5:36 - loss: 0.1048 - Accuracy: 0.7059
35/232 [===>..........................] - ETA: 5:35 - loss: 0.1026 - Accuracy: 0.7143
36/232 [===>..........................] - ETA: 5:33 - loss: 0.1017 - Accuracy: 0.7153
37/232 [===>..........................] - ETA: 5:32 - loss: 0.1016 - Accuracy: 0.7128
38/232 [===>..........................] - ETA: 5:30 - loss: 0.1007 - Accuracy: 0.7138
39/232 [====>.........................] - ETA: 5:28 - loss: 0.0998 - Accuracy: 0.7147
40/232 [====>.........................] - ETA: 5:26 - loss: 0.0997 - Accuracy: 0.7125
41/232 [====>.........................] - ETA: 5:25 - loss: 0.0981 - Accuracy: 0.7195
42/232 [====>.........................] - ETA: 5:23 - loss: 0.0988 - Accuracy: 0.7113
43/232 [====>.........................] - ETA: 5:22 - loss: 0.0982 - Accuracy: 0.7122
44/232 [====>.........................] - ETA: 5:20 - loss: 0.0980 - Accuracy: 0.7102
45/232 [====>.........................] - ETA: 5:18 - loss: 0.0979 - Accuracy: 0.7083
46/232 [====>.........................] - ETA: 5:16 - loss: 0.0966 - Accuracy: 0.7147
47/232 [=====>........................] - ETA: 5:15 - loss: 0.0955 - Accuracy: 0.7181
48/232 [=====>........................] - ETA: 5:13 - loss: 0.0958 - Accuracy: 0.7135
49/232 [=====>........................] - ETA: 5:11 - loss: 0.0959 - Accuracy: 0.7117
50/232 [=====>........................] - ETA: 5:10 - loss: 0.0959 - Accuracy: 0.7100
51/232 [=====>........................] - ETA: 5:09 - loss: 0.0959 - Accuracy: 0.7083
52/232 [=====>........................] - ETA: 5:07 - loss: 0.0967 - Accuracy: 0.7019
53/232 [=====>........................] - ETA: 5:05 - loss: 0.0956 - Accuracy: 0.7075
54/232 [=====>........................] - ETA: 5:03 - loss: 0.0954 - Accuracy: 0.7083
55/232 [======>.......................] - ETA: 5:02 - loss: 0.0949 - Accuracy: 0.7091
56/232 [======>.......................] - ETA: 5:00 - loss: 0.0945 - Accuracy: 0.7098
57/232 [======>.......................] - ETA: 4:59 - loss: 0.0948 - Accuracy: 0.7061
58/232 [======>.......................] - ETA: 4:57 - loss: 0.0944 - Accuracy: 0.7069
59/232 [======>.......................] - ETA: 4:55 - loss: 0.0936 - Accuracy: 0.7097
60/232 [======>.......................] - ETA: 4:54 - loss: 0.0932 - Accuracy: 0.7104
61/232 [======>.......................] - ETA: 4:52 - loss: 0.0929 - Accuracy: 0.7111
62/232 [=======>......................] - ETA: 4:50 - loss: 0.0921 - Accuracy: 0.7137
63/232 [=======>......................] - ETA: 4:48 - loss: 0.0922 - Accuracy: 0.7123
64/232 [=======>......................] - ETA: 4:47 - loss: 0.0918 - Accuracy: 0.7129
65/232 [=======>......................] - ETA: 4:48 - loss: 0.0932 - Accuracy: 0.7038
66/232 [=======>......................] - ETA: 4:46 - loss: 0.0931 - Accuracy: 0.7027
67/232 [=======>......................] - ETA: 4:45 - loss: 0.0934 - Accuracy: 0.6996
68/232 [=======>......................] - ETA: 4:43 - loss: 0.0935 - Accuracy: 0.6967
69/232 [=======>......................] - ETA: 4:41 - loss: 0.0937 - Accuracy: 0.6957
70/232 [========>.....................] - ETA: 4:39 - loss: 0.0928 - Accuracy: 0.7000
71/232 [========>.....................] - ETA: 4:37 - loss: 0.0932 - Accuracy: 0.6972
72/232 [========>.....................] - ETA: 4:35 - loss: 0.0925 - Accuracy: 0.6997
73/232 [========>.....................] - ETA: 4:33 - loss: 0.0929 - Accuracy: 0.6969
74/232 [========>.....................] - ETA: 4:32 - loss: 0.0936 - Accuracy: 0.6926
75/232 [========>.....................] - ETA: 4:30 - loss: 0.0945 - Accuracy: 0.6867
76/232 [========>.....................] - ETA: 4:28 - loss: 0.0939 - Accuracy: 0.6891
77/232 [========>.....................] - ETA: 4:26 - loss: 0.0942 - Accuracy: 0.6867
78/232 [=========>....................] - ETA: 4:24 - loss: 0.0940 - Accuracy: 0.6875
79/232 [=========>....................] - ETA: 4:23 - loss: 0.0944 - Accuracy: 0.6835
80/232 [=========>....................] - ETA: 4:21 - loss: 0.0945 - Accuracy: 0.6812
81/232 [=========>....................] - ETA: 4:19 - loss: 0.0945 - Accuracy: 0.6806
82/232 [=========>....................] - ETA: 4:18 - loss: 0.0944 - Accuracy: 0.6799
83/232 [=========>....................] - ETA: 4:16 - loss: 0.0940 - Accuracy: 0.6822
84/232 [=========>....................] - ETA: 4:14 - loss: 0.0944 - Accuracy: 0.6786
85/232 [=========>....................] - ETA: 4:12 - loss: 0.0948 - Accuracy: 0.6750
86/232 [==========>...................] - ETA: 4:10 - loss: 0.0948 - Accuracy: 0.6744
87/232 [==========>...................] - ETA: 4:09 - loss: 0.0946 - Accuracy: 0.6753
88/232 [==========>...................] - ETA: 4:07 - loss: 0.0942 - Accuracy: 0.6776
89/232 [==========>...................] - ETA: 4:05 - loss: 0.0939 - Accuracy: 0.6784
90/232 [==========>...................] - ETA: 4:04 - loss: 0.0935 - Accuracy: 0.6806
91/232 [==========>...................] - ETA: 4:02 - loss: 0.0930 - Accuracy: 0.6827
92/232 [==========>...................] - ETA: 4:00 - loss: 0.0935 - Accuracy: 0.6793
93/232 [===========>..................] - ETA: 3:59 - loss: 0.0930 - Accuracy: 0.6815
94/232 [===========>..................] - ETA: 3:57 - loss: 0.0932 - Accuracy: 0.6795
95/232 [===========>..................] - ETA: 3:55 - loss: 0.0929 - Accuracy: 0.6803
96/232 [===========>..................] - ETA: 3:53 - loss: 0.0927 - Accuracy: 0.6810
97/232 [===========>..................] - ETA: 3:52 - loss: 0.0930 - Accuracy: 0.6791
98/232 [===========>..................] - ETA: 3:50 - loss: 0.0925 - Accuracy: 0.6811
99/232 [===========>..................] - ETA: 3:48 - loss: 0.0920 - Accuracy: 0.6831
100/232 [===========>..................] - ETA: 3:46 - loss: 0.0920 - Accuracy: 0.6825
101/232 [============>.................] - ETA: 3:44 - loss: 0.0922 - Accuracy: 0.6807
102/232 [============>.................] - ETA: 3:43 - loss: 0.0929 - Accuracy: 0.6765
103/232 [============>.................] - ETA: 3:41 - loss: 0.0927 - Accuracy: 0.6772
104/232 [============>.................] - ETA: 3:39 - loss: 0.0927 - Accuracy: 0.6767
105/232 [============>.................] - ETA: 3:37 - loss: 0.0928 - Accuracy: 0.6750
106/232 [============>.................] - ETA: 3:36 - loss: 0.0927 - Accuracy: 0.6757
107/232 [============>.................] - ETA: 3:34 - loss: 0.0931 - Accuracy: 0.6717
108/232 [============>.................] - ETA: 3:32 - loss: 0.0929 - Accuracy: 0.6736
109/232 [=============>................] - ETA: 3:31 - loss: 0.0929 - Accuracy: 0.6732
110/232 [=============>................] - ETA: 3:29 - loss: 0.0930 - Accuracy: 0.6716
111/232 [=============>................] - ETA: 3:27 - loss: 0.0928 - Accuracy: 0.6723
112/232 [=============>................] - ETA: 3:25 - loss: 0.0928 - Accuracy: 0.6719
113/232 [=============>................] - ETA: 3:23 - loss: 0.0927 - Accuracy: 0.6715
114/232 [=============>................] - ETA: 3:22 - loss: 0.0924 - Accuracy: 0.6732
115/232 [=============>................] - ETA: 3:20 - loss: 0.0924 - Accuracy: 0.6728
116/232 [==============>...............] - ETA: 3:18 - loss: 0.0920 - Accuracy: 0.6746
117/232 [==============>...............] - ETA: 3:17 - loss: 0.0918 - Accuracy: 0.6752
118/232 [==============>...............] - ETA: 3:15 - loss: 0.0914 - Accuracy: 0.6769
119/232 [==============>...............] - ETA: 3:13 - loss: 0.0908 - Accuracy: 0.6796
120/232 [==============>...............] - ETA: 3:12 - loss: 0.0906 - Accuracy: 0.6802
121/232 [==============>...............] - ETA: 3:10 - loss: 0.0909 - Accuracy: 0.6787
122/232 [==============>...............] - ETA: 3:08 - loss: 0.0907 - Accuracy: 0.6793
123/232 [==============>...............] - ETA: 3:07 - loss: 0.0903 - Accuracy: 0.6809
124/232 [===============>..............] - ETA: 3:05 - loss: 0.0901 - Accuracy: 0.6815
125/232 [===============>..............] - ETA: 3:03 - loss: 0.0899 - Accuracy: 0.6820
126/232 [===============>..............] - ETA: 3:01 - loss: 0.0896 - Accuracy: 0.6835
127/232 [===============>..............] - ETA: 3:00 - loss: 0.0896 - Accuracy: 0.6831
128/232 [===============>..............] - ETA: 2:58 - loss: 0.0895 - Accuracy: 0.6836
129/232 [===============>..............] - ETA: 2:56 - loss: 0.0897 - Accuracy: 0.6822
130/232 [===============>..............] - ETA: 2:54 - loss: 0.0896 - Accuracy: 0.6827
131/232 [===============>..............] - ETA: 2:53 - loss: 0.0894 - Accuracy: 0.6832
132/232 [================>.............] - ETA: 2:51 - loss: 0.0895 - Accuracy: 0.6828
133/232 [================>.............] - ETA: 2:49 - loss: 0.0893 - Accuracy: 0.6833
134/232 [================>.............] - ETA: 2:47 - loss: 0.0893 - Accuracy: 0.6828
135/232 [================>.............] - ETA: 2:46 - loss: 0.0892 - Accuracy: 0.6833
136/232 [================>.............] - ETA: 2:44 - loss: 0.0889 - Accuracy: 0.6847
137/232 [================>.............] - ETA: 2:42 - loss: 0.0886 - Accuracy: 0.6861
138/232 [================>.............] - ETA: 2:41 - loss: 0.0886 - Accuracy: 0.6857
139/232 [================>.............] - ETA: 2:39 - loss: 0.0885 - Accuracy: 0.6862
140/232 [=================>............] - ETA: 2:37 - loss: 0.0884 - Accuracy: 0.6866
141/232 [=================>............] - ETA: 2:35 - loss: 0.0881 - Accuracy: 0.6879
142/232 [=================>............] - ETA: 2:34 - loss: 0.0879 - Accuracy: 0.6884
143/232 [=================>............] - ETA: 2:32 - loss: 0.0878 - Accuracy: 0.6888
144/232 [=================>............] - ETA: 2:30 - loss: 0.0874 - Accuracy: 0.6901
145/232 [=================>............] - ETA: 2:28 - loss: 0.0875 - Accuracy: 0.6897
146/232 [=================>............] - ETA: 2:27 - loss: 0.0878 - Accuracy: 0.6884
147/232 [==================>...........] - ETA: 2:25 - loss: 0.0875 - Accuracy: 0.6896
148/232 [==================>...........] - ETA: 2:23 - loss: 0.0877 - Accuracy: 0.6883
149/232 [==================>...........] - ETA: 2:22 - loss: 0.0875 - Accuracy: 0.6888
150/232 [==================>...........] - ETA: 2:20 - loss: 0.0877 - Accuracy: 0.6875
151/232 [==================>...........] - ETA: 2:18 - loss: 0.0882 - Accuracy: 0.6846
152/232 [==================>...........] - ETA: 2:16 - loss: 0.0879 - Accuracy: 0.6859
153/232 [==================>...........] - ETA: 2:15 - loss: 0.0882 - Accuracy: 0.6838
154/232 [==================>...........] - ETA: 2:13 - loss: 0.0880 - Accuracy: 0.6851
155/232 [===================>..........] - ETA: 2:11 - loss: 0.0880 - Accuracy: 0.6847
156/232 [===================>..........] - ETA: 2:10 - loss: 0.0881 - Accuracy: 0.6835
157/232 [===================>..........] - ETA: 2:08 - loss: 0.0882 - Accuracy: 0.6823
158/232 [===================>..........] - ETA: 2:06 - loss: 0.0881 - Accuracy: 0.6835
159/232 [===================>..........] - ETA: 2:04 - loss: 0.0881 - Accuracy: 0.6832
160/232 [===================>..........] - ETA: 2:03 - loss: 0.0880 - Accuracy: 0.6836
161/232 [===================>..........] - ETA: 2:01 - loss: 0.0880 - Accuracy: 0.6840
162/232 [===================>..........] - ETA: 1:59 - loss: 0.0878 - Accuracy: 0.6844
163/232 [====================>.........] - ETA: 1:57 - loss: 0.0879 - Accuracy: 0.6840
164/232 [====================>.........] - ETA: 1:56 - loss: 0.0877 - Accuracy: 0.6845
165/232 [====================>.........] - ETA: 1:54 - loss: 0.0881 - Accuracy: 0.6826
166/232 [====================>.........] - ETA: 1:52 - loss: 0.0880 - Accuracy: 0.6830
167/232 [====================>.........] - ETA: 1:51 - loss: 0.0875 - Accuracy: 0.6849
168/232 [====================>.........] - ETA: 1:49 - loss: 0.0876 - Accuracy: 0.6845
169/232 [====================>.........] - ETA: 1:47 - loss: 0.0883 - Accuracy: 0.6812
170/232 [====================>.........] - ETA: 1:46 - loss: 0.0886 - Accuracy: 0.6794
171/232 [=====================>........] - ETA: 1:44 - loss: 0.0883 - Accuracy: 0.6806
172/232 [=====================>........] - ETA: 1:42 - loss: 0.0884 - Accuracy: 0.6802
173/232 [=====================>........] - ETA: 1:40 - loss: 0.0882 - Accuracy: 0.6806
174/232 [=====================>........] - ETA: 1:39 - loss: 0.0886 - Accuracy: 0.6782
175/232 [=====================>........] - ETA: 1:37 - loss: 0.0884 - Accuracy: 0.6793
176/232 [=====================>........] - ETA: 1:35 - loss: 0.0884 - Accuracy: 0.6790
177/232 [=====================>........] - ETA: 1:34 - loss: 0.0884 - Accuracy: 0.6787
178/232 [======================>.......] - ETA: 1:32 - loss: 0.0884 - Accuracy: 0.6791
179/232 [======================>.......] - ETA: 1:30 - loss: 0.0884 - Accuracy: 0.6788
180/232 [======================>.......] - ETA: 1:28 - loss: 0.0884 - Accuracy: 0.6785
181/232 [======================>.......] - ETA: 1:27 - loss: 0.0883 - Accuracy: 0.6789
182/232 [======================>.......] - ETA: 1:25 - loss: 0.0882 - Accuracy: 0.6793
183/232 [======================>.......] - ETA: 1:23 - loss: 0.0882 - Accuracy: 0.6790
184/232 [======================>.......] - ETA: 1:22 - loss: 0.0884 - Accuracy: 0.6780
185/232 [======================>.......] - ETA: 1:20 - loss: 0.0884 - Accuracy: 0.6777
186/232 [=======================>......] - ETA: 1:18 - loss: 0.0886 - Accuracy: 0.6761
187/232 [=======================>......] - ETA: 1:16 - loss: 0.0885 - Accuracy: 0.6765
188/232 [=======================>......] - ETA: 1:15 - loss: 0.0888 - Accuracy: 0.6749
189/232 [=======================>......] - ETA: 1:13 - loss: 0.0886 - Accuracy: 0.6759
190/232 [=======================>......] - ETA: 1:11 - loss: 0.0886 - Accuracy: 0.6757
191/232 [=======================>......] - ETA: 1:10 - loss: 0.0884 - Accuracy: 0.6767
192/232 [=======================>......] - ETA: 1:08 - loss: 0.0882 - Accuracy: 0.6777
193/232 [=======================>......] - ETA: 1:06 - loss: 0.0879 - Accuracy: 0.6794
194/232 [========================>.....] - ETA: 1:04 - loss: 0.0876 - Accuracy: 0.6811
195/232 [========================>.....] - ETA: 1:03 - loss: 0.0876 - Accuracy: 0.6808
196/232 [========================>.....] - ETA: 1:01 - loss: 0.0875 - Accuracy: 0.6811
197/232 [========================>.....] - ETA: 59s - loss: 0.0873 - Accuracy: 0.6821
198/232 [========================>.....] - ETA: 58s - loss: 0.0872 - Accuracy: 0.6824
199/232 [========================>.....] - ETA: 56s - loss: 0.0871 - Accuracy: 0.6828
200/232 [========================>.....] - ETA: 54s - loss: 0.0872 - Accuracy: 0.6825
201/232 [========================>.....] - ETA: 52s - loss: 0.0874 - Accuracy: 0.6816
202/232 [=========================>....] - ETA: 51s - loss: 0.0873 - Accuracy: 0.6819
203/232 [=========================>....] - ETA: 49s - loss: 0.0872 - Accuracy: 0.6823
204/232 [=========================>....] - ETA: 47s - loss: 0.0870 - Accuracy: 0.6832
205/232 [=========================>....] - ETA: 46s - loss: 0.0869 - Accuracy: 0.6835
206/232 [=========================>....] - ETA: 44s - loss: 0.0872 - Accuracy: 0.6820
207/232 [=========================>....] - ETA: 42s - loss: 0.0873 - Accuracy: 0.6818
208/232 [=========================>....] - ETA: 40s - loss: 0.0872 - Accuracy: 0.6821
209/232 [==========================>...] - ETA: 39s - loss: 0.0869 - Accuracy: 0.6836
210/232 [==========================>...] - ETA: 37s - loss: 0.0868 - Accuracy: 0.6839
211/232 [==========================>...] - ETA: 35s - loss: 0.0866 - Accuracy: 0.6848
212/232 [==========================>...] - ETA: 34s - loss: 0.0863 - Accuracy: 0.6863
213/232 [==========================>...] - ETA: 32s - loss: 0.0864 - Accuracy: 0.6860
214/232 [==========================>...] - ETA: 30s - loss: 0.0864 - Accuracy: 0.6857
215/232 [==========================>...] - ETA: 29s - loss: 0.0865 - Accuracy: 0.6849
216/232 [==========================>...] - ETA: 27s - loss: 0.0863 - Accuracy: 0.6858
217/232 [===========================>..] - ETA: 25s - loss: 0.0866 - Accuracy: 0.6843
218/232 [===========================>..] - ETA: 23s - loss: 0.0865 - Accuracy: 0.6846
219/232 [===========================>..] - ETA: 22s - loss: 0.0865 - Accuracy: 0.6844
220/232 [===========================>..] - ETA: 20s - loss: 0.0865 - Accuracy: 0.6841
221/232 [===========================>..] - ETA: 18s - loss: 0.0865 - Accuracy: 0.6844
222/232 [===========================>..] - ETA: 17s - loss: 0.0867 - Accuracy: 0.6824
223/232 [===========================>..] - ETA: 15s - loss: 0.0868 - Accuracy: 0.6822
224/232 [===========================>..] - ETA: 13s - loss: 0.0867 - Accuracy: 0.6830
225/232 [============================>.] - ETA: 11s - loss: 0.0867 - Accuracy: 0.6828
226/232 [============================>.] - ETA: 10s - loss: 0.0867 - Accuracy: 0.6825
227/232 [============================>.] - ETA: 8s - loss: 0.0867 - Accuracy: 0.6823
228/232 [============================>.] - ETA: 6s - loss: 0.0869 - Accuracy: 0.6815
229/232 [============================>.] - ETA: 5s - loss: 0.0870 - Accuracy: 0.6807
230/232 [============================>.] - ETA: 3s - loss: 0.0871 - Accuracy: 0.6799
231/232 [============================>.] - ETA: 1s - loss: 0.0870 - Accuracy: 0.6802
232/232 [==============================] - ETA: 0s - loss: 0.0871 - Accuracy: 0.6800
2021-12-08 21:24:38.031434: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:766] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Did not find a shardable source, walked to a node which is not a dataset: name: "FlatMapDataset/_9"
op: "FlatMapDataset"
input: "PrefetchDataset/_8"
attr {
key: "Targuments"
value {
list {
}
}
}
attr {
key: "_cardinality"
value {
i: -2
}
}
attr {
key: "f"
value {
func {
name: "__inference_Dataset_flat_map_slice_batch_indices_244154"
}
}
}
attr {
key: "metadata"
value {
s: "\n\021FlatMapDataset:35"
}
}
attr {
key: "output_shapes"
value {
list {
shape {
dim {
size: -1
}
}
}
}
}
attr {
key: "output_types"
value {
list {
type: DT_INT64
}
}
}
. Consider either turning off auto-sharding or switching the auto_shard_policy to DATA to shard this dataset. You can do this by creating a new `tf.data.Options()` object then setting `options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA` before applying the options object to the dataset via `dataset.with_options(options)`.
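For reference, the auto-shard warning above can be addressed exactly as the message suggests, by setting the shard policy explicitly on the input pipeline. Below is a minimal sketch, assuming the input is a `tf.data.Dataset` held in a hypothetical variable `train_dataset` before it is passed to `model.fit()`:

```python
import tensorflow as tf

# Build the options object the warning refers to and pick DATA sharding,
# i.e. shard by skipping elements rather than by input files (there are no
# files to shard here, which is why the FILE policy failed).
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA

# Apply the options to the dataset before handing it to model.fit().
train_dataset = train_dataset.with_options(options)
```

Since AUTO already falls back to DATA in this run, this mainly makes the choice explicit and silences the warning rather than changing the numbers below.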
2021-12-08 21:24:38.065957: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.068157: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.074784: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.077624: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.083942: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.085927: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.090917: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.092183: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.135684: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.138288: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.145706: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.148147: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.149358: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.157175: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.159599: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.166636: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.169011: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:24:38.170183: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
2021-12-08 21:25:02.591269: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 4794/13482 nodes to float16 precision using 248 cast(s) to float16 (excluding Const and Variable casts)
2021-12-08 21:25:06.903502: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1991] Converted 0/12606 nodes to float16 precision using 0 cast(s) to float16 (excluding Const and Variable casts)
232/232 [==============================] - 579s 2s/step - loss: 0.0871 - Accuracy: 0.6800 - val_loss: 1.1778 - val_Accuracy: 0.6700
Epoch 2/2
1/232 [..............................] - ETA: 4:56 - loss: 0.0820 - Accuracy: 0.6667
2/232 [..............................] - ETA: 6:36 - loss: 0.0660 - Accuracy: 0.7857
3/232 [..............................] - ETA: 6:24 - loss: 0.0625 - Accuracy: 0.8182
4/232 [..............................] - ETA: 6:19 - loss: 0.0699 - Accuracy: 0.7667
5/232 [..............................] - ETA: 6:20 - loss: 0.0697 - Accuracy: 0.7632
6/232 [..............................] - ETA: 6:20 - loss: 0.0768 - Accuracy: 0.7174
7/232 [..............................] - ETA: 6:17 - loss: 0.0756 - Accuracy: 0.7222
8/232 [>.............................] - ETA: 6:15 - loss: 0.0743 - Accuracy: 0.7258
9/232 [>.............................] - ETA: 6:13 - loss: 0.0759 - Accuracy: 0.7143
10/232 [>.............................] - ETA: 6:10 - loss: 0.0751 - Accuracy: 0.7179
11/232 [>.............................] - ETA: 6:09 - loss: 0.0763 - Accuracy: 0.7093
 [... steps 12/232 through 229/232 omitted: loss hovers around 0.083, accuracy around 0.68 throughout ...]
230/232 [============================>.] - ETA: 3s - loss: 0.0833 - Accuracy: 0.6774
231/232 [============================>.] - ETA: 1s - loss: 0.0832 - Accuracy: 0.6782
232/232 [==============================] - ETA: 0s - loss: 0.0831 - Accuracy: 0.6791
2021-12-08 21:32:02.219818: I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1386] No allowlist ops found, nothing to do
[... 17 further identical auto_mixed_precision messages omitted ...]
232/232 [==============================] - 412s 2s/step - loss: 0.0831 - Accuracy: 0.6791 - val_loss: 1.2342 - val_Accuracy: 0.6700
```<|||||>Training also works with 3 GPUs.<|||||>Gently pinging @Rocketknight1 here
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Been doing library upgrade testing for our stuff lately (we were still using `transformers==2.11.0`) :sweat_smile: and that is almost complete, so hoping to get back to more testing around this in the next few weeks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I still have planned testing coming up, so let's re-open |
transformers | 14,499 | closed | Doc fixes | # What does this PR do?
This PR adds multiple fixes for the new frontend, mainly:
- remove all the sphinx-specific files
- convert the old index.rst to index.mdx
- adapt the scripts that touched the index.rst (to update the list of models and the table of frameworks)
- fix the image links in the parallelism.md file
- convert the detr.rst file to have the table in MarkDown format. | 11-23-2021 16:12:50 | 11-23-2021 16:12:50 | |
transformers | 14,498 | closed | How to get logits from generate() method ? | Hello,
I am using RL to train Seq2Seq models, and I need the logits from the `generate()` method, since in RL we need to sample from the current policy.
Does anyone know how I can adapt the `generate()` method to return the logits?
Specifically, I am using BART-based models.
If you could share a code snippet or something similar, that would be a lot of help.
Please let me know. | 11-23-2021 15:02:16 | 11-23-2021 15:02:16 | Hi,
This can be done easily by setting the `output_scores` flag of the [generate](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate) method to `True`.<|||||>output_scores will be of shape ```(max_length-input_ids.shape[-1], )``` with each tensor of shape ```(bs, config.vocab_size)```.
How do I convert output_scores to log probabilities ?
<|||||>The logits are just the raw scores, you can get log probabilities by applying a `log_softmax` (which is a softmax followed by a logarithm) on the last dimension, i.e.
```
import torch

batch_size, vocab_size = 4, 50265  # illustrative sizes
logits = torch.randn((batch_size, vocab_size))
log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
```<|||||>Thanks. That helps.<|||||>@Atharva-Phatak @NielsRogge Where are these logits returned to?
```
out = model.generate(
    input_ids,
    attention_mask=attention_mask,
    max_length=max_target_length,
    output_scores=True,
)
```
Here out still only contains predictions.<|||||>I believe you have to also specify `return_dict_in_generate=True` to get a `ModelOutput`.<|||||>Thanks!<|||||>@Atharva-Phatak Did you publish your RL training experiments? Sounds interesting ! |
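For completeness, here is a minimal sketch of the suggestion above. The checkpoint name and `max_length` are illustrative; the point is that with `return_dict_in_generate=True` and `output_scores=True`, `generate()` returns a `ModelOutput` whose `scores` field holds one logits tensor per generated step:
```
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
out = model.generate(
    **inputs,
    max_length=20,
    num_beams=1,
    output_scores=True,
    return_dict_in_generate=True,
)
# out.sequences holds the generated token ids.
# out.scores is a tuple with one (batch_size, vocab_size) logits tensor per step (greedy search).
log_probs = [torch.nn.functional.log_softmax(step, dim=-1) for step in out.scores]
```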
transformers | 14,497 | open | FLAX core dump error on CloudTPU when running run_clm_flax.py | Hi, I'm having a weird problem trying to train a gpt-neo model from scratch on a v3-8 cloud TPU. Something similar to the [closed issue here](https://github.com/huggingface/transformers/issues/12404). Getting:
```
https://symbolize.stripped_domain/r/?trace=7fb5dbf8a3f4,7fb5dbfe020f,7f&map=
*** SIGTERM received by PID 64823 (TID 64823) on cpu 26 from PID 63364; stack trace: *** | 0/1 [00:00<?, ?ba/s]
PC: @ 0x7fb5dbf8a3f4 (unknown) do_futex_wait.constprop.0
@ 0x7fb52fa377ed 976 (unknown)
@ 0x7fb5dbfe0210 440138896 (unknown) | 0/1 [00:00<?, ?ba/s]
@ 0x80 (unknown) (unknown) | 0/1 [00:00<?, ?ba/s]
https://symbolize.stripped_domain/r/?trace=7fb5dbf8a3f4,7fb52fa377ec,7fb5dbfe020f,7f&map=44c8b163be936ec2996e56972aa94d48:7fb521e7d000-7fb52fd90330
E1122 14:13:36.933620 64823 coredump_hook.cc:255] RAW: Remote crash gathering disabled for SIGTERM. | 0/1 [00:00<?, ?ba/s]
E1122 14:13:36.960024 64823 process_state.cc:776] RAW: Raising signal 15 with default behavior
```
randomly during preprocessing/loading the dataset.
The env is clean, setup according to the Quickstart Flax guide from google's [help page](https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm), and as well from [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries). Jax is installed okay, sees 8 TPUs. I tried the standard pip install as well as the local install as some people suggested in the [issue](https://github.com/huggingface/transformers/issues/12404) above, still getting the same behavior.
This error does **not** kill the training.
So, question number 1 would be **how to get rid of this error ?**
Something else happens that _might_ be related: Running a dummy 300MB Wiki dataset for training only produces the error above, but training progresses. However, when running the full 40GB dataset, at a point during the first epoch I get:
``list([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, .... (many 1s) .. 1, 1, 1])]]' of type <class 'numpy.ndarray'> is not a valid JAX type.``
This error kills the training. I've found this related [issue](https://github.com/huggingface/transformers/issues/12502), but the last suggestion of increasing ``max_seq_len`` does not apply here, as the preprocessing should automatically concatenate and cut the model len (and it is set in the config file). The dataset itself is clean, does not contain long words or chars or anything weird.
Thus, question 2: **Any pointers on how to solve this second error?**
Unfortunately I cannot share the dataset as it's private :disappointed: so I don't know how to help reproduce this error. There are 2 questions in this single issue as maybe there's a chance they are related (?).
Thanks a bunch!
Update: [here is the output of the run_clm_flax.py](https://wtools.io/paste-code/b7US). Because there's a limit on how much you can paste online, I've deleted a few chunks of repeating lines in the output. | 11-23-2021 14:51:36 | 11-23-2021 14:51:36 | Pinging @patil-suraj :)<|||||>Update: after reading [this issue](https://github.com/huggingface/transformers/issues/12606) I tried **setting the number of preprocessing workers to 1**, and after a lot of time, preprocessing finished without any crashes. So that 'solves' problem 1.
However, problem 2 still shows up. At least it's not related to problem 1.
Here is the error:
```
Training...: 5%|β | 2319/43228 [1:05:47<19:18:26, 1.70s/it].[A/home/stefan/transformers/examples/flax/language-modeling/run_clm_flax.py:202: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a lis
t-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
batch = {k: np.array(v) for k, v in batch.items()}
.[A
Epoch ... : 0%| | 0/100 [1:05:47<?, ?it/s]
Traceback (most recent call last):
File "/home/stefan/transformers/examples/flax/language-modeling/run_clm_flax.py", line 677, in <module>
main()
File "/home/stefan/transformers/examples/flax/language-modeling/run_clm_flax.py", line 618, in main
state, train_metric = p_train_step(state, batch)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/api.py", line 1946, in cache_miss
out_tree, out_flat = f_pmapped_(*args, **kwargs)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/api.py", line 1801, in f_pmapped
for arg in args: _check_arg(arg)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/api.py", line 2687, in _check_arg
raise TypeError(f"Argument '{arg}' of type {type(arg)} is not a valid JAX type.")
jax._src.traceback_util.UnfilteredStackTrace: TypeError: Argument '[[list([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 [etc - many lists of 1s]
```
Any idea what might cause this? Thanks!<|||||>Update: setting ``preprocessing_num_workers = 1`` seems to solve the core dump. It takes a long time to preprocess and load the dataset on the 96 core machine with 1 worker :) but I'm not seeing the dumps anymore.
Regarding the "Argument is not a valid JAX type" error, I am not sure whether I "fixed" the problem, but I've now managed to train an epoch without crashing. What I did was set ``truncation=True`` in ``run_clm_flax.py``. This costs me some lost text when a line is longer than the model's max length, but at least it's running. I'm not very affected by this as GPT-Neo has a 2048-token context, but if I had to train a model with the standard 512 size, a lot of text would have been needlessly lost unless it was split manually beforehand to avoid this error. Again, this is strange because the code seems to chunk the tokenized text into seq_len blocks, so this shouldn't be a problem, yet setting truncation=True in the tokenizer seems to fix it. Also, this is not related to the core dumps: after setting workers = 1, the JAX error still happened until I set the tokenizer to truncate the texts.
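For readers who hit the same JAX type error, a hypothetical sketch of the workaround described above — this assumes the `tokenize_function` and `text_column_name` defined in `run_clm_flax.py` and is not the exact upstream code:
```
def tokenize_function(examples):
    # Truncating at the model's max length avoids the ragged ndarray
    # that triggers the "not a valid JAX type" error.
    return tokenizer(examples[text_column_name], truncation=True)
```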
So, I kind-of "fixed" my problems, please close this issue if you think it's not helpful. Leaving this here for other people if they bump into these things.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I've re-opened that issue, because I've seen this problem over a long time. The reported error still occurs in latest Datasets and Transformers version on TPU.<|||||>I also had the same issue with another dataset and t5 model training.
This problem seems to be related to datasets because I cut out the code of t5 training except for the data generation part, and I had the same "SIGTERM" error on TPU V4 VM.
I have tested it with Python 3.8 and python 3.7, and the same error occurs.
@stefan-it @dumitrescustefan did you find any solution other than setting preprocessing_num_workers to 1? That workaround is extremely slow.
@patil-suraj Is there any solution to this problem?
<|||||>I think this may have to do with jax, libtpu, torch, xla or tf versions not matching up: the VM image used (e.g. v2-alpha), the jax and jaxlib versions, whether torch and xla were changed, and the TPU driver version. |
transformers | 14,496 | closed | Add necessary new front files | # What does this PR do?
1. Add new file toctree.yml, which serves the same purpose as index.rst for Sphinx (i.e. the entire outline of the docs)
2. Add new file versions.yml, which specifies which versions are available (and whether they should be redirected to the sphinx website)
3. Changes from https://github.com/huggingface/transformers/pull/14476
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-23-2021 13:40:16 | 11-23-2021 13:40:16 | |
transformers | 14,495 | closed | Cannot find 'blob' directory in your 'transformers' repository | We are reading the paper 'EVALUATION OF NEURAL ARCHITECTURES TRAINED
WITH SQUARE LOSS VS CROSS-ENTROPY IN CLASSIFICATION TASKS' which is published as a conference paper at ICLR 2021.
On page 13, we found that 'https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py' is no longer accessible. | 11-23-2021 05:55:01 | 11-23-2021 05:55:01 | It is here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py @parkhaemi <|||||>Thank you so much!!
Have a nice day :) |
transformers | 14,494 | open | Add TAPAS trained on NQ | The authors of the "Open Domain Question Answering over Tables via Dense Retrieval" papers released the weights for TAPAS retriever trained on NQ here: https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md
It would be nice if those weights could be converted to a Hugging Face model (I would presume it's fairly similar to the other fine-tuned models since they share the same architecture, and I'd be happy to do it myself if there are some scripts I can run). | 11-23-2021 04:01:45 | 11-23-2021 04:01:45 | Hi,
Yes you can use the existing conversion script: https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py
Note that there was already an attempt for this in #11907.<|||||>Thanks @NielsRogge! I tagged you in #11907 about some differences between the current script and the new retrieval weights.
In the mean time I ran the scripts and uploaded the weights to [huggingface](https://huggingface.co/xhlu/tapas-large-finetuned-nq-hn). If it turns out `bert_1` was something important to add, I'm happy to run an updated script and push the changes to the repo.
Note I also uploaded a `Wq.pt` and `Wt.pt` as well so they can be loaded separately (otherwise, I think I would need to modify the underlying code of the TAPAS model; lmk if you know a simpler way)<|||||>Hi,
`bert_1` is important, yes. The retriever consists of 2 encoders, one to encode the question, one to encode the table. The authors released the retriever checkpoints as "dual encoders", meaning they contain the weights of both encoders.
The encoders each correspond to `TapasModel` in Huggingface, with an optional additional projection layer on top (see the "down project" column which is yes/no).
The reader on the other hand corresponds to `TapasForQuestionAnswering`, however it should produce span indexes and logits as output, which is not yet implemented.<|||||>Thank you for expanding on this! It's not clear whether `bert_1` corresponds to the table or the question encoder unfortunately. The model I uploaded only uses `bert`, I'm happy to update the name of the repo to the correct version (question or table encoder) once this is disambiguated.
Moreover, I'm not sure about the best way to add an extra layer to a huggingface model, hence I saved it as a supplementary `Wq.pt` file inside the repo (that can be easily downloaded by `requests` get) |
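For reference, a rough sketch of pulling the uploaded encoder together with the separately-stored projection. The repo and file names are the ones mentioned above; using `hf_hub_download` instead of a raw `requests` get is my choice, and whether `Wq.pt` loads directly with `torch.load` is an assumption:
```
import torch
from huggingface_hub import hf_hub_download
from transformers import TapasModel

repo_id = "xhlu/tapas-large-finetuned-nq-hn"
model = TapasModel.from_pretrained(repo_id)

# Down-projection weights uploaded alongside the checkpoint.
Wq = torch.load(hf_hub_download(repo_id, "Wq.pt"), map_location="cpu")
```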
transformers | 14,493 | closed | fixes some key names for in LayoutLMv2 / LayoutXLM tokenizers |
# What does this PR do?
In the case of left `padding_side`, there was a (probably copy/paste) error assigning the bbox data to the labels.
I consider this to be a typo fix.
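For context, a small usage sketch of the code path this touches (checkpoint, words and boxes are toy values):
```
from transformers import LayoutLMv2Tokenizer

tokenizer = LayoutLMv2Tokenizer.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", padding_side="left"
)
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]
enc = tokenizer(words, boxes=boxes, padding="max_length", max_length=8)
print(enc["bbox"])       # left-padded with the pad box [0, 0, 0, 0]
print(enc["input_ids"])  # left-padded with the pad token id
```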
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
I guess @sgugger as you reviewed https://github.com/huggingface/transformers/pull/12604
cc @NielsRogge
| 11-22-2021 19:50:54 | 11-22-2021 19:50:54 | |
transformers | 14,492 | closed | Add model checkpointing to push_to_hub and PushToHubCallback | null | 11-22-2021 18:33:53 | 11-22-2021 18:33:53 | |
transformers | 14,491 | closed | BART + ONNX torch.jit error iterabletree cannot be used as a value | ## Environment info
onnx 1.10.2
onnxruntime 1.9.0
- `transformers` version: transformers 4.13.0.dev0
- Platform: Ubuntu 18.4
- Python version: 3.8
- PyTorch version (GPU?): torch 1.8.0 gpu
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@fatcat-z @mfuntowicz @sgugger, @patil-suraj
## Information
Model I am using: BartForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: [run_onnx_exporter.py](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/translation)
## To reproduce
Steps to reproduce the behavior:
```
python3.8 run_onnx_exporter.py --model_name_or_path facebook/bart-base
2021-11-22 17:34:47 | INFO | __main__ | [run_onnx_exporter.py:224] Exporting model to ONNX
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:217: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:254: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:888: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_trace.py:934: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
module._c._create_method_from_trace(
/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_trace.py:152: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
if a.grad is not None:
Traceback (most recent call last):
File "run_onnx_exporter.py", line 229, in <module>
main()
File "run_onnx_exporter.py", line 225, in main
export_and_validate_model(model, tokenizer, output_name, num_beams, max_length)
File "run_onnx_exporter.py", line 116, in export_and_validate_model
**bart_script_model = torch.jit.script(BARTBeamSearchGenerator(model))**
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_script.py", line 942, in script
return torch.jit._recursive.create_script_module(
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 391, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
init_fn(script_module)
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
**RuntimeError:
iterabletree cannot be used as a value:
File "/home/pverzun/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 387
if not hasattr(self, "id2label") or self.id2label is None or len(self.id2label) != num_labels:
self.id2label = {i: f"LABEL_{i}" for i in range(num_labels)}
self.label2id = dict(zip(self.id2label.values(), self.id2label.keys()))**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
```
## Expected behavior
BART is converted to onnx with no issues | 11-22-2021 18:07:23 | 11-22-2021 18:07:23 | Pinging @michaelbenayoun on the issue :)<|||||>@LysandreJik I will take a look at this.<|||||>Thank you, @fatcat-z!<|||||>ok, so it seems like a problem with versions. Works with torch==1.10.0 numpy==1.21.4
onnx==1.10.2
onnxruntime==1.9.0
and latest transformers <|||||>@patil-suraj @LysandreJik
There is another bug(?) in [run_onnx_exporter.py](examples/onnx/pytorch/translation/run_onnx_exporter.py) script. [Line](https://github.com/huggingface/transformers/blob/69511cdcaec8c1c7f0d7f378964eca0ce74ed5a8/examples/onnx/pytorch/translation/run_onnx_exporter.py#L137), where dynamic axes are declared, the attention_mask isn't included in the set. Any reason why? 'Cause this hampers inputs of any other size than the onnx sample input.
However, adding the attention_mask object to the dynamic_inputs set resolves the issue, and I was able to convert and test the model.
Please let me know if this needs to be changed, I can open a PR, or somebody from the HF side can amend the changes instead.<|||||>Even after solving the attention mask issue I still wasn't able to get **faster** model after converting bart to onnx. Perhaps quantization could help, but like, on the same text I got 6sec on pytorch model GPU and 70sec on onnx optimized graph. <|||||>>
This is was designed as an example of showing how to export BART + Beam Search to ONNX successfully. It doesn't cover all of scenarios. Your PR is appreciated to make it better. Thanks!<|||||>I tested the versions of the major packages. It is determined that upgrading pytorch from 1.8.0 to 1.9.1 can solve this bug.However in 1.9.1 pytorch does not support opset_version 14 and needs to be upgraded to 1.10.0.
I think the version of pytorch in requirement.txt can be modified.<|||||>> Good catch. Fixed this in #14310
> I tested the versions of the major packages. It is determined that upgrading pytorch from 1.8.0 to 1.9.1 can solve this bug.However in 1.9.1 pytorch does not support opset_version 14 and needs to be upgraded to 1.10.0. I think the version of pytorch in requirement.txt can be modified.
Good catch! Fixed in https://github.com/huggingface/transformers/pull/14310<|||||>Hey @polly-morphism @diruoshui, given the PyTorch version fix in #14310 can we now close this issue?<|||||>> Hey @polly-morphism @diruoshui, given the PyTorch version fix in #14310 can we now close this issue?
Yes, thank you! |
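For anyone following along, a quick sketch for checking which axes of the exported graph are actually dynamic (the file name is illustrative — use whatever path the export script wrote):
```
import onnxruntime as ort

session = ort.InferenceSession("bart_beam_search.onnx")  # illustrative path
for inp in session.get_inputs():
    print(inp.name, inp.shape)  # dynamic axes show up as symbolic/None dims
```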
transformers | 14,490 | closed | Feature request: Add built-in support for autorregressive text generation with ONNX models | # π Add built-in support for autorregressive text generation with ONNX models.
After converting an autoregressive model to ONNX, it would be nice to be able to generate text with it via something like:
```python
from transformers import OnnxTextGenerationModel, AutoTokenizer
model_path = "gpt-something.onnx"
tokenizer_name = "gpt2"
model = OnnxTextGenerationModel(model_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# and then
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model.generate(encoded_input)
```
With support to using `past_key_values` internally in the most efficient way.
## Motivation
When trying to accelerate inference with transformers, being unable to load our ONNX model with the library and run a `model.generate` method to seamlessly generate sequences and perform beam search is somewhat frustrating. That forces us to rely on custom implementations, which take time and are a lot more prone to bugs.
We can try to hack a subclass of `GenerationMixin`, but having to convert things to and from PyTorch makes everything too slow.
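To make the request concrete, here is a rough greedy-decoding sketch over a decoder-only ONNX graph. It assumes the exported graph takes `input_ids`/`attention_mask` and returns the logits as its first output, and it recomputes the full sequence at every step instead of reusing `past_key_values` — exactly the inefficiency a built-in implementation should avoid:
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
session = ort.InferenceSession("gpt-something.onnx")  # illustrative path

def greedy_generate(prompt, max_new_tokens=20):
    ids = tokenizer(prompt, return_tensors="np")["input_ids"].astype(np.int64)
    for _ in range(max_new_tokens):
        feeds = {"input_ids": ids, "attention_mask": np.ones_like(ids)}
        logits = session.run(None, feeds)[0]          # (batch, seq_len, vocab)
        next_id = logits[:, -1, :].argmax(axis=-1)[:, None]
        ids = np.concatenate([ids, next_id], axis=-1)
        if next_id[0, 0] == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0])
```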
## Your contribution
I can try submitting a PR, but this will take long, as I work full-time and might not have enough time to make it fast.
| 11-22-2021 17:51:15 | 11-22-2021 17:51:15 | I believe such work is currently happening cc @michaelbenayoun @patrickvonplaten @mfuntowicz <|||||>Nice. Can I contribute to that by any means?
<|||||>@michaelbenayoun - is this supposed to be implemented in `optimum`?<|||||>Yes, this is planned.
Nice to know that there is interest for such features!
Pinging @lewisbails and @philschmid as they were the ones suggesting to add those kind of features to `optimum`.<|||||>Hey @piEsposito would you mind moving this feature request over to the `optimum` [repo](https://github.com/huggingface/optimum/issues)?
Moving forward, we're planning to keep the ONNX export functionality in `transformers` and handle optimisation / generation features in `optimum` (to separate concerns).
Thank you!<|||||>Sure, I can do that. <|||||>We are following this discussion on https://github.com/huggingface/optimum/issues/55 .<|||||>Thank you! |
transformers | 14,489 | closed | tokenizer.save_pretrained() doest not save an important part of tokenizer_config of FlaubertTokenizer | ## Environment info
- `transformers` version: 4.10.3
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0a0+df837d0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @thomwolf
## Information
**model**:
Flaubert with `flaubert/flaubert_base_uncased` tokenizer
When loading a tokenizer that has been saved with `save_pretrained()`, the behaviour of the tokenizer changes.
In fact, an important `"do_lowercase": true` config property of the Flaubert tokenizer (see [config file](https://huggingface.co/flaubert/flaubert_base_uncased/blob/main/tokenizer_config.json)) does not get saved to the newly created `tokenizer_config.json` file, so once the tokenizer is loaded, it becomes cased (default behaviour).
## To reproduce
Steps to reproduce the behavior:
```
from transformers import FlaubertTokenizer
save_directory = "<some_directory>"  # placeholder path
tokenizer = FlaubertTokenizer.from_pretrained('flaubert/flaubert_base_uncased')
tokenizer.save_pretrained(save_directory)
tokenizer_reloaded = FlaubertTokenizer.from_pretrained(save_directory)
print(tokenizer.tokenize("Je m'appelle FlauBERT"))
print(tokenizer_reloaded.tokenize("Je m'appelle FlauBERT"))
print(f"do_lowercase: {tokenizer.do_lowercase}")
print(f"do_lowercase: {tokenizer_reloaded.do_lowercase}")
```
1. Check that 2 pairs of print messages differ
2. Check that `do_lowercase` is missing in the `<save_directory>/tokenizer_config.json` file
## Expected behavior
Loading the saved tokenizer should not change its behaviour.
I believe this happens because the initializer of `FlaubertTokenizer` does not pass the `do_lowercase` option to its parent class' initializer, so it does not populate the `self.init_inputs`, which constitutes the base of the future saved `tokenizer_config.json` file.
```
class FlaubertTokenizer(XLMTokenizer):
...
def __init__(self, do_lowercase=False, **kwargs):
super().__init__(**kwargs)
```
```
class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
...
def __init__(self, **kwargs):
# inputs and kwargs for saving and re-loading (see ``from_pretrained`` and ``save_pretrained``)
self.init_inputs = ()
**self.init_kwargs = copy.deepcopy(kwargs)**
```
But I may be totally wrong here as I certainly don't have the full picture. Also this problem of not saving some important configs may be relevant to other models too. So I'm not sure that just fixing to `super().__init__(do_lowercase=do_lowercase, **kwargs)` is the best solution.
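For illustration, a minimal sketch of the candidate fix mentioned above — simply forwarding the option so that `PreTrainedTokenizerBase` records it in `init_kwargs` and `save_pretrained()` writes it to `tokenizer_config.json` (not necessarily the best final solution, as noted):
```
class FlaubertTokenizer(XLMTokenizer):
    ...
    def __init__(self, do_lowercase=False, **kwargs):
        # forward the option so it ends up in init_kwargs and gets saved
        super().__init__(do_lowercase=do_lowercase, **kwargs)
        self.do_lowercase = do_lowercase
```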
| 11-22-2021 12:12:57 | 11-22-2021 12:12:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Up<|||||>It is very possible that this is the case cc @SaulLu
Would you like to open a PR with a proposed fix?<|||||>Thank you for the reply. I will do that, yes.
Do you think that this may concern other models too?
Also, is there a place (doc/comment/etc.) which states that, in order to be saved, a config option should be passed as a keyword argument to the base class? I don't remember seeing it anywhere, nor was it directly obvious. So I had to go down to `PreTrainedTokenizerBase` to understand what's going on.
I feel like at least some comment should be left somewhere for future developers, but I cannot come up with a good place for it. Maybe this existing line is already enough?
```
class XLMTokenizer(PreTrainedTokenizer):
"""
...
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods.
Users should refer to this superclass for more information regarding those methods.
```<|||||>Thanks a lot for your fix @vmaryasin that you proposed in [this PR](https://github.com/huggingface/transformers/pull/14991). I think this closes the issue! |
transformers | 14,488 | closed | Fatal error in event_loop.c | ## Environment info
- `transformers` version: 4.12.5
- Platform: Windows-10-10.0.22504-SP0
- Python version: 3.8.3
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behavior:
1. run any script using `datasets`
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
at 0x7FFD34C74380: aws_backtrace_print
at 0x7FFD34C63560: aws_fatal_assert
at 0x7FFD34B65F10: aws_event_loop_wait_for_stop_completion
at 0x7FFD34C71470: aws_ref_count_release
at 0x7FFD34B63D80: aws_server_bootstrap_set_alpn_callback
at 0x7FFD34C71470: aws_ref_count_release
at 0x7FFD34B63760: aws_client_bootstrap_release
at 0x7FFD4C7F76F0: Aws::Crt::Io::ClientBootstrap::~ClientBootstrap
at 0x7FFD34DFEB40: Aws::Utils::Stream::SimpleStreamBuf::xsputn
at 0x7FFE024D36C0: _sys_nerr
at 0x7FFE0249FFA0: execute_onexit_table
at 0x7FFE0249FFA0: execute_onexit_table
at 0x7FFD34DFEB40: Aws::Utils::Stream::SimpleStreamBuf::xsputn
at 0x7FFD34DFEB40: Aws::Utils::Stream::SimpleStreamBuf::xsputn
at 0x7FFE04A9EDC0: RtlActivateActivationContextUnsafeFast
at 0x7FFE04AF2310: LdrShutdownProcess
at 0x7FFE04AF2240: RtlExitUserProcess
at 0x7FFE03C8E080: ExitProcess
at 0x7FFE0249E040: exit
at 0x7FFE0249E040: exit
at 0x7FF69C8C1160: OPENSSL_Applink
at 0x7FFE03C86AA0: BaseThreadInitThunk
at 0x7FFE04AD1EB0: RtlUserThreadStart
```
Even this two lines produce this error:
```
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
```
Downloading and preparing dataset wikiann/en (download: 223.17 MiB, generated: 8.88 MiB, post-processed: Unknown size, total: 232.05 MiB) to [...]
100%|ββββββββββ| 3/3 [00:00<00:00, 498.27it/s]
Dataset wikiann downloaded and prepared to [...]. Subsequent calls will reuse this data.
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
```
Also note that `D:\bld\aws-c-io_1633633258269\work\source\` is not a path on my PC.
## Expected behavior
I would expect no fatal errors.
| 11-22-2021 12:05:08 | 11-22-2021 12:05:08 | Moved to `datasets` repo: https://github.com/huggingface/datasets/issues/3310 |
transformers | 14,487 | closed | Add Perceiver IO | # What does this PR do?
This PR implements [Perceiver IO](https://arxiv.org/abs/2107.14795), by Google Deepmind.
Fixes #12996
It's a pretty cool piece of work: it applies a Transformer encoder to any kind of modality (images, text, audio, video) and for any problem (text classification, image classification, audio classification, video autoencoding, optical flow,...)! Perceiver is basically a BERT, ViT and Wav2Vec2 in one. However, the authors did apply the Perceiver on each problem separately, although I believe you could train a single (shared) encoder as backbone, and have different pre- and postprocessors for the different modalities.
The memory- and time requirements of the self-attention mechanism don't depend on the length of the inputs, as the inputs are only used for doing cross-attention. The bulk of computation happens on a set of latent variables, which are just randomly initialized at the beginning of training (i.e. `self.latents = nn.Parameter(torch.randn((num_latents, d_latents)))`).
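To make the latent bottleneck concrete, a toy sketch of the idea (shapes and module choices are illustrative, not the actual modeling code in this PR):
```python
import torch
import torch.nn as nn

batch_size, seq_len, d_model = 2, 2048, 768  # inputs can be long
num_latents, d_latents = 256, 1280           # latent array stays small

inputs = torch.randn(batch_size, seq_len, d_model)
latents = nn.Parameter(torch.randn(num_latents, d_latents))

# The latents attend to the inputs once via cross-attention; all subsequent
# self-attention runs on the latents, so its cost depends on num_latents only.
cross_attn = nn.MultiheadAttention(
    embed_dim=d_latents, num_heads=8, kdim=d_model, vdim=d_model, batch_first=True
)
queries = latents.unsqueeze(0).expand(batch_size, -1, -1)
latent_states, _ = cross_attn(queries, inputs, inputs)
print(latent_states.shape)  # torch.Size([2, 256, 1280])
```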
Some demo notebooks to showcase what Perceiver can do:
- image classification & masked language modeling: [colab](https://colab.research.google.com/drive/1drKjC2EH8YvYAtIayUzmQ82A3KJkf-ka?usp=sharing)
- optical flow: [colab](https://colab.research.google.com/drive/1NE6BKj7JBlNgnntNWoyYOorsgOQCXy-u?usp=sharing)
- video autoencoding (and video classification): [colab](https://colab.research.google.com/drive/1V9BYnF4nK89NNcNOMw3431bw-Q6H8lsP?usp=sharing)
These colab notebooks are based on the original ones provided by Deepmind, in their (very clean) JAX/Haiku implementation which can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
cc @esceptico
| 11-22-2021 10:59:06 | 11-22-2021 10:59:06 | Thanks for the reviews, I've:
* removed the einops dependency, and replaced it by just PyTorch.
* removed the pre and postprocessors from the main init, they can still be imported using `from transformers.models.perceiver.modeling_perceiver import xxx`
* copied all arguments for the Perceiver model-specific outputs, to better reflect the existing philosophy
* fixed the tests on GPU
* renamed `PerceiverForImageClassification` to `PerceiverForImageClassificationLearned` (to better reflect that this one uses learned position embeddings, which is more in line with the other models, namely `PerceiverForImageClassification` and `PerceiverForImageClassificationConvProcessing`)
To do:
- now, it's mainly about writing more docstrings.<|||||>Hi, reading this nice perceiver io implementation I came up with a question: It seems that in Perceiver the input is iteratively mixed with the latents via cross attention but in Perceiver IO it seems that the input is fed only once in the network. Does somebody know if I am right and the reason of doing it in this way? If I got it right, it seems that also in this implementation the input is fed only once...<|||||>Hi @tonibagur
As mentioned in the paper:
> The encode and processing stages re-use the basic structure of the original Perceiver. With the exception of the repeated encoder cross-attends. We found it simpler to omit these modules, as
they increase compute without dramatically increasing performance, as reported in Appendix Table 6 in original Perceiver paper.<|||||>> Hi @tonibagur As mentioned in the paper:
>
> > The encode and processing stages re-use the basic structure of the original Perceiver. With the exception of the repeated encoder cross-attends. We found it simpler to omit these modules, as
> > they increase compute without dramatically increasing performance, as reported in Appendix Table 6 in original Perceiver paper.
Thanks esceptico, I completely missed this paragraph :' |
transformers | 14,486 | closed | DPR usage of BertPooler | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-5.8.0-50-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help
RAG, DPR: @patrickvonplaten, @lhoestq
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
DPR [initializes BertModel with a BertPooler](https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L178) module which [is not used in the end](https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L206)
Although this seems consistent with [the original implementation](https://github.com/facebookresearch/DPR/blob/main/dpr/models/hf_models.py#L164), it is confusing for the user. One would expect that the `pooled_output` will come from the BertPooler module, if it is present, and the last layer of the model. Moreover, it wastes memory and compute.
## How to fix
Simply add the `add_pooling_layer=False` flag in https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L178
Some other parts of the code need also to be fixed, like https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L205
should be `sequence_output = outputs[0]`
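A hedged sketch of what the proposed fix amounts to (a simplified illustration, not a verified diff of `modeling_dpr.py`):

```python
# Skip the unused BertPooler and take the [CLS] vector from the last hidden states.
from transformers import BertConfig, BertModel

config = BertConfig()
bert_model = BertModel(config, add_pooling_layer=False)  # no BertPooler weights allocated


def encode(input_ids, attention_mask=None):
    outputs = bert_model(input_ids=input_ids, attention_mask=attention_mask)
    sequence_output = outputs[0]              # (batch, seq_len, hidden)
    pooled_output = sequence_output[:, 0, :]  # DPR-style pooling: the [CLS] token
    return sequence_output, pooled_output
```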
| 11-22-2021 10:43:15 | 11-22-2021 10:43:15 | Hi ! I think it was kept in case some version of DPR had a projection layer (but afaik none of the official versions have that and the paper doesn't mention that either).
I think it would also break the conversion script from DPR weights of the official repository to `transformers` weights.
Not sure exactly how such changes are welcomed in the library but @patrickvonplaten probably knows more?<|||||>DPR has an optional projection layer in the [original implementation](https://github.com/facebookresearch/DPR/blob/main/dpr/models/hf_models.py#L168) but it is [only applied on the sequence output](https://github.com/facebookresearch/DPR/blob/main/dpr/models/hf_models.py#L216), not on BertPooler's output.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry to be so late on this. @PaulLerner - would you like to open a PR to fix it? Otherwise I can try to give it a shot<|||||>No problem, Iβm not sure to be familiar enough with the library to be able to fix it by myself.
For example, at first I thought one should simply add the `add_pooling_layer=False` flag in https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L178
But actually, some other parts of the code need also to be fixed, like https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L205
should be `sequence_output = outputs[0]`
And I have no clue about the conversion script for the weights that @lhoestq is talking about.<|||||>Hey @PaulLerner, don't worry about the conversion script. I think it'll be enough to just fix everything as suggested by you in `modeling_dpr.py` - would you like to give it a try? <|||||>Ok, Iβll let you know. Iβm quite busy atm. |
transformers | 14,485 | closed | Improve `add-new-pipeline` docs a bit | # What does this PR do?
This PR slightly improves the syntax of the code snippets in the `add-new-pipeline.rst` file.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-22-2021 10:40:47 | 11-22-2021 10:40:47 | |
transformers | 14,484 | closed | Loading from the wrong cache? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.12.5
- Platform:linux
- Python version:3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...):'google/t5-v1_1-small'
The problem arises when loading a tokenizer
## To reproduce
Steps to reproduce the behavior:
1. Two cache dirs, one of which you don't have access to (a coworker's)
2. load
3.
```
$ tokenizer_name
'google/t5-v1_1-small'
$ cache_dir
'/outputs/.cachetmp/'
$ AutoTokenizer.from_pretrained(tokenizer_name, use_fast=True, cache_dir=cache_dir)
loading configuration file https:/huggingface.co/google/t5-v1_1-small/resolve/main/config.json from cache at /outputs/.cachetmp/64521636c162517d8b5b18cbb5b1eda52138a4e70ab9de6f1f996bc3c668233f.e890bc92245b637cd45d26aae3f8c93f29e9d928c73b2e7e1007db14eb98c948
Model config T5Config {
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 1024,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"num_decoder_layers": 8,
"num_heads": 6,
"num_layers": 8,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"transformers_version": "4.12.5",
"use_cache": true,
"vocab_size": 32128
}
loading file https:/huggingface.co/google/t5-v1_1-small/resolve/main/spiece.model from cache at /outputs/.cachetmp/c8b0274da206819bf0ed2bd3b928edd6b170794dfd2c9186b7f4d8182c89a289.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
loading file https:/huggingface.co/google/t5-v1_1-small/resolve/main/tokenizer.json from cache at None
loading file https:/huggingface.co/google/t5-v1_1-small/resolve/main/added_tokens.json from cache at None
loading file https:/huggingface.co/google/t5-v1_1-small/resolve/main/special_tokens_map.json from cache at /outputs/.cachetmp/3ad6f8335c1b1ef8966245899d47dcf735abd134d21fd7d26f621fe45ac01184.c94798918c92ded6aeef2d2f0e666d2cc4145eca1aa6e1336fde07f2e13e2f46
loading file https:/huggingface.co/google/t5-v1_1-small/resolve/main/tokenizer_config.json from cache at /outputs/.cachetmp/385731228f2b42821bd28ef2bcab8b6c77982e1da41d17058802b306f5068ada.b1a2e3c152960fdc6b3d16520fa9f1591e2818d7dd66946c219e651f224894bf
[Errno 13] Permission denied: '**/.cache/huggingface/**64521636c162517d8b5b18cbb5b1eda52138a4e70ab9de6f1f996bc3c668233f.e890bc92245b637cd45d26aae3f8c93f29e9d928c73b2e7e1007db14eb98c948.lock'
Traceback (most recent call last):
File "/u/leshemc/.pycharm_helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "/anaconda3/envs/fuse/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 505, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/anaconda3/envs/fuse/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1744, in from_pretrained
return cls._from_pretrained(
File "/anaconda3/envs/fuse/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1872, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/anaconda3/envs/fuse/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 128, in __init__
super().__init__(
File "/anaconda3/envs/fuse/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 117, in __init__
raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.```
## Expected behavior
Load the tokenizer and ignore the other cache dir; how did it even find out this dir exists?
| 11-22-2021 09:50:38 | 11-22-2021 09:50:38 | If it helps, I see that the first cache_dir works well, but for some reason when "url_or_filename== 'https://huggingface.co/google/t5-v1_1-small/resolve/main/config.json'" the cache dir passed is None, and then it looks in the wrong cache and breaks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,483 | closed | JAVA Predict | ### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- how do I save a pretrained BERT model in .pb format, and then load the model in Java and predict???
- this is how the model is saved:
- model = TFBertForSequenceClassification.from_pretrained(model_path, num_labels=num_classes)
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=1e-08, clipnorm=1)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=[tf.keras.metrics.SparseCategoricalAccuracy(name='accruacy')])
# fit model
bert_history = model.fit(ds_train_encoded, epochs=number_of_epochs, validation_data=ds_val_encoded,use_multiprocessing=True)
tf.saved_model.save(model, saved_model)
ValueError: Exception encountered when calling layer "tf_bert_for_sequence_classification" (type TFBertForSequenceClassification).
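A hedged sketch addressing the export question above (an assumption about the usual path, not taken from the issue itself): `save_pretrained` with `saved_model=True` writes a TensorFlow SavedModel directory, which can then be loaded from Java via the TensorFlow Java bindings, avoiding the direct `tf.saved_model.save` call on the subclassed model.

```python
# Hedged sketch, not a verified fix for the ValueError above.
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# Writes exported_model/saved_model/1 in the TF SavedModel format.
model.save_pretrained("exported_model", saved_model=True)
```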
- `transformers` version:4.12.5
- Platform:java
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):2.4.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 11-22-2021 09:37:13 | 11-22-2021 09:37:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,482 | closed | where can I find the dataset bert-base-chinese is pretrained on? | null | 11-22-2021 09:22:51 | 11-22-2021 09:22:51 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,481 | closed | Is the index of the vocabulary in Tokenizer the same as the index of WordEmbedding? | I have a word vector A, which has dimension `[1, 768]`. I compute the similarity between vector A and all words in `bert.embeddings.word_embeddings.weight` and get the most similar index is `idx`. I use the following code to see what token the `idx` is
```python
token = tokenizer.decode([idx])
```
But when I look at the index of the token in Tokenizer using the following code, I find that it is not equal to the `idx`
```python
new_idx = tokenizer(token)['input_ids'][1]
# new_idx != idx
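# Hedged check (an addition, not part of the original question): the id round-trips
# when using the token-level helpers, because decode() followed by re-tokenization is
# not an exact inverse of the vocabulary lookup (subword markers, normalization and
# special tokens get in the way), which is what makes new_idx differ from idx.
token_str = tokenizer.convert_ids_to_tokens(idx)
round_trip_idx = tokenizer.convert_tokens_to_ids(token_str)
# round_trip_idx == idx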
``` | 11-22-2021 09:17:24 | 11-22-2021 09:17:24 | |
transformers | 14,480 | closed | Fine-tune Integer Bert for question answering task | How we can fine-tune the integer for the question answering task? Any available script | 11-22-2021 07:43:35 | 11-22-2021 07:43:35 | @sgugger ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,479 | closed | facebook / wav2vec2-base-100k-voxpopuli fails to load on huggingface.co (also on system) | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: macOS-11.6.1-arm64-arm-64bit
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: idk
### Who can help
@patrickvonplaten, @anton-l
Library:
- Tokenizers: @LysandreJik
- Speech: @patrickvonplaten, @anton-l
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator
## Information
Model I am using (wav2vec2-base-100k-voxpopuli):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Error message in browser is same as on cli:

## To reproduce
Steps to reproduce the behavior:
on huggingface.co:
1. Load https://huggingface.co/facebook/wav2vec2-base-100k-voxpopuli
2. Upload a file to analyze
3. Click the Compute button
Here is a cli script that fails in the same way:
```python3
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer
import librosa as lb
import torch
import numpy
model_name='facebook/wav2vec2-base-100k-voxpopuli'
# FE
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
# Initialize the tokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(model_name)
# Initialize the model
model = Wav2Vec2ForCTC.from_pretrained(model_name)
# we need a processor, too
processor = Wav2Vec2Processor(feature_extractor,tokenizer)
# Read the sound file
waveform, rate = lb.load('./DD000.wav', sr = 16000)
# process
input_values = processor(waveform, sampling_rate=16000, return_tensors='pt').input_values
# Retrieve logits from the model
logits = model(input_values).logits
# Take argmax value and decode into transcription
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
# Print the output
print(transcription)
```
Running it nets you:
```bash
traceback (most recent call last):
File "/Users/genevera/src/ml/transformers/examples/./wave2vec_stt.py", line 13, in <module>
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(model_name)
File "/Users/genevera/.pyenv/versions/miniforge3-4.10.1-5/envs/trans/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1733, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'facebook/wav2vec2-base-100k-voxpopuli'. Make sure that:
- 'facebook/wav2vec2-base-100k-voxpopuli' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'facebook/wav2vec2-base-100k-voxpopuli' is not a path to a local directory with something else, in that case)
- or 'facebook/wav2vec2-base-100k-voxpopuli' is the correct path to a directory containing relevant tokenizer files
```
## Expected behavior
I would expect the file to be analyzed without an error message about the tokenizer.
| 11-21-2021 23:39:13 | 11-21-2021 23:39:13 | @anton-l can chime in if I'm wrong, but I believe that it is expected that you should create your vocabulary as shown in https://huggingface.co/blog/fine-tune-wav2vec2-english
This could be made clearer in the model card, wdyt @anton-l?<|||||>Hi @genevera! The model you're using is a base model pretrained on unlabeled audio, as mentioned in the **Note** in the model card.
To transcribe speech you can either use a finetuned model, e.g. https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli-ft-en or fine-tune your own with a new tokenizer, like in the tutorial that @LysandreJik linked earlier. |
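A hedged sketch of the suggested path, using the fine-tuned checkpoint mentioned above (the `DD000.wav` path is just the file from the original script):

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa as lb
import torch

model_name = "facebook/wav2vec2-base-10k-voxpopuli-ft-en"
processor = Wav2Vec2Processor.from_pretrained(model_name)  # bundles feature extractor + tokenizer
model = Wav2Vec2ForCTC.from_pretrained(model_name)

waveform, _ = lb.load("./DD000.wav", sr=16000)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```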
transformers | 14,478 | closed | Fix dummy objects for quantization | # What does this PR do?
The dummy objects introduced recently are actually not generated and maintained by the dummy scripts, because the `is_xxx_available` test they rely on contains an _ in the xxx. This PR fixes that.
Another fix concerns the models `ForNextSentencePrediction` which were not included in the list yet (a better fix will use the model mapping names from the auto mapping one day when I have more time :-) )
This is blocking the doc building for the new frontend so merging as soon as it's green. | 11-21-2021 22:03:08 | 11-21-2021 22:03:08 | Thanks for taking care of it! |
transformers | 14,477 | closed | Fix sentinel token IDs in data collator for Flax T5 pretraining script | # What does this PR do?
Modifies the sentinel token IDs used in the data collator for the Flax T5 pretraining script so that they go in decreasing order starting at `len(tokenizer) - 1`, which matches the [original T5 code](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/preprocessors.py#L2895).
Fixes #14282
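A hedged illustration of the numbering this PR switches to (a toy sketch, not the collator code itself; the vocabulary size is just T5's for concreteness):

```python
import numpy as np


def sentinel_ids_sketch(vocab_size: int, num_sentinels: int) -> np.ndarray:
    # The i-th inserted sentinel (<extra_id_i>) gets id vocab_size - 1 - i,
    # i.e. ids decrease from the end of the vocabulary, matching the original T5 code.
    return vocab_size - 1 - np.arange(num_sentinels)


print(sentinel_ids_sketch(vocab_size=32100, num_sentinels=3))  # [32099 32098 32097]
```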
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 11-21-2021 19:27:24 | 11-21-2021 19:27:24 | Hey @rahuln! I'm pinging @patrickvonplaten for review as you have worked with him until now, but please note that he's off until next week so he'll review your PR when he's back! Thanks for your understanding.<|||||>Great! Thanks a lot for digging into this issue and fixing it |
transformers | 14,476 | closed | Change examples.md <details> to use directly html | # What does this PR do?
As part of the new doc, svelte markdown processor currently only interprets content of `<details>` tag as html. Therefore, made the necessary change.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-21-2021 19:07:09 | 11-21-2021 19:07:09 | Can you push this on the [doc_new_front](https://github.com/huggingface/transformers/tree/doc_new_front) branch instead?<|||||>did so<|||||>Let's close this in the meantime then! |
transformers | 14,475 | closed | Where do I find the class documentation | Stupid question:
The website is great and all, but where do I find the FULL code documentation for the APIs (i.e. pydoc or something)? For instance any time a class is "highlighted" , i.e. `model` I would expect to be able to click on it and see the full set of functions/properties it provides.
Also, navigating the website is cumbersome. The left menu doesn't scroll to the section you are currently visiting, and the additional nested sections (headers) for that subject should always be visible (with an always-visible "+" icon in the menu pane) to make navigation easier. I find myself clicking away (since I might have forgotten what something is nested under), going back and forth etc. because of that.
Thanks!
| 11-21-2021 09:49:28 | 11-21-2021 09:49:28 | Hy @avnerbarr, the documentation is undergoing a large refactor which should be out in a couple of days, and which should hopefully resolve most of your pain points, stay tuned.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @avnerbarr, do you have any feedback of the new documentation frontend?<|||||>seems much better. I'll update once I start working with it π
|
transformers | 14,474 | closed | How do I preserve HTML structure when putting data through transformers | I am trying to create a bulk paraphrasing tool. I have managed to get everything working fine with normal text but now I would like to also keep the <html> structure intact. Can anyone give me some tips on how to achieve this? When I put html through the models it is removed in the output. | 11-21-2021 04:42:16 | 11-21-2021 04:42:16 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,473 | closed | add Tuple as possible type hint for EvalPredictions label_ids | # What does this PR do?
This adds Tuple as a type hint for the label_ids attribute in EvalPredictions. It is a valid type that label_ids can have.
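A hedged sketch of the widened hint (simplified; it assumes the NamedTuple-style container from `trainer_utils.py` at the time):

```python
from typing import NamedTuple, Tuple, Union

import numpy as np


class EvalPrediction(NamedTuple):
    predictions: Union[np.ndarray, Tuple[np.ndarray]]
    # label_ids can be a single array or, e.g. for models with several label tensors,
    # a tuple of arrays
    label_ids: Union[np.ndarray, Tuple[np.ndarray]]
```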
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@stas00
| 11-20-2021 20:27:25 | 11-20-2021 20:27:25 | If that's the case, then for consistency the following 2 classes should have the proposed changed as well.
@sgugger, what do you think? |
transformers | 14,472 | closed | Switch from using sum for flattening lists of lists in group_texts | # Speed up list flattening in `group_texts` by changing `sum(list_of_lists, [])` to `functools.reduce(operator.iconcat, list_of_lists, [])`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
I changed all list flattening from `sum(list_of_lists, [])` to `functools.reduce(operator.iconcat, list_of_lists, [])`.
Here is a stack overflow thread about which method is fastest: https://stackoverflow.com/a/45323085
Here is a colab notebook that shows a quick example between the old way and the new way and a couple of timed examples. The new way is about 5-6x faster. https://colab.research.google.com/drive/1Kxj_JbM9HMLFpjUduy6i3tfqDob_pYIp?usp=sharing
I discovered this while trying to use `group_texts` on many GB of data, and the speedup was greatly appreciated.
Nearly all of these changes are in `run_mlm` or `run_clm` examples, but there are a couple in `run_swag` and another
in `file_utils.py` which might be unnecessary.
I don't know why `make style` is moving `import functools` to its own line above the other imports in examples/flax/language-modeling/run_t5_mlm_flax.py and examples/tensorflow/language-modeling/run_clm.py
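For reference, a hedged toy comparison of the two forms on a `group_texts`-style dict (illustrative only, not code from the PR):

```python
# Both lines produce the same flattened list, but reduce/iconcat extends one
# accumulator in place, avoiding the quadratic copying that sum() does.
import functools
import operator

examples = {"input_ids": [[1, 2, 3], [4, 5], [6, 7, 8, 9]]}

flat_sum = sum(examples["input_ids"], [])                                   # O(n^2) copies
flat_fast = functools.reduce(operator.iconcat, examples["input_ids"], [])   # in-place extend

assert flat_sum == flat_fast == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```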
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I think @sgugger wrote the original `group_texts`
| 11-20-2021 17:37:40 | 11-20-2021 17:37:40 | Thanks for investigating the performance of that line! I initially used the `sum()` because it's easy to read. The `functools.reduce(operator.iconcat, list_of_lists, [])` is way less readable, but it gives so much in speed that switching may be a good thing.
Wdyt @LysandreJik ?<|||||>I was actually confused at first because I didn't know that `sum()` could be used to flatten a list of lists like that. Perhaps just adding a comment explaining that it is flattening the list of lists into one list will suffice? Something along the lines of, "This concatenates all sequences together by flattening the list of lists"
Alternatively, would `chain.from_iterable(list_of_lists)` or `chain(*list_of_lists)` seem more readable?<|||||>I think `chain(*list_of_lists)` is the most readable, and is even clearer than the `sum` thing.
It makes almost no difference in time from your notebook, so let's go with this one?<|||||>Ok I'll make the changes. Do you know why `make style` moved `import functools` into its own line in `examples/flax/language-modeling/run_t5_mlm_flax.py` and `examples/tensorflow/language-modeling/run_clm.py`?<|||||>I did a couple more tests in this notebook: https://colab.research.google.com/drive/1Kxj_JbM9HMLFpjUduy6i3tfqDob_pYIp
Edit: This actually didn't work. Let me try to fix it.
~~One way to improve readability would be to make a utility function like this: `ravel = functools.partial(functools.reduce, operator.iconcat, [])` so then you could just use `ravel(x)` inside `group_texts`.~~
This works: `ravel = functools.partial(functools.reduce, operator.iconcat)` so then you could just use `ravel(x, [])` inside `group_texts`. `ravel` would mirror what `torch.ravel` and `np.ravel` do. I'm not sure if that is more or less confusing.
Edit: it is same with/without partial
~~Edit: double-checking this right now~~
~~This method actually performed the fastest out of everything I tried.~~
Here is a summary of the methods when using `group_texts` on SQuAD contexts where `x` is a list of lists (each time is a 'best of 5' except `sum` which is a 'best of 3'):
1. `sum(x, [])` - 2min 47s
2. `list(chain.from_iterable(x))` - 27.2 s
3. `list(chain(*x))` - 27.1 s
4. `functools.reduce(operator.iconcat, x, [])` - 26.8 s
5. `functools.partial(functools.reduce, operator.iconcat)(x, [])` - 26.8 s
6. `np.ravel(x)` - 28.9 s
7. `[b for a in x for b in a]` - 28.4 s per loop
<|||||>I think that option 3 (`list(chain(*x))`) is the best compromise in terms of readability vs speed (only 3ms longer than the best run, which might also be dataset-dependent).
Thanks a lot for benchmarking all options! |
transformers | 14,471 | closed | Wav2Vec2ForPreTraining in 4.12 broke SpeechBrain implementation | ## Environment info
- `transformers` version:
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.9 (and 1.10)
- Using GPU in script?: 1-32 Tesla V100
- Using distributed or parallel set-up in script?: DDP
### Who can help
@patrickvonplaten, @anton-l
## Information
Model I am using (Bert, XLNet ...): wav2vec-base (original is on facebookai repo)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to the [SpeechBrain PR](https://github.com/speechbrain/speechbrain/pull/1062) and use the corresponding branch
2. Install speechbrain (pip install -r requirements.txt / pip install -e .)
3. Install extra_requirements in recipes/CommonVoice/self-supervised-learning/wav2vec2/extra_requirements.txt)
4. Download and untar any CommonVoice english version (best using an old one to get less hours to debug ...)
5. start the training with a single GPU (as it doesn't work either anymore) with: `python recipes/CommonVoice/self-supervised-learning/wav2vec2/train.py recipes/CommonVoice/self-supervised-learning/wav2vec2/hparams/wav2vec2_base.yaml --data_folder=/path/to/CV/en --batch_size=adapttoyourgpu(12 if 32GB) --gradient_accumulation=8 or 16`
## Extra information about the code
The important code can be located in `recipes/CommonVoice/self-supervised-learning/wav2vec2/train.py` under the brain class for compute_forward and compute_objectives. The entire wrapping of the HF model into SpeechBrain happens at the bottom of the `speechbrain/lobes/models/hugginface_wav2vec2.py` file.
The batch that is received simply is of the form (batch, signal) just like for HF.
## Expected behavior
With 4.11 (code can be found in the same PR from earlier commit) everything was working well ! We were even able to submit papers based on this work. Here is a list of the different logs obtained with the old working version:
```
epoch: 1, lr: 1.87e-05, steps: 1027, optimizer: AdamW - train loss: 6.41e+03 - valid loss: 4.53e+03, valid acc: 0.14673814177513123
epoch: 2, lr: 3.75e-05, steps: 2054, optimizer: AdamW - train loss: 6.18e+03 - valid loss: 4.45e+03, valid acc: 0.21184375882148743
epoch: 3, lr: 5.62e-05, steps: 3081, optimizer: AdamW - train loss: 5.67e+03 - valid loss: 3.70e+03, valid acc: 0.26702988147735596
epoch: 4, lr: 7.50e-05, steps: 4108, optimizer: AdamW - train loss: 5.19e+03 - valid loss: 3.70e+03, valid acc: 0.301466703414917
epoch: 5, lr: 9.37e-05, steps: 5135, optimizer: AdamW - train loss: 5.15e+03 - valid loss: 3.58e+03, valid acc: 0.33249199390411377
epoch: 6, lr: 1.12e-04, steps: 6162, optimizer: AdamW - train loss: 5.05e+03 - valid loss: 3.49e+03, valid acc: 0.3265174329280853
```
Now, with the new implementation:
```
epoch: 1, lr: 1.87e-05, steps: 1027, optimizer: AdamW - train loss: 7.09e+03 - valid loss: 4.87e+03, valid acc: 0.15861859917640686
epoch: 2, lr: 3.75e-05, steps: 2054, optimizer: AdamW - train loss: 6.67e+03 - valid loss: 4.67e+03, valid acc: 0.19915643334388733
epoch: 3, lr: 5.62e-05, steps: 3081, optimizer: AdamW - train loss: 6.39e+03 - valid loss: 4.41e+03, valid acc: 0.22449128329753876
epoch: 4, lr: 7.50e-05, steps: 4108, optimizer: AdamW - train loss: 6.18e+03 - valid loss: 4.25e+03, valid acc: 0.24435752630233765
epoch: 5, lr: 9.37e-05, steps: 5135, optimizer: AdamW - train loss: 6.01e+03 - valid loss: 4.15e+03, valid acc: 0.2056254893541336
epoch: 6, lr: 1.12e-04, steps: 6162, optimizer: AdamW - train loss: 5.88e+03 - valid loss: 4.11e+03, valid acc: 0.2493399679660797
epoch: 7, lr: 1.31e-04, steps: 7189, optimizer: AdamW - train loss: 5.76e+03 - valid loss: 4.02e+03, valid acc: 0.27252206206321716
epoch: 8, lr: 1.50e-04, steps: 8216, optimizer: AdamW - train loss: 5.66e+03 - valid loss: 3.97e+03, valid acc: 0.26998990774154663
epoch: 9, lr: 1.69e-04, steps: 9243, optimizer: AdamW - train loss: 5.59e+03 - valid loss: 3.85e+03, valid acc: 0.24951176345348358
epoch: 10, lr: 1.87e-04, steps: 10270, optimizer: AdamW - train loss: 5.51e+03 - valid loss: 3.80e+03, valid acc: 0.24127712845802307
epoch: 11, lr: 2.06e-04, steps: 11297, optimizer: AdamW - train loss: 5.43e+03 - valid loss: 3.72e+03, valid acc: 0.2344648540019989
epoch: 12, lr: 2.25e-04, steps: 12324, optimizer: AdamW - train loss: 5.37e+03 - valid loss: 3.74e+03, valid acc: 0.20351676642894745
epoch: 13, lr: 2.44e-04, steps: 13351, optimizer: AdamW - train loss: 5.30e+03 - valid loss: 3.72e+03, valid acc: 0.1984717845916748
epoch: 14, lr: 2.62e-04, steps: 14378, optimizer: AdamW - train loss: 5.29e+03 - valid loss: 3.66e+03, valid acc: 0.2088804990053177
epoch: 15, lr: 2.81e-04, steps: 15405, optimizer: AdamW - train loss: 5.25e+03 - valid loss: 3.64e+03, valid acc: 0.21932080388069153
epoch: 16, lr: 3.00e-04, steps: 16432, optimizer: AdamW - train loss: 5.21e+03 - valid loss: 3.62e+03, valid acc: 0.20787915587425232
```
As a side note, I think that exporting masking and negative_sampling from the forward function is a bad idea for external toolkit compatibility. If everything was embedded in the .forward() function, any toolkit could just instantiate your model and run it without worrying about the library version. Now, every time HuggingFace releases a new transformers version, I will have to check and adapt to the potential changes :-(
| 11-20-2021 17:31:48 | 11-20-2021 17:31:48 | I printed the mask_indices obtained by calling your function and it looks correct. Is it normal however that the call to `transformers.models.wav2vec2.modeling_wav2vec2._sample_negative_indices` returns this (batch_size being 6):
<img width="481" alt="Capture d'écran 2021-11-20 à 18 51 12" src="https://user-images.githubusercontent.com/11910731/142736330-77e6001b-e756-4a0c-9104-54ccd1404245.png">
<|||||>Ok I took a look at the differences between the two scripts - there are **two** main differences (which were bugs in 4.11 IMO).
1)
The diversity loss was completely incorrectly scaled in 4.11 - see: https://github.com/huggingface/transformers/blob/dc193c906dfb3b9663f8963735c46e030a15b914/src/transformers/models/wav2vec2/modeling_wav2vec2.py#[β¦]3 here you can see that while the contrastive loss is scaled by the number of target (quantized) vectors, the diversity loss is not multiplied by the number of target vectors. On master this has been corrected: https://github.com/huggingface/transformers/blob/6fc38adff272ea3148e05888edf67eeb00170453/src/transformers/models/wav2vec2/modeling_wav2vec2.py#[β¦]0
This was one of the major reasons why the training became very unstable for me - since the diversity loss was incorrectly scaled, the quantization codebook vectors always collapsed
If in your case diversity loss seems to be very much unnecessary you could try scaling it down by a factor of 10 or more using current master to have more or less the same loss as in 4.11
2. (That's a big one) in 4.11 the negative quantized vectors were taken from all possible quantized input vectors - which can be seen here: https://github.com/huggingface/transformers/blob/dc193c906dfb3b9663f8963735c46e030a15b914/src/transformers/models/wav2vec2/modeling_wav2vec2.py#[β¦]7 as the attention_mask is passed as the mask
However this is not how the original fairseq models were trained
the original Wav2Vec2 models were trained using only target vectors (so the subset of all quantized input vectors as defined by mask_time_indices) as negative vectors
This is now also done on master: https://github.com/huggingface/transformers/blob/6fc38adff272ea3148e05888edf67eeb00170453/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L229
The authors of wav2vec2 told me that they tried both the 4.11 and the master approach and master always led to better final WER fine-tuning results while the 4.11 approach was more robust
You could revert back to the original 4.11 behavior by simply passing sampled_negative_indices=attention_mask here: https://github.com/huggingface/transformers/blob/6fc38adff272ea3148e05888edf67eeb00170453/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L228 and see what happens
Those were the main changes between 4.11 and master which IMO are both bugfixes
Maybe a good start would be to:
a) Use a tiny diversity_loss weight and
b) generate the negative_sample_indices by passing the attention_mask instead of mask_time_indices
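A hedged toy sketch of the scaling difference described in 1) above (all numbers are dummies; only the treatment of the diversity term matters here, this is not the library code):

```python
import torch

mask_time_indices = torch.tensor([[1, 0, 1, 1], [1, 1, 0, 0]], dtype=torch.bool)
num_targets = mask_time_indices.sum()            # number of masked (target) time steps
contrastive_loss = torch.tensor(5.0)             # assume already summed over the targets
num_codevectors, perplexity = 640.0, 300.0
diversity_loss = (num_codevectors - perplexity) / num_codevectors
diversity_weight = 0.1

# 4.11 behaviour: diversity term not scaled by the number of targets
loss_old = contrastive_loss + diversity_weight * diversity_loss
# master behaviour: both terms scaled consistently by the number of targets
loss_new = contrastive_loss + diversity_weight * diversity_loss * num_targets
print(loss_old, loss_new)
```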
<|||||>> I printed the mask_indices obtained by calling your function and it looks correct. Is it normal however that the call to `transformers.models.wav2vec2.modeling_wav2vec2._sample_negative_indices` returns this (batch_size being 6): <img alt="Capture d'écran 2021-11-20 à 18 51 12" width="481" src="https://user-images.githubusercontent.com/11910731/142736330-77e6001b-e756-4a0c-9104-54ccd1404245.png">
yeah this looks correct<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@TParcollet - everything is fine now with the pretraining no? Or is there anything else that should be fixed? <|||||>It is partially fine. We had to revert to the whole sentence sampling due to small sentences<|||||>Ok! And now having replaced `mask_time_indices=mask_time_indices` with `mask_time_indices=torch.ones(...)` fixed the problem or still not 100%? <|||||>It's fixed, but mysterious research-wise. |
transformers | 14,470 | closed | Unable to load DeBERTa-v3 tokenizer | ```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")
```
Gives me an error:
ValueError: This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer.
Installing sentencepiece doesn't help.
| 11-20-2021 13:16:24 | 11-20-2021 13:16:24 | Hello! Could you please share your environment information? Thanks.<|||||>> Hello! Could you please share your environment information? Thanks.
Hello, sure! transformers==4.12.5, python 3.7. I am using Colab for experiments<|||||>If you're using colab, then have you restarted the runtime after installing `sentencepiece`?<|||||>> If you're using colab, then have you restarted the runtime after installing `sentencepiece`?
Thank you!!! It solved the issue. Thank you once again!<|||||>Glad to hear it! |
transformers | 14,469 | closed | Enable automatic creation of decoder_input_ids given labels in TF | Added a way to automatically create the `decoder_input_ids` from the `labels` provided by the user the `TFEncoderDecoderModel`
| 11-20-2021 12:15:58 | 11-20-2021 12:15:58 | Hey @Carlosbogo, thanks for your contribution! I've pinged @patrickvonplaten to review your PR, but please note that Patrick if off until next Monday. Thanks for your patience!<|||||>Thanks for the PR @Carlosbogo!
I think we should also move the loss computation into the `TFEncoderDecoderModel` class similar to how @NielsRogge has done it in this PR: https://github.com/huggingface/transformers/pull/14139 . Right now the loss is computed in the decoder. However we should compute the loss directly in the `TFEncoderDecoderModel` class.<|||||>Hi @Carlosbogo, thanks for your contribution! Let me know if you are able to take into account the comment above.
Thanks!<|||||>Yeah, I think I can work on that.
Should I create a new PR for it when I finish?<|||||>No it can be included in this PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @Carlosbogo,
Hope you had a good start to 2022! Let me know if I can help with anything to get this PR merged :-) <|||||>I'm really sorry for the delay: I had my university exams and couldn't find time to work on it. I'll try to do it as soon as possible.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,468 | closed | Pretrained bare model has weights that are not being used | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-5.11.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- DistilBERT, BERT @LysandreJik
Documentation: @sgugger
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Hi I want to use the raw hidden-states from a pretrained bert-like model as input to another model.
I don't want too big a model, so I tried the bare DistilBERT like this, following the [docs](https://huggingface.co/transformers/master/model_doc/distilbert.html#transformers.DistilBertModel):
```python
from transformers import DistilBertTokenizer, DistilBertModel
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
But I get the warning about weights that are not used:
```
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertModel: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.bias', 'vocab_projector.bias', 'vocab_projector.weight', 'vocab_layer_norm.weight']
- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Should I see this warning? I didn't expect to.
I also tried with the bare BERT model, also by copying from the [docs](https://huggingface.co/transformers/master/model_doc/bert.html#transformers.BertModel). But this too shows a warning about weights not being used:
```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Should I really be seeing this? I assume the weights are matched with the correct model. If this is expected behavior maybe that can be noted in the docs? | 11-20-2021 11:56:04 | 11-20-2021 11:56:04 | In both case you are using a checkpoint pretrained on masked language modeling. The warning tells you some weights of the checkpoint are not used, which is the case: you are using the bare model without the masked LM classification head, so you are not using those weights.<|||||>Alright so the default checkpoints contain the masked LM head? Is there a checkpoint without it?
Thanks <|||||>I'm not aware of any pretrained model without any head, so I think the answer is no.<|||||>Okay, thanks. I was just not expecting to see any warnings when following the examples. |
transformers | 14,467 | closed | `EncoderDecoderModel` `generate` for a `ViT` as encoder | I was wondering if there was a way to do `generate` for a ViT to GPT2 `EncoderDecoderModel`. I managed to figure out how to get the loss by using the outputs of the ViT and pushing in the `encoder_outputs` into the model as shown below. However, it seems that for `generate` it is explicitly expecting `inputs_ids`. I'm fairly certain that somewhere under the hood all you need is just the `encoder_outputs` (and `inputs_ids` is unnecessary in that case). Is there a way to do this?
Also I realise that there is a `VisionEncoderDecoderModel` but I am trying to do this as a learning exercise.
```python
vit2gpt2 = EncoderDecoderModel.from_encoder_decoder_pretrained(VIT_MODEL, DISTIL_GPT2)
tokenized_captions = gpt2_tokenizer_fn(captions)
labels = tokenized_captions["input_ids"].clone()
labels[tokenized_captions["attention_mask"]==0] = LABEL_MASK
encoder_outputs = vit2gpt2.encoder(pixel_values=images)
outputs = vit2gpt2(
encoder_outputs=encoder_outputs,
decoder_input_ids=tokenized_captions["input_ids"],
decoder_attention_mask=tokenized_captions["attention_mask"],
labels=labels,
return_dict=True,
)
```
[Here](https://www.kaggle.com/sachin/vit-to-gpt2-encoder-decoder-model) is a kaggle kernel to a runnable version of above snippet.
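For the `generate` question, a hedged sketch of a manual greedy loop over the same model (illustrative only; `vit2gpt2`, `images` and the `gpt2_tokenizer` object are assumed from the snippet above, not defined here):

```python
import torch

encoder_outputs = vit2gpt2.encoder(pixel_values=images)
decoder_input_ids = torch.full(
    (images.shape[0], 1), gpt2_tokenizer.bos_token_id, dtype=torch.long
)

for _ in range(30):  # max new tokens
    logits = vit2gpt2(
        encoder_outputs=encoder_outputs,
        decoder_input_ids=decoder_input_ids,
    ).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)

captions = gpt2_tokenizer.batch_decode(decoder_input_ids, skip_special_tokens=True)
```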
## Update 1:
So I've managed to narrow down my necessary search down to `generation_utils.py` but I cannot see where in there it loops over the predicted values and feeds it back into the model. I'm hoping to replicate the process from there. | 11-20-2021 08:33:08 | 11-20-2021 08:33:08 | Maybe of interest to @NielsRogge even if the question would be better asked on the forum :)<|||||>Sorry, I'll close this here, opened the issue [in the forum](https://discuss.huggingface.co/t/encoderdecodermodel-generate-text-for-a-vit-as-encoder/12332). |
transformers | 14,466 | closed | [test] add test for --config_overrides | https://github.com/huggingface/transformers/issues/14389 suggested that `--config_overrides` doesn't work.
The feature works just fine, it's just the multiple logging of the config done by the framework is confusing. I already flagged this issue here: https://github.com/huggingface/transformers/issues/11104 I have no idea why loading a tokenizer triggers dumping of model config - as its contents are mostly irrelevant to the tokenizer and surely doesn't contribute anything useful to the user, other than avoiding looking at the log completely.
So that there is no confusion, I added an additional dump with the updated config (terrible!) and a test so that we don't accidentally break this feature.
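For context, a hedged sketch of what `--config_overrides` boils down to (assuming the `update_from_string` helper the example scripts rely on):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("gpt2")
# Comma-separated key=value pairs, cast to the type of the existing attribute.
config.update_from_string("n_embd=1024,n_head=16,n_layer=12")
print(config.n_embd, config.n_head, config.n_layer)  # 1024 16 12
```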
I'm not sure how else to improve that other than revisiting the design of when the model config is dumped. IMHO:
1. it's dumped too soon for the model (before it can be updated)
2. it shouldn't be dumped at all for the tokenizer
Fixes: https://github.com/huggingface/transformers/issues/14389
@sgugger, @LysandreJik | 11-20-2021 00:49:11 | 11-20-2021 00:49:11 | |
transformers | 14,465 | closed | Auto processor | # What does this PR do?
This PR adds an `AutoProcessor` API, similar to `AutoTokenizer` and `AutoFeatureExtractor`. | 11-19-2021 20:12:25 | 11-19-2021 20:12:25 | I can amend the PR to have the auto-processor then go to a tokenizer (if available) or a feature extractor (if available), as I think that's the logic we want anyway.<|||||>Discussed it a bit with @patrickvonplaten that isn't as excited as I am about the `AutoProcessor` being an umbrella over all modalities' preprocessors and he raises important API questions.
Will let him comment so that we all align on the choices :) <|||||>I see the need for a `AutoProcessor` class, but I'm not a fan of making it an umbrella class for both tokenizers, feature extractors and processors because:
i) It goes a bit against our "no-magic" & easy-to-understand code IMO. Having `AutoProcessor` wrap both `AutoTokenizer` and `AutoFeatureExtractor` makes this code quite difficult to understand. E.g. if for some reason this class fails to load an NLP tokenizer, the traceback can become quite complex (`AutoProcessor` -> `AutoTokenizer` -> here multiple ways of loading the `AutoTokenizer` via `tokenizer_config`, `tokenizer_type`, model `config`). Also I'm quite sure this function will become much more complex over time to handle all kinds of weird use cases. We could limit this complexity by not making it return `AutoFeatureExtractor` or `AutoTokenizer`
ii) IMO it breaks a design pattern. So far we had the following design pattern IMO:
- AutoTokenizer returns a tokenizer of type `PreTrainedTokenizer` and `PretrainedTokenizerFast`
- AutoFeatureExtractor returns a feature extractor of type `FeatureExtractionMixin`.
-> both of those classes have IMO more or less the same design.
It is much more intuitive IMO that `AutoProcessor` only returns `...Processor` objects and nothing more. Also, I understand a `...Processor` not really as a general "whatever-you-can-process" class, but as a wrapper object that **always** includes two or more pre- or postprocessing objects (e.g. a speech input pre-processor and text output post-processor). Admittedly the naming is not great here though as `...Processor` does encompass pretty much all kinds of tokenization, feature extraction, etc...
iii) I don't see the use-case of this class really. IMO there is no need to force an `Auto...` class to be useful for more than one task (or modality). E.g. I don't think many users are keen to have a single script in which they can quickly switch between an text tokenizer and a speech recognition processor => for me the beauty of `Auto...` is to be able to quickly try out multiple different checkpoints for the **same** task. To do so, it's enough to pick one `Auto...` model class such as `AutoModelForCausalLM` together with, *e.g.* `AutoTokenizer`. I don't see at all the need to be able to quickly switch between different tasks in the same script. If one wants to switch for a `language-generation` task to let's say speech classification, I don't think the convenience of not having to change `AutoTokenizer` to `AutoFeatureExtraction` is worth much compared to the complexity added to this function.
iiii) The idea here is really that `AutoProcessor` can be used for **all** kinds of preprocessing. This might make the user believe that the same holds true for `AutoModel`. But `AutoModel` is different IMO as it only returns the encoder of the models and can **not** really include all models (e.g. RAG, EncoderDecoder, SpeechEncoderDecoder, ...)
To conclude, I would prefer to have `AutoProcessor` just return `...Processor` objects and neither feature extractors nor tokenizers.
There is one thing, where I see my logic a bit flawed and where I understand why this class is coded the way it is:
a) The "...Processing" name. I agree that all pre- and post- tokenization, feature extraction, etc... can be summarized by the name "processing".
Very interested in discussing this a bit more!
<|||||>i know nothing about the details of this but from my superficial understanding of this I agree that "I'm not a fan of making it an umbrella class for both tokenizers, feature extractors and processors"<|||||>(note that thanks to @sgugger automated metadata sharing we will soon be able to display an actually sensible sample code for tokenization/preprocessing/etc on the transformers models in the hub)<|||||>I have absolutely no strong opinion on this, I added this because @LysandreJik told me to :-)<|||||>Following the discussion of https://github.com/huggingface/moon-landing/issues/3632 ,
Want to maybe kick-start a discussion here as this PR / issue has been hanging a bit in the air and I think it was mostly me that was blocking this PR and it might be time to unblock it.
Having revisited my points [here](https://github.com/huggingface/transformers/pull/14465#issuecomment-983810350), I guess my opinion changed a bit with regard to:
i) The no-magic philosophy applies a bit less to `Auto....` class I guess since they can now also directly load from the Hub, cover all kinds of models etc... so would not count this as a strong argument anymore. I do think we'll quickly get quite complex code in `AutoProcessor` to handle all the weird use cases
ii) Still feel strongly about this, as it clearly breaks a pattern to me; having `AutoProcessor` return a `...Tokenizer` even though there is an `AutoTokenizer` is somewhat unexpected to someone who knows `transformers` well and is not clean IMO.
iii) I do see a clearer use case now! So happy to scratch that
iv) think it's also not that big of a deal and think models and processors can just be treated differently
=> so overall if @LysandreJik @thomwolf @sgugger you're more in favor of merging this PR, happy to be outvoted here :-) Don't feel that strongly about it anymore.<|||||>If we pick a different name for the auto class (not `AutoProcessor`), I think it makes ii) a moot point. Since it seems to be your biggest argument against, would that be enough of a compromise for you?<|||||>Yes, also totally fine for me to go with `AutoProcessor` - I do see the easy of use as a big argument that outweighs 2) -> so also ok for me to use `AutoProcessor` if that's the best name here <|||||>Glad to see we're aligned for a good coverage of processors! I personally like `AutoProcessor` and think it's not necessarily unclear if we have some good documentation.
`AutoProcessor` is also the current UI for some models, so the API won't be changed for these models (which is good). |
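For illustration, the intended usage looks roughly like this (a sketch; the text-only fallback in the second call is the "umbrella" behaviour discussed above, not a settled guarantee):
```python
from transformers import AutoProcessor

# a multi-modal checkpoint returns a ...Processor wrapping a feature extractor and a tokenizer
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")  # Wav2Vec2Processor

# under the umbrella proposal, a text-only checkpoint would fall back to its tokenizer
maybe_tokenizer = AutoProcessor.from_pretrained("bert-base-uncased")
```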
transformers | 14,464 | closed | [OPTIMIZATION] remove a redundant condition statement for empty str judgement in `whitespace_tokenize` | The conditional check here seems redundant for splitting a str object, because `str.split()` still works (and returns an empty list) when the str object is an empty string. Maybe I just don't know whether the `if` check here is meant to carry some special meaning. | 11-19-2021 15:42:48 | 11-19-2021 15:42:48 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
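For reference, a minimal sketch of the function in question (paraphrased from the BERT tokenizer utilities), showing why the guard is redundant for the empty-string case:
```python
def whitespace_tokenize(text):
    """Runs basic whitespace cleaning and splitting on a piece of text."""
    text = text.strip()
    if not text:  # the check under discussion
        return []
    return text.split()

assert "".split() == []  # split() already returns [] for an empty string
# note: a None input would raise at .strip() with or without the guard
```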
transformers | 14,463 | closed | Moving pipeline tests from `Narsil` to `hf-internal-testing`. | # What does this PR do?
Moving pipeline tests from `Narsil` to `hf-internal-testing`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-19-2021 15:01:13 | 11-19-2021 15:01:13 |
transformers | 14,462 | closed | Fixes torch jit tracing for LayoutLMv2 model. | Pytorch seems to reuse memory for input_shape which caused a mismatch in shapes later in the forward pass.
# What does this PR do?
Fixes #14457
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [(x)] Did you write any new necessary tests? <- made some changes to the tests.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
| 11-19-2021 12:49:14 | 11-19-2021 12:49:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge any idea why this never got merged?<|||||>Is the only way to jit.trace layoutlmv2 or a finetuned layoutlmv2 to install transformers in editable mode and add this PR?<|||||>For others that find this, I was able to get jit.trace to work on my finetuned layoutlmv2 model:
1. I forked transformers (at v4.17.0 specifically because its required by Sagemaker)
2. I copied the code changes from this PR over
3. I created a public repo where this modified version could be built (git+https://github.com/piercelamb/transformers_fork.git@layoutlmv2_torchscript#egg=transformers)
4. I re-used the encoded dataset I created for training to get a single instance of data for tracing:
```python
cpu_model = best_model.cpu()
cpu_model.eval()
dataset = load_from_disk(encoded_data_path)
train_data = dataset['train']
train_data.set_format(type="torch")
sample_instance = train_data[0]
for key, value in sample_instance.items():
    if key != 'labels':
        sample_instance[key] = value.unsqueeze(0)

traced_cpu = torch.jit.trace(
    func=cpu_model,
    example_inputs=[
        sample_instance['input_ids'].cpu(),
        sample_instance['bbox'].cpu(),
        sample_instance['image'].cpu(),
        sample_instance['attention_mask'].cpu(),
        sample_instance['token_type_ids'].cpu(),
    ],
    check_trace=False,  # when traced model is checked, an error is produced due to name mangling
)
```
Note the `.unsqueeze(0)` call, which adds a leading dimension of 1 to each input; this is the batch size LayoutLMv2 expects for a single instance.
<|||||>Hi,
Not sure why this PR was closed. Let's merge in case this works as intended. For some reason I can't reopen the PR, referring this to @LysandreJik <|||||>Indeed, that's on me! I can't reopen the PR either. Feel free to open a PR with the same changes and credit @mikkeldenker as a co-author.
Please ping me on it and we'll have it merged. Thanks!<|||||>Pinging @mikkeldenker, can open a PR in case we don't have a response |
transformers | 14,461 | closed | Function `ByT5Tokenizer.convert_tokens_to_string()` fails with certain tokens | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
After fine-tuning `ByT5`, the pipeline for `text2text-generation` fails when certain tokens are predicted and passed to the `ByT5Tokenizer.convert_tokens_to_string()` function.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import pipeline
generate = pipeline("text2text-generation", model="versae/modernisa-pre")
generate("presuncion")
```
Which produces a string missing some characters:
```
[{'generated_text': 'presuncin'}]
```
## Expected behavior
I would have expected the pipeline not to miss any of the characters it is supposed to generate.
The error seems related to the way tokens are treated. A possible fix could be:
```python
from transformers import ByT5Tokenizer
class FixByT5Tokenizer(ByT5Tokenizer):
    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) in a single string."""
        bstring = b""
        for token in tokens:
            if token in self.special_tokens_decoder:
                tok_string = self.special_tokens_decoder[token].encode("utf-8")
            elif token in self.added_tokens_decoder:
                tok_string = self.added_tokens_decoder[token].encode("utf-8")
            elif token in self.special_tokens_encoder:
                tok_string = token.encode("utf-8")
            elif token in self.added_tokens_encoder:
                tok_string = token.encode("utf-8")
            else:
                tok_string = bytes(token, encoding="utf8")  # bytes([ord(token)])
            bstring += tok_string
        string = bstring.decode("utf-8", errors="ignore")
        return string

generate2 = pipeline(
    "text2text-generation",
    model="versae/modernisa-pre",
    tokenizer=FixByT5Tokenizer.from_pretrained("versae/modernisa-pre"),
)
generate2("presuncion")
```
Which correctly produces:
```
[{'generated_text': 'presunción'}]
```
However, it also makes other tests fail 🤷
| 11-19-2021 12:27:29 | 11-19-2021 12:27:29 | Maybe related: https://github.com/huggingface/transformers/issues/13779.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,460 | closed | [ImageGPT] Small fixes | # What does this PR do?
Improves #14240 with:
1) an integration test
2) adds `ImageGPTForImageClassification` to the Auto API
3) removes the local image and uses one from the hub instead | 11-19-2021 12:01:17 | 11-19-2021 12:01:17 | |
transformers | 14,459 | closed | Add GitPython to quality tools | Adds gitpython to the `quality` command of the setup, as otherwise the quality tools cannot be used.
cc https://github.com/huggingface/transformers/pull/14379 | 11-19-2021 11:45:34 | 11-19-2021 11:45:34 | Should we thus remove this line in the circleCI config?
```
- run: pip install isort GitPython
```
It's there twice (code quality and repo consistency jobs).<|||||>Correct, just pushed the update. Thanks! |
transformers | 14,458 | closed | [Tests] Improve vision tests | # What does this PR do?
Small fixes for the tests of the vision models. | 11-19-2021 11:36:51 | 11-19-2021 11:36:51 | Done. |
transformers | 14,457 | closed | Tracing LayoutLMv2 results in wrong input_shape dimension | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: macOS-12.0-arm64-arm-64bit
- Python version: 3.9.0
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@NielsRogge
## Information
When tracing the LayoutLMv2 model, I get the following error:
```
RuntimeError: The expanded size of the tensor (561) must match the existing size (512) at non-singleton dimension 1. Target sizes: [1, 561]. Tensor sizes: [1, 512]
```
This seems to be caused by [this line](https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py#L862) in the model. I think pytorch might be reusing the memory for `final_shape` and `input_shape` when tracing, so when `final_shape` is updated it also updates `input_shape` which leads to the mismatch in dimensions later on. I've currently solved the problem by changing the shapes to
```
final_shape = list(torch.empty(size=input_shape).size())
visual_shape = list(torch.empty(size=input_shape).size())
```
I've also changed `visual_shape` for good measure, but it doesn't really seem to be necessary. With the above changes I'm able to successfully trace the model. It seems a bit overkill to allocate an entire new tensor, just to get a copy of the shape but I didn't really find any other solution. Please let me know if there is a better solution.
I'm happy to submit a PR with the fixes if you want.
## To reproduce
Steps to reproduce the behavior:
1. Initialise `LayoutLMv2Model`
2. Trace the model using `torch.jit.trace`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| 11-19-2021 10:32:10 | 11-19-2021 10:32:10 | Hi,
Thanks for working on making LayoutLMv2 torchscriptable. It would indeed be great if you can open a PR for this.
Also, if LayoutLMv2 works with TorchScript, we can update `tests/test_modeling_layoutlmv2.py`, to also take into account the TorchScript tests (as these are not run right now as seen [here](https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/tests/test_modeling_layoutlmv2.py#L262)).<|||||>That's really the least I can do compared to how you guys make NLP easily accessible to everyone. So thank you for all your amazing work!
I've created a pull request that should fix the issue, and will therefore close this. We can continue the discussion in the PR if needed. |
transformers | 14,456 | open | [New Model] DocFormer: End-to-End Transformer for Document Understanding | # π New model addition
## Model description
See _"DocFormer: End-to-End Transformer for Document Understanding", Appalaraju et al (ICCV 2021)_ on [CVF](https://openaccess.thecvf.com/content/ICCV2021/papers/Appalaraju_DocFormer_End-to-End_Transformer_for_Document_Understanding_ICCV_2021_paper.pdf) and [arXiv](https://arxiv.org/abs/2106.11539)
DocFormer is a multi-modal transformer model for 2D/visual documents from Amazon (where, fair disclosure, I also currently work but not in research) - which I would characterize at a high level as being broadly along the same use cases as LayoutLMv2 (already in `transformers`), but achieving better (state-of-the-art) results with smaller datasets per the benchmarks in the paper.
I've found this kind of multi-modal, spatial/linguistic model very useful in the past (actually released an [AWS sample](https://github.com/aws-samples/amazon-textract-transformer-pipeline) and [blog post](https://aws.amazon.com/blogs/machine-learning/bring-structure-to-diverse-documents-with-amazon-textract-and-transformer-based-models-on-amazon-sagemaker/) with Hugging Face LayoutLMv1 earlier this year) and would love the improvements from DocFormer could be available through HF Transformers.
## Open source status
* [X] the model implementation is available: (give details)
* Looks like there's an (MIT-0) implementation at https://github.com/shabie/docformer
* [ ] the model weights are available: (give details)
* Not currently as far as I can tell?
* [X] who are the authors: (mention them, if possible by @gh-username)
* Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R. Manmatha - all of AWS AI. Not sure of GitHub usernames
* @shabie for the currently available implementation | 11-19-2021 07:08:44 | 11-19-2021 07:08:44 | Haha thank you for this issue! Tagging @uakarsh since both of us have managed to get the architecture largely down (we think!)
It would be awesome to get this integrated with some help :)
Directly inspired by the journey of @NielsRogge <|||||>@shabie Thanks for the tag. @athewsey, as far as the weights are concerned, I have tried implementing their MLM task (described in the repo), as well as Image Reconstruction Part (for the Unsupervised Case), and on the basis of the performance, I can say that it is working nearly close to that of the paper. So, we are hoping to release it as soon as possible. I am quite excited to share the model with the community since this is my first transformer(along with @shabie) implementation and nothing can be more excited than this. However, there are some approximations in the model, which may affect performance, but we would try to get the results as close as possible. Cheers,<|||||>Hi,
DocFormer would indeed be a great addition to the library. Note that pretrained weights are required for a model to be added.
Looking forward to this!<|||||>>
>
> Hi,
>
> DocFormer would indeed be a great addition to the library. Note that pretrained weights are required for a model to be added.
>
> Looking forward to this!
@NielsRogge
Thank you for the quick reply!
Its very clear to us that weights are needed. That's the reason we didn't create this new model issue so far. That is not to say that wasn't a good idea @athewsey!
So the two challenges in getting weights is compute and data.
Compute may be manageable but the main problem right now is that the OCR to be performed to extract words and their bounding boxes on [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/) dataset. The thing is `pytesseract` is ridiculously slow. I think `pytesseract` is just generally a poor implementation given its disk bound operations.
I didn't get the chance earlier but I was about to ask you if you guys have the dataset with the OCR step completed and if that could also be made available. That would speed up things a lot. If not, we'd have to first overcome this hurdle which is where we're at basically. We'd need some kind of distributed computation (like a spark cluster job) to get this task completed in manageable time.<|||||>As an update, the authors would be sharing the Textract OCR for the RVL CDIP Dataset, and as soon as they release it, we would try to achieve the benchmark performance as mentioned in the paper. However, we are also trying from our end to make our own OCR part, and then perform pre-training and fine-tuning<|||||>Any updates on this?<|||||>Have completed the scripts for pre-training on MLM, and using DocFormer for Document Image Classification. Check it out here [DocFormer Examples with PyTorch Lightning](https://github.com/uakarsh/docformer/tree/master/examples/docformer_pl)<|||||>Any updates on this? It would be very useful @uakarsh @shabie @athewsey @NielsRogge . LayoutLMV3 is cool but its license doesn't allow commercial usage<|||||>Hi @WaterKnight1998 we have been able to train the model, you can find it [here](https://github.com/shabie/docformer/tree/master/examples/docformer_pl).
The list of things done till now are:
- [x] Pre-training script for DocFormer on any dataset either using Tesseract (means no OCR provided), or you can give OCR through any suitable tool
- [x] Training from Scratch/ Fine-tuning DocFormer on any dataset, you can check out the link, I mentioned above
- [ ] Got the same results as that of the authors
Due to limited resources, currently, I have been able to make the first two points and tried to show a demo of the same [here](https://huggingface.co/spaces/iakarshu/docformer_for_document_classification) and if @NielsRogge suggests, we can indeed integrate it with Hugging Face, since it would be easy to do so
Thanks,<|||||>@uakarsh I can help if you need help. Can this model be used for token classification?<|||||>Sure, with some modifications to the script of Document Image Classification and pre-processing, we would definitely be able to use it for token classification<|||||>Hello there, @uakarsh. Has this initiative of integrating DocFormer into Transformers been discontinued in the meantime?<|||||>Hi @vprecup, thanks for your comment, it really made me feel happy that, you are interested in integrating DocFormer into hugging face. However, the problem is, as a student, I don't have that much computing to pre-train the model. As mentioned in the paper, they took 5M documents (pg. 6, above section 4.2), and have not specified the data. I believe the current [IDL Dataset](https://github.com/furkanbiten/idl_data) would be sufficient for the pre-train dataset, and we have a demo notebook for [pre-training](https://github.com/shabie/docformer/tree/master/examples/docformer_pl/pre_training_task_on_idl_dataset).
So, maybe if somebody can do that, I can help them.
By the way, one interesting thing, In the DocFormer paper, on pg. 7, Table 6, without pre-training, the authors get an F1 Score of `4.18` on FUNSD (100 Epochs), while in our [notebook](https://www.kaggle.com/code/akarshu121/docformer-for-token-classification-on-funsd), we get `13.29` (3x improvement on 100 Epochs), and it overfits, so maybe the implementation is good to go for your use case.
Thanks,
Akarsh<|||||>Hi @uakarsh, if we could get you some compute power, would you like to give it a go?
It seems I can borrow a Z8 Fury workstation from HP, equipped with up to four of the latest NVIDIA RTX 6000 Ada generation GPUs, each boasting 48GB of VRAM. Additionally, it features Intel's most powerful CPU, potentially with up to 56 cores, and the option to be fully loaded with 2TB of RAM.
Creating the weights for the DocFormer should be a good use of this machine. What is your time availability?<|||||>Hi @mbertani, sorry for the late reply. If it is possible, I would surely like to give it a go. As of my experience with GPUs, I have worked on a DGX workstation, and I believe, the configurations you mentioned would work fine.
By time availability, do you mean to have a meet to discuss the plan further?
By that time, I would be working on arranging the code required for the pre-training as well as coming up with the plan about how to go next. I do have slight experience on pre-training (had pre-trained LayoutLMv3, and some related models for use case), so I can plan things and test them. <|||||>OK, good, then we can setup a meeting to discuss how we proceed. So as not to share emails on public forums, I can share with you my LI profile and we take it from there?
https://www.linkedin.com/in/marcobertaniokland/<|||||>Sure<|||||>Any update on this? |
transformers | 14,455 | closed | The multi-node / multi-gpu training and repeat logging on each process | How do we deal with repetitive warnings that can't be shut off on a multi-node/multi-gpu environment?
e.g. at BigScience we started using HF Tokenizer and now **this gets repeated hundreds of times**:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
```
It comes from:
https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/tokenization_utils_base.py#L1934-L1936
The only way for me to fix this is to push the logging level to ERROR on the replicas:
```
if args.rank == 0:
    transformers.utils.logging.set_verbosity(logging.INFO)
else:
    transformers.utils.logging.set_verbosity(logging.ERROR)
```
but then if there is actually a real warning in some process, then I won't see it.
Any good suggestions here?
Thank you!
p.s. As a background story: I have each component we use in Megatron-DeepSpeed spitting just a few dozens of these which then get multiplied by say 512 or 1024 times. And the log file becomes unusable and makes the troubleshooting when things crash a very difficult experience. Hence I really need to find a way not to log anything that is not really pertinent to a specific replica process. Moreover many processes aren't replicas of rank 0 process and do unique things, e.g. in the pipeline setup. But in the case of tokenizer it is the same on all processes.
@sgugger, @LysandreJik | 11-19-2021 04:44:42 | 11-19-2021 04:44:42 | I don't know of any other way to limit the logs.<|||||>We could switch it to a `warnings.warn`, which should only be sent once.
Otherwise, we can also use the `self.deprecation_warnings` dictionary available here:
https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/tokenization_utils_base.py#L1470-L1472
This can store warnings, like done here:
https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/tokenization_utils_base.py#L1494-L1498
To make sure that the warnings are not sent more than once.<|||||>This unfortunately won't help. We are talking multi-process here, so the processes don't know anything about each other's doings. <|||||>OK, I have an idea. Many of our warnings are there to help new users know that something might not be right, or they may need to do something. This is great!
But when a user runs the same command line the Nth time, they don't want to see that warning anymore, because if it were important to them they would have done something about it already and the warning should have disappeared.
So I propose a new feature that turns all the advisory warnings off. Probably via env var.
```
HF_ADVISORY_WARNINGS=0 my_program.py
```
This is a feature for users who know what they are doing and actually pay attention to the logs.
Thoughts?
<|||||>I am fine with that on principle, we just need to have an API designed that makes it easy to use (e.g. not adding three lines of codes to issue a warning).<|||||>Yes, of course! It should be a single wrapper, like:
```
- logger.warning("Special tokens have been added in the vocabulary...)
+ logger.warning_advice("Special tokens have been added in the vocabulary...)
```
and:
```
# logging.py
import os

# on by default; silenced when HF_ADVISORY_WARNINGS is set to a falsy value
advisory_warnings = os.getenv("HF_ADVISORY_WARNINGS", "1").lower() not in ("0", "false", "no")

def warning_advice(*args, **kwargs):
    if not advisory_warnings:
        return
    logger.warning(*args, **kwargs)
```
Something like that?<|||||>This looks like a great solution to me. Wdyt @LysandreJik ?<|||||>Sounds great to me! |
transformers | 14,454 | closed | GPT2 Generate doesn't pass the user defined past_key_values. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: MacOS Intel
- Python version: 3.7.10
- PyTorch version (GPU?): '1.8.1' (CPU)
- Tensorflow version (GPU?): '1.15.0' (CPU)
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: Nope
### Who can help
@patrickvonplaten, @LysandreJik, @cccntu
## Information
Bug in Hugging Face Transformers `generate` for auto-regressive models (like GPT-2). If you want to pass your own `past_key_values`, the function will not pass them to the model as you would expect.
## To reproduce
Pass user-defined `past_key_values` to the `generate()` function; the `past_key_values` seen in the model's `forward()` will be `None`.
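A minimal sketch of the kind of call being described (illustrative only; checkpoint and prompts are arbitrary, and whether the supplied cache actually reaches `forward()` through `generate()` is exactly what this issue is about):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prefix = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    past_key_values = model(**prefix, use_cache=True).past_key_values  # cache built by hand

# forward() itself accepts the cache:
next_ids = tokenizer(" jumps", return_tensors="pt").input_ids
with torch.no_grad():
    step = model(next_ids, past_key_values=past_key_values)

# the report is about the equivalent call through generate(), where the
# supplied cache reportedly arrives in forward() as None:
# model.generate(next_ids, past_key_values=past_key_values, max_length=20)
```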
| 11-19-2021 02:44:03 | 11-19-2021 02:44:03 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @KnightZhang625,
Could you please provide a reproducible code-snippet?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,453 | closed | How to implement huggingface BERT + CRF layer? | Hi
I wonder how to add a CRF layer to a pretrained BERT model. Do I have to overwrite the loss function in the Trainer? I really need some help on this, thank you. | 11-19-2021 01:37:47 | 11-19-2021 01:37:47 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
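For readers landing here from search, a minimal sketch of one common way to do this. It assumes the third-party `pytorch-crf` package (`torchcrf`), which is not part of `transformers`; returning the CRF negative log-likelihood as the first element of the output lets the `Trainer` pick it up as the loss without overriding `compute_loss` (labels must not contain `-100` padding for the CRF):
```python
import torch
from torch import nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf (assumed third-party dependency)

class BertCrfForTokenClassification(nn.Module):
    def __init__(self, model_name="bert-base-cased", num_labels=9):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.classifier(self.dropout(hidden))
        mask = attention_mask.bool()
        if labels is not None:
            # CRF negative log-likelihood of the gold tag sequence
            # (replace any -100 padding in labels with a real tag id beforehand)
            loss = -self.crf(emissions, labels, mask=mask)
            return loss, emissions
        return self.crf.decode(emissions, mask=mask)
```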
transformers | 14,452 | closed | Sampling sequences similar to a given sequence | # π Feature request
Sampling sequences similar to the previously generated sequence.
Example:
A language model produces the following "I went for a walk with my dog". During the next step we request a model to produce a similar sequence. It produces: "He went for a walk with his cat".
## Motivation
Very useful for downstream applications.
## Your contribution
Can review PR and help with implementation.
| 11-19-2021 01:36:17 | 11-19-2021 01:36:17 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,451 | closed | Patrick von Platen | Dear Patrick von Platen
I have some questions about the use of Helsinki-NLP models in commercial projects, can you please contact me by email? [email protected]
Sorry to use this issue tracker, I know of no other way to reach Patrick. | 11-18-2021 20:31:39 | 11-18-2021 20:31:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,450 | closed | [RFC] Ampere/tf32 defaults for transformers | ## Background
It's possible to use the new TF32 format automatically when doing fp32 processing for a ~3x speed up, without doing any changes to the code, other than flipping the switch on. But the speed up may come at a cost of accuracy. You can see the differences between the formats in the following image:

You can see that both TF32 and FP32 have the same dynamic range (the magnitude of numbers), but the former has a much lower precision, which depending on a situation may or may not impact the final outcome.
## Emerging Need
As Ampere hardware is emerging and automatic TF32 seems to be going in the direction of being disabled by default (probably starting from pt-1.11?), as discussed here: https://github.com/pytorch/pytorch/issues/67384, we need to communicate to our users how to turn it on/off and what the impacts on speed and accuracy might be.
Having it on could bring a ~3x speed improvement, and most likely according to the NVIDIA engineers the training quality shouldn't be impacted. But we don't have our first hand experiences yet to provide pragmatic recommendations.
## Available Guides
The on/off machinery is explained here: https://pytorch.org/docs/master/notes/cuda.html#tf32-on-ampere
The other crucial document is: https://pytorch.org/docs/master/notes/numerical_accuracy.html
TF32 Educational blog post: https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/
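For quick reference, the on/off switches described in the first link boil down to two flags (defaults have been changing across PyTorch releases, so check the linked docs for the version you run):
```python
import torch

# TF32 for matmuls (cuBLAS) and for cuDNN convolutions on Ampere+ GPUs
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# set both to False to force full-precision fp32 math
```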
## Plan of action
So this issue is both an RFC and is also documenting the need to update: https://huggingface.co/transformers/performance.html
I trust that the commentary will emerge once you start experimenting with the new hardware.
@sgugger, @LysandreJik, @patil-suraj, @patrickvonplaten
| 11-18-2021 20:29:10 | 11-18-2021 20:29:10 | |
transformers | 14,449 | closed | OpenAIGPTTokenizer does not work with spacy 3.x installed | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.3
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.6.8
- PyTorch version (GPU?): 1.9.1 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten @LysandreJik since it is related to OpenAI GPT model.
## Information
It seems that OpenAIGPTTokenizer does not work with spacy>=3.0.0.
In `tokenization_openai.py`, there is logic that uses spacy tokenizer if spacy and ftfy are installed:
```
try:
    import ftfy
    from spacy.lang.en import English

    _nlp = English()
    self.nlp = _nlp.Defaults.create_tokenizer(_nlp)
    self.fix_text = ftfy.fix_text
except ImportError:
    logger.warning("ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.")
    self.nlp = BasicTokenizer(do_lower_case=True)
    self.fix_text = None
```
This block executes correctly for spacy versions 2.x. But for spacy versions 3.x, the API changed and the proper way to create the tokenizer is like so (taken from the docs at https://spacy.io/api/tokenizer):
```
from spacy.lang.en import English
nlp = English()
# Create a Tokenizer with the default settings for English
# including punctuation rules and exceptions
tokenizer = nlp.tokenizer
```
## To reproduce
Steps to reproduce the behavior:
1. `pip install transformers==4.11.3 ftfy==6.0.3 spacy==3.0.0`
2. Open a python shell and run the following:
```
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
```
3. Should break with a stack trace that looks like the following:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cody/.pyenvs/test/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1750, in from_pretrained
**kwargs,
File "/Users/cody/.pyenvs/test/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1872, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/cody/.pyenvs/test/lib/python3.6/site-packages/transformers/models/openai/tokenization_openai.py", line 107, in __init__
self.nlp = _nlp.Defaults.create_tokenizer(_nlp)
AttributeError: type object 'EnglishDefaults' has no attribute 'create_tokenizer'
```
## Expected behavior
The tokenizer should be able to work with spacy 3.x.
| 11-18-2021 19:27:14 | 11-18-2021 19:27:14 | Hi,
The OpenAI tokenizer probably requires an update to work with Spacy v3 (as seen [here](https://github.com/explosion/spaCy/discussions/7398)). That line should be replaced by `self.nlp = _nlp.tokenizer`.
Do you mind opening a PR for this?
Thanks! |
transformers | 14,448 | closed | WIP: Add support for bfloat16 in Trainer and T5 | # What does this PR do?
This PR adds support for `bfloat16` with `deepspeed` and the `T5` models.
`bfloat16` is of great interest due to its larger dynamic range. `bfloat16` is especially of interest for large models that were originally trained on TPUs (which have native `bfloat16` support). Some newer GPUs (A6000, A100) have hardware support for `bfloat16`. Recently, `deepspeed` added support for `bfloat16` (see [this commit](https://github.com/microsoft/DeepSpeed/commit/648f7bfa5009484b822064d0c28d377da6dd71a0)).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Members/contributors who may be interested in this PR: @stas00 and @JamesDeAntonis.
Adding @patrickvonplaten and @patil-suraj due to the T5 changes in this PR.
## Coordination
@stas00 @JamesDeAntonis
I know James has been working on a PR to add `bfloat16` through amp. This PR is only focused on `deepspeed`. Maybe there is a way we can unite our efforts?
## Next steps
For extra context: I am aware of `bfloat16` coming to ZeRO stage 3, [this PR](https://github.com/microsoft/DeepSpeed/pull/1453). I hope to test compatibility with stage 3 as soon as it's ready. I've only aimed for stage 2 so far.
## TODO
- [ ] make sure that we require the right minimal deepspeed version before merging this.
- [ ] update deepspeed integration docs to include bf16 examples
- [ ] add bf16 tests
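For readers arriving after the fact, a rough sketch of the shape this eventually takes (hedged; flag and config names are per recent `transformers`/DeepSpeed releases, see the DeepSpeed integration docs for the authoritative version):
```python
from transformers import TrainingArguments

ds_config = {
    "bf16": {"enabled": True},           # DeepSpeed's bf16 section
    "zero_optimization": {"stage": 2},
}

args = TrainingArguments(
    output_dir="out",
    bf16=True,                           # Trainer-side bf16 switch
    deepspeed=ds_config,                 # a dict or a path to a json config
)
```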
| 11-18-2021 18:52:58 | 11-18-2021 18:52:58 | OK, since we have 2 half-baked PRs,
https://github.com/huggingface/transformers/pull/13207
https://github.com/huggingface/transformers/pull/14448
I'm going to try to merge the 2 to keep the credits and start a new PR.
If you have something to push now is the time.<|||||>@stas00 Merging the two PRs is a good idea. I have nothing to push now. I was planning to work on this tomorrow. Tag me once you merge the two PRs and I will continue tomorrow in the combined PR.<|||||>Most of the work is already in the other PR so it makes sense to use that as a base so will need to ask @JamesDeAntonis for him to temporarily give you access to his fork, or alternatively you can push your changes here and I can replay it into that other PR, so all your work will be still credited.
But in any case let's complete this work in the next few days, since I need to enable Deepspeed bf16 integration and this foundation is required.
<|||||>So I merged our work here into https://github.com/huggingface/transformers/pull/13207, keeping the log intact - will work on polishing it now, there may have been some overlapping code left to clean up.
Thus ideally let's continue over at https://github.com/huggingface/transformers/pull/13207<|||||>@stas00 Thank you. I will close this PR now. We can continue in #13207 |
transformers | 14,447 | closed | [Bert, et al] fix early device assignment | As flagged by @cbalioglu we are doing device placement in a sub-module's `__init__` in some models, which is a wrong place to do that.
https://github.com/huggingface/transformers/blob/83ef8bcac2f6ce00a3c6256a4ba747c8802480f6/src/transformers/models/bert/modeling_bert.py#L185
It appears that it originally was in `forward` and then was moved to `__init__` without removing the device placement. Here is the use of the same in `forward`
https://github.com/huggingface/transformers/blob/83ef8bcac2f6ce00a3c6256a4ba747c8802480f6/src/transformers/models/luke/modeling_luke.py#L248
So this PR fixes 7 models where this happened.
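For illustration, a stripped-down sketch of the pattern (loosely modeled on the BERT embeddings code linked above): no `device=` at registration time, and devices resolved from inputs inside `forward`:
```python
import torch
from torch import nn

class Embeddings(nn.Module):
    def __init__(self, max_positions=512):
        super().__init__()
        # no device= here: buffers follow the module on .to(device) / .cuda()
        self.register_buffer("position_ids", torch.arange(max_positions).expand((1, -1)))
        self.register_buffer(
            "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
        )

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        # the device is resolved from the actual input at call time, not at construction time
        return self.position_ids[:, :seq_length].to(input_ids.device)
```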
@sgugger, should we add another check to our quality control to catch any `device=` in `__init__`?
@sgugger, @LysandreJik | 11-18-2021 18:30:17 | 11-18-2021 18:30:17 | |
transformers | 14,445 | closed | Fix finite IterableDataset test on multiple GPUs | # What does this PR do?
Simple fix for the new `test_training_finite_iterable_dataset` on multiple GPUs. | 11-18-2021 14:20:28 | 11-18-2021 14:20:28 | |
transformers | 14,444 | closed | Issues with Training VisionEncoderDecoder with Seq2SeqTrainer | Hello Team,
I am trying to code up a VisionEncoderDecoder with VIT + BERT and finetune it with `Seq2SeqTrainer.` (Presuming the support exists).
```python
from transformers import VisionEncoderDecoderModel
from transformers import ViTFeatureExtractor, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
vitbert = VisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'bert-base-uncased')
vitbert.to(device)
print("Model loaded")
```
ViTModel accepts `pixel_values`, but the VisionEncoderDecoder model apparently fails to recognize the preprocessed inputs with image encodings as `pixel_values` and `labels`. So I changed the `pixel_values` into `input_ids` for the model to accept (not sure if I am going in the right direction).
**The Pre-Process snippet**
```python
input_encodings = feature_extractor(images=inputs, return_tensors="pt")
input_encodings["input_ids"] = input_encodings["pixel_values"]
```
**Feature casting snippet**
```python
from datasets import Features, Array3D, Sequence, Value
features = Features({
"input_ids": Array3D(dtype="float32", shape=(3, 224, 224)),
"labels": Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
})
preprocessed_train_ds = tokenized_train_datasets.map(ds_preprocess_function, batched=True, features=features)
preprocessed_val_ds = tokenized_eval_datasets.map(ds_preprocess_function, batched=True, features=features)
```
But `Trainer.train` throws the below error.
```python
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['labels']
```
Not sure what I am missing here.
Please advise. | 11-18-2021 12:20:11 | 11-18-2021 12:20:11 | Hi,
Did you take a look at my [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb)?
I just tested, everything seems to work fine on my end. You don't need to rename the `pixel_values` to `input_ids`.
<|||||>> Hi, did you take a look at my [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb)? I just tested, everything seems to work fine on my end. You don't need to rename the `pixel_values` to `input_ids`.
Thanks a lot, will take a look. <|||||>All good now!
I supplied the decoder tokeniser instead of the encoder feature extractor in the tokeniser parameter while setting up the seq2seq trainer.
Thanks |
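For anyone hitting the same error, the wiring from the referenced notebook looks roughly like this (a sketch; argument values are placeholders and the variables reuse the names from the snippets above). The key points are that `pixel_values` are kept as-is and a default collator is used:
```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments, default_data_collator

training_args = Seq2SeqTrainingArguments(
    output_dir="./vit-bert",
    predict_with_generate=True,
    per_device_train_batch_size=8,
)

trainer = Seq2SeqTrainer(
    model=vitbert,
    args=training_args,
    train_dataset=preprocessed_train_ds,   # keeps "pixel_values" + "labels"
    eval_dataset=preprocessed_val_ds,
    tokenizer=feature_extractor,           # the notebook passes the feature extractor here
    data_collator=default_data_collator,
)
trainer.train()
```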
transformers | 14,443 | closed | [Generation] Allow `inputs_embeds` as an input | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR allows `inputs_embeds` to be used as an input argument for `generate()`. Fixes: #12218
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-18-2021 11:58:16 | 11-18-2021 11:58:16 | |
transformers | 14,442 | closed | tokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 11-18-2021 08:33:58 | 11-18-2021 08:33:58 | |
transformers | 14,441 | closed | Fix EncoderDecoderModel code example | # What does this PR do?
This PR updates the code example of `EncoderDecoderModel`, as it included 2 mistakes.
Fixes #14439
Fixes #14381 | 11-18-2021 08:02:27 | 11-18-2021 08:02:27 | Thanks @NielsRogge ! |
transformers | 14,440 | closed | What does "is_beam_sample_gen_mode" mean | Hi, I find there are many ways of generating sequences in `Transformers` (when calling the `generate` method).
According to the code there:
https://github.com/huggingface/transformers/blob/01f8e639d35feb91f16fd3c31f035df11a726cc5/src/transformers/generation_utils.py#L947-L951
As far as I know:
`is_greedy_gen_mode` stands for Greedy Search.
`is_sample_gen_mode` stands for Sampling(with top_k and top_p).
`is_beam_gen_mode` stands for Beam Search.
But what does `is_beam_sample_gen_mode` mean?
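To make the question concrete, here is a minimal sketch of my understanding of which `generate()` arguments end up setting each flag (the checkpoint name and the decoding parameters below are just placeholders for illustration, not a recommendation):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids

# is_greedy_gen_mode: num_beams == 1 and do_sample is False (the defaults)
greedy = model.generate(input_ids, max_length=20)

# is_sample_gen_mode: num_beams == 1 and do_sample is True
sampled = model.generate(input_ids, max_length=20, do_sample=True, top_k=50, top_p=0.95)

# is_beam_gen_mode: num_beams > 1 and do_sample is False
beam = model.generate(input_ids, max_length=20, num_beams=5)

# is_beam_sample_gen_mode: num_beams > 1 and do_sample is True
beam_sampled = model.generate(input_ids, max_length=20, num_beams=5, do_sample=True, top_k=50)
```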
Besides, I want to know how to choose the correct way of generating. I have tried several ways, but:
1. I find the sequences from "beam search" mode become too similar to each other.
2. I also find the sequences from "sample" mode, while diverse, lack context coherence.
Thank you! | 11-18-2021 06:31:52 | 11-18-2021 06:31:52 | Hi,
A good overview can be found in [this blog post](https://huggingface.co/blog/how-to-generate).
It explains the most prominent decoding methods, mainly Greedy search, Beam search, Top-K sampling and Top-p sampling.
I'm not sure "beam sampling" exists.<|||||>> Hi,
>
> A good overview can be found in [this blog post](https://huggingface.co/blog/how-to-generate).
>
> It explains the most prominent decoding methods, mainly Greedy search, Beam search, Top-K sampling and Top-p sampling.
>
> I'm not sure "beam sampling" exists.
Thanks for your response.
First, to be clear, according to the code there:
https://github.com/huggingface/transformers/blob/01f8e639d35feb91f16fd3c31f035df11a726cc5/src/transformers/generation_utils.py#L947-L951
`is_greedy_gen_mode` stands for Greedy Search.
`is_sample_gen_mode` stands for Sampling(with top_k and top_p).
`is_beam_gen_mode` stands for Beam Search.
But what does `is_beam_sample_gen_mode` mean?
In fact, I have already read the post, and learned concepts like Beam Search and Sampling (along with top_k and top_p) from it.
And that's why I'm confused: the post presents Beam Search and Sampling as two different ways of generating, and I can't imagine what "beam sample gen mode" looks like.<|||||>Hi,
Looking at the [docs](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_sample) of beam sample, it means beam search with multinomial sampling.
<|||||>> Hi,
>
> Looking at the [docs](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_sample) of beam sample, it means beam search with multinomial sampling.
Hi, I have checked the docs, but I need something more specific.
Assume `num_beams` = 5 and that there are 100 words in the vocab.
Does `beam sample` mean that at each time step:
1. It first generates 5 * 100 = 500 possible sequences and calculates and normalizes their scores.
2. It then uses multinomial sampling to randomly choose 5 sequences as output?
BTW, I read the source code but fail to understand it.
In the following code, why does it sample `2 * num_beams` (instead of `num_beams`) candidates in line 2152?
https://github.com/huggingface/transformers/blob/69e16abf98c94b8a6d2cf7d60ca36f13e4fbee58/src/transformers/generation_utils.py#L2150-L2159
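To make the question more concrete, here is a toy sketch of one decoding step as I currently picture it (my own pseudocode with random logits, not the actual `transformers` implementation):
```python
import torch

num_beams, vocab_size = 5, 100
beam_scores = torch.zeros(num_beams)                    # running log-prob of each beam
next_token_logits = torch.randn(num_beams, vocab_size)  # stand-in for the model output

# Score all num_beams * vocab_size candidate continuations.
next_token_scores = torch.log_softmax(next_token_logits, dim=-1) + beam_scores[:, None]
flat_scores = next_token_scores.view(-1)                # shape: (500,)

# Beam search would keep the highest-scoring candidates deterministically:
topk_scores, topk_ids = torch.topk(flat_scores, 2 * num_beams)

# Beam sample (as I understand it) instead draws the candidates multinomially:
probs = torch.softmax(flat_scores, dim=-1)
sampled_ids = torch.multinomial(probs, num_samples=2 * num_beams)
sampled_scores = flat_scores[sampled_ids]

# Either way the chosen ids are split back into (beam index, token id); the real code
# draws 2 * num_beams candidates here, which is exactly the part I am asking about.
beam_idx = torch.div(sampled_ids, vocab_size, rounding_mode="floor")
token_ids = sampled_ids % vocab_size
```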
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> > Hi,
> > Looking at the [docs](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_sample) of beam sample, it means beam search with multinomial sampling.
>
> Hi, I have checked the docs, but I need something more specific. Assume `num_beams` = 5 and that there are 100 words in the vocab. Does `beam sample` mean that at each time step:
>
> 1. It first generates 5 * 100 = 500 possible sequences and calculates and normalizes their scores.
> 2. It then uses multinomial sampling to randomly choose 5 sequences as output?
>
> BTW, I read the source code but fail to understand it. In the following code, why does it sample `2 * num_beams` (instead of `num_beams`) candidates in line 2152?
>
> https://github.com/huggingface/transformers/blob/69e16abf98c94b8a6d2cf7d60ca36f13e4fbee58/src/transformers/generation_utils.py#L2150-L2159
Hello, do you understand the details of "is_beam_sample_gen_mode" now?<|||||>What exactly is beam search with multinomial sampling? Googling returns no relevant results (except on Huggingface).<|||||>Ok. I think beam sample is almost the same as beam search, except that in each step, instead of picking the `num_beams` most-likely sequences, it chooses these sequences by sampling.<|||||>> Ok. I think beam sample is almost the same as beam search, except that in each step, instead of picking the `num_beams` most-likely sequences, it chooses these sequences by sampling.
I think @shunzh is right. Another difference is that when `num_return_sequences > 1` is set, `beam_sample` is run `num_return_sequences` times and samples 1 sequence each time, while `beam_search` is run only once and returns the top `num_return_sequences` sequences.<|||||>> 2 * num_beams
refer to [https://github.com/huggingface/transformers/issues/16095#issuecomment-1071123601](https://github.com/huggingface/transformers/issues/16095#issuecomment-1071123601) |
transformers | 14,439 | closed | (EncoderDecoderModel) Why is decoder_start_token_id different between training and generation? | As shown in the source code below, taken from [EncoderDecoderModel](https://huggingface.co/transformers/model_doc/encoderdecoder.html?highlight=decoder_start_token#transformers.EncoderDecoderModel.forward), the **decoder_start_token_id** differs between training and generation.
```
>>> # training
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
```
```
>>> # generation
>>> generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
```
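For context, this is roughly how I am combining the two in my own script (a sketch with placeholder checkpoints, not an official example):
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoints
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# training-time configuration, following the first snippet above
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

input_ids = tokenizer("This is a test sentence.", return_tensors="pt").input_ids

# generation, following the second snippet above (the part that confuses me)
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```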
Why are they not the same? | 11-18-2021 06:26:39 | 11-18-2021 06:26:39 |