repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 10,011 | closed | OOM when trying to fine tune patrickvonplaten/led-large-16384-pubmed | I'm currently following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=tLM3niQqhEzP) but instead I'm using `patrickvonplaten/led-large-16384-pubmed`
```python
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/led-large-16384-pubmed",)
led = AutoModelForSeq2SeqLM.from_pretrained(
"patrickvonplaten/led-large-16384-pubmed",
gradient_checkpointing=True,
use_cache=False,
)
```
instead of `allenai/led-large-16384` as the base model and tokenizer. I'm also using my own train/test data. Other than that, I kept the fine-tuning setup consistent with that notebook. However, I'm running into OOM errors
```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 13.96 GiB already allocated; 20.00 MiB free; 14.56 GiB reserved in total by PyTorch)
0%| | 0/3 [00:10<?, ?it/s]
```
on a couple of `Tesla V100-SXM2-16GB` GPUs and I'm not sure why that might be. The `batch_size=2` seems pretty small and I also set `gradient_checkpointing=True`. @patrickvonplaten and/or the surrounding community, I'd greatly appreciate any help with this | 02-04-2021 18:28:09 | 02-04-2021 18:28:09 | The model is actually quite big, so I would expect it to OOM. If you are doing multi-GPU training, you could try the `fairscale`/`deepspeed` integration for saving memory and speeding up the training; check out this blog post
https://huggingface.co/blog/zero-deepspeed-fairscale<|||||>hi @patil-suraj thank you for your feedback and the blog post. So would I pip install deepspeed and use it as an argument in `Seq2SeqTrainingArguments`? If so, I noticed the documentation for that kwarg says
```
deepspeed (:obj:`str`, `optional`):
    Use `Deepspeed <https://github.com/microsoft/deepspeed>`__. This is an experimental feature and its API may
    evolve in the future. The value is the location of its json config file (usually ``ds_config.json``).
```
It says to give it the location of its json config file, but I'm not sure what that means. Does that mean I should 1. create a json file like [this](https://raw.githubusercontent.com/huggingface/transformers/master/examples/seq2seq/ds_config.json) and save it to disk, then 2. specify the location of that json file on disk?
I notice it also says to use it on the command line, so would I need to run
```python
import subprocess
subprocess.check_call([ "deepspeed"])
```
As far as using `Seq2SeqTrainingArguments`, is there anything else that I should set for distributed training? I noticed `local_rank=-1` by default, so I assumed that was all I needed. I'm not sure if I was supposed to set `n_gpu`, `parallel_mode` or anything else so that it knows to do distributed training<|||||>@stas00 or surrounding community, I'd greatly appreciate any feedback on how to use deepspeed. I tried pip installing it and adding `deepspeed` to my command line arguments (in addition to `--local-rank=-1`), but I'm not sure what else I might need. I noticed `Seq2SeqTrainingArguments` also has a `deepspeed` argument,
```python
help(Seq2SeqTrainingArguments)
```
```
deepspeed (:obj:`str`, `optional`):
    Use `Deepspeed <https://github.com/microsoft/deepspeed>`__. This is an experimental feature and its API may
    evolve in the future. The value is the location of its json config file (usually ``ds_config.json``).
```
but I'm not sure if I need to create my own `ds_config.json` for it, save that json file to disk and then set that file location as the string for the `deepspeed` argument in `Seq2SeqTrainingArguments`. So I tried creating a `ds_config.json` file using
```python
import json
ds_config = {
"fp16": {
"enabled": "true",
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": "true",
"allgather_bucket_size": 2e8,
"overlap_comm": "true",
"reduce_scatter": "true",
"reduce_bucket_size": 2e8,
"contiguous_gradients": "true",
"cpu_offload": "true"
},
"zero_allow_untested_optimizer": "true",
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": "false"
}
with open('ds_config.json', 'w') as fp:
json.dump(ds_config, fp)
```
then setting
```python
training_args = Seq2SeqTrainingArguments(
deepspeed="ds_config.json"
```
but I got an import error related to `mpi4py`. I'm not sure if what I'm doing to use deepspeed is correct. I'd greatly appreciate any help with this<|||||>@mmoya01, let's sort it out.
1. You will find the full documentation at https://huggingface.co/transformers/master/main_classes/trainer.html#deepspeed
As this is new and I haven't thought of all the use-cases please don't hesitate to flag if something is missing or unclear in the documentation and it will get sorted out.
2. the `--deepspeed` cl arg (or the `deepspeed` argument of the Trainer) expects a path to a file that contains the deepspeed configuration, so your file should have just the config bit:
```
{
"fp16": {
"enabled": "true",
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": "true",
"allgather_bucket_size": 2e8,
"overlap_comm": "true",
"reduce_scatter": "true",
"reduce_bucket_size": 2e8,
"contiguous_gradients": "true",
"cpu_offload": "true"
},
"zero_allow_untested_optimizer": "true",
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": "false"
}
```
So in your case if you prefer to not use the CLI arguments:
```
training_args = Seq2SeqTrainingArguments(deepspeed="ds_config.json")
```
3. Note that the invocation of the script must change to have `deepspeed` as its launcher (a minimal sketch of such an invocation follows the links below) - please refer to one of:
- https://huggingface.co/transformers/master/main_classes/trainer.html#deployment-with-multiple-gpus
- https://huggingface.co/transformers/master/main_classes/trainer.html#deployment-with-one-gpu
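For illustration only, a minimal sketch of what such an invocation could look like when driven from Python via `subprocess` (the script name `finetune_led.py`, the GPU count and the extra arguments are hypothetical placeholders, and this assumes a script that parses its `Seq2SeqTrainingArguments` from the command line - normally you would simply type the equivalent `deepspeed ...` command in your shell):
```python
# Hypothetical sketch: invoking a training script through the `deepspeed` launcher.
# Arguments after the script name are passed to the script itself, including --deepspeed.
import subprocess

subprocess.check_call([
    "deepspeed",
    "--num_gpus=1",                   # adjust to the number of GPUs you want to use
    "finetune_led.py",                # placeholder for your own training script
    "--deepspeed", "ds_config.json",  # path to the config file from point 2 above
])
```
The exact arguments will of course depend on your own script.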
Please give it a try, and if you run into any errors please paste the exact command you used and the backtrace, and we will take it from there<|||||>Hi @stas00 , thank you for getting back to me, I greatly appreciate it. Sounds good, so I removed `deepspeed` as a cl arg and instead specified the location of the `ds_config.json` file in
```python
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=True,
fp16_backend="amp",
output_dir= "/mnt/summarization_checkpoints",
logging_steps=1000,
eval_steps=1000,
save_steps=1000,
warmup_steps=2000,
save_total_limit=3,
gradient_accumulation_steps=4,
deepspeed="ds_config.json"
)
```
I also noticed that, because of [this](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/fp16/onebit_adam.py#L14) import in deepspeed, I ended up having to pip install `mpi4py` in addition to `deepspeed` and to install [libopenmpi-dev](https://stackoverflow.com/questions/28440834/error-when-installing-mpi4py) in my cuda image. Once I did all that, I was able to get most things running, up until I came across the traceback below
```
[1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -isystem /usr/local/lib/python3.8/dist-packages/torch/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.8/dist-packages/torch/include/THC -isystem /usr/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o
[2/2] c++ flatten_unflatten.o -shared -L/usr/local/lib/python3.8/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so
Loading extension module utils...
Time to load utils op: 13.478780031204224 seconds
[2021-02-09 22:26:48,901] [INFO] [stage2.py:130:__init__] Reduce bucket size 200000000.0
[2021-02-09 22:26:48,901] [INFO] [stage2.py:131:__init__] Allgather bucket size 200000000.0
[2021-02-09 22:26:48,901] [INFO] [stage2.py:132:__init__] CPU Offload: true
group 0 param 0 = 459801600
[2021-02-09 22:26:52,231] [INFO] [stage2.py:399:__init__] optimizer state initialized
[2021-02-09 22:26:52,232] [INFO] [engine.py:586:_configure_optimizer] DeepSpeed Final Optimizer = <deepspeed.runtime.zero.stage2.FP16_DeepSpeedZeroOptimizer object at 0x7fea11ea1190>
[2021-02-09 22:26:52,232] [INFO] [engine.py:405:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupLR
[2021-02-09 22:26:52,232] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupLR object at 0x7fe9b1759ca0>
[2021-02-09 22:26:52,232] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[[0.8, 0.999]]
[2021-02-09 22:26:52,232] [INFO] [config.py:733:print] DeepSpeedEngine configuration:
[2021-02-09 22:26:52,232] [INFO] [config.py:737:print] activation_checkpointing_config <deepspeed.runtime.activation_checkpointing.config.DeepSpeedActivationCheckpointingConfig object at 0x7fe9b26b1340>
[2021-02-09 22:26:52,232] [INFO] [config.py:737:print] allreduce_always_fp32 ........ False
[2021-02-09 22:26:52,232] [INFO] [config.py:737:print] amp_enabled .................. False
[2021-02-09 22:26:52,232] [INFO] [config.py:737:print] amp_params ................... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] checkpoint_tag_validation_enabled True
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] checkpoint_tag_validation_fail False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] disable_allgather ............ False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] dump_state ................... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] elasticity_enabled ........... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] flops_profiler_config ........ <deepspeed.profiling.config.DeepSpeedFlopsProfilerConfig object at 0x7fe9b26b1280>
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] fp16_enabled ................. true
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] global_rank .................. 0
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] gradient_accumulation_steps .. 4
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] gradient_clipping ............ 1.0
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] gradient_predivide_factor .... 1.0
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] initial_dynamic_scale ........ 4294967296
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] loss_scale ................... 0
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] memory_breakdown ............. False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] optimizer_legacy_fusion ...... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] optimizer_name ............... adamw
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] optimizer_params ............. {'lr': 3e-05, 'betas': [0.8, 0.999], 'eps': 1e-08, 'weight_decay': 3e-07}
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] pld_enabled .................. False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] pld_params ................... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] prescale_gradients ........... False
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] scheduler_name ............... WarmupLR
[2021-02-09 22:26:52,233] [INFO] [config.py:737:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 500}
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] sparse_attention ............. None
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] sparse_gradients_enabled ..... False
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] steps_per_print .............. 2000
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] tensorboard_enabled .......... False
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] tensorboard_output_path ......
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] train_batch_size ............. 8
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] train_micro_batch_size_per_gpu 2
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] wall_clock_breakdown ......... false
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] world_size ................... 1
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] zero_allow_untested_optimizer true
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] zero_config .................. {
"allgather_bucket_size": 200000000.0,
"allgather_partitions": "true",
"contiguous_gradients": "true",
"cpu_offload": "true",
"elastic_checkpoint": true,
"load_from_fp32_weights": true,
"overlap_comm": "true",
"reduce_bucket_size": 200000000.0,
"reduce_scatter": "true",
"stage": 2
}
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] zero_enabled ................. True
[2021-02-09 22:26:52,234] [INFO] [config.py:737:print] zero_optimization_stage ...... 2
[2021-02-09 22:26:52,234] [INFO] [config.py:739:print] json = {
"fp16":{
"enabled":"true",
"hysteresis":2,
"loss_scale":0,
"loss_scale_window":1000,
"min_loss_scale":1
},
"gradient_accumulation_steps":4,
"gradient_clipping":1.0,
"optimizer":{
"params":{
"betas":[
0.8,
0.999
],
"eps":1e-08,
"lr":3e-05,
"weight_decay":3e-07
},
"type":"AdamW"
},
"scheduler":{
"params":{
"warmup_max_lr":3e-05,
"warmup_min_lr":0,
"warmup_num_steps":500
},
"type":"WarmupLR"
},
"steps_per_print":2000,
"train_micro_batch_size_per_gpu":2,
"wall_clock_breakdown":"false",
"zero_allow_untested_optimizer":"true",
"zero_optimization":{
"allgather_bucket_size":200000000.0,
"allgather_partitions":"true",
"contiguous_gradients":"true",
"cpu_offload":"true",
"overlap_comm":"true",
"reduce_bucket_size":200000000.0,
"reduce_scatter":"true",
"stage":2
}
}
Using /root/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0004968643188476562 seconds
```
### Traceback
```
0%| | 0/3 [00:00<?, ?it/s]/usr/local/lib/python3.8/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Traceback (most recent call last):
File "abstractive_summarization.py", line 374, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 349, in run
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 888, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1250, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1277, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
AssertionError: Caught AssertionError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 830, in forward
self.timers('forward_microstep').start()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/utils/timer.py", line 38, in start
assert not self.started_, 'timer has already been started'
AssertionError: timer has already been started
0%| | 0/3 [00:09<?, ?it/s]
```
I'm not sure if it's because of `checkpoint_tag_validation_fail`. I'd greatly appreciate your feedback<|||||>Glad to hear you were able to make progress, @mmoya01
What was the command line you used to launch this program? You have to launch it via `deepspeed` as the docs instruct.
**edit:** actually just learned that it doesn't have to be the case - will update the docs shortly, but I still need to know how you started the program. thank you.
> I also noticed, because of [this](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/fp16/onebit_adam.py#L14) import in deepspeed, I ended up pip installing `mpi4py` in addition to `deepspeed` and installing [libopenmpi-dev](https://stackoverflow.com/questions/28440834/error-when-installing-mpi4py) in my cuda image.
This is odd that you had to do it manually, DeepSpeed's pip installer should have installed all the dependencies automatically.
I will see if I can reproduce that.
> not sure if it's because of `checkpoint_tag_validation_fail`. I'd greatly appreciate your feedback
Have you tried it w/o gradient checkpointing?
The failure is not in the transformers land so it's a bit hard to guess what has happened.
I'd recommend filing an Issue with DeepSpeed: https://github.com/microsoft/DeepSpeed/issues<|||||>This is a pure DeepSpeed domain - totally unrelated to HF Trainer integrations:
I had a chance to look at the missing dependencies.
> I also noticed, because of this import in deepspeed, I ended up pip installing mpi4py in addition to deepspeed and installing libopenmpi-dev in my cuda image.
OK, for some reason you were trying to use `OneBitAdam` optimizer, which you haven't shown you were using above. This one requires extra dependencies that can be installed with:
```
pip install deepspeed[1bit_adam]
```
I tested and it works just fine with this config file:
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"initial_scale_power": 16
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"zero_allow_untested_optimizer": true,
"optimizer": {
"type": "OneBitAdam",
"params": {
"lr": 2e-4,
"weight_decay": 0.01,
"bias_correction": false,
"freeze_step": 400,
"cuda_aware": true
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
You shouldn't need any of these extra dependencies to run, say, `AdamW`. <|||||>Hello @stas00 , first, thank you again for your reply/trying to help me through this. I realized I may have set my `local_rank` incorrectly(I set it at `local_rank=-1` which I believe disables distributed training). So I tried
1.) disabling gradient checkpointing
```python
led = AutoModelForSeq2SeqLM.from_pretrained(
"patrickvonplaten/led-large-16384-pubmed",
gradient_checkpointing=False,
use_cache=False,
)
```
2.) using this config
```json
{
"fp16": {
"enabled": "true",
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"initial_scale_power": 16
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": "true",
"allgather_bucket_size": 2e8,
"overlap_comm": "true",
"reduce_scatter": "true",
"reduce_bucket_size": 2e8,
"contiguous_gradients": "true",
"cpu_offload": "true"
},
"zero_allow_untested_optimizer": "true",
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": "false"
}
```
3.) and setting `local_rank=0` in `Seq2SeqTrainingArguments`
```python
training_args = Seq2SeqTrainingArguments(
deepspeed="ds_config.json",
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
fp16=True,
fp16_backend="amp",
output_dir= "/mnt/summarization_checkpoints",
logging_steps=1000,
eval_steps=1000,
save_steps=1000,
warmup_steps=2000,
save_total_limit=3,
gradient_accumulation_steps=4,
local_rank = 0,
# sharded_ddp = True,
)
```
I did not specify anything else on the command line. I'm not sure if I set `local_rank` correctly in `Seq2SeqTrainingArguments`. I ended up getting a CUDA out-of-memory error
```
[2021-02-10 20:43:26,268] [INFO] [config.py:733:print] DeepSpeedEngine configuration:
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] activation_checkpointing_config <deepspeed.runtime.activation_checkpointing.config.DeepSpeedActivationCheckpointingConfig object at 0x7f9d0b742dc0>
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] allreduce_always_fp32 ........ False
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] amp_enabled .................. False
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] amp_params ................... False
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] checkpoint_tag_validation_enabled True
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] checkpoint_tag_validation_fail False
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] disable_allgather ............ False
[2021-02-10 20:43:26,268] [INFO] [config.py:737:print] dump_state ................... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] dynamic_loss_scale_args ...... {'init_scale': 65536, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] elasticity_enabled ........... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] flops_profiler_config ........ <deepspeed.profiling.config.DeepSpeedFlopsProfilerConfig object at 0x7f9d0b742e20>
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] fp16_enabled ................. true
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] global_rank .................. 0
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] gradient_accumulation_steps .. 4
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] gradient_clipping ............ 1.0
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] gradient_predivide_factor .... 1.0
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] initial_dynamic_scale ........ 65536
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] loss_scale ................... 0
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] memory_breakdown ............. False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] optimizer_legacy_fusion ...... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] optimizer_name ............... adamw
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] optimizer_params ............. {'lr': 0.001, 'betas': [0.8, 0.999], 'eps': 1e-08, 'weight_decay': 3e-07}
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] pld_enabled .................. False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] pld_params ................... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] prescale_gradients ........... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] scheduler_name ............... WarmupLR
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 500}
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] sparse_attention ............. None
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] sparse_gradients_enabled ..... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] steps_per_print .............. 2000
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] tensorboard_enabled .......... False
[2021-02-10 20:43:26,269] [INFO] [config.py:737:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] tensorboard_output_path ......
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] train_batch_size ............. 8
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] train_micro_batch_size_per_gpu 2
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] wall_clock_breakdown ......... false
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] world_size ................... 1
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] zero_allow_untested_optimizer true
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] zero_config .................. {
"allgather_bucket_size": 200000000.0,
"allgather_partitions": "true",
"contiguous_gradients": "true",
"cpu_offload": "true",
"elastic_checkpoint": true,
"load_from_fp32_weights": true,
"overlap_comm": "true",
"reduce_bucket_size": 200000000.0,
"reduce_scatter": "true",
"stage": 2
}
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] zero_enabled ................. True
[2021-02-10 20:43:26,270] [INFO] [config.py:737:print] zero_optimization_stage ...... 2
[2021-02-10 20:43:26,270] [INFO] [config.py:739:print] json = {
"fp16":{
"enabled":"true",
"hysteresis":2,
"initial_scale_power":16,
"loss_scale":0,
"loss_scale_window":1000,
"min_loss_scale":1
},
"gradient_accumulation_steps":4,
"gradient_clipping":1.0,
"optimizer":{
"params":{
"betas":[
0.8,
0.999
],
"eps":1e-08,
"lr":0.001,
"weight_decay":3e-07
},
"type":"AdamW"
},
"scheduler":{
"params":{
"warmup_max_lr":3e-05,
"warmup_min_lr":0,
"warmup_num_steps":500
},
"type":"WarmupLR"
},
"steps_per_print":2000,
"train_micro_batch_size_per_gpu":2,
"wall_clock_breakdown":"false",
"zero_allow_untested_optimizer":"true",
"zero_optimization":{
"allgather_bucket_size":200000000.0,
"allgather_partitions":"true",
"contiguous_gradients":"true",
"cpu_offload":"true",
"overlap_comm":"true",
"reduce_bucket_size":200000000.0,
"reduce_scatter":"true",
"stage":2
}
}
0%| | 0/3 [00:00<?, ?it/s]/usr/local/lib/python3.8/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Using /root/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0005078315734863281 seconds
Traceback (most recent call last):
File "abstractive_summarization.py", line 374, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 349, in run
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 886, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1265, in training_step
self.model_wrapped.module.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 903, in backward
self.optimizer.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/stage2.py", line 1596, in backward
buf_0 = torch.empty(int(self.reduce_bucket_size * 4.5),
RuntimeError: CUDA out of memory. Tried to allocate 1.68 GiB (GPU 0; 15.78 GiB total capacity; 12.80 GiB already allocated; 1.63 GiB free; 12.97 GiB reserved in total by PyTorch)
0%| | 0/3 [00:00<?, ?it/s]
```
I'd greatly appreciate your advice on what I might be missing <|||||>I tried to run the notebook you referred to after adding the modifications to launch DeepSpeed and now I can see all the problems you were referring to.
I haven't yet tried running DeepSpeed in a jupyter notebook, but only as part of a normal program, so I will sort it out and get back to you.<|||||>It took some experimenting to figure out what it wants - basically we need to emulate the launcher, since it doesn't get run under notebooks
So I have adapted the original notebook - you will find a DeepSpeed section in it and it should be easy to see what was added
https://colab.research.google.com/drive/1DvcbpV-g_uKKa7KWBtlwJOX5b-mQUbR-?usp=sharing
I will shortly make a PR with the docs on how to do it, https://github.com/huggingface/transformers/pull/10130
But until the PR is merged you need:
```
# deepspeed requires a distributed environment even if one process is used
# emulating distributed env with a single gpu 0
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
os.environ['MASTER_ADDR'] = 'localhost' #
os.environ['MASTER_PORT'] = '9998'
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] ="0"
os.environ['WORLD_SIZE'] = "1"
training_args = Seq2SeqTrainingArguments(
[... normal args ...]
# deepspeed-in-jupyter-notebook-special-args
local_rank=0, # XXX: this won't be needed when PR is merged
deepspeed="ds_config.json"
)
# XXX: this won't be needed when PR is merged
training_args._setup_devices
trainer = Seq2SeqTrainer(...)
trainer.train()
```
I don't yet know if it will help with OOM (check if perhaps you need to make the max length shorter than your dataset's entries), but this should make a smooth run otherwise.
But I think you already figured out that if you install `mpi4py` it sorts most of these things out too. I'm trying to see how to make it the simplest for the users here: https://github.com/microsoft/DeepSpeed/issues/748
If you're still getting OOM, please create a notebook where I can reproduce the problem and I will have a look. Thank you.<|||||>It's important to understand that DeepSpeed ZeRO-Offload requires ample CPU RAM to be available, so if you're on Colab you don't get too much there and that could be the culprit - i.e. you won't benefit much from the offload, which is the main feature for saving GPU memory on a single gpu.
So I'd try one of those tricks where you make colab give you double the memory by crashing the original session with a cell:
```
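# intentionally exhaust RAM so Colab crashes the session and then offers a higher-RAM runtime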
i = []
while(True):
i.append('a')
```
I haven't tried it, but people report it works.
You may also need to tinker and perhaps turn some of its features off. You could also try making the buffers smaller: try 1e8 or even 0.5e8 in the ds config.
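For illustration, a minimal sketch of shrinking those buffers in the config generated earlier in this thread (this assumes `ds_config.json` already exists in the working directory; the values are just a starting point to experiment with):
```python
# Hypothetical tweak: shrink the ZeRO bucket sizes to lower the peak GPU memory footprint
# (smaller buckets usually trade a bit of communication speed for less memory).
import json

with open("ds_config.json") as fp:
    cfg = json.load(fp)

cfg["zero_optimization"]["allgather_bucket_size"] = 1e8  # or 0.5e8
cfg["zero_optimization"]["reduce_bucket_size"] = 1e8     # or 0.5e8

with open("ds_config.json", "w") as fp:
    json.dump(cfg, fp, indent=2)
```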
I was able to run the notebook you started from to completion (when it didn't run out of disk space). But perhaps it was already running to completion w/o deepspeed.<|||||>hi @stas00 , thank you so much for your help throughout this. I greatly appreciate the PR and colab notebook example. I tried following your notebook and adjusted my script based on it (I'm currently running this in kubeflow with 4 v100s; each v100 GPU has 16Gi of memory, though I can increase the memory): adding `LOCAL_RANK`, `RANK` and `WORLD_SIZE` env variables, adding `training_args._setup_devices`, and changing some of the kwargs in `training_args` to be more consistent with the notebook. The example below produces a fake `train` and `test` dataset, and my objective is to fine tune `patrickvonplaten/led-large-16384-pubmed` on that fake dataset. The fake `train` dataset has a sample size of 2 and the `test` dataset has a sample size of 1. The snippet below should be reproducible. However, using that snippet, I'm still running into this OOM error
```
RuntimeError: CUDA out of memory. Tried to allocate 1.68 GiB (GPU 0; 15.78 GiB total capacity; 12.80 GiB already allocated; 1.63 GiB free; 12.97 GiB reserved in total by PyTorch)
0%| | 0/1 [00:00<?, ?it/s]
```
I'd greatly appreciate your two cents on what I might be missing in the snippet below
```python
import datasets
from datasets import load_dataset, load_metric
import click
import torch
import logging
import boto3
import json
from io import BytesIO
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from nlp import arrow_dataset
import glob
import os
import tarfile
import os.path
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
import torch.utils.checkpoint
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logging.basicConfig(
level=logging.INFO, format="[%(levelname)s] %(asctime)s %(module)s: %(message)s"
)
rouge = load_metric("rouge")
MODEL_NAME = "patrickvonplaten/led-large-16384-pubmed"
ds_config = {
"fp16": {
"enabled": "true",
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": "true",
"allgather_bucket_size": 2e8,
"overlap_comm": "true",
"reduce_scatter": "true",
"reduce_bucket_size": 2e8,
"contiguous_gradients": "true",
"cpu_offload": "true"
},
"zero_allow_untested_optimizer": "true",
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": "false"
}
with open('ds_config.json', 'w') as fp:
json.dump(ds_config, fp)
logger.info(f"load tokenizer using {MODEL_NAME}")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
logger.info(f"Load {MODEL_NAME}. IMPORTANT NOTE:I'm enabling gradient checkpointing to save memory.")
# load model + enable gradient checkpointing & disable cache for checkpointing
led = AutoModelForSeq2SeqLM.from_pretrained(
MODEL_NAME,
gradient_checkpointing=True,
use_cache=False,
)
# max encoder length is 2048 for PubMed
encoder_max_length = 2048
decoder_max_length = 256
batch_size = 2
# set decoding params
led.config.num_beams = 2
led.config.max_length = 256
led.config.min_length = 100
led.config.length_penalty = 2.0
led.config.early_stopping = True
led.config.no_repeat_ngram_size = 3
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(
batch["extractive_summary"],
padding="max_length",
truncation=True,
max_length=encoder_max_length,
)
outputs = tokenizer(
batch["reference_summary"],
padding="max_length",
truncation=True,
max_length=decoder_max_length,
)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# create 0 global_attention_mask lists
batch["global_attention_mask"] = len(batch["input_ids"]) * [
[0 for _ in range(len(batch["input_ids"][0]))]
]
# since above lists are references, the following line changes the 0 index for all samples
batch["global_attention_mask"][0][0] = 1
batch["labels"] = outputs.input_ids
# We have to make sure that the PAD token is ignored
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels]
for labels in batch["labels"]
]
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.pad_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(
predictions=pred_str, references=label_str, rouge_types=["rouge2"]
)["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
def run():
logger.info("create fictious train and test data")
train = pd.DataFrame({"reference_summary": [' '.join(["I am a reference summary"] * 200),
' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join(["hello"] * 200), ' '.join(["goodbye"] * 200)]})
test = pd.DataFrame({"reference_summary": [' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join(["goodbye"] * 200)]})
train = pa.Table.from_pandas(train)
train = arrow_dataset.Dataset(train)
test = pa.Table.from_pandas(test)
test = arrow_dataset.Dataset(test)
logger.info("map train data")
train = train.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("map test data")
test = test.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("set Python list in train to PyTorch tensor")
train.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("set Python list in test to PyTorch tensor")
test.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("enable fp16 amp training")
logger.info(f"checkpoint files will be written to a pvc mount")
#define env variables required for training
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
checkpoint_dir_path = "/mnt/summarization_checkpoints"
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=True,
output_dir=checkpoint_dir_path,
logging_steps=5,
eval_steps=10,
save_steps=10,
save_total_limit=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
local_rank=0,
deepspeed="ds_config.json"
)
training_args._setup_devices
os.makedirs(checkpoint_dir_path, exist_ok=True)
logger.info("instantiate trainer")
trainer = Seq2SeqTrainer(
model=led,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train,
eval_dataset=test,
)
logger.info("start training")
trainer.train()
if __name__ == "__main__":
run()
```
thank you for your help with this nonetheless<|||||>Thank you for supplying the reproducible script, @mmoya01 - it worked with some small tweaks.
Let's take a step back and go back to your original problem. That is let's remove the DeepSpeed for now.
I modified your script to have 1000 smaller train records instead of 1, and if I run it, it doesn't use more than 9GB of GPU RAM including cuda kernels - the actual peak memory used was 7116MB - with your original one it was around 9GB peak and under 11GB total gpu RAM.
So maybe it's worthwhile to sort it out first and then see if you actually need DeepSpeed in this case. We need to find what eats up the rest of your GPU memory.
I added this at the end of the script:
```
import torch
print(f"Peak memory used: {torch.cuda.max_memory_reserved()>>20}MB")
import time
time.sleep(10) # check nvidia-smi
```
Maybe put some pauses through the script and observe whether your gpu memory gets partially used up before the training starts?
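Something along these lines could do it - a small probe (the function name and the `pause` argument are just a suggestion) that you sprinkle through the script, e.g. after the model load and right before `trainer.train()`:
```python
# Hypothetical helper: report CUDA memory usage at a given point and optionally pause
# so you can cross-check with nvidia-smi before training starts.
import time
import torch

def report_gpu_mem(tag, pause=0):
    alloc = torch.cuda.memory_allocated() >> 20
    reserved = torch.cuda.memory_reserved() >> 20
    peak = torch.cuda.max_memory_reserved() >> 20
    print(f"[{tag}] allocated={alloc}MB reserved={reserved}MB peak_reserved={peak}MB")
    if pause:
        time.sleep(pause)

# e.g. report_gpu_mem("after model load", pause=10)
#      report_gpu_mem("before trainer.train()", pause=10)
```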
and to make 1000 entries:
```
n_recs = 1000
frames = {"reference_summary": [' '.join([f"{i} I am a reference summary"] * 200) for i in range(n_recs)],
"extractive_summary": [' '.join([f"{i} hello"] * 200) for i in range(n_recs)],
}
train = pd.DataFrame(frames)
test = pd.DataFrame({"reference_summary": [' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join(["goodbye"] * 200)]})
```
So if you have 16GB of gpu RAM, this should be more than enough. What are we missing here setup difference-wise? Do you have something else that consumes GPU RAM? Try to print the peak mem usage stats as I suggested above. But of course this might not work if you OOM.
I'm using: pt-nightly and transformers master for this test.
```
PyTorch version: 1.8.0.dev20210202+cu110
CUDA used to build PyTorch: 11.0
Python version: 3.8 (64-bit runtime)
```
**edit:**
I changed the mods that create the larger dataset to a cleaner version.
I have a feeling this has to do with your dataset.
I will get back to it shortly - will post an update.<|||||>hi @stas00 , thank you again for the update. The image I'm using is based on `nvidia/cuda:10.2-devel-ubuntu18.04` with `torch==1.6.0`. I used your tweak of 1000 examples and I also tried looking at
```python
if device.type == "cuda":
logger.info(torch.cuda.get_device_name(0))
logger.info("Memory Usage:")
logger.info(
f"Allocated: "
+ str(round(torch.cuda.memory_allocated(0) / 1024 ** 3, 1))
+ " GB"
)
logger.info(
"Cached: " + str(round(torch.cuda.memory_reserved(0) / 1024 ** 3, 1)) + " GB"
)
logger.info("number of GPUs available: "+str(torch.cuda.device_count()))
logger.info(f"Peak memory used: {torch.cuda.max_memory_reserved()>>20}MB")
```
which gave me
```
[INFO] 2021-02-11 22:21:51,155 abstractive_summarization: Using device: cuda
[INFO] 2021-02-11 22:21:51,164 abstractive_summarization: Tesla V100-SXM2-16GB
[INFO] 2021-02-11 22:21:51,164 abstractive_summarization: Memory Usage:
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Allocated: 0.0 GB
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Cached: 0.0 GB
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: number of GPUs available: 4
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Peak memory used: 0MB
```
If I omit deepspeed, I run into a memory fragmentation error using those 1000 examples. I'm not sure why I might be getting 0MB peak memory, 0 GB cached memory and no memory usage. My full logs gave me the following:
```
[INFO] 2021-02-11 22:21:51,155 abstractive_summarization: Using device: cuda
[INFO] 2021-02-11 22:21:51,164 abstractive_summarization: Tesla V100-SXM2-16GB
[INFO] 2021-02-11 22:21:51,164 abstractive_summarization: Memory Usage:
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Allocated: 0.0 GB
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Cached: 0.0 GB
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: number of GPUs available: 4
[INFO] 2021-02-11 22:21:51,165 abstractive_summarization: Peak memory used: 0MB
[INFO] 2021-02-11 22:21:51,216 abstractive_summarization: map train data
0%| | 0/500 [00:00<?, ?it/s]
1%| | 4/500 [00:00<00:15, 32.53it/s]
2%|▏ | 8/500 [00:00<00:14, 32.86it/s]
2%|▏ | 12/500 [00:00<00:14, 32.72it/s]
3%|▎ | 16/500 [00:00<00:14, 32.76it/s]
4%|▍ | 20/500 [00:00<00:14, 32.54it/s]
5%|▍ | 24/500 [00:00<00:15, 31.73it/s]
6%|▌ | 28/500 [00:00<00:14, 32.05it/s]
6%|▋ | 32/500 [00:01<00:15, 30.78it/s]
7%|▋ | 36/500 [00:01<00:14, 31.31it/s]
8%|▊ | 40/500 [00:01<00:14, 31.41it/s]
9%|▉ | 44/500 [00:01<00:14, 31.86it/s]
10%|▉ | 48/500 [00:01<00:14, 31.81it/s]
10%|█ | 52/500 [00:01<00:13, 32.03it/s]
11%|█ | 56/500 [00:01<00:13, 32.17it/s]
12%|█▏ | 60/500 [00:01<00:13, 32.33it/s]
13%|█▎ | 64/500 [00:01<00:13, 32.35it/s]
14%|█▎ | 68/500 [00:02<00:13, 32.44it/s]
14%|█▍ | 72/500 [00:02<00:13, 32.37it/s]
15%|█▌ | 76/500 [00:02<00:13, 32.48it/s]
16%|█▌ | 80/500 [00:02<00:12, 32.35it/s]
17%|█▋ | 84/500 [00:02<00:12, 32.06it/s]
18%|█▊ | 88/500 [00:02<00:12, 31.89it/s]
18%|█▊ | 92/500 [00:02<00:13, 31.01it/s]
19%|█▉ | 96/500 [00:03<00:12, 31.47it/s]
20%|██ | 100/500 [00:03<00:12, 31.91it/s]
21%|██ | 104/500 [00:03<00:12, 32.16it/s]
22%|██▏ | 108/500 [00:03<00:12, 31.08it/s]
22%|██▏ | 112/500 [00:03<00:12, 30.71it/s]
23%|██▎ | 116/500 [00:03<00:12, 30.61it/s]
24%|██▍ | 120/500 [00:03<00:12, 31.19it/s]
25%|██▍ | 124/500 [00:03<00:11, 31.47it/s]
26%|██▌ | 128/500 [00:04<00:11, 31.78it/s]
26%|██▋ | 132/500 [00:04<00:11, 32.01it/s]
27%|██▋ | 136/500 [00:04<00:11, 32.11it/s]
28%|██▊ | 140/500 [00:04<00:11, 32.19it/s]
29%|██▉ | 144/500 [00:04<00:11, 31.53it/s]
30%|██▉ | 148/500 [00:04<00:11, 31.84it/s]
30%|███ | 152/500 [00:04<00:11, 31.18it/s]
31%|███ | 156/500 [00:04<00:10, 31.40it/s]
32%|███▏ | 160/500 [00:05<00:10, 31.59it/s]
33%|███▎ | 164/500 [00:05<00:11, 29.86it/s]
34%|███▎ | 168/500 [00:05<00:10, 30.59it/s]
34%|███▍ | 172/500 [00:05<00:10, 31.01it/s]
35%|███▌ | 176/500 [00:05<00:10, 30.73it/s]
36%|███▌ | 180/500 [00:05<00:10, 31.21it/s]
37%|███▋ | 184/500 [00:05<00:10, 31.02it/s]
38%|███▊ | 188/500 [00:05<00:09, 31.41it/s]
38%|███▊ | 192/500 [00:06<00:09, 31.29it/s]
39%|███▉ | 196/500 [00:06<00:09, 31.29it/s]
40%|████ | 200/500 [00:06<00:09, 31.12it/s]
41%|████ | 204/500 [00:06<00:09, 31.56it/s]
42%|████▏ | 208/500 [00:06<00:09, 31.78it/s]
42%|████▏ | 212/500 [00:06<00:09, 31.95it/s]
43%|████▎ | 216/500 [00:06<00:08, 32.01it/s]
44%|████▍ | 220/500 [00:06<00:08, 31.80it/s]
45%|████▍ | 224/500 [00:07<00:08, 31.63it/s]
46%|████▌ | 228/500 [00:07<00:08, 31.41it/s]
46%|████▋ | 232/500 [00:07<00:08, 31.10it/s]
47%|████▋ | 236/500 [00:07<00:08, 30.91it/s]
48%|████▊ | 240/500 [00:07<00:08, 30.88it/s]
49%|████▉ | 244/500 [00:07<00:08, 30.87it/s]
50%|████▉ | 248/500 [00:07<00:08, 30.78it/s]
50%|█████ | 252/500 [00:07<00:07, 31.05it/s]
51%|█████ | 256/500 [00:08<00:07, 30.93it/s]
52%|█████▏ | 260/500 [00:08<00:07, 30.62it/s]
53%|█████▎ | 264/500 [00:08<00:07, 30.72it/s]
54%|█████▎ | 268/500 [00:08<00:07, 30.68it/s]
54%|█████▍ | 272/500 [00:08<00:07, 30.62it/s]
55%|█████▌ | 276/500 [00:08<00:07, 28.52it/s]
56%|█████▌ | 280/500 [00:08<00:07, 29.09it/s]
57%|█████▋ | 284/500 [00:09<00:07, 29.45it/s]
58%|█████▊ | 288/500 [00:09<00:07, 29.80it/s]
58%|█████▊ | 292/500 [00:09<00:06, 30.08it/s]
59%|█████▉ | 296/500 [00:09<00:06, 30.19it/s]
60%|██████ | 300/500 [00:09<00:06, 30.23it/s]
61%|██████ | 304/500 [00:09<00:06, 29.57it/s]
61%|██████▏ | 307/500 [00:09<00:06, 29.58it/s]
62%|██████▏ | 311/500 [00:09<00:06, 29.21it/s]
63%|██████▎ | 315/500 [00:10<00:06, 29.40it/s]
64%|██████▎ | 318/500 [00:10<00:06, 29.50it/s]
64%|██████▍ | 322/500 [00:10<00:05, 29.75it/s]
65%|██████▌ | 326/500 [00:10<00:06, 28.45it/s]
66%|██████▌ | 329/500 [00:10<00:06, 27.29it/s]
66%|██████▋ | 332/500 [00:10<00:06, 27.94it/s]
67%|██████▋ | 336/500 [00:10<00:05, 28.73it/s]
68%|██████▊ | 340/500 [00:10<00:05, 29.01it/s]
69%|██████▊ | 343/500 [00:11<00:05, 29.18it/s]
69%|██████▉ | 347/500 [00:11<00:05, 29.44it/s]
70%|███████ | 351/500 [00:11<00:04, 29.95it/s]
71%|███████ | 354/500 [00:11<00:04, 29.88it/s]
71%|███████▏ | 357/500 [00:11<00:04, 29.84it/s]
72%|███████▏ | 360/500 [00:11<00:04, 29.28it/s]
73%|███████▎ | 364/500 [00:11<00:04, 29.68it/s]
74%|███████▎ | 368/500 [00:11<00:04, 29.95it/s]
74%|███████▍ | 372/500 [00:12<00:04, 30.12it/s]
75%|███████▌ | 376/500 [00:12<00:04, 29.80it/s]
76%|███████▌ | 379/500 [00:12<00:04, 29.83it/s]
77%|███████▋ | 383/500 [00:12<00:03, 30.09it/s]
77%|███████▋ | 387/500 [00:12<00:03, 30.03it/s]
78%|███████▊ | 391/500 [00:12<00:03, 29.54it/s]
79%|███████▉ | 394/500 [00:12<00:03, 29.49it/s]
80%|███████▉ | 398/500 [00:12<00:03, 29.42it/s]
80%|████████ | 402/500 [00:13<00:03, 29.05it/s]
81%|████████ | 406/500 [00:13<00:03, 29.39it/s]
82%|████████▏ | 410/500 [00:13<00:03, 29.72it/s]
83%|████████▎ | 413/500 [00:13<00:02, 29.78it/s]
83%|████████▎ | 416/500 [00:13<00:02, 29.82it/s]
84%|████████▍ | 419/500 [00:13<00:02, 29.21it/s]
85%|████████▍ | 423/500 [00:13<00:02, 29.58it/s]
85%|████████▌ | 427/500 [00:13<00:02, 29.75it/s]
86%|████████▌ | 431/500 [00:14<00:02, 29.95it/s]
87%|████████▋ | 434/500 [00:14<00:02, 29.72it/s]
87%|████████▋ | 437/500 [00:14<00:02, 29.68it/s]
88%|████████▊ | 440/500 [00:14<00:02, 29.66it/s]
89%|████████▉ | 444/500 [00:14<00:01, 29.78it/s]
90%|████████▉ | 448/500 [00:14<00:01, 29.78it/s]
90%|█████████ | 451/500 [00:14<00:01, 29.51it/s]
91%|█████████ | 455/500 [00:14<00:01, 29.71it/s]
92%|█████████▏| 458/500 [00:14<00:01, 29.76it/s]
92%|█████████▏| 461/500 [00:15<00:01, 28.39it/s]
93%|█████████▎| 465/500 [00:15<00:01, 29.07it/s]
94%|█████████▎| 468/500 [00:15<00:01, 28.43it/s]
94%|█████████▍| 471/500 [00:15<00:01, 28.80it/s]
95%|█████████▌| 475/500 [00:15<00:00, 29.40it/s]
96%|█████████▌| 479/500 [00:15<00:00, 29.63it/s]
96%|█████████▋| 482/500 [00:15<00:00, 29.16it/s]
97%|█████████▋| 486/500 [00:15<00:00, 29.70it/s]
98%|█████████▊| 490/500 [00:16<00:00, 29.87it/s]
99%|█████████▉| 494/500 [00:16<00:00, 30.03it/s]
100%|█████████▉| 498/500 [00:16<00:00, 30.17it/s]
100%|██████████| 500/500 [00:16<00:00, 30.52it/s]
[INFO] 2021-02-11 22:22:07,639 arrow_writer: Done writing 1000 examples in 51224000 bytes .
[INFO] 2021-02-11 22:22:07,647 abstractive_summarization: map test data
0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:00<00:00, 91.30it/s]
[INFO] 2021-02-11 22:22:07,664 arrow_writer: Done writing 1 examples in 51232 bytes .
[INFO] 2021-02-11 22:22:07,665 abstractive_summarization: set Python list in train to PyTorch tensor
[INFO] 2021-02-11 22:22:07,665 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-11 22:22:07,665 abstractive_summarization: set Python list in test to PyTorch tensor
[INFO] 2021-02-11 22:22:07,665 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-11 22:22:07,665 abstractive_summarization: enable fp16 amp training
[INFO] 2021-02-11 22:22:07,665 abstractive_summarization: file will be written to /workspace
[2021-02-11 22:22:08,008] [INFO] [distributed.py:36:init_distributed] Not using the DeepSpeed or torch.distributed launchers, attempting to detect MPI environment...
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[2021-02-11 22:22:08,356] [INFO] [distributed.py:83:mpi_discovery] Discovered MPI settings of world_rank=0, local_rank=0, world_size=1, master_addr=10.23.29.192, master_port=29500
[2021-02-11 22:22:08,356] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
[INFO] 2021-02-11 22:22:08,359 abstractive_summarization: instantiate trainer
[INFO] 2021-02-11 22:22:11,706 abstractive_summarization: start training
[2021-02-11 22:22:11,706] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.11, git-hash=unknown, git-branch=unknown
[2021-02-11 22:22:11,732] [INFO] [engine.py:73:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Using /root/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /root/.cache/torch_extensions/cpu_adam...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -I/usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.8/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_70,code=compute_70 -c /usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/adam/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o
[2/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -I/usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/includes -I/usr/local/cuda/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.8/dist-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /usr/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -O3 -std=c++14 -L/usr/local/cuda/lib64 -lcudart -lcublas -g -Wno-reorder -march=native -fopenmp -D__AVX256__ -c /usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o
[3/3] c++ cpu_adam.o custom_cuda_kernel.cuda.o -shared -L/usr/local/lib/python3.8/dist-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o cpu_adam.so
Adam Optimizer #0 is created with AVX2 arithmetic capability.
Loading extension module cpu_adam...
Time to load cpu_adam op: 23.714597702026367 seconds
[2021-02-11 22:22:39,771] [INFO] [engine.py:551:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2021-02-11 22:22:39,771] [INFO] [engine.py:556:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam (
Parameter Group 0
amsgrad: False
betas: [0.8, 0.999]
bias_correction: True
eps: 1e-08
lr: 3e-05
weight_decay: 3e-07
)
Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2021-02-11 22:22:39,771] [INFO] [engine.py:672:_configure_zero_optimizer] Creating fp16 ZeRO stage 2 optimizer
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Config: alpha=0.000030, betas=(0.800000, 0.999000), weight_decay=0.000000, adam_w=1
Using /root/.cache/torch_extensions as PyTorch extensions root...
Creating extension directory /root/.cache/torch_extensions/utils...
Emitting ninja build file /root/.cache/torch_extensions/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -isystem /usr/local/lib/python3.8/dist-packages/torch/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.8/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.8/dist-packages/torch/include/THC -isystem /usr/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /usr/local/lib/python3.8/dist-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o
[2/2] c++ flatten_unflatten.o -shared -L/usr/local/lib/python3.8/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so
Loading extension module utils...
Time to load utils op: 13.4954514503479 seconds
[2021-02-11 22:22:53,267] [INFO] [stage2.py:130:__init__] Reduce bucket size 200000000.0
[2021-02-11 22:22:53,267] [INFO] [stage2.py:131:__init__] Allgather bucket size 200000000.0
[2021-02-11 22:22:53,267] [INFO] [stage2.py:132:__init__] CPU Offload: true
group 0 param 0 = 459801600
[2021-02-11 22:22:56,596] [INFO] [stage2.py:399:__init__] optimizer state initialized
[2021-02-11 22:22:56,597] [INFO] [engine.py:586:_configure_optimizer] DeepSpeed Final Optimizer = <deepspeed.runtime.zero.stage2.FP16_DeepSpeedZeroOptimizer object at 0x7f9302607190>
[2021-02-11 22:22:56,597] [INFO] [engine.py:405:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupLR
[2021-02-11 22:22:56,597] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupLR object at 0x7f9354837850>
[2021-02-11 22:22:56,597] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[[0.8, 0.999]]
[2021-02-11 22:22:56,597] [INFO] [config.py:733:print] DeepSpeedEngine configuration:
[2021-02-11 22:22:56,597] [INFO] [config.py:737:print] activation_checkpointing_config <deepspeed.runtime.activation_checkpointing.config.DeepSpeedActivationCheckpointingConfig object at 0x7f93016d3310>
[2021-02-11 22:22:56,597] [INFO] [config.py:737:print] allreduce_always_fp32 ........ False
[2021-02-11 22:22:56,597] [INFO] [config.py:737:print] amp_enabled .................. False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] amp_params ................... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] checkpoint_tag_validation_enabled True
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] checkpoint_tag_validation_fail False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] disable_allgather ............ False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] dump_state ................... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] elasticity_enabled ........... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] flops_profiler_config ........ <deepspeed.profiling.config.DeepSpeedFlopsProfilerConfig object at 0x7f93016d3370>
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] fp16_enabled ................. true
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] global_rank .................. 0
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] gradient_accumulation_steps .. 4
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] gradient_clipping ............ 1.0
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] gradient_predivide_factor .... 1.0
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] initial_dynamic_scale ........ 4294967296
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] loss_scale ................... 0
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] memory_breakdown ............. False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] optimizer_legacy_fusion ...... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] optimizer_name ............... adamw
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] optimizer_params ............. {'lr': 3e-05, 'betas': [0.8, 0.999], 'eps': 1e-08, 'weight_decay': 3e-07}
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] pld_enabled .................. False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] pld_params ................... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] prescale_gradients ........... False
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] scheduler_name ............... WarmupLR
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 500}
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] sparse_attention ............. None
[2021-02-11 22:22:56,598] [INFO] [config.py:737:print] sparse_gradients_enabled ..... False
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] steps_per_print .............. 2000
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] tensorboard_enabled .......... False
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] tensorboard_output_path ......
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] train_batch_size ............. 8
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] train_micro_batch_size_per_gpu 2
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] wall_clock_breakdown ......... false
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] world_size ................... 1
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] zero_allow_untested_optimizer true
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] zero_config .................. {
"allgather_bucket_size": 200000000.0,
"allgather_partitions": "true",
"contiguous_gradients": "true",
"cpu_offload": "true",
"elastic_checkpoint": true,
"load_from_fp32_weights": true,
"overlap_comm": "true",
"reduce_bucket_size": 200000000.0,
"reduce_scatter": "true",
"stage": 2
}
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] zero_enabled ................. True
[2021-02-11 22:22:56,599] [INFO] [config.py:737:print] zero_optimization_stage ...... 2
[2021-02-11 22:22:56,599] [INFO] [config.py:739:print] json = {
"fp16":{
"enabled":"true",
"hysteresis":2,
"loss_scale":0,
"loss_scale_window":1000,
"min_loss_scale":1
},
"gradient_accumulation_steps":4,
"gradient_clipping":1.0,
"optimizer":{
"params":{
"betas":[
0.8,
0.999
],
"eps":1e-08,
"lr":3e-05,
"weight_decay":3e-07
},
"type":"AdamW"
},
"scheduler":{
"params":{
"warmup_max_lr":3e-05,
"warmup_min_lr":0,
"warmup_num_steps":500
},
"type":"WarmupLR"
},
"steps_per_print":2000,
"train_micro_batch_size_per_gpu":2,
"wall_clock_breakdown":"false",
"zero_allow_untested_optimizer":"true",
"zero_optimization":{
"allgather_bucket_size":200000000.0,
"allgather_partitions":"true",
"contiguous_gradients":"true",
"cpu_offload":"true",
"overlap_comm":"true",
"reduce_bucket_size":200000000.0,
"reduce_scatter":"true",
"stage":2
}
}
Using /root/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0005064010620117188 seconds
0%| | 0/125 [00:00<?, ?it/s]/usr/local/lib/python3.8/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Traceback (most recent call last):
File "abstractive_summarization.py", line 396, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 371, in run
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 886, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1265, in training_step
self.model_wrapped.module.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 903, in backward
self.optimizer.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/stage2.py", line 1596, in backward
buf_0 = torch.empty(int(self.reduce_bucket_size * 4.5),
RuntimeError: CUDA out of memory. Tried to allocate 1.68 GiB (GPU 0; 15.78 GiB total capacity; 12.80 GiB already allocated; 1.63 GiB free; 12.97 GiB reserved in total by PyTorch)
0%| | 0/125 [00:00<?, ?it/s]
```<|||||>> I'm not sure why I might be getting 0MB peak memory, 0 GB cached memory and no memory usage
Ah, yes, older pytorch versions are buggy here and you need to use the device context manager to get the correct numbers, e.g.:
```
import torch

def get_current_gpu_memory_use():
    """returns a list of cuda memory allocations per GPU in MBs"""
    per_device_memory = []
    for id in range(torch.cuda.device_count()):
        # query inside the device context so each GPU reports its own allocation
        with torch.cuda.device(id):
            per_device_memory.append(torch.cuda.memory_allocated() >> 20)  # bytes -> MiB
    return per_device_memory
```
`pynvml` is another way, and it's more useful in this context since it shows the full memory usage and not just pytorch's allocations - there are other things happening on the gpu that pytorch doesn't account for - primarily 0.5-1.5GB of preloaded cuda kernels.
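For instance, here is a minimal sketch (my own illustration, assuming `pip install pynvml`) of reading the full per-gpu usage directly:
```
from pynvml import nvmlInit, nvmlDeviceGetCount, nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo

nvmlInit()
for i in range(nvmlDeviceGetCount()):
    handle = nvmlDeviceGetHandleByIndex(i)
    info = nvmlDeviceGetMemoryInfo(handle)
    # unlike torch.cuda.memory_allocated(), this includes cuda kernels and other processes
    print(f"gpu {i}: used {info.used >> 20}MB of {info.total >> 20}MB")
```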
If you're working with notebooks you may want to consider using https://github.com/stas00/ipyexperiments/ and it'll tell you cell by cell all the memory usage stats automatically. It takes its measurements via `pynvml`.
But you can also use its util functions in a standalone script, e.g.: after `pip install ipyexperiments`
```
python -c "from ipyexperiments.utils.mem import gpu_mem_get_mbs; print(gpu_mem_get_mbs())"
GPUMemory(total=8119, free=8115, used=4)
```
This will give you identical numbers to `nvidia-smi` and not `torch.cuda` memory API. The latter is always smaller since it doesn't account for the cuda kernels.<|||||>> If I omit deepspeed, I run into memory fragment error using those 1000 examples.
Based on the log - you're not omitting deepspeed, you're running the same thing.
Since you keep getting the exact same error - something is telling me that you're editing one thing but running another thing - find a way to make sure that the script that you run is actually up-to-date with your edits.
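One quick way to rule that out - just a sketch of the idea, not code from your script - is to log the absolute path of the running file plus the key settings at startup, so every run records exactly what was executed:
```
import os
import sys
import logging

logger = logging.getLogger(__name__)

def log_run_fingerprint(**settings):
    # shows which copy of the script is running and with which settings,
    # which makes a stale/out-of-date script easy to spot in the logs
    logger.info(f"running: {os.path.abspath(sys.argv[0])}")
    for name, value in settings.items():
        logger.info(f"  {name} = {value}")

# e.g. log_run_fingerprint(batch_size=2, encoder_max_length=2048, decoder_max_length=256)
```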
<|||||>I tried playing with your script w/o DeepSpeed and I'm not sure how you're getting a much higher GPU memory usage - it shouldn't be very different from one gpu to another. As I suggested - is it possible that you modify one script but run another?
e.g. what happens if you set `decoder_max_length = 64` - it should cut off a few GBs for the bs=2 you're trying to fit in.
The other thing I'd check is using a more recent pytorch version.
also, https://github.com/huggingface/transformers/pull/10130 is merged now, so you don't need to pass `local_rank=0` to trainer args class if you update to transformers master.
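i.e. on master, for a plain single-gpu run, something like this should be enough (a sketch built from the arguments you already use - no `local_rank` and no manual `MASTER_ADDR`/`RANK`/`WORLD_SIZE` env vars needed):
```
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,
    evaluation_strategy="steps",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    fp16=True,
    output_dir="/mnt/summarization_checkpoints",
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    # no local_rank here - the Trainer figures the device out by itself
)
```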
<|||||>hello @stas00 thank you for the update! I tried testing it without deepspeed. I also tried checking out the following:
```python
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
logger.info(f'GPU total Memory : {info.total}')
logger.info(f'GPU free Memory : {info.free}')
logger.info(f'GPU Memory used : {info.used}')
```
and I got
```
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU total Memory : 16945512448
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU free Memory : 16941842432
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU Memory used : 3670016
```
but after running the snippet below, I still run into
```
RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 0; 15.78 GiB total capacity; 14.12 GiB already allocated; 146.00 MiB free; 14.47 GiB reserved in total by PyTorch)
0%| | 0/125 [00:00<?, ?it/s]
```
it looks like I'm able to fine tune `MODEL_NAME='allenai/led-base-16384'` as the base model (currently testing it out), but I run into issues when trying to fine tune `patrickvonplaten/led-large-16384-pubmed` using the snippet below. I'd greatly appreciate any other suggestions you might have
```python
import datasets
from datasets import load_dataset, load_metric
import click
import torch
import logging
import boto3
import json
from io import BytesIO
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from nlp import arrow_dataset
import glob
import os
import tarfile
import os.path
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
AutoTokenizer,
AutoModelForSeq2SeqLM,
)
import torch.utils.checkpoint
from pynvml import *
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logging.basicConfig(
level=logging.INFO, format="[%(levelname)s] %(asctime)s %(module)s: %(message)s"
)
rouge = load_metric("rouge")
MODEL_NAME = "patrickvonplaten/led-large-16384-pubmed"
# ds_config = {
# "fp16": {
# "enabled": "true",
# "loss_scale": 0,
# "loss_scale_window": 1000,
# "hysteresis": 2,
# "min_loss_scale": 1
# },
# "zero_optimization": {
# "stage": 2,
# "allgather_partitions": "true",
# "allgather_bucket_size": 2e8,
# "overlap_comm": "true",
# "reduce_scatter": "true",
# "reduce_bucket_size": 2e8,
# "contiguous_gradients": "true",
# "cpu_offload": "true"
# },
# "zero_allow_untested_optimizer": "true",
# "optimizer": {
# "type": "AdamW",
# "params": {
# "lr": 3e-5,
# "betas": [0.8, 0.999],
# "eps": 1e-8,
# "weight_decay": 3e-7
# }
# },
# "scheduler": {
# "type": "WarmupLR",
# "params": {
# "warmup_min_lr": 0,
# "warmup_max_lr": 3e-5,
# "warmup_num_steps": 500
# }
# },
# "steps_per_print": 2000,
# "wall_clock_breakdown": "false"
# }
# with open('ds_config.json', 'w') as fp:
# json.dump(ds_config, fp)
logger.info(f"load tokenizer using {MODEL_NAME}")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
logger.info(f"Load {MODEL_NAME}. IMPORTANT NOTE:I'm enabling gradient checkpointing to save memory.")
# load model + enable gradient checkpointing & disable cache for checkpointing
led = AutoModelForSeq2SeqLM.from_pretrained(
MODEL_NAME,
gradient_checkpointing=False,
use_cache=False,
)
# max encoder length is 2048 for PubMed
encoder_max_length = 2048
decoder_max_length = 256
batch_size = 2
# set decoding params
led.config.num_beams = 2
led.config.max_length = 256
led.config.min_length = 100
led.config.length_penalty = 2.0
led.config.early_stopping = True
led.config.no_repeat_ngram_size = 3
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(
batch["extractive_summary"],
padding="max_length",
truncation=True,
max_length=encoder_max_length,
)
outputs = tokenizer(
batch["reference_summary"],
padding="max_length",
truncation=True,
max_length=decoder_max_length,
)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# create 0 global_attention_mask lists
batch["global_attention_mask"] = len(batch["input_ids"]) * [
[0 for _ in range(len(batch["input_ids"][0]))]
]
# since above lists are references, the following line changes the 0 index for all samples
batch["global_attention_mask"][0][0] = 1
batch["labels"] = outputs.input_ids
# We have to make sure that the PAD token is ignored
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels]
for labels in batch["labels"]
]
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.pad_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(
predictions=pred_str, references=label_str, rouge_types=["rouge2"]
)["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
def run():
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
logger.info(f'GPU total Memory : {info.total}')
logger.info(f'GPU free Memory : {info.free}')
logger.info(f'GPU Memory used : {info.used}')
logger.info("create fictious train and test data")
n_recs = 1000
frames = [
{"reference_summary": [' '.join([f"{i} I am a reference summary"] * 200),
' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join([f"{i} hello"] * 200), ' '.join(["goodbye"] * 200)]} for i in range(n_recs)]
train = pd.DataFrame(frames)
test = pd.DataFrame({"reference_summary": [' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join(["goodbye"] * 200)]})
train = pa.Table.from_pandas(train)
train = arrow_dataset.Dataset(train)
test = pa.Table.from_pandas(test)
test = arrow_dataset.Dataset(test)
logger.info("map train data")
train = train.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("map test data")
test = test.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("set Python list in train to PyTorch tensor")
train.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("set Python list in test to PyTorch tensor")
test.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("enable fp16 amp training")
#define env variables required for training
os.environ['MASTER_ADDR'] = "10.23.29.192"
os.environ['MASTER_PORT'] = "29500"
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
checkpoint_dir_path = "/mnt/summarization_checkpoints"
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=True,
output_dir=checkpoint_dir_path,
logging_steps=5,
eval_steps=10,
save_steps=10,
save_total_limit=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
local_rank=0,
# deepspeed="ds_config.json"
)
training_args._setup_devices
os.makedirs(checkpoint_dir_path, exist_ok=True)
logger.info("instantiate trainer")
trainer = Seq2SeqTrainer(
model=led,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train,
eval_dataset=test,
)
logger.info("start training")
trainer.train()
if __name__ == "__main__":
run()
```
```
[INFO] 2021-02-12 02:02:16,547 filelock: Lock 139661825384256 released on /root/.cache/huggingface/transformers/85a878681daf8945866e644056c360d1fefe287fc88b31b48c20478be4d12b24.d2560ecf8e14415e1113077ca8941c38e7512a1e8b82e19e4150c7ab9e45350a.lock
[INFO] 2021-02-12 02:02:42,587 abstractive_summarization: Using device: cuda
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU total Memory : 16945512448
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU free Memory : 16941842432
[INFO] 2021-02-12 02:02:42,596 abstractive_summarization: GPU Memory used : 3670016
[INFO] 2021-02-12 02:02:42,673 abstractive_summarization: map train data
0%| | 0/500 [00:00<?, ?it/s]
100%|██████████| 500/500 [00:16<00:00, 30.82it/s]
[INFO] 2021-02-12 02:02:58,936 arrow_writer: Done writing 1000 examples in 51224000 bytes .
[INFO] 2021-02-12 02:02:58,945 abstractive_summarization: map test data
0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:00<00:00, 91.93it/s]
[INFO] 2021-02-12 02:02:58,961 arrow_writer: Done writing 1 examples in 51232 bytes .
[INFO] 2021-02-12 02:02:58,962 abstractive_summarization: set Python list in train to PyTorch tensor
[INFO] 2021-02-12 02:02:58,962 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-12 02:02:58,962 abstractive_summarization: set Python list in test to PyTorch tensor
[INFO] 2021-02-12 02:02:58,962 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-12 02:02:58,962 abstractive_summarization: enable fp16 amp training
[INFO] 2021-02-12 02:02:58,962 abstractive_summarization: file will be written to /workspace
[INFO] 2021-02-12 02:02:59,261 abstractive_summarization: instantiate trainer
[INFO] 2021-02-12 02:03:02,626 abstractive_summarization: start training
0%| | 0/125 [00:00<?, ?it/s]/usr/local/lib/python3.8/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Traceback (most recent call last):
File "abstractive_summarization.py", line 408, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 383, in run
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 938, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1302, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1334, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 2344, in forward
outputs = self.led(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 2193, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 1831, in forward
layer_outputs = encoder_layer(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 907, in forward
attn_outputs = self.self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 718, in forward
self_outputs = self.longformer_self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 276, in forward
attn_output = self._compute_attn_output_with_global_indices(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 597, in _compute_attn_output_with_global_indices
attn_output_without_global = self._sliding_chunks_matmul_attn_probs_value(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 505, in _sliding_chunks_matmul_attn_probs_value
chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 356, in _pad_and_diagonalize
chunked_hidden_states = F.pad(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3552, in _pad
return _VF.constant_pad_nd(input, pad, value)
RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 0; 15.78 GiB total capacity; 14.12 GiB already allocated; 146.00 MiB free; 14.47 GiB reserved in total by PyTorch)
0%| | 0/125 [00:00<?, ?it/s]
```<|||||>Have you read the suggestions at https://github.com/huggingface/transformers/issues/10011#issuecomment-777918847?
<|||||>Hi @stas00 thank you for the update and merge! If possible, I'm trying to avoid reducing the decoder output. We would love summaries that are around 200 tokens in length.
I'm noticing, if I try using deepspeed, it's now hanging on here:
```
[2021-02-12 16:55:53,106] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
```
and then times out
```
Traceback (most recent call last):
File "abstractive_summarization.py", line 407, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 349, in run
training_args = Seq2SeqTrainingArguments(
File "<string>", line 61, in __init__
File "/usr/local/lib/python3.8/dist-packages/transformers/training_args.py", line 478, in __post_init__
if is_torch_available() and self.device.type != "cuda" and self.fp16:
File "/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py", line 1346, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/training_args.py", line 583, in device
return self._setup_devices
File "/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py", line 1336, in __get__
cached = self.fget(obj)
File "/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py", line 1346, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/training_args.py", line 551, in _setup_devices
deepspeed.init_distributed()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/utils/distributed.py", line 49, in init_distributed
torch.distributed.init_process_group(backend=dist_backend,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 422, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/rendezvous.py", line 172, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: connect() timed out.
```
if I don't use deepspeed, I get
```
[INFO] 2021-02-12 17:44:39,161 filelock: Lock 140104053693120 released on /root/.cache/huggingface/transformers/85a878681daf8945866e644056c360d1fefe287fc88b31b48c20478be4d12b24.d2560ecf8e14415e1113077ca8941c38e7512a1e8b82e19e4150c7ab9e45350a.lock
[INFO] 2021-02-12 17:45:05,102 abstractive_summarization: Using device: cuda
[INFO] 2021-02-12 17:45:05,111 abstractive_summarization: GPU total Memory : 16945512448
[INFO] 2021-02-12 17:45:05,111 abstractive_summarization: GPU free Memory : 16941842432
[INFO] 2021-02-12 17:45:05,111 abstractive_summarization: GPU Memory used : 3670016
[INFO] 2021-02-12 17:45:05,166 abstractive_summarization: map train data
0%| | 0/500 [00:00<?, ?it/s]
100%|██████████| 500/500 [00:16<00:00, 30.63it/s]
[INFO] 2021-02-12 17:45:21,532 arrow_writer: Done writing 1000 examples in 51224000 bytes .
[INFO] 2021-02-12 17:45:21,539 abstractive_summarization: map test data
0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:00<00:00, 91.35it/s]
[INFO] 2021-02-12 17:45:21,556 arrow_writer: Done writing 1 examples in 51232 bytes .
[INFO] 2021-02-12 17:45:21,557 abstractive_summarization: set Python list in train to PyTorch tensor
[INFO] 2021-02-12 17:45:21,557 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-12 17:45:21,557 abstractive_summarization: set Python list in test to PyTorch tensor
[INFO] 2021-02-12 17:45:21,557 arrow_dataset: Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'global_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
[INFO] 2021-02-12 17:45:21,557 abstractive_summarization: enable fp16 amp training
[INFO] 2021-02-12 17:45:21,557 abstractive_summarization: file will be written to /workspace
[INFO] 2021-02-12 17:45:21,882 abstractive_summarization: instantiate trainer
[INFO] 2021-02-12 17:45:25,224 abstractive_summarization: start training
0%| | 0/31 [00:00<?, ?it/s]/usr/local/lib/python3.8/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Traceback (most recent call last):
File "abstractive_summarization.py", line 407, in <module>
run()
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "abstractive_summarization.py", line 382, in run
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 940, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1302, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1334, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 2344, in forward
outputs = self.led(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 2193, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 1831, in forward
layer_outputs = encoder_layer(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 907, in forward
attn_outputs = self.self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 718, in forward
self_outputs = self.longformer_self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 201, in forward
attn_scores = self._sliding_chunks_query_key_matmul(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 431, in _sliding_chunks_query_key_matmul
diagonal_chunked_attention_scores = self._pad_and_transpose_last_two_dims(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/led/modeling_led.py", line 329, in _pad_and_transpose_last_two_dims
hidden_states_padded = F.pad(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3552, in _pad
return _VF.constant_pad_nd(input, pad, value)
RuntimeError: CUDA out of memory. Tried to allocate 386.00 MiB (GPU 0; 15.78 GiB total capacity; 14.09 GiB already allocated; 162.00 MiB free; 14.42 GiB reserved in total by PyTorch)
0%| | 0/31 [00:09<?, ?it/s]
```<|||||>> Hi @stas00 , I'm trying to avoid reducing the decoder output if possible. We would love summaries that are around 200 tokens in length. Thank you for the update and merge!
For sure, we are trying to get things running first - removing OOM, then comes the optimization.
> I'm noticing, if I try using deepspeed, it's now hanging on here:
>
> ```
> [2021-02-12 16:55:53,106] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
Looks like the distributed init gets stuck there - you might have another instance using the same port; try using a different
`os.environ['MASTER_PORT']` or kill any run-away processes.
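If it's the port, here is a small sketch (my own illustration) for grabbing a free one before the trainer initializes distributed:
```
import os
import socket

def find_free_port():
    # bind to port 0 so the OS hands back an unused port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

os.environ["MASTER_PORT"] = str(find_free_port())
```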
When pre-1.8.0 pytorch crashes it often leaves zombie processes behind that you have to kill manually; this has been fixed in pt-1.8.0.
The zombies also consume gpu ram - this could be your problem too. It might also help to watch nvidia-smi
```
watch -n 1 nvidia-smi
```
to ensure no other program is holding gpu memory when you start a new run.
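And as a rough sketch (again my own illustration, via `pynvml` which your script already imports), you can list which pids are still holding gpu memory and kill the leftovers by hand:
```
from pynvml import (
    nvmlInit,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetComputeRunningProcesses,
)

nvmlInit()
for i in range(nvmlDeviceGetCount()):
    handle = nvmlDeviceGetHandleByIndex(i)
    for proc in nvmlDeviceGetComputeRunningProcesses(handle):
        mem = proc.usedGpuMemory or 0  # can be None on some driver versions
        # any pid you don't recognize here is a candidate zombie to kill manually
        print(f"gpu {i}: pid {proc.pid} holds {mem >> 20}MB")
```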
As I mentioned earlier, you don't need DeepSpeed here - you need to figure out why your setup takes much more gpu ram than when I run the same script. Can you try a more recent pytorch version?
> if I don't use deepspeed, I get
> RuntimeError: CUDA out of memory. Tried to allocate 386.00 MiB (GPU 0; 15.78 GiB total capacity; 14.09 GiB already allocated; 162.00 MiB free; 14.42 GiB reserved in total by PyTorch)
Here we are going in circles - if you didn't change anything in the program how would this change?
To repeat: using the latest pytorch release the memory consumption appears to be much smaller than what you get - so if possible try to upgrade it.
e.g. have you tried running the same on colab? It also gives you a 16GB gpu if you use the freebie version.<|||||>oh okay, so I tried testing this in colab
```python
import datasets
from datasets import load_dataset, load_metric
import click
import torch
import logging
import json
from io import BytesIO
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from nlp import arrow_dataset
import os
import tarfile  # needed by make_tarfile() below
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
AutoTokenizer,
AutoModelForSeq2SeqLM,
)
import torch.utils.checkpoint
from pynvml import *
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logging.basicConfig(
level=logging.INFO, format="[%(levelname)s] %(asctime)s %(module)s: %(message)s"
)
rouge = load_metric("rouge")
MODEL_NAME = "patrickvonplaten/led-large-16384-pubmed"
# ds_config = {
# "fp16": {
# "enabled": "true",
# "loss_scale": 0,
# "loss_scale_window": 1000,
# "hysteresis": 2,
# "min_loss_scale": 1
# },
# "zero_optimization": {
# "stage": 2,
# "allgather_partitions": "true",
# "allgather_bucket_size": 1e8,
# "overlap_comm": "true",
# "reduce_scatter": "true",
# "reduce_bucket_size": 1e8,
# "contiguous_gradients": "true",
# "cpu_offload": "true"
# },
# "zero_allow_untested_optimizer": "true",
# "optimizer": {
# "type": "AdamW",
# "params": {
# "lr": 3e-5,
# "betas": [0.8, 0.999],
# "eps": 1e-8,
# "weight_decay": 3e-7
# }
# },
# "scheduler": {
# "type": "WarmupLR",
# "params": {
# "warmup_min_lr": 0,
# "warmup_max_lr": 3e-5,
# "warmup_num_steps": 500
# }
# },
# "steps_per_print": 2000,
# "wall_clock_breakdown": "false"
# }
# with open('ds_config.json', 'w') as fp:
# json.dump(ds_config, fp)
logger.info(f"load tokenizer using {MODEL_NAME}")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
logger.info(f"Load {MODEL_NAME}. IMPORTANT NOTE:I'm enabling gradient checkpointing to save memory.")
# load model + enable gradient checkpointing & disable cache for checkpointing
led = AutoModelForSeq2SeqLM.from_pretrained(
MODEL_NAME,
gradient_checkpointing=False,
use_cache=False,
)
# max encoder length is 2048 for PubMed
encoder_max_length = 2048
decoder_max_length = 64
batch_size = 2
# set decoding params
led.config.num_beams = 2
led.config.max_length = 256
led.config.min_length = 100
led.config.length_penalty = 2.0
led.config.early_stopping = True
led.config.no_repeat_ngram_size = 3
def make_tarfile(output_filename, source_dir):
with tarfile.open(output_filename, "w:gz") as tar:
tar.add(source_dir, arcname=os.path.basename(source_dir))
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(
batch["extractive_summary"],
padding="max_length",
truncation=True,
max_length=encoder_max_length,
)
outputs = tokenizer(
batch["reference_summary"],
padding="max_length",
truncation=True,
max_length=decoder_max_length,
)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# create 0 global_attention_mask lists
batch["global_attention_mask"] = len(batch["input_ids"]) * [
[0 for _ in range(len(batch["input_ids"][0]))]
]
# since above lists are references, the following line changes the 0 index for all samples
batch["global_attention_mask"][0][0] = 1
batch["labels"] = outputs.input_ids
# We have to make sure that the PAD token is ignored
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels]
for labels in batch["labels"]
]
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.pad_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(
predictions=pred_str, references=label_str, rouge_types=["rouge2"]
)["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# def run():
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
logger.info(f'GPU total Memory : {info.total}')
logger.info(f'GPU free Memory : {info.free}')
logger.info(f'GPU Memory used : {info.used}')
logger.info("create fictious train and test data")
n_recs = 1000
frames = [
{"reference_summary": [' '.join([f"{i} I am a reference summary"] * 200),
' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join([f"{i} hello"] * 200), ' '.join(["goodbye"] * 200)]} for i in range(n_recs)]
train = pd.DataFrame(frames)
test = pd.DataFrame({"reference_summary": [' '.join(["I am another reference summary"] * 200)],
"extractive_summary": [' '.join(["goodbye"] * 200)]})
train = pa.Table.from_pandas(train)
train = arrow_dataset.Dataset(train)
test = pa.Table.from_pandas(test)
test = arrow_dataset.Dataset(test)
logger.info("map train data")
train = train.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("map test data")
test = test.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["reference_summary", "extractive_summary"],
)
logger.info("set Python list in train to PyTorch tensor")
train.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("set Python list in test to PyTorch tensor")
test.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
logger.info("enable fp16 amp training")
logger.info(f"file will be written to {os.getcwd()}")
#define env variables required for training
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9994'
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
checkpoint_dir_path = "/mnt/summarization_checkpoints"
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=True,
output_dir=checkpoint_dir_path,
logging_steps=5,
eval_steps=10,
save_steps=10,
save_total_limit=1,
gradient_accumulation_steps=4,
num_train_epochs=1,
# deepspeed="ds_config.json"
)
# training_args._setup_devices
os.makedirs(checkpoint_dir_path, exist_ok=True)
logger.info("instantiate trainer")
trainer = Seq2SeqTrainer(
model=led,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train,
eval_dataset=test,
)
logger.info("start training")
trainer.train()
```
and setting the decoder max length to 64 but it's still giving me memory issues:
https://colab.research.google.com/drive/1IN1tHkey0It_LWZHvOuCbbcgtglGizw4?usp=sharing<|||||>This is great, so that we can work on the same environment. I will work on it later today and hopefully find the culprit. I will keep you posted, @mmoya01 <|||||>I started working on it but haven't figured it out yet - colab is not very friendly to debug OOM - not better than running a script - have to restart it all the time - will continue tomorrow - hopefully will have a resolution soon.
<|||||>Hi @stas00 thank you for the update and for looking into this<|||||>OK, so I experimented a bit and sat with various profilers to make sense out of it all, since there are many different nuances to understand.
Here is what I have to share with you.
1. DeepSpeed's primary use is for distributed training (multi-gpu), and while it can shine on a single gpu, it needs a lot of general RAM - which Colab doesn't have much of - you can't do anything serious with 12GB of RAM for the whole vm. It just kept on crashing. If your original setup has much more RAM then it's definitely worth trying to deploy DeepSpeed.
I have several extra things to experiment with in the DeepSpeed-land hopefully in the next few days which may help a bit, but since I haven't tried it yet, I can't tell.
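In case you do get to try it on a machine with more general RAM, the wiring is roughly this (just a sketch - it assumes the ZeRO-2/cpu_offload `ds_config.json` you already have commented out at the top of your script has been written to disk, and `your_training_script.py` is a placeholder name):
```python
# minimal sketch, not a drop-in replacement for your script
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="/mnt/summarization_checkpoints",
    per_device_train_batch_size=2,
    fp16=True,
    deepspeed="ds_config.json",  # DeepSpeed takes over optimizer/scheduler/fp16
)
# then launch with the deepspeed launcher instead of plain python:
#   deepspeed your_training_script.py
```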
2. Now let's look at reality - you took a notebook that was tuned to fit into the available 15GB gpu and swapped in a model that is ~3x bigger. So there is not much you can do given the RAM limitation.
I did multiple experiments and found this to fit very snugly - i.e. a few bytes away from OOM:
```
encoder_max_length = 2048
decoder_max_length = 64
batch_size = 1
gradient_accumulation_steps=8
GPU Memory used : 15802040320
```
So your effective batch is 8, but `decoder_max_length` is unsatisfactory. I am aware of that.
Also I added `ipyexperiments` to the notebook, which memory-profiles each cell automatically for you, so that you can easily see what's happening w/o needing to manually add printouts.
https://colab.research.google.com/drive/1rEspdkR839xZzh561OwSYLtFnnKhQdEl?usp=sharing
Note that it reports the current memory usage and also the deltas for consumed and peaked memory. So if after training it shows a lot more memory still left, that's after clearing the cache - if you take the used memory + peaked delta you will get the total peak memory the program reached during that cell.
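Enabling it is a single cell at the top of the notebook - roughly this, from memory, so treat it as a sketch:
```python
# first cell of the notebook - every cell that runs afterwards gets its
# CPU/GPU memory consumption and peak printed automatically
from ipyexperiments import IPyExperimentsPytorch

exp = IPyExperimentsPytorch()
# ... training cells go here ...
# del exp  # closes the experiment and prints a final report
```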
Running the same experiments on a larger gpu, they all surpass 15GB peak memory with bs=2. In one of my very first reports I suggested that I get much less memory used on my larger card, but I was wrong, I didn't account for the peak memory in my first measurements.
Just in case you are not familiar with the term: peak memory is when a program consumes some memory temporarily and then releases it, so the reported total is less.
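If you want to measure it yourself without any extra library, torch exposes the counters directly - a quick sketch:
```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one training step here ...
print(f"current: {torch.cuda.memory_allocated() / 2**30:.2f} GB")
print(f"peak:    {torch.cuda.max_memory_allocated() / 2**30:.2f} GB")
```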
3. Research whether someone has made a distilled version of the same model, in which case it'll be smaller in every dimension and probably fit better. I see other models finetuned on pubmed on the model hub - I don't know if they fit your needs.
4. In your experiments be aware that colab is terrible at gpu memory management, and doesn't quite free memory, so it's full restart on each experiment :( I'm mentioning that so that you won't be getting false negatives if you decided to re-run the same cell that trains.
As I mentioned earlier there is at least one more thing I hope to try in the next few days. If I succeed I will send you an update.
<|||||>One other thing you may want to try is fp16 training. I have no idea how LED takes to that.
```
pip install apex
```
```
training_args = Seq2SeqTrainingArguments(
[...]
fp16=True,
fp16_backend="apex",
fp16_opt_level="O3",
```
This will use significantly less memory, but your training may or may not converge.
It's very likely that you will want to keep batch norm at fp32 though - but the current trainer doesn't have a way to enable that from the user side. So either you need to change the trainer source code
```
# trainer.py
def _wrap_model(self, model, training=True):
# Mixed precision training with apex (torch < 1.6)
if self.use_apex and training:
model, self.optimizer = amp.initialize(model, self.optimizer, opt_level=self.args.fp16_opt_level, keep_batchnorm_fp32=True)
```
I added a new argument `keep_batchnorm_fp32=True` there.
or perhaps it's easier to monkey patch `amp` in your script/notebook:
```
from apex import amp
orig_amp_init = amp.initialize
def new_amp_init(model, optimiser, **kwargs):
return orig_amp_init(model, optimiser, keep_batchnorm_fp32=True, **kwargs)
amp.initialize = new_amp_init
trainer = ...
```
or the same can be done in a simpler way with `partial`:
```
from functools import partial
from apex import amp
amp.initialize = partial(amp.initialize, keep_batchnorm_fp32=True)
trainer = ...
```
just don't re-run this cell more than once per session
**edit:** transformers doesn't actually use batchnorm so that 2nd part was irrelevant.
To understand exactly what I proposed see: https://nvidia.github.io/apex/amp.html#o3-fp16-training
<|||||>ok, figured it out - I suggested that you try disabling gradient checkpointing in the context of being unable to use Deepspeed, but I didn't think of asking you to restore this config...
So enable `from_pretrained(MODEL_NAME, gradient_checkpointing=True,...`
And voila, this config works just fine:
```
encoder_max_length = 2048
decoder_max_length = 256
batch_size = 4
```
You can go for even larger length, it should have a very small impact. And I think your batch size can now be even larger, so that you can remove `gradient_accumulation_steps` if wanted - or reduce it.
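To put the whole working recipe in one place, here is a sketch of just the bits that changed (everything else in your script stays as it was):
```python
from transformers import AutoModelForSeq2SeqLM

MODEL_NAME = "patrickvonplaten/led-large-16384-pubmed"
led = AutoModelForSeq2SeqLM.from_pretrained(
    MODEL_NAME,
    gradient_checkpointing=True,  # the key change
    use_cache=False,
)
encoder_max_length = 2048
decoder_max_length = 256
batch_size = 4
# keep fp16=True in Seq2SeqTrainingArguments as you already have it
```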
I updated the notebook, so you can see it working:
https://colab.research.google.com/drive/1rEspdkR839xZzh561OwSYLtFnnKhQdEl?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,010 | closed | Problem fine-tuning BERTweet | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no, single gpu
Maybe @LysandreJik could help?
## Information
Model I am using BERTweet
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I get the following error:
```
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [549,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [549,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [549,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [549,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [549,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "../models/jigsaw/tr-3.4//run_puppets.py", line 284, in <module>
main()
File "../models/jigsaw/tr-3.4//run_puppets.py", line 195, in main
trainer.train(
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/trainer.py", line 888, in train
tr_loss += self.training_step(model, inputs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/trainer.py", line 1250, in training_step
loss = self.compute_loss(model, inputs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/trainer.py", line 1277, in compute_loss
outputs = model(**inputs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1137, in forward
outputs = self.roberta(
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 791, in forward
embedding_output = self.embeddings(
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/dccstor/redrug_ier/envs/attack/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 121, in forward
embeddings += position_embeddings
```
I just upgraded to the latest pytorch and transformers; I had the same issue with different versions (transformers 3.4, torch 1.5.1).
Some more info on how I got here: https://github.com/VinAIResearch/BERTweet/issues/26
I've used the same code with 10+ other models (e.g., bert, roberta, distillbert) with no issues. One difference that I noticed in the config files for these models compared to BERTweet: max_position_embeddings is 512 for the models I'm using with no issues, while it is set to 130 in the config file for BERTweet.
One (related?) clarification question: what's the relation between `max_position_embeddings` and `max_seq_length`?
Any insights, more than welcome. Thanks!
| 02-04-2021 18:12:36 | 02-04-2021 18:12:36 | Hi @ioana-blue ,
`max_seq_length` is the naming "convention" when talking about the tokenization side, e.g. when you tokenize your tweets they will be converted to ids and normally padded to a `max_seq_length`.
`max_position_embeddings` is the naming convention when talking about pre-training the model, so e.g. BERT has seen 512 subtokens during pre-training phase.
So `max_seq_length` should be less than or equal to `max_position_embeddings`.
In your case it seems that the model has seen 130 subtokens during pre-training phase (which is ok, because tweets usually are much shorter than 512 subtokens).
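A quick way to compare both numbers yourself (just a sketch, the tweet text is a placeholder):
```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("vinai/bertweet-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)

print(config.max_position_embeddings)  # 130 for BERTweet
ids = tokenizer("an example tweet", truncation=True, max_length=128)["input_ids"]
print(len(ids))  # this has to stay below max_position_embeddings
```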
Could you check your tokenization part and the number of subtokens that you're later passing to the model :thinking: <|||||>It would also help if you could paste the tokenization part (e.g. converting plain text/tweets into model ids) here, so we can have a look into it!<|||||>Ok, I thought that was one potential issue. So I ran with `max_seq_length` of 100 and I still get the same problem. I'll probably ask for a feature request to print an error when trying to run with a `max_seq_length` that is higher than `max_possition_embeddings` (I used to run it with 512 for `max_seq_length` and there is no complain).
I'm using a slightly modified version of the GLUE example code, so I didn't modify any of the tokenization part. The only thing that I added is the data processors/loaders.
Let me know if I could provide any more info to help debug this issue. Greatly appreciate your help!
<|||||>I just wanted to reproduce the error message with an example after my :pizza: , but it is working with the GLUE example:
```bash
python3 run_glue.py \
--model_name_or_path vinai/bertweet-base \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--use_fast False \
--output_dir /tmp/$TASK_NAME/
```
Important argument is to pass `--use_fast False` to avoid an error message. I set `TASK_NAME` to the `wnli` task.
Could you specify what version of Transformers you're using :thinking: I'm using a 4.3 version (d5888ef0ab949ec82ed4768556c2b2743e3ca1df).<|||||>Is it also possible that you paste the trainer output that shows e.g.:
```bash
02/04/2021 20:59:33 - INFO - __main__ - Sample 281 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 281, 'input_ids': [0, 1717, 966, 9, 329, 2
125, 24, 6, 52562, 7, 42, 58, 8215, 29, 41118, 7939, 4, 2, 2, 2125, 8215, 29, 41118, 7939, 4, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1], 'label': 0, 'sentence1': "Paul tried to call George on the phone, but he wasn't successful.", 'sentence2': "George wasn't su
ccessful.", 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0]}.
```
<|||||>Oh, and could you also pass `--overwrite_cache` to the training script, this is really helpful espec. when you experiment with different sequence lengths 😀<|||||>Thanks for your help!
I tried the code with 3.4 and 4.2, similar behavior.
The encoding look fine to me:
```
02/04/2021 16:15:57 - INFO - util_processors - *** Example ***
02/04/2021 16:15:57 - INFO - util_processors - guid: test-1178818409812746240_twitter
02/04/2021 16:15:57 - INFO - util_processors - features: InputFeatures(input_ids=[0, 10584, 56843, 241, 66, 103, 6, 289, 1389, 32, 38, 97, 11, 23465, 72, 618, 27658, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], label=0)
02/04/2021 16:15:57 - INFO - util_processors - *** Example ***
02/04/2021 16:15:57 - INFO - util_processors - guid: test-19346774_gab
02/04/2021 16:15:57 - INFO - util_processors - features: InputFeatures(input_ids=[0, 52, 112, 52, 37, 1621, 11, 8812, 15, 634, 5230, 37, 116, 45, 96, 11, 3559, 25, 37, 56, 140, 28748, 701, 24, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], label=0)
02/04/2021 16:15:57 - INFO - util_processors - *** Example ***
02/04/2021 16:15:57 - INFO - util_processors - guid: test-1165819983701643266_twitter
02/04/2021 16:15:57 - INFO - util_processors - features: InputFeatures(input_ids=[0, 557, 31, 39, 94, 11, 397, 31, 1844, 46154, 13, 1190, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], label=1)
```
`--use_fast` doesn't work for me (no such param).
I always use overwrite cache 👍
This is my command line:
```
python ../models/jigsaw/tr-3.4//run_puppets.py --model_name_or_path vinai/bertweet-base --task_name binary_hatex --do_train --do_eval --do_logits --do_predict --data_dir /dccstor/redrug_ier/ioana/fairnlp/toxi-data/hatex/processed/ --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 1 --output_dir /dccstor/redrug_ier/ioana/fairnlp/toxi-data/hatex/run-results/runpred_e1_binary_hatex_bertweet_20210204_12_03_58 --cache_dir /dccstor/redrug_ier/ioana/fairnlp/toxi-data/hatex/run-results/runpred_e1_binary_hatex_bertweet_20210204_12_03_58/cache/ --overwrite_cache --logging_steps 10000 --save_steps 200000
```
I implemented some command line args for predicting and printing logits, etc., but it doesn't get there, the problem is in the training.
If I feel adventurous, I will probably try to step through the training and see if I notice any issues. It looks like an out of bounds indexing somewhere. <|||||>I'm going to try the run_glue on my side to see if I can reproduce your successful run.
<|||||>Ayayay. I got a successful run with 3.4 and the command line above (my own code, I mean). Strange. <|||||>Yep, I can confirm running with different seq size and it works. I think what happened was the following:
- Initially I was running with a seq size that was too large.
- I upgraded the transformers to 4.2 and also realized the seq size problem. I started using smaller seq sizes, but there was a problem. I'm guessing the problem comes from some backward-compatibility (my code was inspired by the sample code from version 3.4; I'm guessing something changed that breaks the code with 4.2)
- Once I went back to 3.4 AND small seq size, it worked. I'll open a feature request. I don't think runs should be allowed with `max_seq_size > max_position_embeddings`
Thanks for your help, much appreciated. I'm closing this one.
|
transformers | 10,009 | closed | Why two separators? | I want to fine-tune the model in Keras with my own dataset and I'm trying to figure out the format of the input sentences. When I take a look at `input_ids` I can see two sentence separators (`</s>` which has id 2) between each sentence after tokenization. Is this the expected behaviour? In that case, why are two separators needed? Will I get the same performance if I use one?
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.2.2`
- Platform: Ubuntu 18.04
- Python version: `3.7.5`
- PyTorch version (GPU?):
- Tensorflow version (GPU): `2.3.1`
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Nope
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (TFRoberta, TFXLMRoberta...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: https://huggingface.co/transformers/training.html#fine-tuning-in-native-tensorflow-2
* [x] my own task or dataset:
## To reproduce
```python
from transformers import RobertaTokenizer, glue_convert_examples_to_features
import tensorflow_datasets as tfds
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
data = tfds.load('glue/mrpc')
train_dataset = glue_convert_examples_to_features(data['train'].take(4), tokenizer, max_length=128, task='mrpc')
list(train_dataset.as_numpy_iterator())
```
```
Out[48]:
[({'input_ids': array([ 0, 133, 14085, 4533, 3697, 40, 1760, 25, 20701,
5473, 10974, 2156, 6062, 13, 1283, 9, 375, 514,
479, 2, 2, 133, 4533, 3697, 1760, 25, 20701,
5473, 10974, 2156, 1375, 15, 411, 10562, 479, 2,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1], dtype=int32),
'attention_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)},
0),
...
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 02-04-2021 17:04:37 | 02-04-2021 17:04:37 | Good question! That's how the original `roberta-base` model was pretrained. I recommend you stick to the way the model was pretrained in order to obtain best performance.
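A quick way to see the pattern directly from the tokenizer (sketch):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids = tokenizer.encode("First sentence.", "Second sentence.")
print(tokenizer.convert_ids_to_tokens(ids))
# roughly: ['<s>', ..., '</s>', '</s>', ..., '</s>'] - note the double separator between the pair
```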
You can find the [roberta paper here](https://arxiv.org/pdf/1907.11692.pdf). I believe the section 4.2 contains information regarding model inputs.<|||||>Alright, thanks! |
transformers | 10,008 | closed | [models] why aren't .bin files compressed for faster download? | Why are the pretrained model files not compressed?
It took ~25minutes yesterday to download 45GB t5-11b on a slow connection.
I did a quick test on a random `pytorch_model.bin` with default gzip options and it's 1/3rd less in size. And surely there must be a better compressor that could be used - but it would need to be available on the client's side, so gzip might be good enough. This is not much of a difference for under 1GB files, but for large models this starts to add up.
It's not like you can diff a .bin file, so there is little value in having it stored as is from the RCS point of view. But perhaps I'm missing other aspects.
Perhaps 2 versions can be stored and the retriever could favor the compressed version for large files?
Cost-wise this change would introduce some 60% increase in storage if both versions are to be stored, but will have a huge saving in downloads.
The compression process can be a simple cronjob, so users won't need to do anything special.
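e.g. the cronjob body could be as simple as this sketch (it keeps the original file untouched and writes a `.gz` sibling next to it):
```python
import gzip
import shutil

def compress_checkpoint(path="pytorch_model.bin"):
    # write a compressed copy next to the original
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb", compresslevel=6) as dst:
        shutil.copyfileobj(src, dst)

compress_checkpoint()
```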
@julien-c, @LysandreJik, @patrickvonplaten, @sgugger | 02-04-2021 16:52:52 | 02-04-2021 16:52:52 | I'm in favor of this! It's also a bit annoying to me that downloading takes that much time and I think for people that try out a bunch of different checkpoitns to decide on which model they want to use, a 60% speed-up in downloading would be very nice.<|||||>A related issue is that t5-11b is actually too large to be served by Cloudfront (they have a limit of 20GB for a single file) so we have to fallback to serving using S3, which is way, way slower. (up to a few MB/s depending on where you are, vs. basically saturating your downlink when from Cloudfront)
If large models are here to stay, then we probably need to support the **splitting of models** in `save_pretrained`/`from_pretrained`
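For illustration, the splitting side could be as simple as greedily packing the state_dict into shards under some size budget - a rough sketch, where the helper name and the 10GB budget are made up:
```python
import torch

def shard_state_dict(model, shard_size=10 * 2**30):
    # greedily pack parameters into shards no larger than shard_size bytes
    shards, current, current_bytes = [], {}, 0
    for name, tensor in model.state_dict().items():
        tensor_bytes = tensor.numel() * tensor.element_size()
        if current and current_bytes + tensor_bytes > shard_size:
            shards.append(current)
            current, current_bytes = {}, 0
        current[name] = tensor
        current_bytes += tensor_bytes
    shards.append(current)
    return shards

# for i, shard in enumerate(shard_state_dict(model)):
#     torch.save(shard, f"pytorch_model-{i:05d}.bin")
```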
also cc @mfuntowicz @n1t0 <|||||>> If large models are here to stay, then we probably need to support the splitting of models in save_pretrained/from_pretrained
If we go with compression why not do the normal volumes of whatever common compressor tool we choose, as in:
```
pytorch.bin.rar00
pytorch.bin.rar01
```
So you kill 2 birds at the same time, get the compression and the splitting.
Again, the user doesn't need to do anything. The compression and splitting can be triggered upon the upload.
e.g. with tar.gz:
on upload:
```
tar cvzf - pytorch_model.bin | split -b 10G - pytorch_model.bin.tar.gz.
```
which should give:
```
pytorch_model.bin.tar.gz.aa
pytorch_model.bin.tar.gz.ab
pytorch_model.bin.tar.gz.ac
```
on download:
```
cat pytorch_model.bin.tar.gz.a* | tar xzvf -
```
Well, we don't need tar here - it's just one file, so gzip alone would be enough.
Just need to choose which compression is good enough, and doesn't take too long to decompress - e.d. don't use the highest compression possible and 100% available on all clients - so gzip and uncompress for sure, 7zip/rar can't be trusted to have, but if there is a python client that can handle it, it may work anywhere?<|||||>Operationally I'm wondering if instead of doing it at rest in an async way (which might prove difficult to easily scale to a much larger number of models), we should probably handle this in the `save_pretrained` (which means users will upload their models already in the supported format)<|||||>Is the intention to upload both compressed and uncompressed or just the former?
I propose to manage compression/decompression transparently on the server side and leave everything as is on the client side (other than download of the compressed version).
Here are some quick pros/cons for 3 different scenarios I see.
### 1. Having only the compressed version on the client side:
Cons:
1. Will create a constant overhead of compression on `save_pretrained` and checkpointing
2. Will create a constant wasteful overhead of decompression during `from_pretrained`
3. Should the max split size change - how do you tell the users that they all need to update their repo?
Pros:
1. Will make the upload faster
### 2. Having only the decompressed version on the client side:
Cons:
1. More expensive upload
Pros:
1. Everything else is simple
### 3. Having both versions on the client side:
This one is like case 2, but with one additional change in each up/down direction:
Pros: same as in case 2
Cons:
1. Could be confusing to the user during upload if they need to upload only the compressed files
2. More to upload
Extra notes:
1. Need to make sure that the decompression will happen once upon download and not on every `from_pretrained()` call.
Please feel free to edit this post directly as I'm sure I've missed some aspects.<|||||>I agree that 2) is simpler, and is in line with the goal of keeping things simple for the user. As a PyTorch user I would prefer seeing my files in the native PyTorch format rather than a compressed format I don't know about, on which I'll need to apply additional pre-processing before using it in a custom model. Especially since we've seen users use `torch.load` instead of `from_pretrained` in the past.<|||||>Should we keep this one alive? Is this on someone's TODO list?<|||||>i think checkpoint-splitting (#13548) is going to be a better/more future-proof solution than compression (on top of already compressed binary files) where the size delta is going to be rather minimal
So I'd vote to close this issue and focus on #13548<|||||>Sounds good, Julien. Let's close this one. |
transformers | 10,007 | closed | Fix TF LED/Longformer attentions computation | # What does this PR do?
This PR fixes the test `test_saved_model_with_attentions_output` for TF Longformer and LED that was failing due to an issue in computing some shapes in the attentions.
All the slow tests are now passing 🎉 | 02-04-2021 16:49:36 | 02-04-2021 16:49:36 | > a) tf.tile should be used instead of tf.broadcast_to &
There are two reasons for this, the first one is because `broadcast_to` does `reshape` + `tile`, here we don't need to reshape, just `tile` is enough. The second reason is that `broadcast_to` is not compliant with ONNXRuntime.
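Tiny illustration of the equivalence when no reshape is involved (sketch):
```python
import tensorflow as tf

x = tf.ones((4, 1, 8))
a = tf.broadcast_to(x, (4, 3, 8))
b = tf.tile(x, multiples=(1, 3, 1))
print(bool(tf.reduce_all(a == b)))  # True - but only tile exports cleanly here
```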
> b) why we cannot simply use the shape of attn_probs since we apply the mask on attn_probs itself? So we know that shape_list(masked_index) == shape_list(attn_probs)
This part is a bit tricky to explain. The issue here was that `attn_probs` did not always have the same shape: if `is_global_attn` is True, the shape of `attn_probs` is `[batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1]`, while if it is False its shape is `[batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1]`. Because the shape is not always the same during execution in graph mode, the shape pre-computed for `attn_probs` by the TF tracing was `[batch_size, seq_len, self.num_heads, variable]`, where `variable` cannot be computed. As a consequence, `attn_probs` never had the proper shape at the end, which created a conflict in the `tf.where`. To solve this we had to also create a mask of a fixed shape that depends on `is_global_attn`.
I don't know if it is clear enough or not. Don't hesitate to tell me if there is something you don't get.<|||||>> > a) tf.tile should be used instead of tf.broadcast_to &
>
> There are two reasons for this, the first one is because `broadcast_to` does `reshape` + `tile`, here we don't need to reshape, just `tile` is enough. The second reason is that `broadcast_to` is not compliant with ONNXRuntime.
>
> > b) why we cannot simply use the shape of attn_probs since we apply the mask on attn_probs itself? So we know that shape_list(masked_index) == shape_list(attn_probs)
>
> This part is a bit tricky to explain. The issue here was that `attn_probs` was not always the same shape, if `is_global_attn` is True, then the shape of `attn_probs` is `[batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1]`, while if it equals False its shape is `[batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1]`. Now, because the shape is never potentially the same during the execution when run in graph mode, the pre-computed shape for `attn_probs` by the TF tracing was `[batch_size, seq_len, self.num_heads, variable]`, where `variable` cannot be computed. The consequence of this was that `attn_probs` had never the proper shape at the end and creates a conflict in the `tf.where`. To solve this we had to also create a mask of a fixed shape that depends on `is_global_attn`.
>
> I don't know if it is clear enough or not. Don't hesitate to tell me if there is something you don't get.
Thanks for the explanation - just tried it out and cool to see that your change fixes the test!<|||||>The entire list of slow tests are ok!<|||||>@sgugger Feel free to merge if it looks ok for you! |
transformers | 10,006 | closed | run_ner.py raised error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: MacOS
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## To reproduce
Steps to reproduce the behavior:
1. bash [run.sh](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh) to [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
## Error
```
ReferenceError: {'help': 'The name of the task (ner, pos...).'} does not reference a class __dict__
``` | 02-04-2021 16:02:04 | 02-04-2021 16:02:04 | I just ran `run.sh`, but did not see this error. Could you maybe post the full stack trace?<|||||>Hi @patil-suraj, thanks for replying. The full stack trace is posted below.
```
Traceback (most recent call last):
File "run_origin.py", line 437, in <module>
main()
File "run_origin.py", line 310, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 1129, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 1315, in save_type
obj.__bases__, _dict), obj=obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 634, in save_reduce
save(state)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/dill/_dill.py", line 1148, in save_dictproxy
raise ReferenceError("%s does not reference a class __dict__" % obj)
ReferenceError: {'help': 'The name of the task (ner, pos...).'} does not reference a class __dict__
```<|||||>looks like you are running your own script `run_origin.py`, so the issue is not with `run_ner.py`<|||||>Hey @gongel ,
could you confirm that you are really using latest 4.3 version :thinking:
For me the example is working with`run_ner.sh`.<|||||>@patil-suraj, I just renamed run_ner.py to run_origin.py.<|||||>Hi @stefan-it , Yes
```
(base) C02D925LMD6R:transformers gong$ pip show transformers
Name: transformers
Version: 4.3.0.dev0
Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
Home-page: https://github.com/huggingface/transformers
Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors
Author-email: [email protected]
License: Apache
Location: /Users/bytedance/transformers/src
Requires: filelock, numpy, packaging, regex, requests, sacremoses, tokenizers, tqdm, dataclasses, importlib-metadata
Required-by: sentence-transformers
```<|||||>Could you run the `run_ner.py` script using master? as Stefan said your version seems old.<|||||>I tried 4.3.0.dev0, 4.4.0.dev0 and 4.2.2 .
They all didn't work. 😭<|||||>You might have an issue if your version of `datasets` is old. In any case, the whole serialization error is linked to the datasets library, so pinging @lhoestq in case he has a better idea :-)<|||||>Hi !
Can you try updating `dill` ?
It looks like [one of their issues](https://github.com/uqfoundation/dill/issues/312) from 2019 that has been fixed now.<|||||>Thank you, @lhoestq @sgugger @patil-suraj @stefan-it
It works by updating ```dill``` from ```0.2.9``` to ```0.3.3```. |
transformers | 10,005 | closed | [License info] Longformer SQuAD finetuned model | Hello @patil-suraj ,
would it be possible to provide licensing information for the pretrained model weights shared at:
https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1
I would be interested in offering a Rust implementation for this model, but would like to know under which license this model was shared so that I can document my codebase accordingly.
Thank you! | 02-04-2021 14:49:36 | 02-04-2021 14:49:36 | Hey @guillaume-be, glad to know that you are offering Rust implementation of this model :)
There's no license currently, but I'll add MIT license to this model. <|||||>Hello @patil-suraj ,
Could you please share an update on this issue?
Thank you!<|||||>Hi @guillaume-be
I just added an MIT license https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1/blob/main/LICENSE<|||||>@patil-suraj also referenced it from your model card's YAML: https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1/commit/1ad74ed17896eb4d3a314b1acedefbfc184cc582 so that it's reflected in the model tags etc.<|||||>Thanks Julien ! <|||||>This is great thank you! |
transformers | 10,004 | closed | Converting wav2vec2-base-960h to ONNX report an error while converting | First of all, I want to say thanks to @patrickvonplaten for the work done in adding the model. Great job!
I tried to convert the model to ONNX but got an error, do you have any ideas how to fix it?
What I did:
`python -m transformers.convert_graph_to_onnx --framework pt --model facebook/wav2vec2-base-960h wav2vec2-base-960h.onnx`
But got an error:
```
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: facebook/wav2vec2-base-960h, tokenizer: facebook/wav2vec2-base-960h)
Using framework PyTorch: 1.7.0
Error while converting the model: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
| 02-04-2021 14:46:23 | 02-04-2021 14:46:23 | Also interested in this question! <|||||>Hmm, Wav2Vec2 is still a very recent addition and I don't have a good idea on an ETA for full ONNX support. However, I think your error above is due to the input that's passed to `Wav2Vec2Tokenizer` being a string instead of a speech input. So in order to make the conversion work, you will have to tweak the script `convert_graph_to_onnx` yourself a bit for Wav2Vec2 - I think the only different should be that instead of passing it `"This is a sample output"` you should pass it a 1D float array.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi!
very interested in this question! did anyone managed to make it work ?
<|||||>Hey @OthmaneJ,
Think @ccoreilly managed to get it to work here: https://github.com/ccoreilly/wav2vec2-service/blob/master/convert_torch_to_onnx.py<|||||>@patrickvonplaten thanks! 👌<|||||>hi @patrickvonplaten
is there any way to transform mms asr model to onnx?
if yes, how?
thank you very much! |
transformers | 10,003 | closed | Hotfixing tests | # What does this PR do?
Blenderbot decoderonly tests, also need to remove `encoder_no_repeat_ngram_size` from their config.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-04-2021 14:35:26 | 02-04-2021 14:35:26 | |
transformers | 10,002 | closed | Cleaning up `ConversationalPipeline` to support more than DialoGPT. | # What does this PR do?
Currently, ConversationalPipeline is heavily biased towards DialoGPT,
which is the default model for this pipeline.
This PR proposes changes to put back the modifications specific to
DialoGPT into tokenizer-specific behavior wherever possible, by
creating `_build_conversation_input_ids` function that takes
conversation as input, and returns a list of ints corresponding
to the tokens. It feels natural to put here because all models
have probably different strategies to build input_ids from the
full conversation and it's the tokenizer's job to transform strings
into tokens (and vice-versa)
If `_build_conversation_input_ids` is missing, previous behavior is
used so we don't break anything so far (except for blenderbot where it's a fix).
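For illustration, the kind of method this adds on a tokenizer looks roughly like this (a sketch, not the exact implementation):
```python
def _build_conversation_input_ids(self, conversation) -> list:
    # concatenate every turn, append EOS after each one, then keep only the
    # most recent tokens that still fit into the model's max length
    input_ids = []
    for _is_user, text in conversation.iter_texts():
        input_ids.extend(self.encode(text, add_special_tokens=False) + [self.eos_token_id])
    if len(input_ids) > self.model_max_length:
        input_ids = input_ids[-self.model_max_length :]
    return input_ids
```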
This PR also contains a fix for too long inputs. There used
to be dead code for trying to limit the size of incoming input.
The introduced fix is that we limit the length
within `_build_conversation_input_ids` to `tokenizer.model_max_length`.
It corresponds to the intent of the removed dead code and is actually
better because it corresponds to `model_max_length` which is different
from `max_length` (which is a default parameter for `generate`).
- Removed `history` logic from the Conversation as it's not relevant
anymore because the tokenization logic has been moved to the tokenizer.
The tokenizer cannot save any cache, and the conversation cannot know
what is relevant or not.
Also it's not usable for `blenderbot` because the input_ids are
not append-only (the EOS token is always at the end).
- Added `iter_texts` method on `Conversation` because all
the code was littered with some form of this iteration over
past/generated_responses.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-04-2021 14:22:17 | 02-04-2021 14:22:17 | |
transformers | 10,001 | closed | BART CausalLM example | \ | 02-04-2021 12:58:47 | 02-04-2021 12:58:47 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,000 | closed | German DistilBertModel raises an issue | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-65-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.8.0.dev20201202 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@julien-c @stefan-it @LysandreJik
## Information
Model I am using: DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
```
from transformers import DistilBertTokenizer, DistilBertModel
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-german-cased')
model = DistilBertModel.from_pretrained('distilbert-base-german-cased')
```
The tasks I am working on is:
* [ ] my own task or dataset:
Word2Vec encoding
## To reproduce
Steps to reproduce the behavior:
1. Simply run the code above
2. See the error message:
```
Traceback (most recent call last):
File "/home/tarask/Desktop/Work/Code/Git/probabilistic-gesticulator/my_code/data_processing/annotations/encode_text.py", line 5, in <module>
model = DistilBertModel.from_pretrained('distilbert-base-german-cased')
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1034, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 419, in __init__
self.embeddings = Embeddings(config) # Embeddings
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 89, in __init__
n_pos=config.max_position_embeddings, dim=config.dim, out=self.position_embeddings.weight
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 76, in create_sinusoidal_embeddings
out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
```
## Expected behavior
No errors
| 02-04-2021 12:54:33 | 02-04-2021 12:54:33 | Hello, running the code does not raise an error:
```py
>>> from transformers import DistilBertTokenizer, DistilBertModel
... import torch
...
... tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-german-cased')
... model = DistilBertModel.from_pretrained('distilbert-base-german-cased')
Downloading: 100%|██████████| 240k/240k [00:00<00:00, 690kB/s]
Downloading: 100%|██████████| 464/464 [00:00<00:00, 199kB/s]
Downloading: 100%|██████████| 270M/270M [00:07<00:00, 36.6MB/s]
```
Please put the error in your issue, otherwise it's impossible to help you.<|||||>Sorry, @LysandreJik , it was a copy paste error. Added it to the issue now:
```
Traceback (most recent call last):
File "/home/tarask/Desktop/Work/Code/Git/probabilistic-gesticulator/my_code/data_processing/annotations/encode_text.py", line 5, in <module>
model = DistilBertModel.from_pretrained('distilbert-base-german-cased')
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1034, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 419, in __init__
self.embeddings = Embeddings(config) # Embeddings
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 89, in __init__
n_pos=config.max_position_embeddings, dim=config.dim, out=self.position_embeddings.weight
File "/home/tarask/anaconda3/envs/gesture_flow/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 76, in create_sinusoidal_embeddings
out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
```<|||||>Ah, I see, thanks! This is because you're using the latest PyTorch 1.8+. We patched this issue yesterday in https://github.com/huggingface/transformers/pull/9917, if you install from source you shouldn't see this error anymore.<|||||>Ah, I see, thanks! So I should install `transformers` from the source, right?<|||||>Reverting Pytorch to 1.7 also fixed this error.
Thank you so much for such prompt help @LysandreJik !<|||||>Glad you could solve it! |
transformers | 9,999 | closed | Fix model templates | Some things were forgotten in the model templates after merging #9128 | 02-04-2021 10:51:50 | 02-04-2021 10:51:50 | The failing model templates test is untrue. The second model template test (that succeeds) is true. I'll fix the github-actions YAML in a second PR.<|||||>No worries! |
transformers | 9,998 | closed | Add DETR | # What does this PR do?
It adds the first vision-only Transformer to the library! Namely [DETR](https://arxiv.org/abs/2005.12872), End-to-End Object Detection with Transformers, by Facebook AI. The main contribution of DETR is its simplicity: it replaces a lot of hand-engineered features (which models like Faster-R-CNN and Mask-R-CNN include) such as non-maximum suppression and anchor generation by just an end-to-end model and a clever loss function, while matching the performance of these heavily complex models.
For a really good explanation (which helped me a lot), see Yannic Kilcher's video [here](https://youtu.be/T35ba_VXkMYr). I'll provide a TLDR here:
The main thing to know is that an image of shape (batch_size, num_channels, height, width), so in case of a single image, a tensor of shape `(1, 3, height, width)` is first sent through a CNN backbone, outputting a lower-resolution feature map, typically of shape `(1, 2048, height/32, width/32)`. This is then projected to match the hidden dimension of the Transformer, which is `256` by default, using `nn.Conv2D`. So now we have a tensor of shape `(1, 256, height/32, width/32)`. Next, the image is flattened and transposed to obtain a tensor of shape `(batch_size, seq_len, d_model)` = `(1, width/32*height/32, 256)`. So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller `hidden_size` (which in NLP is typically 768 or higher).
This is sent through the encoder, outputting `encoder_hidden_states` of the same shape. Next, so-called **object queries** are sent through the decoder. This is just a tensor of shape `(batch_size, num_queries, d_model)`, with `num_queries` typically set to 100 and is initialized with zeros. Each object query looks for a particular object in the image. Next, the decoder updates these object queries through multiple self-attention and encoder-decoder attention layers to output `decoder_hidden_states` of the same shape: `(batch_size, num_queries, d_model)`. Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and a MLP to predict bounding boxes for each query. So the number of queries actually determines the maximum number of objects the model can detect in an image.
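To make the shape bookkeeping above concrete, here is a minimal sketch of the encoder-side flow with dummy tensors. The module names (`backbone`, `input_projection`) are just placeholders for illustration, not the actual names used in the implementation:
```python
import torch
from torch import nn

batch_size, height, width = 1, 800, 1216
pixel_values = torch.randn(batch_size, 3, height, width)

# stand-in for the CNN backbone: (1, 3, H, W) -> (1, 2048, H/32, W/32)
backbone = nn.Conv2d(3, 2048, kernel_size=32, stride=32)
feature_map = backbone(pixel_values)                      # (1, 2048, 25, 38)

# 1x1 conv projects the 2048 channels down to the Transformer hidden size d_model=256
input_projection = nn.Conv2d(2048, 256, kernel_size=1)
projected = input_projection(feature_map)                 # (1, 256, 25, 38)

# flatten the spatial dims and move them to the sequence axis:
# (1, 256, H/32, W/32) -> (batch_size, seq_len, d_model) = (1, 950, 256)
hidden_states = projected.flatten(2).transpose(1, 2)

# the decoder starts from `num_queries` object queries instead of token embeddings
object_queries = torch.zeros(batch_size, 100, 256)
print(hidden_states.shape, object_queries.shape)
```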
The model is trained using a **"bipartite matching loss"**: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The [Hungarian matching algorithm](https://en.wikipedia.org/wiki/Hungarian_algorithm) is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy for the classes and L1 regression loss for the bounding boxes are used to optimize the parameters of the model.
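As a rough illustration of the matching step (this is not the exact cost function from the paper, which also includes a generalized IoU term), the one-to-one assignment can be computed with `scipy.optimize.linear_sum_assignment` on a class + box cost matrix:
```python
import torch
from scipy.optimize import linear_sum_assignment

num_queries, num_classes = 100, 91
pred_logits = torch.randn(num_queries, num_classes + 1)  # +1 for the "no object" class
pred_boxes = torch.rand(num_queries, 4)                  # normalized (cx, cy, w, h)

# ground truth for an image with 4 annotated objects
target_classes = torch.tensor([17, 17, 63, 75])
target_boxes = torch.rand(4, 4)

# cost = how unlikely the target class is under each query + L1 distance between boxes
class_cost = -pred_logits.softmax(-1)[:, target_classes]   # (100, 4)
bbox_cost = torch.cdist(pred_boxes, target_boxes, p=1)     # (100, 4)
cost_matrix = class_cost + bbox_cost

row_ind, col_ind = linear_sum_assignment(cost_matrix.numpy())
# query row_ind[i] is matched to ground-truth object col_ind[i];
# the remaining 96 queries are supervised to predict "no object"
print(list(zip(row_ind.tolist(), col_ind.tolist())))
```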
Paper: https://arxiv.org/abs/2005.12872
Original repo: https://github.com/facebookresearch/detr
# Usage
Quick demo of my current implementation (with some cool attention visualizations): https://colab.research.google.com/drive/1aJ00yPxT4-PCMhSx2BipbTKqMSBQ80vJ?usp=sharing
(Old demo: https://colab.research.google.com/drive/1G4oWTOg_Jotp_2jJhdYkYVfkcT9ucX4P?usp=sharing)
Note that the authors did release 7 model variants (4 for object detection, 3 for panoptic segmentation). Currently I've defined two models: the base `DetrModel` (which outputs the raw hidden states of the decoder) and `DetrForObjectDetection`, which adds object detection heads (classes + bounding boxes) on top. I've currently only converted and tested the base model for object detection (DETR-resnet-50). Adding the other models for object detection seems quite easy (as these only use a different backbone and I copied the code of the backbone from the original repo). Adding the models for panoptic segmentation (`DetrForPanopticSegmentation`) is on the to-do list as can be seen below.
# Done
- [x] load pretrained weights into the model
- [x] make sure forward pass yields equal outputs on the same input data
- [x] successful transcription
- [ ] add tokenizer (not sure if DETR needs one, see discussion below)
- [ ] add model tests: currently added 2 integration tests which pass, more tests to follow
- [ ] add tokenizer tests (not sure if DETR needs one, see discussion below)
- [ ] add docstrings
- [ ] fill in rst file
# Discussion
Writing DETR in `modeling_detr.py` went quite fast thanks to the CookieCutter template (seriously, the person who added this, thank you!!). The main thing to write was the conversion script (basically translating PyTorch's default [`nn.MultiHeadAttention`](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) to the self-attention mechanism defined in this library). DETR is an encoder-decoder Transformer, with only some minor differences, namely:
- it uses parallel decoding instead of autoregressive. So I assume I can delete all the `past_key_values` and `causal_mask` mechanisms? cc @patrickvonplaten
- it adds positional embeddings to the hidden states (in both the encoder and decoder) in each self-attention and encoder-decoder attention before projecting to queries and keys
- it uses the "relu" activation function instead of the default "gelu" one.
- during training, it helps to train on the outputs of each decoder layer. So what the authors do is predict classes + bounding boxes based on the output of each decoder layer, and also train these. This is a hyperparameter of `DetrConfig` called `auxiliary_loss`. This is also why I defined an additional `ModelOutput` called `BaseModelOutputWithCrossAttentionsAndIntermediateHiddenStates`, which adds intermediate activations of the decoder layers as output.
I wonder whether DETR needs a tokenizer. Currently, it accepts a `NestedTensor` as input to the encoder, not the usual `input_ids`, `attention_mask` and `token_type_ids`. The authors of DETR really like this data type because of its flexibility. It basically allows batching images of different sizes and padding them up to the biggest image in the batch, while also providing a mask indicating which pixels are real and which are padding. See [here](https://github.com/facebookresearch/detr/issues/116#issuecomment-651047468) for a motivation on why they chose this data type (the authors of PyTorch are also experimenting with this, see their project [here](https://github.com/pytorch/nestedtensor)). So maybe NestedTensor is something we could use as well, since it automatically batches different images and adds a mask, which Transformer models require?
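For reference, here is a minimal sketch of what that batching boils down to (pad every image up to the largest height/width in the batch and keep a boolean mask of the real pixels). This is only meant to illustrate the idea and does not mirror the actual `NestedTensor` implementation:
```python
import torch

def pad_and_create_mask(images):
    """images: list of (3, H_i, W_i) tensors of different sizes."""
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    pixel_values = torch.zeros(len(images), 3, max_h, max_w)
    # here the mask is True for real pixels and False for padding
    pixel_mask = torch.zeros(len(images), max_h, max_w, dtype=torch.bool)
    for i, img in enumerate(images):
        _, h, w = img.shape
        pixel_values[i, :, :h, :w] = img
        pixel_mask[i, :h, :w] = True
    return pixel_values, pixel_mask

images = [torch.randn(3, 480, 640), torch.randn(3, 600, 400)]
pixel_values, pixel_mask = pad_and_create_mask(images)
print(pixel_values.shape, pixel_mask.shape)  # (2, 3, 600, 640) (2, 600, 640)
```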
Also, no special tokens are used, as the input of the encoder are just flattened images. The decoder on the other hand accepts object queries as input (which are created in `DetrModel`), instead of regular `input_ids`, `attention_mask` and `token_type_ids`. So I wonder whether these can also be removed.
# Future to-do
- [ ] Add `DetrForPanopticSegmentation`
- [ ] Let DETR support any backbone, perhaps those of the timm library as well as any model in the torchvision package
## Who can review?
@LysandreJik @patrickvonplaten @sgugger
Fixes #4663
Unfortunately, self-attention and MultiHeadAttention seem to be easier to understand than git... I'm having some issues with line endings on Windows. Any help is greatly appreciated. I'm mainly opening this PR to discuss how to finish DETR.
| 02-04-2021 10:17:17 | 02-04-2021 10:17:17 | I'll have a look at the git issue in the evening<|||||>Thanks for the PR, a few quick comments:
> This is also why I defined an additional ModelOutput called BaseModelOutputWithCrossAttentionsAndIntermediateHiddenStates, which adds intermediate activations of the decoder layers as output.
I will strongly object to a name that long as a matter of principle :sweat_smile: But just so I understand what it adds, are those intermediate activations of the decoder layers not in the `hidden_states` attribute already?
> I wonder whether DETR needs a tokenizer.
I think the "tokenization" file (we can rename it if we want) should exist and contain the `NestedTensor` class and the utilities for padding. Like Wav2Vec2 Patrick added recently, the tokenizer call would only take care of the padding, resizing to a max size (if given) and normalizing. The tokenizer could also have a method that loads the images from a filename and accept in its call one or a list of decoded images (as np.array or tensor) or one or a list of filenames (and decode them with PIL for instance).
It could also have a `decode` method which would in this case do the rescale of bounding boxes and map label IDs to label names, so it's easier to then plot the results.
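For the box part, something like this rough sketch (DETR predicts boxes as normalized `(center_x, center_y, width, height)`, so plotting needs absolute `(x0, y0, x1, y1)` corners); the exact shape of the final API is of course still open:
```python
import torch

def rescale_boxes(pred_boxes, image_size):
    """pred_boxes: (num_queries, 4) as normalized (cx, cy, w, h); image_size: (height, width)."""
    cx, cy, w, h = pred_boxes.unbind(-1)
    corners = torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], dim=-1)
    img_h, img_w = image_size
    return corners * torch.tensor([img_w, img_h, img_w, img_h], dtype=corners.dtype)

boxes = rescale_boxes(torch.rand(100, 4), image_size=(480, 640))
print(boxes.shape)  # (100, 4), now in pixel coordinates
```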
The inputs of the models should completely be renamed to reflect the types of objects expected (so probably `pixel_values` and `pixel_mask` would be better names than `input_ids` etc) and the tokenizer call should output a dictionary with those names as keys (so we can use the usual API of feeding directly to the model the output of the tokenizer).
I imagine something like this as a final easy API:
```
inputs = tokenizer([filename1, filename2])
outputs = model(**inputs)
processed_outputs = tokenizer.decode(outputs)
```<|||||>> will strongly object to a name that long as a matter of principle 😅 But jsut so I understand what it adds, are those intermediate activations of the decoder layers not in the `hidden_states` attribute already?
Yes, the intermediate activations are the hidden states of the decoder layers, each of them followed by a `LayerNorm`. I agree that the name is too long 😅
> I think the "tokenization" file (we can rename it if we want) should exist and contain the `NestedTensor` class and the utilities for padding. Like Wav2Vec2 Patrick added recently, the tokenizer call would only take care of the padding, resizing to a max size (if given) and normalizing. The tokenizer could also have a method that loads the images from a filename and accept in its call one or a list of decoded images (as np.array or tensor) or one or a list of filenames (and decode them with PIL for instance).
I've created a first draft of `DetrTokenizer` as you requested. The API looks as follows:
```
from PIL import Image
import requests
from transformers import DetrTokenizer
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
tokenizer = DetrTokenizer() # later, this is gonna be .from_pretrained("facebook/detr-resnet-50")
encoding = tokenizer(image)
```
Currently it accepts PIL images, Numpy arrays and PyTorch tensors. The `encoding` (which is a `BatchEncoding`) has 2 keys, namely `pixel_values` and `pixel_mask`. You can call the tokenizer with the following parameters:
* `resize`: whether to resize images to a given size.
* `size`: arbitrary integer to which you want to resize the images
* `max_size`: the largest size an image dimension can have (otherwise it's capped).
* `normalize`: whether to apply mean-std normalization.
An additional complexity with object detection is that if you resize images, the annotated bounding boxes must be resized accordingly. So if you want to prepare data for training, you can also pass in annotations in the `__call__` method of `DetrTokenizer`. In that case, the `encoding` will also include a key named `labels`.<|||||>Resolution of the git issue: https://github.com/huggingface/transformers/pull/10119<|||||>> Currently it accepts PIL images, Numpy arrays and PyTorch tensors.
Pretty cool! Can we pass strings or pathlib.Paths too?
About the general API, not sure if we should inherit from `PreTrainedTokenizer` since the `from_pretrained`/`save_pretrained` methods are not going to work. Wdyt @LysandreJik ? This is also not a tokenizer, more like an `AnnotatedImagePreProcessor` or something like that.
> An additional complexity with object detection is that if you resize images, the annotated bounding boxes must be resized accordingly. So if you want to prepare data for training, you can also pass in annotations in the __call__ method of DetrTokenizer
Yes, this is expected. Maybe we could create a new type a bit like `BatchEncoding` that groups together the image (on all possible formats, string, PIL, array, tensor) with its annotation, so we can then just pass that object (or a list of those objects) to the tokenizer. What do you think?<|||||>> Pretty cool! Can we strings or pathlib.Paths too?
>
> About the general API, not sure if we should inherit from `PreTrainedTokenizer` since the `from_pretrained`/`save_pretrained` methods are not going to work. Wdyt @LysandreJik ? This is also not a tokenizer, more like an `AnnotatedImagePreProcessor` or something like that.
Sure, it's best to make a similar API for ViT, right? (And more Transformer-based image models that will come after that). I've heard some people are working on ViT? To be fair, I could write a conversion script for ViT if you want, I see it's available in timm.
> Yes, this is expected. Maybe we could create a new type a bit like `BatchEncoding` that groups together the image (on all possible formats, string, PIL, array, tensor) with its annotation, so we can then just pass that object (or a list of those objects) to the tokenizer. What do you think?
You mean pass that object to the model, rather than the tokenizer? For me, `BatchEncoding` seems like a good name.<|||||>> Sure, it's best to make a similar API for ViT, right? (And more Transformer-based image models that will come after that). I
Since ViT is not ported yet, this is where we decide the API that will be used for other vision/multi-model models :-)
> You mean pass that object to the model, rather than the tokenizer? For me, `BatchEncoding` seems like a good name.
No, I meant to the tokenizer (though I'm not too sure about this part, it may end up over-complicating things). `BatchEncoding` comes with its text-related methods (`word_ids`, `sequence_ids` etc) so I don't think it should be used here since they won't be available.
<|||||>Regarding the tokenizer I think we can have a bit more freedom here than we would with NLP models as it's the first vision model, but as you've said @sgugger I think that it should still be somewhat aligned with NLP tokenizers:
- It should take care of all the pre-processing steps
- Creation of batches of images, with padding & truncation
- All the functionalities you mentioned @NielsRogge: `resize`/`size`/`normalize`, etc.
- Ideally it should have a very similar API to existing NLP tokenizers. Applying processing with the `__call__` method, loading/saving with `from_pretrained`/`save_pretrained`. I didn't dive in the implementation, but if parameters like `resize`/`size`/`normalize` etc are checkpoint-specific, then it's a good opportunity to save these configuration values in the `tokenizer_config.json`, leveraging the loading/saving methods mentioned above.
- If there needs to be some decoding done after the model has processed the image, then that object should be able to handle it as well.
@sgugger regarding what the tokenizer accepts, I'm not sure I see the advantage of handling paths directly. We don't handle paths to text files or paths to CSVs in our other tokenizers. We don't handle paths to sound files either for `Wav2Vec2`, for all of that we rely on external tools and I think that's fine.
Furthermore, handling images directly in the tokenizer sounds especially memory-heavy, and relying on the `datasets` library, which can handle memory mapping, seems like a better approach than leveraging the tokenizer to load files into memory.<|||||>Yes at least the normalize statistics (mean and std) are checkpoint-specific so should be loaded/saved with the usual API.
> @sgugger regarding what the tokenizer accepts, I'm not sure I see the advantage of handling paths directly. We don't handle paths to text files or paths to CSVs in our other tokenizers. We don't handle paths to sound files either for Wav2Vec2, for all of that we rely on external tools and I think that's fine.
The difference is that a tokenizer accepts strings which is a universal type, whereas this image processor accepts PIL images, which is the format given by one specific library (so you can't load your image with openCV and feed it to the tokenizer). Since we already have a privileged image preprocessing library I really think it makes sense to let it also accept filenames. An alternative is to accept only numpy arrays and tensors, but there is the conversion back to PIL images inside the function (we could avoid it and do everything on tensors if we wanted to btw) so I don't think it makes sense.
In any case the user can still use their own preprocessing and pass the final numpy array/torch tensor with the API so I don't see the downside in accepting filenames. Usual tokenizers would have a hard time making the difference between a string that is a text and a string that is a path but this is not the case for images (or sounds, we could have that API there too and I think we should). It's just free functionality.
In NLP we have datasets as lists of texts since text is light in memory, but in CV all the datasets will come as lists of filenames that you have to load lazily (except maybe CIFAR10 and MNIST since they are tiny). Just trying to make it as easy as possible to the user.
> Furthermore, handling images directly in the tokenizer sounds especially memory-heavy
The memory will be used in any case as the images passed to the tokenizer are already loaded if you don't pass filenames. The use shouldn't change between passing n filenames and n images.<|||||>I think this goes against the API we've defined up to now for all existing modalities (text, speech, tabular), and it adds additional work on the tokenizer whereas I think data loading should be handled by PyTorch dataloaders/Datasets, or with `datasets`.
However, your points echo with me and I have less experience than you both in vision, so if you feel that such an API is what would be best for vision, then happy to drop it and feel free to implement it this way.<|||||>Let's not add the file supports for now and discuss it at our next internal meeting then. I agree it is a new functionality that would be different from our other APIs.<|||||>Any update on this?
The tokenizer (I know we should rename it to something else) that I currently implemented accepts images as PIL images, Numpy arrays or PyTorch tensors, and creates 2 things: `pixel_values` and `pixel_mask`. It could be used for both DETR and ViT.
We should probably define some base utils similar to what Patrick did for the speech models.
cc @LysandreJik @sgugger @patrickvonplaten <|||||>Thanks for reaching out!
So the "tokenizer" as you wrote it is good, but it should be renamed to a `DetrFeatureExtractor` and subclass `PreTrainedFeatureExtractor` (following the example of Wav2Vec2). All the necessary info to create one should be in one json file in the model repo (basically the same API as Wav2Vec2, but just the feature extractor part since there is no tokenizer in DETR). For ViT we can copy the same (we will refactor down the road if there are many models sharing the same functionality but for now we'll just use copies with # Copied from xxx markers).
There is no need for new base utils, the base utils Patrick defined are the ones to use for this case. As for the inputs, we agreed to stay with PIL Images, NumPy arrays and torch Tensors, so all good on this side.<|||||>The [PreTrainedFeatureExtractor](https://github.com/huggingface/transformers/blob/11655fafdd42eb56ad94e09ecd84d4dc2d1041ae/src/transformers/feature_extraction_utils.py#L195) seems to be quite specifically defined for speech recognition (it requires a `sampling_rate` for instance at initialization). <|||||>cc @patrickvonplaten but I thought this one was supposed to be generic.<|||||>Talked offline with Patrick and I misunderstood the plan. `PreTrainedFeatureExtractor` is for all kinds of inputs that are representable as 1d arrays of floats (like speech). For images, we should create a new base class that will implement the same methods. If you can take inspiration on `PreTrainedFeatureExtractor` to create an `ImageProcessor`, it would be great! The only thing that should be exactly the same is the name of the saved config: `preprocessing_config.json`.
Does that make sense?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,997 | closed | Remove unintentional "double" assignment in TF-BART like models | This PR fixes unintentionally used "double" assignment during reshaping of `attn_wegihts` in the TF BART-like models.
**Description:** Replace `attn_weights = attn_wegihts = tf.reshape(...)` with `attn_weights = tf.reshape(...)` and thus remove unintentionally used "double" assignment.
<hr>
Reviewer: @jplu | 02-04-2021 10:08:04 | 02-04-2021 10:08:04 | |
transformers | 9,996 | closed | [DeepSpeed] [success] trained t5-11b on 1x 40GB gpu | Managed to train t5-11b on 1x 40GB gpu w/ Deepspeed (A100-SXM4-40GB)
Thank you, @PeterAJansen for letting me use your hardware!
Thank you, @jeffra and @samyam, for not believing that it is not possible to train t5-11b on 1x 40GB gpu w/ Deepspeed and supporting me that lead me to find a few bugs in the integration.
Sharing details for those who need.
**If you want to try this at home please make sure you use transformers master as some bug fixes were just merged in**
Well, it's similar to the t5-3b on 24GB success reported [here](https://huggingface.co/blog/zero-deepspeed-fairscale) and [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685).
But this time t5-11b on 1x 40GB gpu (or 4x if you wanted things faster)
As someone asked me before, you need a huge amount of general RAM to use ZeRO-Offload for a huge model:
- for t5-3b on 1x 24GB gpu: ~71GB RAM
- for t5-11b on 1x 40GB gpu: ~234GB RAM
I was using `/usr/bin/time -v program` to get the peak memory measurement - it's the `Maximum resident set size` entry in the final report.
Question: I don't think `/usr/bin/time` does the right thing for multi-process - I think it only measures the parent process. e.g. with 4x gpus it reported only 102GB RAM, but I clearly saw in top that it was around 240GB. If you have an easy way to measure peak memory that takes into account forked processes, I'm all ears.
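One way to approximate this (an untested sketch: it samples rather than tracking a true peak, so it can miss short spikes, and it double-counts memory shared between processes) is to poll the RSS of the whole process tree with `psutil` from a small wrapper:
```python
import subprocess, sys, time
import psutil

def peak_rss_gb(cmd, interval=0.5):
    """Launch cmd and poll the combined RSS of the process and all of its children."""
    proc = subprocess.Popen(cmd)
    parent = psutil.Process(proc.pid)
    peak = 0
    while proc.poll() is None:
        try:
            procs = [parent] + parent.children(recursive=True)
            rss = sum(p.memory_info().rss for p in procs if p.is_running())
            peak = max(peak, rss)
        except psutil.NoSuchProcess:
            pass
        time.sleep(interval)
    return peak / 2**30

if __name__ == "__main__":
    print(f"peak RSS: {peak_rss_gb(sys.argv[1:]):.1f} GB")
```
which you'd invoke as e.g. `python peak_rss.py deepspeed --num_gpus=1 ./finetune_trainer.py ...` (the wrapper script name is made up here).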
Batch sizes on one gpu:
- with buffers of 5e8 I was able to run BS=2, which might be too small for training,
- but with 2e8 I managed to squeeze in BS=10 for training, but OOMed on prediction
I'm referring to these buffer sizes in `ds_config.json`:
```
"allgather_bucket_size": 2e8,
"reduce_bucket_size": 2e8,
```
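For context, these keys live under the `zero_optimization` section of the DeepSpeed config. A pared-down ZeRO-2 + CPU-offload config along these lines can be generated like this; treat it as a sketch and use the full file shipped with the examples as the reference:
```python
import json

ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,           # ZeRO-Offload: keep optimizer states in CPU RAM
        "allgather_bucket_size": 2e8,  # smaller buffers -> less GPU memory, more comm overhead
        "reduce_bucket_size": 2e8,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=4)
```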
And I tested for 2x and 4x DDP as well, BS=16 OOMed, BS=8 was good so I used that - but could probably squeeze some more.
**edit1:** later tests show that my test was too short and wasn't getting the CPU Adam optimizer to kick in, as it skips the first 20 or so steps because of the overflow. Once it kicks in it takes more GPU memory, so the practical BS is much smaller - I think around 2 on this setup. So most likely you will need to use `BS=2` for real work, until things get optimized even more.
**edit2:** things are getting re-shuffled in the tests, so the default `ds_config.json` file has moved in master to a new, hopefully permanent home. It's now at `examples/tests/deepspeed/ds_config.json` so you will need to adjust the command line to reflect this new location or simply copy it over to where the old one used to be.
here is the full benchmark:
```
# 1 gpu:
# only training fits with this BS, eval needs a smaller BS
export BS=8; rm -rf output_dir; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=1 ./finetune_trainer.py --model_name_or_path t5-11b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
{'train_runtime': 31.0897, 'train_samples_per_second': 0.257, 'epoch': 1.0}
# 2 gpus:
export BS=8; rm -rf output_dir; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path t5-11b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
{'train_runtime': 17.9026, 'train_samples_per_second': 0.223, 'epoch': 1.0}
# 4 gpus
export BS=8; rm -rf output_dir; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path t5-11b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
{'train_runtime': 10.4404, 'train_samples_per_second': 0.192, 'epoch': 1.0}
```
Checkpointing should allow making even bigger batch sizes. | 02-04-2021 06:21:40 | 02-04-2021 06:21:40 | Well, I'm closing this right away, since it's not a bug, but feel free to comment or ask questions in the comments.<|||||>(I'm adding to this issue, even though it's closed, because it's directly related)
I am seeing OOM trying to get this to work: 1 GPU, SeqLength 128 (originally tried 256), buffers {2e8, 3e8, 5e8} (just changes the epoch of the OOM), BS=1.
@stas00 , I kept track of the GPU memory (as reported in nvidia-smi) to see if it's a progressive memory leak, but I don't think it is:
- 23.2gb after loading model weights
- 33.8gb @ epoch ~1
- 33.8gb @ epoch 25
- long pause at epoch 26, then dies with OOM
Runscript:
(Note I am using unifiedqa-t5-11b, which is just a fine-tuned t5-11b -- I don't think that should change anything)
```
export DATADIR=/home/pajansen/11b-data/ \
export SEQLEN=128 \
export OUTPUTDIR=output_dir \
export BS=1; rm -rf $OUTPUTDIR; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=1 ./finetune_trainer.py --model_name_or_path allenai/unifiedqa-t5-11b --output_dir $OUTPUTDIR --adam_eps 1e-06 --data_dir $DATADIR \
--do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length $SEQLEN --max_target_length $SEQLEN --num_train_epochs 2 \
--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--predict_with_generate --sortish_sampler \
--test_max_target_length $SEQLEN --val_max_target_length $SEQLEN \
--warmup_steps 5 \
--deepspeed ds_config.json --fp16 \
```
Conda environment:
```
# Make new environment
conda create --name transformers-feb4-2020 python=3.8
conda activate transformers-feb4-2020
# Clone transformers
git clone https://github.com/huggingface/transformers.git
cd transformers
# Install nightly build of Pytorch
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
# Install seq2seq transformers requirements
pip install -r examples/seq2seq/requirements.txt
# Install transformers
pip install -e .
# Install DeepSpeed from source for the A100 support
cd ..
git clone https://github.com/microsoft/DeepSpeed.git
cd DeepSpeed/
./install.sh
pip install .
```
The monster output:
[oom-feb4-t5-11b.txt](https://github.com/huggingface/transformers/files/5928851/oom-feb4-t5-11b.txt)
Just the last bit of the output:
(the overflow errors are probably noteworthy?)
```
Using /home/pajansen/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0005221366882324219 seconds
[INFO|trainer.py:837] 2021-02-04 15:05:54,964 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-04 15:05:54,964 >> Num examples = 592
[INFO|trainer.py:839] 2021-02-04 15:05:54,964 >> Num Epochs = 2
[INFO|trainer.py:840] 2021-02-04 15:05:54,964 >> Instantaneous batch size per device = 1
[INFO|trainer.py:841] 2021-02-04 15:05:54,964 >> Total train batch size (w. parallel, distributed & accumulation) = 1
[INFO|trainer.py:842] 2021-02-04 15:05:54,964 >> Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-04 15:05:54,964 >> Total optimization steps = 1184
0%| | 0/1184 [00:00<?, ?it/s][2021-02-04 15:05:58,447] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296
{'loss': inf, 'learning_rate': 0.0, 'epoch': 0.0}
0%|▏ | 1/1184 [00:03<1:08:20, 3.47s/it][2021-02-04 15:06:02,124] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0
0%|▎ | 2/1184 [00:07<1:09:31, 3.53s/it][2021-02-04 15:06:05,853] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2147483648.0, reducing to 1073741824.0
0%|▍ | 3/1184 [00:10<1:10:38, 3.59s/it][2021-02-04 15:06:09,757] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1073741824.0, reducing to 536870912.0
0%|▋ | 4/1184 [00:14<1:12:26, 3.68s/it][2021-02-04 15:06:13,120] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 536870912.0, reducing to 268435456.0
0%|▊ | 5/1184 [00:18<1:10:29, 3.59s/it][2021-02-04 15:06:16,495] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 268435456.0, reducing to 134217728.0
1%|▉ | 6/1184 [00:21<1:09:10, 3.52s/it][2021-02-04 15:06:19,825] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 134217728.0, reducing to 67108864.0
1%|█ | 7/1184 [00:24<1:07:59, 3.47s/it][2021-02-04 15:06:23,182] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 67108864.0, reducing to 33554432.0
1%|█▎ | 8/1184 [00:28<1:07:17, 3.43s/it][2021-02-04 15:06:26,854] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 33554432.0, reducing to 16777216.0
1%|█▍ | 9/1184 [00:31<1:08:37, 3.50s/it][2021-02-04 15:06:30,436] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16777216.0, reducing to 8388608.0
1%|█▌ | 10/1184 [00:35<1:09:01, 3.53s/it][2021-02-04 15:06:33,801] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8388608.0, reducing to 4194304.0
1%|█▋ | 11/1184 [00:38<1:08:00, 3.48s/it][2021-02-04 15:06:37,147] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4194304.0, reducing to 2097152.0
1%|█▉ | 12/1184 [00:42<1:07:10, 3.44s/it][2021-02-04 15:06:40,510] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2097152.0, reducing to 1048576.0
1%|██ | 13/1184 [00:45<1:06:40, 3.42s/it][2021-02-04 15:06:43,887] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1048576.0, reducing to 524288.0
1%|██▏ | 14/1184 [00:48<1:06:23, 3.40s/it][2021-02-04 15:06:47,250] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 524288.0, reducing to 262144.0
1%|██▎ | 15/1184 [00:52<1:06:05, 3.39s/it][2021-02-04 15:06:50,615] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144.0, reducing to 131072.0
1%|██▌ | 16/1184 [00:55<1:05:52, 3.38s/it][2021-02-04 15:06:53,976] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 131072.0, reducing to 65536.0
1%|██▋ | 17/1184 [00:58<1:05:41, 3.38s/it][2021-02-04 15:06:57,313] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536.0, reducing to 32768.0
2%|██▊ | 18/1184 [01:02<1:05:23, 3.36s/it][2021-02-04 15:07:00,672] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
2%|███ | 19/1184 [01:05<1:05:18, 3.36s/it][2021-02-04 15:07:04,003] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
2%|███▏ | 20/1184 [01:09<1:05:03, 3.35s/it][2021-02-04 15:07:07,382] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8192.0, reducing to 4096.0
2%|███▎ | 21/1184 [01:12<1:05:08, 3.36s/it][2021-02-04 15:07:10,753] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4096.0, reducing to 2048.0
2%|███▍ | 22/1184 [01:15<1:05:09, 3.36s/it][2021-02-04 15:07:14,118] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048.0, reducing to 1024.0
2%|███▋ | 23/1184 [01:19<1:05:06, 3.36s/it][2021-02-04 15:07:17,475] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024.0, reducing to 512.0
2%|███▊ | 24/1184 [01:22<1:05:00, 3.36s/it][2021-02-04 15:07:20,816] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 512.0, reducing to 256.0
2%|███▉ | 25/1184 [01:25<1:04:49, 3.36s/it][2021-02-04 15:07:24,174] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 256.0, reducing to 128.0
2%|████ | 26/1184 [01:29<1:04:46, 3.36s/it]Killing subprocess 3319579
Traceback (most recent call last):
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/deepspeed/launcher/launch.py", line 171, in <module>
main()
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/deepspeed/launcher/launch.py", line 161, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/pajansen/anaconda3/envs/transformers-feb4-2020/bin/python', '-u', './finetune_trainer.py', '--local_rank=0', '--model_name_or_path', 'allenai/unifiedqa-t5-11b', '--output_dir', 'output_dir_compexpl-feb4-epoch2-uqa-11b-wholetree-rev', '--adam_eps', '1e-06', '--data_dir', '/home/pajansen/github/compositional-expl/data/feb4-initialtest-q693/wholetree-rev/', '--do_eval', '--do_predict', '--do_train', '--evaluation_strategy=steps', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '2', '--overwrite_output_dir', '--per_device_eval_batch_size', '1', '--per_device_train_batch_size', '1', '--predict_with_generate', '--sortish_sampler', '--test_max_target_length', '128', '--val_max_target_length', '128', '--warmup_steps', '5', '--deepspeed', 'ds_config.json', '--fp16']' died with <Signals.SIGSEGV: 11>.
Command being timed: "deepspeed --num_gpus=1 ./finetune_trainer.py --model_name_or_path allenai/unifiedqa-t5-11b --output_dir output_dir_compexpl-feb4-epoch2-uqa-11b-wholetree-rev --adam_eps 1e-06 --data_dir /home/pajansen/github/compositional-expl/data/feb4-initialtest-q693/wholetree-rev/ --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 2 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --deepspeed ds_config.json --fp16"
User time (seconds): 1152.16
System time (seconds): 746.75
Percent of CPU this job got: 396%
Elapsed (wall clock) time (h:mm:ss or m:ss): 7:58.47
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 233292336
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 108071918
Voluntary context switches: 38621
Involuntary context switches: 588867
Swaps: 0
File system inputs: 0
File system outputs: 48
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
<|||||>Thank you for the report and the details, @PeterAJansen
In the future, let's try to have a dedicated issue for each unique problem, but since the OP wasn't really an issue, it is now ;) so all is good.
Let me see if I can reproduce the problem with your changes, perhaps my data sample was too short.
The other difference I see is that you're not using `--task` which then defaults to `summarization` - so we surely don't test the exact same thing.
The `allenai/unifiedqa-t5-11b` model looks of identical size to `t5-11b`, but let me download the former to make sure that I'm doing an exact reproduction.
Let me see
1. if I can get it to OOM with the translation task that I have been testing with first
2. and if that fails, I will try one of the local summarization datasets,
3. and if all runs fine still will need to see what's different about your dataset.
> (the overflow errors are probably noteworthy?)
these are normal. not a problem.<|||||>OK, I'm able to reproduce it. The GPU memory usage grows slowly at some times and jumps at quick bump ups of several GBs at other times.
I used buffers of 1e8 and cmd:
```
export BS=2; rm -rf output_dir; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=1 ./finetune_trainer.py --model_name_or_path allenai/unifiedqa-t5-11b --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --deepspeed ds_config.json --fp16
```
This means that either transformers (trainer or model) or DeepSpeed, or both, leak memory. I'm going to switch to a much smaller model size - with this model it takes ages to just start, so I can't develop like this - and try to detect where the leak is coming from.
BTW, here is a tip. Currently transformers performs a silly thing - it inits the model, inits the weights, and overwrites all this work with pretrained weights. Which with this model takes like 10 minutes. You can shortcut it with:
```
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -747,7 +747,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin):
Initializes and prunes weights if needed.
"""
# Initialize weights
- self.apply(self._init_weights)
+ #self.apply(self._init_weights)
# Prune heads if needed
if self.config.pruned_heads:
```
which skips 90% of the pointless weight inits.
I'm trying to advocate for this to be a feature here: https://github.com/huggingface/transformers/issues/9205<|||||>Heh, we were assuming it was OOM, but it got SIGSEGV - I didn't bother to look closer - so pytorch w/Deepspeed segfaults pretty much at step 22. Investigating...
No useful info in the core bt. Stripped binaries.
I eliminated the possibility that the issue could be with pytorch.
Most likely a regression in DS.
Downgrading `pip install deepspeed==0.3.10` solves the segfault
I must have been using an old DS yesterday and that's why it was working for me.
Trying to locate the faulty commit in DS
And the reason it was happening always at step 22 was because AdamW wasn't running until this step, this is all those skipping step overflow reports:
```
[2021-02-04 22:40:47,424] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048.0, reducing to 1024.0
0%| | 23/60000 [01:18<55:05:44, 3.31s/it][2021-02-04 22:40:50,837] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1024.0, reducing to 512.0
0%| | 24/60000 [01:21<55:37:22, 3.34s/it][2021-02-04 22:40:54,255] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 512.0, reducing to 256.0
```
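(For readers wondering what these messages mean: that's just dynamic loss scaling at work. Schematically, and this is not DeepSpeed's actual code, it behaves roughly like this, which is why the real optimizer step only runs once the scale has dropped low enough:)
```python
def dynamic_loss_scale_step(grads_have_overflow, state):
    """Schematic dynamic loss scaling: skip the step and halve the scale on overflow,
    grow the scale back after a stretch of overflow-free steps."""
    if grads_have_overflow:
        state["scale"] /= 2          # e.g. 4294967296 -> 2147483648 -> ... -> 256
        state["good_steps"] = 0
        return False                 # optimizer.step() is skipped for this batch
    state["good_steps"] += 1
    if state["good_steps"] >= state["scale_window"]:
        state["scale"] *= 2
        state["good_steps"] = 0
    return True                      # optimizer.step() actually runs

state = {"scale": 2.0**32, "good_steps": 0, "scale_window": 1000}
for overflow in [True] * 24 + [False] * 3:
    dynamic_loss_scale_step(overflow, state)
print(state["scale"])  # 256.0 after 24 halvings, roughly matching the logs above
```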
As soon as it ran, it segfaulted.
Hopefully we will have a fix soon, but until then please use `deepspeed==0.3.10` <|||||>Thanks @stas00 !
I have downgraded to deepspeed 0.3.10 and I'm going to leave Transformers running overnight on a proper training job to see if it crashes (it's currently about 20% completed, so that's promising). Though it does appear that the GPU memory usage periodically moves from ~34GB up to nearly the entire 40GB minus a few hundred MB, so it's a real nail biter watching it:

Transformers+DeepSpeed really doesn't believe in wasting RAM... :)
<|||||>update: DeepSpeed yanked 0.3.11 from pypi, so a normal pip install should now result in a good working 0.3.10 installed until this issue is fixed.<|||||>Update on my end: with DeepSpeed 0.3.10 it did run successfully through the night on a full job, successfully training and generating the predictions. Amazing work @stas00 et al.
<|||||>@stas00 I'm not sure if this is a bug or if I'm just not doing it correctly given how fast most of this is moving, but I'm trying to evaluate/generate predictions post-training and getting not-on-device errors. I should not that it worked fine when I did the whole thing in one command (train/eval/predict) overnight, but now I'm trying to use the fine-tuned model to generate predictions on other data.
I have (a) just removed the --do_train flag from the call to finetune_trainer (and, set the model path to the output path of the fine-tuned model), and this gives an error (below). I've also (b) tried CPU-based eval (--device cpu) with the official instructions in examples/seq2seq/, which gave a different error (but I've not done non-cuda eval before, so that might be my issue).
Here's the error from (A):
```
[2021-02-05 12:00:30,238] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-02-05 12:00:30,586] [INFO] [runner.py:355:main] cmd = /home/pajansen/anaconda3/envs/transformers-feb4-2020/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ./finetune_trainer.py --model_name_or_path output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev --output_dir output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev-unannotated --adam_eps 1e-06 --data_dir /home/pajansen/github/compexpl/data/feb4-initialtest-q693/unannotated/ --do_eval --do_predict --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 256 --max_target_length 256 --num_train_epochs 3 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --sortish_sampler --test_max_target_length 256 --val_max_target_length 256 --warmup_steps 5 --deepspeed ds_config.json --fp16
[2021-02-05 12:00:31,464] [INFO] [launch.py:78:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2021-02-05 12:00:31,464] [INFO] [launch.py:84:main] nnodes=1, num_local_procs=4, node_rank=0
[2021-02-05 12:00:31,464] [INFO] [launch.py:99:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2021-02-05 12:00:31,464] [INFO] [launch.py:100:main] dist_world_size=4
[2021-02-05 12:00:31,464] [INFO] [launch.py:102:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2021-02-05 12:00:33,681] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-05 12:00:33,788] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-05 12:00:33,908] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-05 12:00:34,042] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
WARNING:__main__:Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:447] 2021-02-05 12:00:34,625 >> loading configuration file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/config.json
[INFO|configuration_utils.py:485] 2021-02-05 12:00:34,626 >> Model config T5Config {
"_name_or_path": "allenai/unifiedqa-t5-11b",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 65536,
"d_kv": 128,
"d_model": 1024,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"early_stopping": true,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"model_type": "t5",
"n_positions": 512,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"num_decoder_layers": 24,
"num_heads": 128,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"prefix": "summarize: ",
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.3.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
[INFO|configuration_utils.py:447] 2021-02-05 12:00:34,626 >> loading configuration file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/config.json
[INFO|configuration_utils.py:485] 2021-02-05 12:00:34,627 >> Model config T5Config {
"_name_or_path": "allenai/unifiedqa-t5-11b",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 65536,
"d_kv": 128,
"d_model": 1024,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"early_stopping": true,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"model_type": "t5",
"n_positions": 512,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"num_decoder_layers": 24,
"num_heads": 128,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"prefix": "summarize: ",
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.3.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
[INFO|tokenization_utils_base.py:1685] 2021-02-05 12:00:34,627 >> Model name 'output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev' not found in model shortcut name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). Assuming 'output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1721] 2021-02-05 12:00:34,627 >> Didn't find file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1721] 2021-02-05 12:00:34,627 >> Didn't find file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1784] 2021-02-05 12:00:34,627 >> loading file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/spiece.model
[INFO|tokenization_utils_base.py:1784] 2021-02-05 12:00:34,627 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-05 12:00:34,627 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-05 12:00:34,627 >> loading file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/special_tokens_map.json
[INFO|tokenization_utils_base.py:1784] 2021-02-05 12:00:34,627 >> loading file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/tokenizer_config.json
WARNING:__main__:Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: True
WARNING:__main__:Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, 16-bits training: True
WARNING:__main__:Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, 16-bits training: True
[INFO|modeling_utils.py:1025] 2021-02-05 12:00:34,753 >> loading weights file output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev/pytorch_model.bin
[INFO|modeling_utils.py:1143] 2021-02-05 12:04:48,021 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
[INFO|modeling_utils.py:1151] 2021-02-05 12:04:48,034 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at output_dir_compexpl-feb4-epoch3-uqa-11b-wholetree-rev.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
[INFO|trainer.py:348] 2021-02-05 12:04:48,080 >> Using amp fp16 backend
[INFO|trainer.py:1600] 2021-02-05 12:04:48,080 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-05 12:04:48,080 >> Num examples = 1950
[INFO|trainer.py:1602] 2021-02-05 12:04:48,080 >> Batch size = 1
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 327, in main
metrics = trainer.evaluate(metric_key_prefix="val")
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1506, in evaluate
output = self.prediction_loop(
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1630, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/pajansen/github/transformers-feb4-2021/transformers/examples/seq2seq/seq2seq_trainer.py", line 220, in prediction_step
generated_tokens = self.model.generate(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 847, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 379, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/models/t5/modeling_t5.py", line 878, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 145, in forward
return F.embedding(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/functional.py", line 1921, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 327, in main
metrics = trainer.evaluate(metric_key_prefix="val")
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1506, in evaluate
output = self.prediction_loop(
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1630, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/pajansen/github/transformers-feb4-2021/transformers/examples/seq2seq/seq2seq_trainer.py", line 220, in prediction_step
generated_tokens = self.model.generate(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 847, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 379, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/models/t5/modeling_t5.py", line 878, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 145, in forward
return F.embedding(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/functional.py", line 1921, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 327, in main
metrics = trainer.evaluate(metric_key_prefix="val")
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1506, in evaluate
output = self.prediction_loop(
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1630, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/pajansen/github/transformers-feb4-2021/transformers/examples/seq2seq/seq2seq_trainer.py", line 220, in prediction_step
generated_tokens = self.model.generate(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 847, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 379, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/models/t5/modeling_t5.py", line 878, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 145, in forward
return F.embedding(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/functional.py", line 1921, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 327, in main
metrics = trainer.evaluate(metric_key_prefix="val")
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1506, in evaluate
output = self.prediction_loop(
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/trainer.py", line 1630, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/pajansen/github/transformers-feb4-2021/transformers/examples/seq2seq/seq2seq_trainer.py", line 220, in prediction_step
generated_tokens = self.model.generate(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 847, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/generation_utils.py", line 379, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/models/t5/modeling_t5.py", line 878, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 145, in forward
return F.embedding(
File "/home/pajansen/anaconda3/envs/transformers-feb4-2020/lib/python3.8/site-packages/torch/nn/functional.py", line 1921, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
```<|||||>Are you on master and not by chance on my experimental t5-pipeline branch? If it's the latter then it's very likely that you'd hit that "not on the current device" error. Please make sure you're using the master `transformers`.<|||||>Definitely on the master :)<|||||>Update: I did figure out the CPU eval error -- I had --fp16 set (as in the example script), which currently throws an esoteric pytorch error on CPU ("threshold_cpu" not implemented for 'Half'). Removing this lets it run on CPU, but with 64 cores T5-11B is evaluating at 150 seconds *per generation*, instead of less than 1 sec with the GPU, so I think I'll kill that. <|||||>> @PeterAJansen want to confirm with you one detail, is your setup with Intel or AMD cpu?
It's AMD.
I'm using Peter's machine for debugging this, so you can ask me anything.
------------
@PeterAJansen, glad you sorted it out - let me see if I can reproduce that, and we could ensure that we prevent the erroneous fp16/cpu combination in the first place.
------------
Update on DeepSpeed: it looks like the segfault over CPU ADAM problem is specific to AMD, which is the case on your computer, so the DeepSpeed team are working on figuring that out and hopefully will have a new release some time soon that will do the right thing on AMD and be fast too.<|||||>@PeterAJansen,
- I have fixed the first bug where you went for inference without training - please use this PR branch if it's not merged https://github.com/huggingface/transformers/pull/10039
Well basically we aren't using deepspeed at the moment at all if `--do_train` wasn't run - need to think how to benefit from Deepspeed for pure inference. I will experiment with that.
- wrt `--device cpu` could you please explain how you managed to use it? Since it's not a valid flag for `finetune_trainer.py`, so if you could share the full cmd that would help to reproduce the problem.
Thank you!
<|||||>@PeterAJansen, for the future let's do this:
- Try new things - if they fail assume it's 99% a bug in our code - things should either work or give a user-friendly message so that you know it's your error - if it's anything else we should be fixing it.
- Please do file a new issue every time - while all these bugs are totally related it is very difficult to track when it's one pile
- Always paste the full cmd that you used
- Ideally try to use generic datasets/models to make it easy to reproduce the problem
Then:
1. I reproduce
2. I write a new test
3. I fix the bug
4. You try new things
5. Rinse and repeat
;)
<|||||>> @PeterAJansen,
>
> * I have fixed the first bug where you went for inference without training - please use this PR branch if it's not merged #10039
> Well basically we aren't using deepspeed at the moment at all if `--do_train` wasn't run - need to think how to benefit from Deepspeed for pure inference. I will experiment with that.
Thanks!
> * wrt `--device cpu` could you please explain how you managed to use it? Since it's not a valid flag for `finetune_trainer.py`, so if you could share the full cmd that would help to reproduce the problem.
>
> Thank you!
Apologies, I think in my exhilaration that it's running T5-11B on 40G cards that I forgot proper issue submission procedures. The --fp16 error is submitted as isssue #10040 :)<|||||>both issues have been fixed https://github.com/huggingface/transformers/pull/10039 and https://github.com/huggingface/transformers/pull/10041<|||||>@stas00 have you tried profiling Hugging Face models with DeepSpeed's `FlopsProfiler`? I'm curious to see what kind of stats you get, especially for decoder-only models such as `GPT2LMHeadModel` as you increase the model size.<|||||>I haven't tried yet - as I'm busy at the moment at figuring out the pipeline, but I logged that idea here https://github.com/huggingface/transformers/issues/9606 for a later time or if someone else is moved to do it before I get a chance to do so.
I appreciate the suggestion, @g-karthik. I'm like a kid in a candy store, so many things to try, so little time.<|||||>@stas00 not sure if this issue is closed and/or I should start a new thread. But my question is very much related. Here goes:
I followed the instructions mentioned here (same deepspeed version, t5-11b. everything same). However on 1x 40GB gpu w/ Deepspeed (A100-SXM4-40GB) it goes OOM. **Does not train even with BS=1 using deepspeed.**
I am still wondering how you were able to train this on a single A100-SXM4-40GB, since the t5-11b checkpoint that huggingface downloads automatically is ≈ 45GB on disk (raw pytorch_model.bin size). Just loading the model itself would cause OOM on a single 40GB A100-SXM4-40GB.
Am I missing something, or did the t5-11b model size change since this post?
Srikar <|||||>Hi @srikar2097,
deepspeed does `model.half()` by default, so you are only loading 22.5GB in weights (though it did add support for fp32 since that post).
Most likely your seq_len is much larger than the test that I did. Does it work if you reduce it?
Also this is really old now, and you have the offload available so if you have lots of RAM you shouldn't have a problem loading t5-11b on A100-50GB.
If you are still struggling, then yes, by all means please open a new issue and full details on how to reproduce the problem. and tag me please.
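As a rough back-of-the-envelope check of that halving (simple arithmetic only, not an exact measurement - activations, optimizer states and fragmentation come on top of this):
```python
params = 11_000_000_000          # t5-11b
gb = 1024 ** 3

print(f"fp32 weights: {params * 4 / gb:.1f} GB")  # ~41 GB - already over a 40GB card
print(f"fp16 weights: {params * 2 / gb:.1f} GB")  # ~20.5 GB after model.half()
```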
<|||||>FWIW, I remember having a specific commit that seemed to work for T5-11B in the 40gb A100s, and it not working after -- and me mostly using the T5-3B model for speed, so I haven't tried it recently to see if it still works (without the offloading). <|||||>@stas00 thanks for the tips. I did try with seq_len=512 with BS=1. Then with seq_len=128 with BS=1 (both times OOM).
For T5-11b on a A100-40B, I guess sticking to fp16 is the way to go since fp32 will load entire model into GPU mem? (which will surely cause OOM since raw model file itself is 45GB).
my host has 1TB RAM, so you suggest to use offload? Do you have some comments on if using offload would slow down training? (since optimizer-states/gradients has to flow back-and-forth between GPU <-> CPU)...
@PeterAJansen I am using T5-3b for now since I haven't yet cracked the code with T5-11b.. appreciate re-affirming my comments that T5-11b is not working for you too...
<|||||>> @stas00 thanks for the tips. I did try with seq_len=512 with BS=1. Then with seq_len=128 with BS=1 (both times OOM).
Please file a new Issue with a full report with config file and command line and then I'd be happy to try to diagnose this with you.
Thank you for experimenting with shorter seq_len.
@PeterAJansen do you remember which commit or perhaps it's logged somewhere in the Issue comments? Could probably `git bisect` to find it.
> For T5-11b on a A100-40B, I guess sticking to fp16 is the way to go since fp32 will load entire model into GPU mem? (which will surely cause OOM since raw model file itself is 45GB).
correct!
> my host has 1TB RAM, so you suggest to use offload? Do you have some comments on if using offload would slow down training? (since optimizer-states/gradients has to flow back-and-forth between GPU <-> CPU)...
I don't have numbers to share yet, but the offload protocol is written to pre-fetch data, so the overhead in theory should be minimal. so absolutely yes to offload.
<|||||>@stas00 I have a feeling it might be `c130e67d` , or failing that something on or around February 12th 2021. <|||||>OK, I'm able to train t5-11b on a single A100-SXM4-40GB with seq len 1024 with BS=4 at about 40GB gpu mem usage with deepspeed zero2:
```
export BS=4; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir \
--adam_eps 1e-06 --evaluation_strategy=steps --do_train --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 500 --max_source_length 1024 --max_target_length 1024 \
--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS \
--predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 \
--dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length \
128 --warmup_steps 50 --max_train_samples 2000 --max_eval_samples 50 --deepspeed \
tests/deepspeed/ds_config_zero2.json --fp16
```
let's log for posterity (both master HEAD as of this writing)
- PyTorch version: 1.8.1
- cuda: 11.1
```
$ cd transformers
$ git rev-parse --short HEAD
61c506349
$ cd ../deepspeed
ccc522c
```
surprisingly zero3 with full offload OOMs! Need to figure that one out.
Thanks to @PeterAJansen for letting me use his rig.
<|||||>OK, @samyam helped me to figure out ZeRO-3 - getting a 3.5x larger BS than with zero2. The key was to lower:
```
"sub_group_size": 1e9,
```
from `1e14`.
So, I'm able to train t5-11b on a single A100-SXM4-40GB with seq len 1024 with **BS=14** with deepspeed ZeRO-3:
```
export BS=14; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir \
--adam_eps 1e-06 --evaluation_strategy=steps --do_train --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 500 --max_source_length 1024 --max_target_length 1024 \
--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS \
--predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 \
--dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length \
128 --warmup_steps 50 --max_train_samples 2000 --max_eval_samples 50 --deepspeed \
tests/deepspeed/ds_config_zero3.json --fp16
```
everything else is the same as in the zero-2 post above, and config file is too from transformers @ 61c506349 , but `ds_config_zero3.json` needs to be changed as shown above.
<|||||>I'd like to mention that the code above uses dynamic padding, which doesn't pad to length 1024, so the input and output are not 1024. Turning on "--pad_to_max_length True" results in OOM, unfortunately, with even low batch size of 1. I tried length 512 as well with batch size 1 but also got out of memory.
Is there a way to use zero stage 3 for applications where long sequences are needed (512+)?<|||||>Thank you for this report, @benathi
First I just want to validate that you're referring to the setup from my most [recent comment](https://github.com/huggingface/transformers/issues/9996#issuecomment-856384448) and not the OP.
So what you're suggesting is that being able to use a largish BS was nothing but a fluke since the dataset entries happened to be quite short, correct?
Have you tried using a smaller BS?
Also do you have access to a single card only?<|||||>Yes I refer to your most recent comment. I tried 1 GPU (using A100 same as
you) and 2 and 8.
I tried using batch size as small as 1 for length 512 (input 512 output
512) but ran into memory issues for 1,2,8 GPUs
I suspect that for it is due to memory surge during attention computation,
which can be quite a lot for long sequence. Im not sure what is needed to
overcome this. I tried changing the bucket size in the config to no avail.
If I don’t use “—pad_to_max_length True”, I can run your exact script
(input 1024 output 1024) just fine with 1,2,8 GPUs.
Best,
Ben
>
<|||||>@benathi if the issue is in fact the long sequence length (which is plausible), then the fix I would recommend is to use deepspeed activation checkpointing. That would significantly reduce the activation memory consumption. But before going to that route, please check with seq length 32, 64, 128, 256 as well to see if you are able to run with a smaller fixed sequence length with pad_to_max_length True, and you are running into OOM only after you increase the seq_length above a certain threshold. If you are still OOMing even with a small max seq length like 32 when pad_to_max_length is True, then the issue might be something else related to that flag.<|||||>Thank you for the feedback and great suggestions, @samyam! I keep forgetting about "activation checkpointing".<|||||>Thank you @samyam. Good to hear from you! I'll look further into activation checkpointing :) <|||||>I can confirm that it runs ok with lower context length. :)
@stas00 I looked through HF documentation and my impression is that activation checkpointing is not supported out of the box. Is this correct? If so, is there any suggestion you can provide regarding how to do activation checkpointing with HF models?<|||||>It's just named `gradient_checkpointing` in `transformers`, and most models support this feature.
To enable it you need to do:
```
model.config.gradient_checkpointing = True
```
before using the model anywhere. You can see an example of it being activated here:
https://github.com/huggingface/transformers/blob/b518aaf193938247f698a7c4522afe42b025225a/src/transformers/models/gpt2/modeling_gpt2.py#L767
For `example` scripts there is no direct cli arg, In in `language-modeling` scripts you can cheat by passing:
```
--config_overrides "gradient_checkpointing=True"
```
More details are at https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/README.md#creating-a-model-on-the-fly
Perhaps it's about time we exposed this flag in HF Trainer.
Yet another way to cheat if none of the above is fitting:
1. clone the model locally
2. edit `config.json` to enable `gradient_checkpointing`
3. pass the local path to the cloned model instead of the model name
This will work with any example script.
Please let me know if you were successful. And then we will sort out how to enable it easier and document the synonyms so it's easier to search and find.
<|||||>Thank you for your prompt reply!! And for your hard work on the HF library
which everybody loves. :) I’ll take a look at this.
Best,
On Fri, Sep 17, 2021 at 7:23 PM Stas Bekman ***@***.***>
wrote:
> It's just named gradient_checkpointing in transformers, and most models
> support this feature.
>
> To enable it you need to do:
>
> model.config.gradient_checkpointing = True
>
> before using the model anywhere. You can see an example of it being
> activated here:
>
>
> https://github.com/huggingface/transformers/blob/b518aaf193938247f698a7c4522afe42b025225a/src/transformers/models/gpt2/modeling_gpt2.py#L767
>
> For example scripts there is no direct cli arg, In a few scripts you can
> cheat by passing:
>
> --config_overrides "gradient_checkpointing=True"
>
> in language-modeling scripts. More details are at
> https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/README.md#creating-a-model-on-the-fly
>
> Perhaps it's about time we exposed this flag in HF Trainer.
>
> Yet another way to cheat if none of the above is fitting:
>
> 1. clone the model locally
> 2. edit config.json to enable gradient_checkpointing
> 3. pass the local path to the cloned model instead of the model name
> this will work with any example script.
>
> Please let me know if you were successful.
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/9996#issuecomment-922131357>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AA5DMS75ZPXH2REAOOCKTSDUCPEYXANCNFSM4XCHBJ4A>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.
>
>
<|||||>Thank you for your kind words, @benathi!<|||||>I am able to run with sequence length 2000 (I made sure to pad the data to that long) with 29GB per GPU with activation checkpointing. For 1024, the GPU consumption is 12.5GB. Without activation checkpointing, I can run up to sequence length 256 and got OOM at 512.
All of this uses batch size = 1 and 8 GPUs with Zero 3. <|||||>Thank you for reporting back, @benathi - so a partial success.
Have you seen the memory usage estimators at https://deepspeed.readthedocs.io/en/stable/memory.html
It'd be great to complete the existing set with an activations memory usage estimator and then it'd remove most of the guesswork / needing to try as we would have the requirements known instantly.
But may be let's start with what is there already. Could you put in the numbers for your setup and let's see how much opt/grad/params should be consuming under z3? Thank you.
<|||||>and @sgugger is working on making it easy to activate the gradient/activation checkpointing https://github.com/huggingface/transformers/pull/13657<|||||>I would say it is a success, not even partial, since I was able to run up to sequence length 2000! :)
I did try to use the memory estimator. I think the estimator doesn't take into account the activations or batch size? (not sure) so it's a bit hard to gauge from the estimator alone.
Anyways I'm happy I can train with a relatively large context length now. <|||||>Thank you for confirming that you needs have been met, @benathi
Yes, the estimators are missing the activation component, which is crucial. But since the latter component is the same regardless of DS setup, the existing estimators at least can show you where the memory can be saved. <|||||>Actually another question for you if you don't mind :)
If I want to use even longer sequence length, in which case sparse attention is probably necessary, is switching to GPT-Neo all it takes to do sparse attention? (and turning on sparse attention in deepspeed) Or are there other config that allows me to do sparse attention for gpt2 as well? Not sure if there's some field in the config I can just turn on to use sparse attention :)
<|||||>I'd love to answer your question, @benathi, but I haven't had a chance to experiment with this feature yet. Perhaps asking at https://discuss.huggingface.co/?
HF arsenal has several models that implement sparse attention natively: https://huggingface.co/blog/long-range-transformers
Deepspeed implements sparse attention, but I am not sure how we would plug it into HF Transformers. That is it has this section of the config file, but I think it only works with some of their internal features. I don't know. Might it be a good idea to ask at https://github.com/microsoft/DeepSpeed - I'd love to know the answer myself - and if we could integrate that into Transformers. If you'd like to take the lead on the research I'd be happy to help integrating it. If you ask please tag me as well.
Thank you!<|||||>@stas00 I see the the [ds_config.json](https://github.com/huggingface/transformers/blob/master/tests/deepspeed/ds_config_zero2.json) uses "auto" casting. I cannot train a 13B multilingual mT5-xxl model on the 8x40GB A100 on aws `p4d24xlarge`. I am using [This](https://github.com/huggingface/transformers/blob/master/tests/deepspeed/ds_config_zero3.json) config with `"fp16": {"enabled": false,` as t5 is trained on bfloat16 and fp16 usually produce NaN. My sequence length is "src_input_length=1024", target_input_length=256".
Do you have any suggestion? Should I move to fairscale for `fp16` issue?<|||||>"auto" just allows converting `--fp16` to "true" if it's passed in the trainer args. You can absolutely hardcode it to what you need.
I made a possible workaround for t5/mt5 overflows which worked some and not for others, you may want to try:
https://github.com/huggingface/transformers/pull/10956
Ideally, especially since you're using A100, you should train in bf16 mixed precision, the work is being done on it here:
https://github.com/huggingface/transformers/pull/13207
But deepspeed doesn't yet support bf16 - perhaps it'd be beneficial to ask Deepspeed about supporting bf16 by opening a feature request at https://github.com/microsoft/DeepSpeed/issues - If you feel inspired to do so?
> Should I move to fairscale for fp16 issue?
If fairscale gives a working solution then by all means use it. Does it? I just don't know the answer.
Megatron-LM released a t5 model recently but it doesn't yet support pipeline, so if tensor parallelism is sufficient to your setup it might do a trick (transformers will have TP shortly as well). You can ping them asking when PP will be added. I doubt that if nobody asks it'll happen any time soon. Their bert/gpt2 have a full dp/tp/pp support, but not yet t5.
Finally, try activating Gradient Checkpointing which should help a lot to lower memory usage:
https://huggingface.co/transformers/performance.html#gradient-checkpointing
<|||||>Thanks a lot @stas00 for your reply.
I have been working with your PR https://github.com/huggingface/transformers/pull/10956 until now. Just to let you know, it works fine for me. Huge thanks to you for that PR.
But so far I remember Deepspeed doesn't support `torch.cuda.amp.autocast(enabled=False):` so ffn layer weights remain fp16 in deepspeed.
I've already tried `gradient-checkpointing` with fp32 training (in deepspeed) for mT5-xxl-13B but OOM.
May be in coming day I will at first try fair-scale to be sure since it supports `torch.cuda.amp.autocast(enabled=False):`. <|||||>> Thanks a lot @stas00 for your reply. I have been working with your PR #10956 until now. Just to let you know, it works fine for me. Huge thanks to you for that PR.
Glad to hear that!
> But so far I remember Deepspeed doesn't support `torch.cuda.amp.autocast(enabled=False):` so ffn layer weights remain fp16 in deepspeed. I've already tried `gradient-checkpointing` with fp32 training (in deepspeed) for mT5-xxl-13B but OOM.
DS uses their own mixed precision which doesn't lend to users overriding it. But it should be possible to make an if code branch that if the code is running under deepspeed we could manually upcast to fp32 and then downcast back to fp16 and deepspeed. Let me know if you need help with that, this would require no deepspeed understanding I believe. And I haven't tried that, so it's possible that my idea may or may not work.
> May be in coming day I will at first try fair-scale to be sure since it supports `torch.cuda.amp.autocast(enabled=False):`.
Do you mean the sharded DDP (ZeRO@fairscale)? Do let us know, I have no idea what is the state of that project nowadays.
<|||||>@stas00 any idea about this, I keep getting overflow. Using Version: 0.5.3 of deepseed due to torch restrictions
I can't solve this even after several attempts
[2021-11-13 19:22:08,401] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16.0, reducing to 8.0
0%| | 14/24128 [00:54<25:52:50, 3.86s/it]
[2021-11-13 19:22:12,194] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8.0, reducing to 4.0
0%| | 15/24128 [00:58<25:44:14, 3.84s/it]
[2021-11-13 19:22:15,963] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4.0, reducing to 2.0
0%| | 16/24128 [01:02<25:35:10, 3.82s/it]
[2021-11-13 19:22:19,775] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2.0, reducing to 1.0
0%| | 17/24128 [01:06<25:34:08, 3.82s/it]
[2021-11-13 19:22:23,570] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1.0, reducing to 1
0%| | 18/24128 [01:10<25:31:20, 3.81s/it]
[2021-11-13 19:22:27,338] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%| | 19/24128 [01:13<25:26:08, 3.80s/it]
[2021-11-13 19:22:31,100] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 20/24128 [01:17<25:21:41, 3.79s/it]
[2021-11-13 19:22:34,909] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 21/24128 [01:21<25:24:20, 3.79s/it]
[2021-11-13 19:22:38,715] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 22/24128 [01:25<25:25:39, 3.80s/it]
[2021-11-13 19:22:42,709] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 23/24128 [01:29<25:49:22, 3.86s/it]
[2021-11-13 19:22:46,705] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 24/24128 [01:33<26:06:45, 3.90s/it]
[2021-11-13 19:22:50,537] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 25/24128 [01:37<25:57:46, 3.88s/it]
[2021-11-13 19:22:54,437] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 26/24128 [01:40<26:00:36, 3.89s/it]
[2021-11-13 19:22:58,333] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 27/24128 [01:44<26:01:38, 3.89s/it]
[2021-11-13 19:23:02,162] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 28/24128 [01:48<25:54:33, 3.87s/it]
[2021-11-13 19:23:05,991] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 29/24128 [01:52<25:49:28, 3.86s/it]
[2021-11-13 19:23:09,884] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 30/24128 [01:56<25:53:38, 3.87s/it]
[2021-11-13 19:23:13,776] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
0%|▏ | 31/24128 [02:00<25:56:27, 3.88s/it]
[2021-11-13 19:23:17,659] [INFO] [stage3.py:2731:_overflow_clean_up] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1<|||||>This looks like an issue to report on the deepspeed side, @tuhinjubcse. https://github.com/microsoft/DeepSpeed/issues<|||||>> OK, @samyam helped me to figure out ZeRO-3 - getting a 3.5x larger BS than with zero2. The key was to lower:
>
> ```
> "sub_group_size": 1e9,
> ```
>
> from `1e14`.
>
> So, I'm able to train t5-11b on a single A100-SXM4-40GB with seq len 1024 with **BS=14** with deepspeed ZeRO-3:
>
> ```
> export BS=14; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 \
> examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir \
> --adam_eps 1e-06 --evaluation_strategy=steps --do_train --label_smoothing 0.1 --learning_rate 3e-5 \
> --logging_first_step --logging_steps 500 --max_source_length 1024 --max_target_length 1024 \
> --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS \
> --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 \
> --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length \
> 128 --warmup_steps 50 --max_train_samples 2000 --max_eval_samples 50 --deepspeed \
> tests/deepspeed/ds_config_zero3.json --fp16
> ```
>
> everything else is the same as in the zero-2 post above, and config file is too from transformers @ [61c5063](https://github.com/huggingface/transformers/commit/61c506349134db0a0a2fd6fb2eff8e29a2f84e79) , but `ds_config_zero3.json` needs to be changed as shown above.
@stas00 could you confirm your torch / deepspeed / apex / transformers versions<|||||>Please see: https://github.com/huggingface/transformers/issues/9996#issuecomment-854128050<|||||>@stas00 Thanks so much
May I also ask why you used LR = 3e-5 when HF page itself notes
`T5 models need a slightly higher learning rate than the default one set in the Trainer when using the AdamW optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer.`
I used LR = 1e-3 previously without deep speed and it worked perfectly. I am doing generation, but now when using deep speed loss seems noisy. Anything you recommend?
{'loss': 5.4677, 'learning_rate': 0.0, 'epoch': 0.02}
{'loss': 0.9166, 'learning_rate': 0.0, 'epoch': 0.03}
{'loss': 0.6483, 'learning_rate': 0.0, 'epoch': 0.05}
6%|█████████▍ | 1999/32170 [2:21:21<35:31:11, 4.24s/it][2021-11-16 18:02:53,513] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=1999, lr=[0.0], mom=[[0.9, 0.999]]
[2021-11-16 18:02:53,513] [INFO] [timer.py:157:stop] 0/2000, SamplesPerSec=5.674303086219585
{'loss': 1.1347, 'learning_rate': 0.0, 'epoch': 0.06}
{'loss': 0.6642, 'learning_rate': 0.0, 'epoch': 0.08}
{'loss': 1.0864, 'learning_rate': 0.0, 'epoch': 0.09}
{'loss': 0.4922, 'learning_rate': 0.0, 'epoch': 0.11}
12%|██████████████████▉ | 3999/32170 [4:42:30<33:11:13, 4.24s/it][2021-11-16 20:24:02,592] [INFO] [logging.py:68:log_dist] [Rank 0] step=4000, skipped=3999, lr=[0.0], mom=[[0.9, 0.999]]
[2021-11-16 20:24:02,593] [INFO] [timer.py:157:stop] 0/4000, SamplesPerSec=5.679144072985121
{'loss': 1.6662, 'learning_rate': 0.0, 'epoch': 0.12}
{'loss': 1.4723, 'learning_rate': 0.0, 'epoch': 0.14}
{'loss': 0.5988, 'learning_rate': 0.0, 'epoch': 0.16}
{'loss': 1.1777, 'learning_rate': 0.0, 'epoch': 0.17}
19%|████████████████████████████▎ | 5999/32170 [7:03:38<30:45:21, 4.23s/it][2021-11-16 22:45:10,765] [INFO] [logging.py:68:log_dist] [Rank 0] step=6000, skipped=5999, lr=[0.0], mom=[[0.9, 0.999]]
[2021-11-16 22:45:10,765] [INFO] [timer.py:157:stop] 0/6000, SamplesPerSec=5.68092264980687
{'loss': 0.9843, 'learning_rate': 0.0, 'epoch': 0.19}
{'loss': 0.3419, 'learning_rate': 0.0, 'epoch': 0.2}
{'loss': 1.1882, 'learning_rate': 0.0, 'epoch': 0.22} <|||||>> May I also ask why you used LR = 3e-5 when HF page itself notes
Oh, that was a totally random setting which makes no impact on the need it was testing (memory usage). I use the same scripts to test many models and most of the time I only care about it working and/or fitting into memory, when I do that particular type of work. I train them for like 50 iterations...
Of course, when training for real, I pay attention to the recommended hparam settings. So please don't use any of the lr-like hparams in my examples for fitting memory as a recommendation for real training.
But let's not mix unrelated things in the same thread. If you'd like to discuss a different topic please kindly open a new issue and we can discuss it there.<|||||>@stas00 Hopefully this is relevant. I know you had success on A100 40 GB GPU . I am using deep speed on 4 gpus and I recieve OOM after training for several hours. Any idea as to what I can do here
```
warnings.warn(formatted_warning, FutureWarning)
{'loss': 6.0737, 'learning_rate': 0.0, 'epoch': 0.02}
{'loss': 0.1926, 'learning_rate': 0.0, 'epoch': 0.04}
{'loss': 0.0399, 'learning_rate': 0.0, 'epoch': 0.06}
8%|█████████████ | 1999/24128 [1:52:11<20:35:01, 3.35s/it][2021-11-22 19:51:55,198] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=1999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 19:51:55,199] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=9.546767962244255
{'loss': 0.0749, 'learning_rate': 0.0, 'epoch': 0.08}
{'loss': 0.408, 'learning_rate': 0.0, 'epoch': 0.1}
{'loss': 0.0354, 'learning_rate': 0.0, 'epoch': 0.12}
{'loss': 0.0341, 'learning_rate': 0.0, 'epoch': 0.15}
17%|██████████████████████████ | 3999/24128 [3:43:57<18:47:06, 3.36s/it][2021-11-22 21:43:41,103] [INFO] [logging.py:69:log_dist] [Rank 0] step=4000, skipped=3999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 21:43:41,103] [INFO] [timer.py:181:stop] 0/4000, SamplesPerSec=9.564911481857864
{'loss': 0.0316, 'learning_rate': 0.0, 'epoch': 0.17}
{'loss': 0.0802, 'learning_rate': 0.0, 'epoch': 0.19}
{'loss': 0.035, 'learning_rate': 0.0, 'epoch': 0.21}
{'loss': 0.1423, 'learning_rate': 0.0, 'epoch': 0.23}
25%|███████████████████████████████████████ | 5999/24128 [5:35:43<16:52:01, 3.35s/it][2021-11-22 23:35:26,678] [INFO] [logging.py:69:log_dist] [Rank 0] step=6000, skipped=5999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 23:35:26,678] [INFO] [timer.py:181:stop] 0/6000, SamplesPerSec=9.571203445125207
{'loss': 0.1107, 'learning_rate': 0.0, 'epoch': 0.25}
{'loss': 0.0467, 'learning_rate': 0.0, 'epoch': 0.27}
{'loss': 0.0802, 'learning_rate': 0.0, 'epoch': 0.29}
{'loss': 0.0706, 'learning_rate': 0.0, 'epoch': 0.31}
33%|████████████████████████████████████████████████████ | 7999/24128 [7:27:26<15:00:20, 3.35s/it][2021-11-23 01:27:10,465] [INFO] [logging.py:69:log_dist] [Rank 0] step=8000, skipped=7999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-23 01:27:10,465] [INFO] [timer.py:181:stop] 0/8000, SamplesPerSec=9.574953735862689
{'loss': 0.22, 'learning_rate': 0.0, 'epoch': 0.33}
{'loss': 0.0967, 'learning_rate': 0.0, 'epoch': 0.35}
{'loss': 0.0716, 'learning_rate': 0.0, 'epoch': 0.37}
{'loss': 0.1111, 'learning_rate': 0.0, 'epoch': 0.39}
41%|█████████████████████████████████████████████████████████████████ | 9999/24128 [9:19:10<13:10:15, 3.36s/it][2021-11-23 03:18:53,863] [INFO] [logging.py:69:log_dist] [Rank 0] step=10000, skipped=9999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-23 03:18:53,863] [INFO] [timer.py:181:stop] 0/10000, SamplesPerSec=9.577305314814142
{'loss': 0.2233, 'learning_rate': 0.0, 'epoch': 0.41}
43%|███████████████████████████████████████████████████████████████████▏ | 10397/24128 [9:41:24<12:47:24, 3.35s/it]Traceback (most recent call last):
File "./finetune_trainer.py", line 368, in <module>
main()
File "./finetune_trainer.py", line 305, in main
train_result = trainer.train(
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/transformers/trainer.py", line 1865, in training_step
loss = self.deepspeed.backward(loss)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1708, in backward
self.optimizer.backward(loss)
File "/home/tuhin.chakr/yes/envs/fairseq/lib/python3.8/site-packages/deepspeed/runtime/zero/stage2.py", line 1880, in backward
buf_1 = torch.empty(int(self.reduce_bucket_size),
RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 1; 39.59 GiB total capacity; 36.01 GiB already allocated; 164.94 MiB free; 36.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
My script
```
export BS=8;
PYTHONPATH=../../src
USE_TF=0
deepspeed --num_gpus=4 ./finetune_trainer.py \
--data_dir /home/tuhin.chakr/gpt3/poetrynew \
--output_dir /local/nlp/temp/poetryT5-11B_new \
--model_name_or_path t5-11b \
--do_train \
--task translation \
--max_source_length 64 \
--max_target_length 64 \
--save_strategy=epoch \
--num_train_epochs 1 \
--per_device_train_batch_size $BS \
--adafactor \
--learning_rate 1e-3 \
--deepspeed /home/tuhin.chakr/gpt3/transformers/tests/deepspeed/ds_config_zero2.json \
--fp16
```
My config
```
json = {
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 0
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2.000000e+08,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2.000000e+08,
"contiguous_gradients": true
},
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 8,
"gradient_clipping": 1.0,
"steps_per_print": 2.000000e+03,
"wall_clock_breakdown": false,
"zero_allow_untested_optimizer": true
}
```<|||||>are you monitoring the memory consumption over the duration of the training - is it borderline OOM from the get going or is the memory usage slowly creeping up?
But regardless, you're using only stage-2, and you want stage-3 in this situation. Since if you're not sharding the params, you get only 12 out of 18 bytes sharded per param. Stage-3 is slower than stage-2 since it has to do more work, but if you can't fit into your gpus stage-3 is what you want.
Note that I'm using stage 3 here: https://github.com/huggingface/transformers/issues/9996#issuecomment-856384448<|||||>
retraining again and this is what my gpu looks like<|||||>So this is the state at the beginning of the training, right? Then check it say once in 30min and note the differences - if your application is well written then it shouldn't grow after say a few hundred of iterations, assuming the longest seqlen with widest batch size has been consumed already.
I'm also noticing that you're using a very old version of our examples - `finetune_trainer.py` is very old. So it'd be hard to debug this situation if indeed there a gradual memory leak there. In which case I'd recommend to migrate to the recent version of the software.<|||||>The snapshot I sent you was after 5 hrs of training. I have 7M samples and max seq len I reduced to 64 from 128. So hoping it works this time. Last time it failed around 40% of training. Its at 22% now
Yes If I still can't make it work I will switch to a recent version of software.<|||||>Right, I'm not sure my message is coming across - I'm suggesting to monitor the memory usage through the training.
And that if it OOMs you need to switch to ZeRO-3 and then you should be able to train with a much longer seqlen.
Enabling https://huggingface.co/transformers/performance.html#gradient-checkpointing is another technique to allow for much longer seqlen.<|||||>@stas00 many thanks for your guidance. I could finetune 1 epoch. I converted the model to fp32 and saw the output and noticed it's generating garbled text. Now of course this could be bcz its only 1 epoch. But I trained on 772073 samples. Just to be clear I have a T5 3B model trained on same data but using a different code and it works perfecrly, so assuming my data is perfect
It generated something
`**' thou sa wrt e the in thee wast the the of the world, a man of resea the earthe, the in the all the that of**
`
I am wondering what could be the reason, One thing I suspect is why is `the loss zero`. as you can see below. I just wanted to see as a proof of concept the generated text as it takes around 24 hours to train 1 epoch. Would you recommend finetuning for more epochs or something else
```
{'loss': 6.0737, 'learning_rate': 0.0, 'epoch': 0.02}
{'loss': 0.1926, 'learning_rate': 0.0, 'epoch': 0.04}
{'loss': 0.0399, 'learning_rate': 0.0, 'epoch': 0.06}
8%|█████████████ | 1999/24128 [1:52:11<20:35:01, 3.35s/it][2021-11-22 19:51:55,198] [INFO] [logging.py:69:log_dist] [Rank 0] step=2000, skipped=1999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 19:51:55,199] [INFO] [timer.py:181:stop] 0/2000, SamplesPerSec=9.546767962244255
{'loss': 0.0749, 'learning_rate': 0.0, 'epoch': 0.08}
{'loss': 0.408, 'learning_rate': 0.0, 'epoch': 0.1}
{'loss': 0.0354, 'learning_rate': 0.0, 'epoch': 0.12}
{'loss': 0.0341, 'learning_rate': 0.0, 'epoch': 0.15}
17%|██████████████████████████ | 3999/24128 [3:43:57<18:47:06, 3.36s/it][2021-11-22 21:43:41,103] [INFO] [logging.py:69:log_dist] [Rank 0] step=4000, skipped=3999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 21:43:41,103] [INFO] [timer.py:181:stop] 0/4000, SamplesPerSec=9.564911481857864
{'loss': 0.0316, 'learning_rate': 0.0, 'epoch': 0.17}
{'loss': 0.0802, 'learning_rate': 0.0, 'epoch': 0.19}
{'loss': 0.035, 'learning_rate': 0.0, 'epoch': 0.21}
{'loss': 0.1423, 'learning_rate': 0.0, 'epoch': 0.23}
25%|███████████████████████████████████████ | 5999/24128 [5:35:43<16:52:01, 3.35s/it][2021-11-22 23:35:26,678] [INFO] [logging.py:69:log_dist] [Rank 0] step=6000, skipped=5999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-22 23:35:26,678] [INFO] [timer.py:181:stop] 0/6000, SamplesPerSec=9.571203445125207
{'loss': 0.1107, 'learning_rate': 0.0, 'epoch': 0.25}
{'loss': 0.0467, 'learning_rate': 0.0, 'epoch': 0.27}
{'loss': 0.0802, 'learning_rate': 0.0, 'epoch': 0.29}
{'loss': 0.0706, 'learning_rate': 0.0, 'epoch': 0.31}
33%|████████████████████████████████████████████████████ | 7999/24128 [7:27:26<15:00:20, 3.35s/it][2021-11-23 01:27:10,465] [INFO] [logging.py:69:log_dist] [Rank 0] step=8000, skipped=7999, lr=[0.0, 0.0], mom=[0.0, 0.0]
[2021-11-23 01:27:10,465] [INFO] [timer.py:181:stop] 0/8000, SamplesPerSec=9.574953735862689
{'loss': 0.22, 'learning_rate': 0.0, 'epoch': 0.33}
{'loss': 0.0967, 'learning_rate': 0.0, 'epoch': 0.35}
{'loss': 0.0716, 'learning_rate': 0.0, 'epoch': 0.37}
{'loss': 0.1111, 'learning_rate': 0.0, 'epoch': 0.39}
```<|||||>why is your `'learning_rate': 0.0` ?<|||||>@stas00 thats something I don't understand that. As you can see in my script i mentioned 1e-3
```
My script from transformers repo
export BS=8;
PYTHONPATH=../../src
USE_TF=0
deepspeed --num_gpus=3 ./finetune_trainer.py \
--data_dir /home/tuhin.chakr/gpt3/poetrynew \
--output_dir /local/nlp/temp/poetryT5-11B_new \
--model_name_or_path t5-11b \
--do_train \
--task translation \
--max_source_length 128 \
--max_target_length 128 \
--save_strategy=epoch \
--num_train_epochs 1 \
--per_device_train_batch_size $BS \
--adafactor \
**--learning_rate 1e-3 \**
--deepspeed /home/tuhin.chakr/gpt3/transformers/tests/deepspeed/ds_config_zero2.json \
--fp16
~
My deepspeed config
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"train_batch_size": 24,
"train_micro_batch_size_per_gpu": 8,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
Someone here said the same
https://github.com/microsoft/DeepSpeed/issues/1574<|||||>I'd be happy to debug this with you, but let's first switch to the current example, which is https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py - it should be mostly the same with some args renamed - see the README.md for details https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation
e.g. my staple cmd that I use is:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --evaluation_strategy=steps --do_train --do_eval --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_train_samples 500 --max_eval_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --fp16
```
Additionally, please open a new Issue since this discussion is now taking over this already closed issue, so let's give it a dedicated space. Just don't forget to tag me in the new Issue.
<|||||>> Update on my end: with DeepSpeed 0.3.10 it did run successfully through the night on a full job, successfully training and generating the predictions. Amazing work @stas00 et al.
how did you infer bro?
got something ?<|||||>>
Could you please tell me where can I find the ds_config.json and finetune_trainer.py? Thank you!<|||||>The examples have been renamed and re-organized since the time of this thread, you can find them all here:
https://github.com/huggingface/transformers/tree/main/examples/pytorch
e.g. the translation is now at `examples/pytorch/translation/run_translation.py`
For deepspeed please see:
https://huggingface.co/transformers/master/main_classes/deepspeed.html#deepspeed-trainer-integration
|
transformers | 9,995 | closed | Added Integration testing for Pytorch implementation of DistilBert model from issue #9948' | # Adds Integration testing for Pytorch implementation of DistilBert from issue #9948
*Redid pull request
*My environment wasn't set up right.
I implemented the test as described in the issue linked. I ran the test and it passed. I can extend the tests after confirmation of this current PR. Please let me know what you think. Thank you
Fixes #9948
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 02-04-2021 04:36:00 | 02-04-2021 04:36:00 | @LysandreJik Hey, Thank you for merging my first pull request. Happy to help! That's exactly what happened, took me a second to realize, but it became pretty clear when I read through the make file. |
transformers | 9,994 | closed | 🚀 Faster batch translation with FSMT model | # 🚀 Faster batch translation with FSMT model
Currently, generating translations for multiple inputs at once is very slow using Transformers' `FSMTForConditionalGeneration` implementation. In fact it's about 10x slower than using the original FairSeq library. Can we speed this up by improving the implementation, potentially leaning on the original FairSeq approach?
## Motivation
I'm using FairSeq models for back translation as a way to augment text data. I've implemented this using the original FairSeq model (from PyTorch Hub) and Transformers.
### FairSeq implementation
```python
from typing import List

import torch
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model', tokenizer='moses', bpe='fastbpe').cuda()
de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en.single_model', tokenizer='moses', bpe='fastbpe').cuda()
def back_translate_fairseq(texts: List[str]) -> List[List[str]]:
tokenized_texts = [en2de.encode(text) for text in texts]
back_translations = [set() for _ in range(len(texts))]
# Translate texts to German
tokenized_de_texts = [
[output['tokens'].cpu() for output in batch_output]
for batch_output in en2de.generate(tokenized_texts, beam=2, sampling=True, sampling_topp=0.7)
]
tokenized_de_texts_flat = [t for tt in tokenized_de_texts for t in tt]
# Translate back to English
tokenized_en_texts = [
[output['tokens'].cpu() for output in batch_output]
for batch_output in de2en.generate(tokenized_de_texts_flat, beam=2, sampling=True, sampling_topp=0.8)
]
tokenized_en_texts_flat = [t for tt in tokenized_en_texts for t in tt]
# Decode and deduplicate back-translations and assign to original text indices
for i, t in enumerate(tokenized_en_texts_flat):
back_translations[i // 4].add(de2en.decode(t).lower())
# Remove back translations that are equal to the original text
return [[bt for bt in s if bt != t] for s, t in zip(back_translations, map(str.lower, texts))]
```
### Transformers implementation
```python
from typing import List

from transformers import FSMTForConditionalGeneration, FSMTTokenizer
en2de_model_name = "facebook/wmt19-en-de"
en2de_tokenizer = FSMTTokenizer.from_pretrained(en2de_model_name)
en2de_model = FSMTForConditionalGeneration.from_pretrained(en2de_model_name)
de2en_model_name = "facebook/wmt19-de-en"
de2en_tokenizer = FSMTTokenizer.from_pretrained(de2en_model_name)
de2en_model = FSMTForConditionalGeneration.from_pretrained(de2en_model_name)
def back_translate_transformers(texts: List[str]) -> List[List[str]]:
tokenized_texts = en2de_tokenizer.prepare_seq2seq_batch(texts, return_tensors="pt")
back_translations = [set() for _ in range(len(texts))]
# Translate texts to German and back to English
generate_kwargs = {"num_beams": 1, "do_sample": True, "num_return_sequences": 2}
tokenized_de_texts = en2de_model.generate(tokenized_texts["input_ids"], attention_mask=tokenized_texts["attention_mask"], top_p=0.7, **generate_kwargs)
tokenized_en_texts = de2en_model.generate(tokenized_de_texts, top_p=0.8, **generate_kwargs)
# Decode and deduplicate back-translations and assign to original text indices
for i, t in enumerate(tokenized_en_texts):
back_translations[i // 4].add(de2en_tokenizer.decode(t, skip_special_tokens=True).lower())
# Remove back translations that are empty or equal to the original text
return [[bt for bt in s if bt and bt != t] for s, t in zip(back_translations, map(str.lower, texts))]
```
Both of these functions generate comparable results, but using Transformers it takes **about 10x longer**.
In my use case I need back translations for hundreds of thousands of text snippets, which unfortunately makes the Transformers implementation unfeasible. I'd love to use Transformers though, as it is much easier to install and deploy (as we use Transformers for text classification anyway).
| 02-04-2021 03:49:51 | 02-04-2021 03:49:51 | Hey @itssimon
From a quick look at your code, it seems that the fairseq model is on GPU, but the transformers model is on CPU, which could explain the huge speed difference. Could you try running it on GPU ?
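Something along these lines should do it (a minimal sketch, assuming a single CUDA device; variable names follow the snippet above):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
en2de_model = en2de_model.to(device)
de2en_model = de2en_model.to(device)

# ...and inside back_translate_transformers, move the tokenized inputs to the same device
# before calling generate():
tokenized_texts = {k: v.to(device) for k, v in tokenized_texts.items()}
```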
<|||||>Oh dear, how embarrassing. That's it! Thanks! |
transformers | 9,993 | closed | [trainer] a few fixes | This PR:
- removes `model.to(device)` - it's not needed for DeepSpeed. More importantly, this allows loading models that otherwise won't load, e.g. loading a 45GB (fp32) model onto a 40GB GPU when using DeepSpeed with fp16, since only 22GB of it gets loaded. Currently we load all 45GB right away and nothing works.
- decouples 2 unrelated logical things related to model parallel, which was very confusing in the previous if/else incarnation
- fixes a bug that left a DeepSpeed model wrapped in DDP when it shouldn't be, like a few other bugs of the same kind I created because things just happened to work until they didn't.
This PR enables t5-11b training on 1x 40GB gpu w/ Deepspeed https://github.com/huggingface/transformers/issues/9996
@sgugger | 02-04-2021 03:40:19 | 02-04-2021 03:40:19 | This is breaking sadly: with this change someone using `trainer.model` after instantiating a `Trainer` won't have it on the GPU anymore, which will make code fail. It's also best IMO if an OOM error happens sooner rather than later.
Now for deepspeed I understand why this would be necessary, so we can move the `model.to` in that case. I don't see other cases when this is useful (mixed precision with APEX and AMP keep a copy of the model in full precision)<|||||>oh, that's no problem for now. Let's do it just for deepspeed then. Fairscale might join down the road.
Actually Deepspeed doesn't even need the `.to()` call at all. So it's even simpler.
So basically, skipping `.to()` is needed for all extensions that partition or resize the model: MP and DeepSpeed today, and this will be the case for PP as well.
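Roughly, the guard I have in mind looks like this (pseudo-code sketch, not the exact diff):

```python
# Sketch: let extensions that manage model placement/partitioning themselves
# (DeepSpeed, model parallel, eventually pipeline parallel) skip the early full-model .to()
place_model_on_device = not (self.is_model_parallel or args.deepspeed)
if place_model_on_device:
    model = model.to(args.device)
```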
|
transformers | 9,992 | open | Adversarial/amnesic heads | # 🚀 Feature request
Task heads that backpropagate deliberately reversed gradients to the encoder. A flag requesting this behavior when constructing a task head.
## Motivation
Transfer learning experiments lend themselves to questions about the extent to which two tasks rely on the same information about a word/sentence, and to experiments probing whether and how word encodings contain/correspond to syntax trees, lemmas, frequencies, and other objects of linguistic/psycholinguistic study.
A difficulty is that a pretrained model, without fine-tuning, may already encode certain information too thoroughly and accessibly for intermediate training to make much of a difference. For example, BERT's masked language modeling objective produces word encodings in which syntax information is readily accessible. Intermediate training on a syntax task requires training a task head to extract this information, of course, but it will result in very little reorganization of the encoder itself.
Adversarial training, such as the amnesic probing of Elazar et al. 2020, can avoid this pitfall. Intermediate training can aim to burn particular information *out* of the encodings, and measure how much this impairs trainability of the target task. Strictly reversing the sense of the training data won't do it though; getting all the answers exactly wrong requires just as much domain knowledge as getting them all right does. And randomizing the labels on training data may just result in a feckless task head, one that discards useful information passed to it from the encoder, rather than affecting the encoder itself.
Ideally, then, the task head would be trained toward correctly reproducing gold-standard labels, but would flip all its gradients before backpropagating them to the shared encoder, thus training it not to produce precisely the signals that the task head found most informative. The following work by Cory Shain illustrates flipping gradients in this way (although it's not applied to shared-encoder transfer learning, but rather to development of encoders that disentangle semantics from syntax).
https://docs.google.com/presentation/d/1E89yZ8jXXeSARDLmlksOCJo83QZdNbd7phBrR_dRogg/edit#slide=id.g79452223cd_0_19
https://github.com/coryshain/synsemnet
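For illustration, a gradient-reversal layer of this kind can be written in a few lines of PyTorch (a sketch only; the names and the `lambd` scaling factor are placeholders, not taken from the work cited above):

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and optionally scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The task head is trained normally, but the shared encoder receives reversed gradients.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


# Usage sketch: the adversarial head sees ordinary activations, while the encoder is
# pushed away from producing the signals the head relies on.
# logits = adversarial_head(grad_reverse(encoder_hidden_states))
```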
## Your contribution
I am deeply unfamiliar with pytorch, unfortunately, and utterly ignorant of tensorflow. I can't offer much. | 02-04-2021 02:34:49 | 02-04-2021 02:34:49 | Interesting thread, thank you for posting it! You could also post it on [the forums](https://discuss.huggingface.co) to reach more users! |
transformers | 9,991 | closed | [documentation] non-PR doc editing | Is there a way we could have some of the docs that can be edited other than through PRs?
For example I've been working on these 2 docs:
- https://github.com/huggingface/transformers/issues/9766
- https://github.com/huggingface/transformers/issues/9824
1. So I do a lot of incremental edits and doing that via PRs would be very difficult to do as it's a big work in progress - that's why I started with just an Issue comment
2. it's important that the work in progress is readable, PRs aren't great for that
2. I'd be great if others could collaborate on editing
3. Yet, as these shape up, we want these in the documentation and not a random page somewhere
4. I already run into a problem with git where somehow it switched to an old edition of the comment and won't let me revert to the newer version of the comment.
Perhaps we could have some wiki pages that can be linked into the main menu? Then many can collaborate and there is no need to do frequent PR cycles. Not sure if it's great, since it'd take the user away from the main website?
Or perhaps the source could be wiki but when the docs are built it could pull the .md from the wiki and build it as if it were a normal .md page in the git repo?
I'm totally open to other ideas.
Thank you!
@sgugger | 02-04-2021 00:24:45 | 02-04-2021 00:24:45 | That's tricky. We already have way too many channels between the forums, the blog, the documentation and soon the course so I don't want to add a new one. You can create wiki posts on the forum, so maybe use that for the iterative process where you want some collaboration? We can then link those from the doc if relevant.
Down the road, once such a document is stable it should be converted in a doc page though.<|||||>
> That's tricky. We already have way too many channels between the forums, the blog, the documentation and soon the course so I don't want to add a new one. You can create wiki posts on the forum, so maybe use that for the iterative process where you want some collaboration? We can then link those from the doc if relevant.
Oh, I was thinking not to add a new channel but re-use the available ones - I was just thinking how to link it to the main docs while it's a work in progress.
I'm thinking of a much simpler approach - one of:
1. transformers github wiki - would be limited to hf members - less direct input, but easier to manage
2. forums wiki - would be open to all - but potentially require much more effort to manage
and then linking one of these to the docs website menu - is that possible? and once the doc is strong it can migrate to a real .md doc.
> Down the road, once such a document is stable it should be converted in a doc page though.
That!
<|||||>Hi @stas00 ,
could maybe have a look at: https://hackmd.io/
So you can just edit your markdown/README file, invite other collaborators and when everything is ready you could open a PR for the final submission into Transformers :)<|||||>Thank you, @stefan-it.
It's not so much about where to collaborate on it, but how to potentially do it long term while keeping the doc easily found with all the other transformers docs, while it's a work in progress.
I think the question is simple - @sgugger - would you support linking from the https://huggingface.co/transformers/ to some docs in progress until they are mature enough to import them as a normal doc? Then we can look at what would be the easiest way to collaborate.
Or to keep things on the website, perhaps an iframe that remains on https://huggingface.co/transformers/ but includes the off-site doc? Not asking for anything complicated at all, whatever the easy/quick solution works. This is just an idea.<|||||>stale |
transformers | 9,990 | closed | Implementing the test integration of BertGeneration | # What does this PR do?
this PR aims to fix issue #9947 by implementing an integration test
| 02-03-2021 23:07:04 | 02-03-2021 23:07:04 | Hi @LysandreJik I was wondering does the test will be for both encoder and decoder?
<|||||>Yes, that would be for the best! |
transformers | 9,989 | closed | create LxmertModelIntegrationTest Pytorch | # What does this PR do?
This PR fixes issue #9951 by implementing an integration test for LXMERT.
@LysandreJik
| 02-03-2021 22:57:51 | 02-03-2021 22:57:51 | Hello! It seems this is simply passing the test?<|||||>> Hello! It seems this is simply passing the test?
I made it to claim the issue and work on it. <|||||>@LysandreJik Lxmert requires `visual_feats` and `visual_pos`; could I change the `model.config.visual_feat_dim` to a smaller value like 5 or 10?
Edit: or we could use a `seed` and generate a random tensor with the original `visual_feat_dim`.<|||||>@LysandreJik in this PR I used `np.random.seed` to fix the `visual_feats` and `visual_pos`; otherwise we could load `LxmertModel` from an `LxmertConfig` while changing `visual_feat_dim` to something manageable.
What do you suggest?<|||||>@LysandreJik is there something I can help with here?<|||||>> Could you try to replace the `np.random.rand` by the `ids_tensor`
Well, `visual_feats` is torch.float, while `ids_tensor` returns int32.
@LysandreJik I think another alternative is to make `model.config.visual_feat_dim` smaller, so we can have a fixed `visual_feats` (see the sketch below).
What do you think?
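A minimal sketch of what I have in mind (shapes are illustrative; assumes `model` and `input_ids` are already set up):

```python
import torch

torch.manual_seed(0)  # fix the seed so the float inputs are reproducible across runs
batch_size, num_visual_features = 1, 10
visual_feats = torch.rand(batch_size, num_visual_features, model.config.visual_feat_dim)
visual_pos = torch.rand(batch_size, num_visual_features, model.config.visual_pos_dim)

outputs = model(input_ids=input_ids, visual_feats=visual_feats, visual_pos=visual_pos)
```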
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik I did raise an issue to have a context manager that will fix a seed #10143, do you think it will be usefull here?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @sadakmed, sorry for taking a while to merge this. Do you mind rebasing on `master`, and running `make fixup` at the root of your clone? There's an issue with the code quality.
Will merge this right after.<|||||>Thanks a lot for your contribution @sadakmed! |
transformers | 9,988 | closed | Add head_mask and decoder_head_mask to TF LED | This PR implements `head_mask` and `decoder_head_mask` for TF LED (and Longformer as there's a copy dependency) and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9639).
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewers: @jplu @patrickvonplaten @LysandreJik @sgugger | 02-03-2021 21:24:26 | 02-03-2021 21:24:26 | |
transformers | 9,987 | closed | Add `from_slow` in fast tokenizers build and fixes some bugs | # What does this PR do?
This PR adds an argument to the initialization of the `PreTrainedTokenizerFast` to force the conversion from a slow tokenizer. This will be useful to help users re-build the `tokenizer.json` file for some models where we can't update faulty ones right now without breaking backward compatibility (see #9637).
In passing it fixes a few bugs:
- wrong formatting for the documentation
- the fast sentencepiece tokenizers don't have an `sp_model` attribute, so remove the documentation for that
- BarthezTokenizerFast was not registered properly in the autotokenizers, so `AutoTokenizer` was not finding it | 02-03-2021 21:01:47 | 02-03-2021 21:01:47 | |
transformers | 9,986 | closed | How to train on shards of bookcorpus + wikipedia + openwebtext on 1 TB disk. | # 🚀 Feature request
Hello, I am trying to pretrain a custom model from scratch on bookcorpus + wikipedia + openwebtext, but I only have a 1TB disk. I tried to merge 20% of each one and then resume training on another 20% of each, but I am having issues with the learning rate scheduler. If I hardcode max_steps to the total size of the dataset (100% of everything concatenated), it does several passes over the 20%, the same as setting 5 epochs. But then I have to deal with lots of details, like LambdaLR (which is pure PyTorch), to set the epoch, the current step and all the scheduler state. It's a bit of a pain!
Any suggestion?
## Motivation
I want to train a linear attention model with some modifications from scratch.
## Your contribution
The idea on how to train medium models with big datasets and regular hardware.
| 02-03-2021 19:42:40 | 02-03-2021 19:42:40 | Training with constant_warmup would be an option, since it does not decay the learning rate with respect to dataset size. But I am a bit afraid of ending up with a poorly trained model after 72H of training.<|||||>Closed since the new `dataset.set_transform()` lazy loading landed. Thanks!
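For anyone finding this later, a minimal sketch of that lazy tokenization with the `datasets` library (illustrative only; the tokenizer, dataset and column name are placeholders):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def encode(batch):
    # Applied on-the-fly only to the rows being accessed, so nothing extra is materialized on disk.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = load_dataset("openwebtext", split="train")
dataset.set_transform(encode)
print(dataset[0])  # tokenized lazily at access time
```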
transformers | 9,985 | closed | Loss function inputs for DistilBertForTokenClassification-like model using DistilBertModel | I want to fine tune my `DistilBertModel` just like `DistilBertForTokenClassification` for an NER task, using nn.Module and building the classifier on top myself.
But the problem is that I do not understand how to calculate the loss. The [official tutorial](https://huggingface.co/transformers/custom_datasets.html) only explains this for sequence classification, where labels are at the sequence level. But token classification is different!
I am trying to do something like
```python
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
    batch_counts += 1
    # Load batch to GPU
    # b_input_ids, b_attn_mask, b_labels = tuple(t.to(device) for t in batch)
    b_input_ids, b_attn_mask, b_labels = \
        batch['input_ids'].to(device), batch['attention_mask'].to(device), batch['labels'].to(device)
    # Zero out any previously calculated gradients
    model.zero_grad()
    # Perform a forward pass. This will return logits.
    logits = model(b_input_ids, b_attn_mask)
    # Compute loss and accumulate the loss values
    print('[DEBUG]', logits.shape, b_labels.shape)
    loss = loss_fn(logits, b_labels)
```
The last line:
> loss = loss_fn(logits, b_labels)
will definitely raise an error.
I don't know what the expected labels should look like, and the labels even contain extra `-100` values instead of indices.
Full code(fairly straightforward with comments) : https://colab.research.google.com/drive/1FWPEV_5eOhveiT2AQyuSYm1Ka1pgeY2f?usp=sharing | 02-03-2021 19:23:16 | 02-03-2021 19:23:16 | Hello! Have you taken a look at how we compute the loss in the [DistilbertForTokenClassificationModel](https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py#L807-L819)? If you pass the `labels` to the model, your loss will get computed automatically.
If you want to compute your loss yourself, I would advise to copy/paste the loss computation as shown here and adapt it to your own loss!<|||||>Hey @LysandreJik thank you for helping me out. I've implemented it and it is working perfectly fine.
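For anyone else reading, the masked loss I adapted from the linked code looks roughly like this (a sketch; variable names follow my training loop above, and `num_labels` is the size of my tag set):

```python
import torch
from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()  # ignore_index is -100 by default, matching the padded labels
active_loss = b_attn_mask.view(-1) == 1
active_logits = logits.view(-1, num_labels)
active_labels = torch.where(
    active_loss, b_labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(b_labels)
)
loss = loss_fct(active_logits, active_labels)
```

Training results below: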
```
Epoch | Batch | Train Loss | Val Loss | Val Acc | Elapsed
----------------------------------------------------------------------
1 | 20 | 0.000000 | - | - | 273.43
1 | 40 | 0.000000 | - | - | 262.14
1 | 60 | 0.000000 | - | - | 258.93
1 | 80 | 0.000000 | - | - | 266.20
1 | 84 | 0.000000 | - | - | 50.22
----------------------------------------------------------------------
1 | - | 0.000000 | 0.299704 | 19.37 | 1201.87
----------------------------------------------------------------------
Epoch | Batch | Train Loss | Val Loss | Val Acc | Elapsed
----------------------------------------------------------------------
2 | 20 | 0.000000 | - | - | 273.85
2 | 40 | 0.000000 | - | - | 264.77
2 | 60 | 0.000000 | - | - | 263.98
2 | 80 | 0.000000 | - | - | 263.12
2 | 84 | 0.000000 | - | - | 50.64
----------------------------------------------------------------------
2 | - | 0.000000 | 0.230533 | 19.39 | 1207.72
----------------------------------------------------------------------
```
Notebook: https://colab.research.google.com/drive/1FWPEV_5eOhveiT2AQyuSYm1Ka1pgeY2f?usp=sharing
What I noticed is
1. It is taking too long even in colab. Is it usual? ( `1207.72/60 = 20mins`)
2. Accuracy (sum/total even if there will be more `O` tags) is not improving that much on [wnut17train.conll](http://noisy-text.github.io/2017/files/wnut17train.conll). Is there something I might be doing wrong?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>> If you think this still needs to be addressed please comment on this thread.
Yeah, need to be.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,984 | closed | [Proposal] Adding new `encoder_no_repeat_ngram_size` to `generate`. | # What does this PR do?
Blenderbot results seemed off compared to original ParlAI script:
`https://parl.ai/projects/recipes/`. Notably the model seems
to repeat a lot what was said during the conversation.
The actual problem was that ParlAI's `no_repeat_ngram_size` applies to the `encoder_input_ids`,
while HF's `no_repeat_ngram_size` applies to the previously generated ids (within the decoder).
Blenderbot's conversation history lives in the `encoder` input, which explains why HF's
implementation produced the repetitions.
This fix focused on Blenderbot (*not* the small variant) and adds tests for it,
because the two are quite different in configuration.
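For illustration, the new argument is used at generation time roughly like this (a sketch; the checkpoint and the other kwargs are placeholders):

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer(["Hello, how are you today?"], return_tensors="pt")
# Ban 3-grams that already appear in the *encoder* input (i.e. the conversation history),
# instead of only blocking repeats within the decoder's own output.
reply_ids = model.generate(**inputs, encoder_no_repeat_ngram_size=3, num_beams=10)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```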
This change includes:
- Adding a new EncoderNoRepeatLogitProcessor.
- Adding 1 new arg to `generate` (`encoder_no_repeat_ngram_size`)
- Adding 1 new config parameter `encoder_no_repeat_ngram_size`.
- Adding 2 tests, one for the pipeline (high level, inputs exhibited
repeat behavior, one low level for EncoderNoRepeatLogitProcessor)
- Factored NoRepeatLogitProcessor so that logic could be reused.
Further work:
- Blenderbot conversational pipeline still does not behave correctly,
as the way input is prepared within the pipeline is still incorrect
(follow-up PR)
- Blenderbot allows the bot to have personas, which is done by
prepending "your personna: XXXX" to the input; this could be explored
too in a follow-up PR.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-03-2021 18:13:41 | 02-03-2021 18:13:41 | Before merging, please take a look at the failing tests.<|||||>> LGTM, this is indeed a clean fix. Do we know why our BlenderBot still behaves incorrectly compared to ParlAI?
>
I need to look deeper. By default they use FP16, and the final scores are still different by an order of magnitude (I expect they correspond to different things), but when looking at the full beam searches they still look similar.
I've done step-by-step debugging and the scores within the beam search are super close for a lot of steps.
This fix addresses the major drift, which would otherwise occur pretty fast.
> Regarding personas, this could probably be handled directly in the `ConversationalPipeline`?
Yes exactly my opinion.
<|||||>@sgugger Can you take a look please? <|||||>@LysandreJik figured it out. It's because of some logic within ConversationPipeline which is invalid for `blenderbot`.
Coming up with a follow-up PR. |
transformers | 9,983 | closed | Added integration tests for Pytorch implementation of the FlauBert model | Added integration tests for Pytorch implementation of the FlauBert model
Fixes #9950
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik | 02-03-2021 17:41:59 | 02-03-2021 17:41:59 | @LysandreJik Need your help here. Not sure why the test cases are failing.<|||||>Opening a new PR. |
transformers | 9,982 | closed | Added integration tests for Pytorch implementation of the ELECTRA model | Added integration tests for Pytorch implementation of the ELECTRA model
Fixes #9949
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik | 02-03-2021 17:03:35 | 02-03-2021 17:03:35 | @LysandreJik Need your help here. Not sure why the test cases are failing.<|||||>Closing this PR due to a git conflict. |
transformers | 9,981 | closed | Can't make sense of encoding for a downloadable AutoTokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
### Who can help
Probably @n1t0 or @LysandreJik (AutoTokenizer)
## To reproduce
Steps to reproduce the behavior:
1. Boot up an AutoTokenizer using `AutoTokenizer.from_pretrained("sberbank-ai/rugpt3small_based_on_gpt2")`
2. Execute `tokenizer.get_vocab()`
The vocabulary contains gibberish instead of Russian tokens (yet the model works fine):

How do I decode and read the actual tokens? | 02-03-2021 16:40:21 | 02-03-2021 16:40:21 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,980 | closed | Added integration tests for Pytorch implementation of the ALBERT model | Added integration tests for Pytorch implementation of the ALBERT model
Fixes #9945
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik | 02-03-2021 16:21:49 | 02-03-2021 16:21:49 | |
transformers | 9,979 | closed | Added integration tests for TensorFlow implementation of the MPNet model | Added integration tests for TensorFlow implementation of the MPNet model
Fixes #9956
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik | 02-03-2021 15:21:26 | 02-03-2021 15:21:26 | |
transformers | 9,978 | closed | Added integration tests for TensorFlow implementation of the mobileBERT | Added integration tests for TensorFlow implementation of the MobileBERT model
Fixes #9955
Before submitting
This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Did you read the contributor guideline,
Pull Request section?
Was this discussed/approved via a Github issue or the forum? Please add a link
to it if that's the case.
Did you make sure to update the documentation with your changes? Here are the
documentation guidelines, and
here are tips on formatting docstrings.
Did you write any new necessary tests?
@LysandreJik | 02-03-2021 14:57:50 | 02-03-2021 14:57:50 | |
transformers | 9,977 | closed | [run_clm.py] fix getting extention | # What does this PR do?
Fixes #9927 | 02-03-2021 14:23:02 | 02-03-2021 14:23:02 | |
transformers | 9,976 | closed | Added integration tests for TensorFlow implementation of the ALBERT model | Added integration tests for TensorFlow implementation of the ALBERT model
Fixes #9946
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik | 02-03-2021 14:17:01 | 02-03-2021 14:17:01 | |
transformers | 9,975 | closed | TF DistilBERT integration tests | Added integration tests for TensorFlow implementation of the DistilBERT model
Fixes #9953
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
Who can review?
@LysandreJik
| 02-03-2021 14:15:02 | 02-03-2021 14:15:02 | |
transformers | 9,974 | closed | Make use of attention_mask in Trainer's compute_metrics | # 🚀 Feature request
In Trainer's training loop, the `compute_metrics` function takes a `EvalPrediction(predictions=preds, label_ids=label_ids)` object as input.
It should also be able to use `inputs['attention_mask']` to mask the irrelevant predictions (those for which attention_mask is 0).
## Motivation
I am working on an NER task and find myself having no way to filter out irrelevant predictions.
In the following example, `raw_pred` leads to an accuracy of 43%, while `masked_pred` gives me almost 77% (the value 0 in attention_mask has been cast to `nan` before being applied to `raw_pred`).
**ground_truth**
```python
tensor([[0, 5, 5, 5, 0, 6, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 2, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0],
[0, 5, 6, 6, 6, 6, 6, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 3, 3, 4, 0, 3, 3, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
```
**raw_pred**
```python
tensor([[0, 2, 5, 5, 6, 4, 1, 2, 3, 1, 3, 3, 2, 1, 2, 1, 2, 6, 3],
[2, 1, 2, 2, 2, 2, 2, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 2],
[4, 4, 4, 0, 0, 0, 0, 4, 0, 0, 0, 4, 4, 4, 4, 4, 4, 4, 0],
[0, 6, 6, 4, 4, 5, 6, 6, 4, 0, 3, 3, 3, 2, 3, 3, 1, 2, 2],
[0, 0, 0, 0, 5, 4, 5, 5, 5, 6, 4, 0, 2, 1, 2, 1, 1, 3, 3]])
```
**attention_mask**
```python
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]])
```
**masked_pred**
```python
tensor([[0., 2., 5., 5., 6., 4., 1., 2., nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,nan],
[2., 1., 2., 2., 2., 2., 2., 0., 1., 2., 2., 2., 0., 0., 0., 0., 0., 0.,nan],
[4., 4., 4., 0., 0., 0., 0., 4., 0., 0., 0., 4., 4., 4., 4., 4., 4., 4., 0.],
[0., 6., 6., 4., 4., 5., 6., 6., 4., 0., nan, nan, nan, nan, nan, nan, nan, nan,nan],
[0., 0., 0., 0., 5., 4., 5., 5., 5., 6., 4., 0., nan, nan, nan, nan, nan, nan,nan]])
```
| 02-03-2021 10:15:35 | 02-03-2021 10:15:35 | For this use case, it's best to subclass the `Trainer` and override the `evaluate` method. An example of this is given for question-answering [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/trainer_qa.py) where we need to post-process the predictions using the original dataset (a bit like your use case since the attention masks will be in the dataset). The predictions returned by the Trainer are in the same order as the elements of your dataset, so you're safe with that.<|||||>Thanks a lot Sylvain.
Actually I realised that the `predictions.label_ids` coming from the training loop were padded with the value `-100`. By using the same padding value in my preprocessing, I can recover the `attention_mask` by applying a threshold.
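Concretely, something like this does the trick in `compute_metrics` (sketch):

```python
import numpy as np

def compute_metrics(p):
    preds = np.argmax(p.predictions, axis=-1)
    mask = p.label_ids != -100  # padded positions were labelled -100 during preprocessing
    accuracy = (preds[mask] == p.label_ids[mask]).mean()
    return {"accuracy": accuracy}
```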
Cheers <|||||>Oh even easier then! Can we close the issue? |
transformers | 9,973 | closed | attention_mask -> encoder_attention_mask in cross attn of BERT-like models | 02-03-2021 10:08:32 | 02-03-2021 10:08:32 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>Reopened as this might still be in the works.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@abhi1thakur - should we still try to merge this PR? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
|
transformers | 9,972 | closed | Fix GroupedLinearLayer in TF ConvBERT | Fixing an issue with `call` function in `GroupedLinearLayer` of ConvBERT | 02-03-2021 09:34:30 | 02-03-2021 09:34:30 | |
transformers | 9,971 | closed | DebertaForSequenceClassification documents examples report RuntimeError: Index tensor must have the same number of dimensions as input tensor | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: macos
- Python version: 3.8.3
- PyTorch version (GPU?): no
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Deberta
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import DebertaTokenizer, DebertaForSequenceClassification
import torch

tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
model = DebertaForSequenceClassification.from_pretrained('microsoft/deberta-base')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
print(loss)
print(logits)
```

Exception like below:

```
Traceback (most recent call last):
  File "/Users/admin/git/transformers/myexample4/deberta_MLM.py", line 65, in <module>
    sequence_classify()
  File "/Users/admin/git/transformers/myexample4/deberta_MLM.py", line 45, in sequence_classify
    outputs = model(**inputs, labels=labels)
  File "/Users/admin/virtulenv/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/admin/git/transformers/src/transformers/models/deberta/modeling_deberta.py", line 1169, in forward
    labels = torch.gather(labels, 0, label_index.view(-1))
RuntimeError: Index tensor must have the same number of dimensions as input tensor
```

Process finished with exit code 1
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 02-03-2021 08:18:30 | 02-03-2021 08:18:30 | You shouldn't unsqueeze your labels, because the `labels` should just be a tensor of shape `(batch_size,)`. |
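i.e., for the snippet above something like this should work (sketch):

```python
labels = torch.tensor([1])  # shape (batch_size,), no unsqueeze
outputs = model(**inputs, labels=labels)
```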
transformers | 9,970 | closed | [research proj] [lxmert] remove bleach dependency | github reports `bleach==3.1.5` to have a vulnerability and it's not really used anywhere in the code, and because it has a fixed version set that is vulnerable, so just as well remove it completely from deps.
https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/bleach/open
@LysandreJik, @sgugger, @patrickvonplaten | 02-03-2021 05:53:00 | 02-03-2021 05:53:00 | |
transformers | 9,969 | closed | fix steps_in_epoch variable in trainer when using max_steps | # What does this PR do?
This PR fixes the calculation of `steps_in_epoch` in `trainer.py`.
The 'step' in `steps_in_epoch` means one backward pass.
The 'step' in `max_steps` means one parameter update (taking gradient accumulation into account).
This bug does not affect the training process; it just makes the logging info weird.
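To make the distinction concrete (illustrative pseudo-code, not the exact patch):

```python
# one "step" of steps_in_epoch == one forward/backward pass over a micro-batch
steps_in_epoch = len(train_dataloader)

# one "step" of max_steps == one optimizer update (i.e. after gradient accumulation)
num_update_steps_per_epoch = len(train_dataloader) // gradient_accumulation_steps

# so when the run is capped by max_steps, the per-epoch count used for logging
# has to be converted back into backward-pass units, e.g.:
# steps_in_epoch = max_steps * gradient_accumulation_steps
```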
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Maybe @sgugger will be more interested in this
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-03-2021 03:31:10 | 02-03-2021 03:31:10 | And remove a repeated sentence in README |
transformers | 9,968 | closed | Disk memory management | I am wondering if you guys could add support for disk memory management when running large transformer models. At least when running on my laptop with limited DRAM, it is not feasible to fully materialize some of the larger models (T5-3b e.g. or even T5-large) in DRAM, especially if there are other memory intensive tasks running (like the IDE). I'm wondering if it's possible for the Huggingface library to not materialize the entire model in DRAM as a Python object for these larger models and instead re-materialize them layer by layer from the disk. | 02-02-2021 23:33:03 | 02-02-2021 23:33:03 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,967 | closed | Added an integration test for the Pytorch implementation of the DistilBERT model from issue #9948 |
# Adds Integration testing for pytorch implementation of DistilBert from issue #9948
I implemented the test as described in the issue linked. I ran the test and it passed. I can extend the tests after confirmation of this current PR. Please let me know what you think. Thank you
Fixes #9948
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 02-02-2021 23:21:42 | 02-02-2021 23:21:42 | Hello! Why did you close your branch? The integration test looks good, you only need to run `make fixup` at the root of your clone to apply the quality requirements.<|||||>@LysandreJik Hi, sorry I was getting an error when i ran 'make fixup' and I was trying to figure it out. Ill finish it up tonight, unless you know whats wrong? Thank you for responding.
```
File "utils/get_modified_files.py", line 28
modified_files = subprocess.check_output(f"git diff --name-only {fork_point_sha}".split()).decode("utf-8").split()
^
SyntaxError: invalid syntax
No library .py files were modified
File "setup.py", line 192
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
^
SyntaxError: invalid syntax
``` |
transformers | 9,966 | closed | Bump bleach from 3.1.5 to 3.3.0 in /examples/research_projects/lxmert | Bumps [bleach](https://github.com/mozilla/bleach) from 3.1.5 to 3.3.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/mozilla/bleach/blob/master/CHANGES">bleach's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.3.0 (February 1st, 2021)</h2>
<p><strong>Backwards incompatible changes</strong></p>
<ul>
<li>clean escapes HTML comments even when strip_comments=False</li>
</ul>
<p><strong>Security fixes</strong></p>
<ul>
<li>Fix bug 1621692 / GHSA-m6xf-fq7q-8743. See the advisory for details.</li>
</ul>
<p><strong>Features</strong></p>
<p>None</p>
<p><strong>Bug fixes</strong></p>
<p>None</p>
<h2>Version 3.2.3 (January 26th, 2021)</h2>
<p><strong>Security fixes</strong></p>
<p>None</p>
<p><strong>Features</strong></p>
<p>None</p>
<p><strong>Bug fixes</strong></p>
<ul>
<li>fix clean and linkify raising ValueErrors for certain inputs. Thank you <a href="https://github.com/Google-Autofuzz"><code>@Google-Autofuzz</code></a>.</li>
</ul>
<h2>Version 3.2.2 (January 20th, 2021)</h2>
<p><strong>Security fixes</strong></p>
<p>None</p>
<p><strong>Features</strong></p>
<ul>
<li>Migrate CI to Github Actions. Thank you <a href="https://github.com/hugovk"><code>@hugovk</code></a>.</li>
</ul>
<p><strong>Bug fixes</strong></p>
<ul>
<li>fix linkify raising an IndexError on certain inputs. Thank you <a href="https://github.com/Google-Autofuzz"><code>@Google-Autofuzz</code></a>.</li>
</ul>
<p>Version 3.2.1 (September 18th, 2020)</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/mozilla/bleach/commit/79b7a3c5e56a09d1d323a5006afa59b56162eb13"><code>79b7a3c</code></a> Merge pull request from GHSA-vv2x-vrpj-qqpq</li>
<li><a href="https://github.com/mozilla/bleach/commit/842fcb4a05e59d9a22dafb8c51865ee79d753c03"><code>842fcb4</code></a> Update for v3.3.0 release</li>
<li><a href="https://github.com/mozilla/bleach/commit/1334134d34397966a7f7cfebd38639e9ba2c680e"><code>1334134</code></a> sanitizer: escape HTML comments</li>
<li><a href="https://github.com/mozilla/bleach/commit/c045a8b2a02bfb77bb9cacd5d3e5926c056074d2"><code>c045a8b</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/mozilla/bleach/issues/581">#581</a> from mozilla/nit-fixes</li>
<li><a href="https://github.com/mozilla/bleach/commit/491abb06ce89012d852f4c5ab3aff8f572532611"><code>491abb0</code></a> fix typo s/vnedoring/vendoring/</li>
<li><a href="https://github.com/mozilla/bleach/commit/10b1c5dda8ebceffce1d8f7d66d4b309b4f8c0cf"><code>10b1c5d</code></a> vendor: add html5lib-1.1.dist-info/REQUESTED</li>
<li><a href="https://github.com/mozilla/bleach/commit/cd838c3b527021f2780d77718488fa03d81f08e3"><code>cd838c3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/mozilla/bleach/issues/579">#579</a> from mozilla/validate-convert-entity-code-points</li>
<li><a href="https://github.com/mozilla/bleach/commit/612b8080ada0fba45f0575bfcd4f3a0bda7bfaca"><code>612b808</code></a> Update for v3.2.3 release</li>
<li><a href="https://github.com/mozilla/bleach/commit/6879f6a67058c0d5977a8aa580b6338c9d34ff0e"><code>6879f6a</code></a> html5lib_shim: validate unicode points for convert_entity</li>
<li><a href="https://github.com/mozilla/bleach/commit/90cb80be961aaf650ebc65b2ba2b789a2e9b129f"><code>90cb80b</code></a> Update for v3.2.2 release</li>
<li>Additional commits viewable in <a href="https://github.com/mozilla/bleach/compare/v3.1.5...v3.3.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 02-02-2021 23:10:51 | 02-02-2021 23:10:51 | Looks like bleach is no longer a dependency, so this is no longer needed. |
transformers | 9,965 | open | [trainer] new in pytorch: `torch.optim._multi_tensor` faster optimizers | Back in September pytorch introduced `torch.optim._multi_tensor` https://github.com/pytorch/pytorch/pull/43507 which should be much more efficient for situations with lots of small feature tensors (`transformers`) and thus should show an appreciable speed up in training. If someone is interested in the progress of this project here is the stack to track: https://github.com/pytorch/pytorch/pull/48223
This feature is currently at an alpha stage, so users can try it out by simply replacing `torch.optim` with `torch.optim._multi_tensor` in the HF Trainer or their own trainer.
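As a rough illustration, the swap in a custom training loop looks like the sketch below (the `_multi_tensor` namespace is private and experimental, so treat the assumption that it mirrors the `torch.optim` API with care):
```python
import torch
from torch.optim._multi_tensor import AdamW  # experimental drop-in for torch.optim.AdamW

model = torch.nn.Linear(10, 10)
optimizer = AdamW(model.parameters(), lr=3e-5)

for _ in range(3):
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```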
Eventually it'll replace `torch.optim` so there is nothing that we need to do otherwise.
@blefaudeux who alerted me to this improvement suggested it should have good speed ups for the DDP/Sharded DDP training.
If resources allow it'd be good to run some benchmarks. Please feel free to beat me to it.
Thanks to @blefaudeux for the heads up, and @izdeby for working on this enhancement and clarifying where things are at.
heads up to: @sgugger, @patrickvonplaten - nothing else that needs to be done. | 02-02-2021 19:02:53 | 02-02-2021 19:02:53 | I did a quick benchmark with `--sharded_ddp --fp16` and with just `--fp16`, and there is no visible difference. Perhaps it is more visible in a different kind of training/model combination.
Testing HF `AdamW` vs. `torch.optim._multi_tensor.AdamW`
```
# benchmark with just --fp16
# baseline HF `AdamW`
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation_en_to_ro --warmup_steps 500 --n_train 20000 --fp16
{'train_runtime': 226.5618, 'train_samples_per_second': 2.759, 'epoch': 1.0}
# w/ torch.optim._multi_tensor.AdamW
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation_en_to_ro --warmup_steps 500 --n_train 20000 --fp16
{'train_runtime': 226.1715, 'train_samples_per_second': 2.763, 'epoch': 1.0}
```
The change I did was:
```
--- a/examples/seq2seq/seq2seq_trainer.py
+++ b/examples/seq2seq/seq2seq_trainer.py
@@ -24,7 +24,6 @@ from transformers.integrations import is_fairscale_available
from transformers.models.fsmt.configuration_fsmt import FSMTConfig
from transformers.optimization import (
Adafactor,
- AdamW,
get_constant_schedule,
get_constant_schedule_with_warmup,
get_cosine_schedule_with_warmup,
@@ -32,6 +31,7 @@ from transformers.optimization import (
get_linear_schedule_with_warmup,
get_polynomial_decay_schedule_with_warmup,
)
+from torch.optim._multi_tensor import AdamW
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.training_args import ParallelMode
```
and this is from pytorch-nightly from today.
<|||||>you must have a really strange bottleneck in that test, neither the latest fairscale nor these are changing anything ? These optimizers are measurably faster in isolation, and sure enough we see a difference in fairscale CI, even on a dummy job / small model ([see for instance, two last jobs](https://app.circleci.com/pipelines/github/facebookresearch/fairscale/1522/workflows/e95cd0af-9582-4021-8176-beafa306f147/jobs/7130))<|||||>testing with the same command, I see a vastly varying throughput depending on `num_train_epochs`, which seems a bit strange to me<|||||>To share with others, @blefaudeux and his team made speed improvements in fairscale (master) recently, which should have been quite visible, but a few days ago we tested this same script with `--sharded_ddp` and saw no improvement whatsoever. So something odd is going on.<|||||>I will leave this issue open for now as an incentive to profile this script and identify the bottleneck.<|||||>@stas00 Do you think this should be revisited given the [discussion](https://github.com/pytorch/pytorch/issues/71274) in upstream PyTorch?<|||||>Yes, I was just about to revisit it.
edit: I thought you might have wanted to work on that, but the pytorch team asks to run a profiler on it and all, so I probably will look into testing it out again.
--- original comment ---
Do you want to take a lead on this experiment, @jaketae?
The new `--optim` HF Trainer argument just got merged, so you can quickly implement `--optim adamw_torch_multi_tensor` in the same way as `--optim adamw`.
You can use this tool for benchmarking https://github.com/huggingface/transformers/pull/14934 if it helps. I think it's pretty stable now, I will propose to PR it.
|
transformers | 9,964 | closed | Add head_mask, decoder_head_mask, cross_head_mask to ProphetNet | This PR implements `head_mask`, `decoder_head_mask` and `cross_head_mask` for ProphetNet (and Longformer as there's a copy dependency) and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9569).
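For context, a hypothetical usage sketch of the arguments this PR adds (the checkpoint name is real, but the config attribute names and the exact masking semantics are my assumptions, not taken from this PR):
```python
import torch
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
decoder_input_ids = tokenizer("Studies show", return_tensors="pt").input_ids

# 1.0 keeps a head, 0.0 masks it; shape (num_layers, num_heads) — attribute names assumed
head_mask = torch.ones(model.config.num_encoder_layers, model.config.num_encoder_attention_heads)
head_mask[0, 0] = 0.0  # mask head 0 of the first encoder layer

outputs = model(**inputs, decoder_input_ids=decoder_input_ids, head_mask=head_mask)
```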
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewers: @patrickvonplaten | 02-02-2021 18:54:19 | 02-02-2021 18:54:19 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>Reopened as this might still be in the works.<|||||>This PR is required for #10605.
Also, it is necessary to rebase this branch to the current `master` [As a lot of changes have been done to the repo, there are some conflicts I'm gonna handle asap.].<|||||>Hi @LysandreJik - I fixed `cross_head_mask` for this `ProphetNetModel`. At this moment, there is an error regarding the `test_forward_signature` and there is likely to be a problem with a template. These issues should be then resolved in #10605 which takes care of `cross_head_mask` for all other encoder-decoder models which have already had `head_mask` and `decoder_head_mask` merged into the master.<|||||>Update: #10605 now passes all the tests. (@LysandreJik)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Super sorry for being incredibly slow here @stancld ! I think we can actually merge this if it passes all the tests :-)<|||||>@patrickvonplaten No worries, it's completely okay! :) I rebase this branch and now all the tests have passed. |
transformers | 9,963 | closed | Model Save/Load Fails for Hadoop File Server | ## Environment info
- `transformers` version: 4.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@n1t0, @LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [X] the official example scripts:
```python
config = pipeline.get_common_model_file(model_name, Constants.CONFIG)
model = AutoModelForSequenceClassification.from_pretrained(config=config, pretrained_model_name_or_path='http://192.168.0.61:50070/webhdfs/v1/user/root/NLPEngine/models/bert-base-uncased/pytorch_model.bin?op=OPEN')
```
* [ ] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset:
A Conversational (Multi-Turn Dialog) Dataset for Task of Knowledge Selection. Dataset is raising no issues.
## To reproduce
Steps to reproduce the behavior:
1. Install Hadoop File Server
2. Setup WebHDFS to use cURL commands to load/save files on HDFS or directly use hadoop for that. Just make sure that all your servers are listed in your system's hosts configurations.
3. Place bert-base-uncased model on Hadoop (anywhere).
4. Try and access it from the code mentioned above.
## Expected behavior
The model should be loaded from the file, as it would be loaded locally or from a server that returns an E-Tag, but Hadoop is not configured/built to return E-Tags. It first returns a temporary-redirect URL, and the actual object is then retrieved from one of the servers in its cluster(s).
If I turn off the E-Tag validation in the source code, then it starts working perfectly, but as of now, that validation is part of the source code, and that's what is causing this code to crash.
Here is the change I made to get it to work (in my copy of the library's code):
```python
File "C:\Users\rsiddiqui\Anaconda3\Lib\site-packages\transformers\file_utils.py", line 1182, in get_from_cache
etag = ''
File "C:\Users\rsiddiqui\Anaconda3\Lib\site-packages\transformers\file_utils.py", line 1187, in get_from_cache
etag = r.headers.get("X-Linked-Etag", '') or r.headers.get("ETag", '')
```
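A possible workaround while the E-Tag requirement stands is to copy the files out of HDFS first and then load from a local directory (illustrative only; the exact file list for the checkpoint is an assumption):
```python
import pathlib
import requests
from transformers import AutoModelForSequenceClassification

local_dir = pathlib.Path("bert-base-uncased-local")
local_dir.mkdir(exist_ok=True)
base = "http://192.168.0.61:50070/webhdfs/v1/user/root/NLPEngine/models/bert-base-uncased"

# WebHDFS answers OPEN with a temporary redirect; requests follows it by default
for name in ("config.json", "pytorch_model.bin", "vocab.txt"):
    r = requests.get(f"{base}/{name}?op=OPEN", allow_redirects=True)
    r.raise_for_status()
    (local_dir / name).write_bytes(r.content)

model = AutoModelForSequenceClassification.from_pretrained(str(local_dir))
```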
| 02-02-2021 17:50:22 | 02-02-2021 17:50:22 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,962 | closed | DeepSpeed config keys problem | https://github.com/huggingface/transformers/blob/24881008a6743e958cc619133b8ee6994ed1cb8c/src/transformers/integrations.py#L288
I guess it should be `if len([x for x in bs_keys if x in config.keys()]) <= 0: ` or `if not len([x for x in bs_keys if x in config.keys()]) <= 0: ` | 02-02-2021 17:06:39 | 02-02-2021 17:06:39 | cc @stas00 <|||||>Could you please explain what is the problem that you're encountering?
These keys **shouldn't be in the config**, so len() will be > 0 if they are and then the assert happens, so I'm not sure why you're trying to reverse the logic.
```
config = {
'train_batch_size': 1,
"train_micro_batch_size_per_gpu": 1,
}
bs_keys = ["train_batch_size", "train_micro_batch_size_per_gpu"]
if len([x for x in bs_keys if x in config.keys()]):
raise ValueError(
f"Do not include {bs_keys} entries in the ds config file, as they will be set via --per_device_train_batch_size or its default"
)
```
Please see: https://huggingface.co/transformers/master/main_classes/trainer.html#shared-configuration |
transformers | 9,961 | closed | What is the correct way to use Adafactor? | Hi, from the papers I've seen that Adafactor is typically used with no learning rate (as in Pegasus paper), however, when I try to execute run_seq2seq.py or seq2seq/finetune_trainer.py from your examples, and set --adafactor parameter, without specifying learning rate (for no learning rate), it uses the default 3e-05. Is there a way to use Adafactor without learning rate? | 02-02-2021 15:42:08 | 02-02-2021 15:42:08 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
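For reference, Adafactor is usually run without an external learning rate by letting it derive relative step sizes internally; a minimal sketch, assuming the flag names of the `transformers` Adafactor implementation:
```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(10, 2)
# relative_step + scale_parameter let Adafactor compute its own step size, so lr=None
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
```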
transformers | 9,960 | closed | How to resize RobertaLMHead with pretrained weights? | Hi, I'm trying to train my model with a new token 'name', but it keeps throwing a size mismatch error.
I don't know how to **resize RobertaLMHead** while loading pretrained weights from 'roberta-base'
Setting Tokenizer
```
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer.add_tokens('<name>')
```
Setting Model
```
model = MaskedLM.from_pretrained('roberta-base')
```
Model Structure
```
class MaskedLM(RobertaPreTrainedModel):
def __init__(self, config):
super().__init__(config=config)
self.roberta = RobertaModel(config)
self.lm_head = RobertaLMHead(config)
self.refinement_num = 5
self.mask_id = 50264
self.init_weights()
def forward(...):
self.roberta.resize_token_embeddings(50266)
## HOW TO RESIZE LM HEAD?! ##
# self.lm_head.resize_token_embeddings(50266)
outputs = self.roberta(input_ids, attention_mask)
prediction_scores = self.lm_head(outputs[0])
...
```
I tried ```_get_resized_lm_head``` from [here](https://huggingface.co/transformers/_modules/transformers/modeling_utils.html#PreTrainedModel._get_resized_lm_head)
But it doesn't work as RobertaLMHead has no ```weight``` attribute.
```
def _get_resized_lm_head(
self, old_lm_head: torch.nn.Linear, new_num_tokens: Optional[int] = None, transposed: Optional[bool] = False
) -> torch.nn.Linear:
"""
Build a resized Linear Module from a provided old Linear Module. Increasing the size will add newly initialized
vectors at the end. Reducing the size will remove vectors from the end
Args:
old_lm_head (:obj:`torch.nn.Linear`):
Old lm head liner layer to be resized.
new_num_tokens (:obj:`int`, `optional`):
New number of tokens in the linear matrix.
Increasing the size will add newly initialized vectors at the end. Reducing the size will remove
vectors from the end. If not provided or :obj:`None`, just returns a pointer to the input tokens
:obj:`torch.nn.Linear`` module of the model without doing anything.
transposed (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether ``old_lm_head`` is transposed or not. If True ``old_lm_head.size()`` is ``lm_head_dim,
vocab_size`` else ``vocab_size, lm_head_dim``.
Return:
:obj:`torch.nn.Linear`: Pointer to the resized Linear Module or the old Linear Module if
:obj:`new_num_tokens` is :obj:`None`
"""
if new_num_tokens is None:
return old_lm_head
old_num_tokens, old_lm_head_dim = (
old_lm_head.weight.size() if not transposed else old_lm_head.weight.t().size()
)
if old_num_tokens == new_num_tokens:
return old_lm_head
if not isinstance(old_lm_head, nn.Linear):
raise TypeError(
f"Old language model head is of type {type(old_lm_head)}, which is not an instance of {nn.Linear}."
f"You should either use a different resize function or make sure that `old_embeddings` are an instance of {nn.Linear}."
)
# Build new lm head
new_lm_head_shape = (old_lm_head_dim, new_num_tokens) if not transposed else (new_num_tokens, old_lm_head_dim)
has_new_lm_head_bias = old_lm_head.bias is not None
new_lm_head = nn.Linear(*new_lm_head_shape, bias=has_new_lm_head_bias).to(self.device)
# initialize new lm head (in particular added tokens)
self._init_weights(new_lm_head)
num_tokens_to_copy = min(old_num_tokens, new_num_tokens)
# Copy old lm head weights to new lm head
if not transposed:
new_lm_head.weight.data[:num_tokens_to_copy, :] = old_lm_head.weight.data[:num_tokens_to_copy, :]
else:
new_lm_head.weight.data[:, :num_tokens_to_copy] = old_lm_head.weight.data[:, :num_tokens_to_copy]
# Copy bias weights to new lm head
if has_new_lm_head_bias:
new_lm_head.bias.data[:num_tokens_to_copy] = old_lm_head.bias.data[:num_tokens_to_copy]
return new_lm_head
```
| 02-02-2021 14:59:12 | 02-02-2021 14:59:12 | You should do `model.resize_token_embeddings(50266)`.
Here is the [documentation of that method](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.resize_token_embeddings).<|||||>When I do ```model.resize_token_embeddings(50266)```, embedding size changes from 50265 to 50266.
```
MaskedLM(
(roberta): RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(50266, 768)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
```
But out_features size in lm_head remains the same (50265) and throws error.
```prediction_scores``` size is not [batch size, sequence length, 50266], it remains still [batch size, sequence length, 50265]
```
(lm_head): RobertaLMHead(
(dense): Linear(in_features=768, out_features=768, bias=True)
(layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(decoder): Linear(in_features=768, out_features=50265, bias=True)
)
```
<|||||>Are you sure? I just ran the following on `master`:
```py
>>> from transformers import RobertaForCausalLM
>>> model = RobertaForCausalLM.from_pretrained("roberta-base")
>>> model.lm_head.decoder
Linear(in_features=768, out_features=50265, bias=True)
>>> model.resize_token_embeddings(50266)
Embedding(50266, 768)
>>> model.lm_head.decoder
Linear(in_features=768, out_features=50266, bias=True)
```
Please observe how the decoder is resized.<|||||>@yeounyi
In your example `lm_head` is not resized because there are no `get_output_embeddings` and `set_output_embeddings` methods in your `MaskedLM` class. The `resize_token_embeddings` method needs these methods to get the `lm_head`.
You should add those methods and then call `resize_token_embeddings` on the instance of your `MaskedLM` class. See the implementation of `RobertaForMaskedLM`:
https://github.com/huggingface/transformers/blob/d55e10beab5744a09451b8f9400222e17794c019/src/transformers/models/roberta/modeling_roberta.py#L984-L1006<|||||>Ah, I indeed missed that this was a custom MaskedLM implementation, my bad.<|||||>Thanks all! After adding ```get_output_embeddings``` and ```set_output_embeddings``` methods, it works perfectly <|||||>M |
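To make the fix above concrete, here is a minimal sketch of the two hooks that let `resize_token_embeddings` reach the custom `lm_head` (mirroring the linked `RobertaForMaskedLM` code; treat it as an illustration rather than the exact upstream implementation):
```python
from transformers.models.roberta.modeling_roberta import (
    RobertaLMHead,
    RobertaModel,
    RobertaPreTrainedModel,
)

class MaskedLM(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config=config)
        self.roberta = RobertaModel(config)
        self.lm_head = RobertaLMHead(config)
        self.init_weights()

    # resize_token_embeddings uses these hooks to find and replace the output layer
    def get_output_embeddings(self):
        return self.lm_head.decoder

    def set_output_embeddings(self, new_embeddings):
        self.lm_head.decoder = new_embeddings
```
With these in place, `model.resize_token_embeddings(len(tokenizer))` resizes both the input embeddings and `lm_head.decoder`.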
transformers | 9,959 | closed | Problem while initializing custom model with added tokens | Hi, I'm trying to train my model with a new special token 'name', but it keeps throwing a size mismatch error.
I think the problem is that my model has pretrained models inside initialization.
Model Structure
```
class MaskedLM(RobertaPreTrainedModel):
def __init__(self, config):
super().__init__(config=config)
self.roberta = RobertaModel(config)
self.lm_head = RobertaLMHead(config)
self.refinement_num = 5
self.mask_id = 50264
self.init_weights()
def forward(...):
```
After resizing my model, embedding size changed from 50265 to 50266.
```
MaskedLM(
(roberta): RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(50266, 768)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
```
But the problem is out_features size in lm_head remains the same. (50265)
```
(lm_head): RobertaLMHead(
(dense): Linear(in_features=768, out_features=768, bias=True)
(layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(decoder): Linear(in_features=768, out_features=50265, bias=True)
)
```
Is there any way that I can both load the pretrained weights and add one new token?
-------------
Setting Tokenizer
```
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer.add_tokens('<name>')
```
Setting Model
```
model = MaskedLM.from_pretrained('roberta-base')
model.resize_token_embeddings(len(tokenizer))
```
| 02-02-2021 14:15:53 | 02-02-2021 14:15:53 | |
transformers | 9,958 | closed | tokenizer is slow when adding new tokens | Hi,
The tokenizer is slow when adding new tokens even with the Fast class:
```
from transformers import GPT2Config, TFGPT2LMHeadModel, GPT2TokenizerFast, GPT2Tokenizer
# Maybe this url for the files:
# https://huggingface.co/transformers/v3.1.0/_modules/transformers/tokenization_gpt2.html
paths = dict()
paths["tokenizer"] = "whatever/is/the/path/to/pretrained/vocab.json/merges.txt"
# They have to be sorted in reverse by length, otherwise the tokens aren't
newtokens = range(0, 20000)
newtokens = list(newtokens)
newtokens.sort(reverse=True)
newtokens = ["new_" + str(x) for x in newtokens]
# loading tokenizer from the saved model path
tokenizers = dict()
tokenizers["fast"] = GPT2TokenizerFast.from_pretrained(paths["tokenizer"])
tokenizers["fast_custom"] = GPT2TokenizerFast.from_pretrained(paths["tokenizer"])
tokenizers["slow_custom"] = GPT2Tokenizer.from_pretrained(paths["tokenizer"])
tokenizers["slow"] = GPT2Tokenizer.from_pretrained(paths["tokenizer"])
# add the special tokens to every tokenizer so all variants are compared consistently
for tok in tokenizers.values():
    tok.add_special_tokens({
        "eos_token": "</s>",
        "bos_token": "<s>",
        "unk_token": "<unk>",
        "pad_token": "<pad>",
        "mask_token": "<mask>"
    })
# Add new vocab
# https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html
# https://github.com/deepset-ai/FARM/issues/157
for k in tokenizers:
if "custom" in k:
print(k)
print("Vocab length before:", len(tokenizers[k].get_vocab()))
tokenizers[k].add_tokens(newtokens)
print("Vocab length after:", len(tokenizers[k].get_vocab()))
# creating the configurations from which the model can be made
config = GPT2Config(
    vocab_size=len(tokenizers["fast_custom"]),
    bos_token_id=tokenizers["fast_custom"].bos_token_id,
    eos_token_id=tokenizers["fast_custom"].eos_token_id
)
# creating the model
# https://huggingface.co/transformers/_modules/transformers/configuration_gpt2.html
model = TFGPT2LMHeadModel(config)
# Differences when tokenising the text...
text = "this is a sentence containing new_200"
for k,v in tokenizers.items():
print(k, v.tokenize(text))
```
and then profiling the speed in jupyter:
```
for k in tokenizers:
print(k)
%timeit tokenizers[k].tokenize(text)
```
Any ideas why this may be happening? I understand that the vocab size could increase by ~20% and that may slow things down, but in this code there's a roughly 1000-fold difference in speed. That doesn't seem right? | 02-02-2021 13:39:50 | 02-02-2021 13:39:50 | Hi @davidnarganes,
Someone from HF correct me if I am wrong, but you'll probably get a faster response posting this issue in the Tokenizer repo:
https://github.com/huggingface/tokenizers
Best of luck<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,957 | closed | [mBART] one slow integration test is failing on master | The `test_enro_generate_batch` slow test is now failing on master
command
```bash
RUN_SLOW=1 pytest tests/test_modeling_mbart.py::MBartEnroIntegrationTest::test_enro_generate_batch
```
Traceback
```
tests/test_modeling_mbart.py F [100%]
=================================== FAILURES ===================================
______________ MBartEnroIntegrationTest.test_enro_generate_batch _______________
self = <tests.test_modeling_mbart.MBartEnroIntegrationTest testMethod=test_enro_generate_batch>
@slow
def test_enro_generate_batch(self):
batch: BatchEncoding = self.tokenizer.prepare_seq2seq_batch(self.src_text, return_tensors="pt").to(
torch_device
)
translated_tokens = self.model.generate(**batch)
decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
> assert self.tgt_text == decoded
E AssertionError: assert ['Şeful ONU d...e de oameni.'] == ['Şeful ONU d...e de oameni.']
E At index 1 diff: 'Secretarul General Ban Ki-moon declară că răspunsul său la intensificarea sprijinului militar al Rusiei pentru Siria este că "nu există o soluţie militară" la conflictul de aproape cinci ani şi că noi arme nu vor face decât să înrăutăţească violenţa şi mizeria pentru milioane de oameni.' != 'Secretarul General Ban Ki-moon declară că răspunsul său la intensificarea sprijinului militar al Rusiei pentru Siria este că "nu există o soluţie militară" la conflictul de aproape cinci ani şi că noi arme nu vor face decât să înrăutăţească violenţa şi mizeria a mi...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
tests/test_modeling_mbart.py:366: AssertionError
```
cc @patrickvonplaten | 02-02-2021 10:59:23 | 02-02-2021 10:59:23 | Yeah this test is failing for a while now (even before the Bart split PR) -> think we should just adapt the text<|||||>This issue has been stale for 1 month.<|||||>Is this fixed? I think we just need to update the test here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,956 | closed | [Good first issue] MPNet TensorFlow Integration tests | The TensorFlow implementation of the MPNet model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_mpnet.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_mpnet.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:18:25 | 02-02-2021 10:18:25 | |
transformers | 9,955 | closed | [Good first issue] MobileBERT TensorFlow Integration tests | The TensorFlow implementation of the MobileBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_mobilebert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_mobilebert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:17:15 | 02-02-2021 10:17:15 | |
transformers | 9,954 | closed | [Good first issue] LXMERT TensorFlow Integration tests | The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:16:07 | 02-02-2021 10:16:07 | @LysandreJik is anyone working on it? I would like to work.<|||||>Hi! @sadakmed already has a close to finished implementation that we'll merge in the coming days.
Thank you for offering to contribute!<|||||>Hi, shouldn't this issue be closed now ? Since a valid integration test was merged ?<|||||>Yes, it should :) Thanks! |
transformers | 9,953 | closed | [Good first issue] DistilBERT TensorFlow Integration tests | The TensorFlow implementation of the DistilBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_distilbert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_distilbert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:14:54 | 02-02-2021 10:14:54 | |
transformers | 9,952 | closed | [Good first issue] MPNet PyTorch Integration tests | The PyTorch implementation of the MPNet model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_mpnet.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_mpnet.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:11:16 | 02-02-2021 10:11:16 | @LysandreJik "test_modeling_mpnet.py" already has an integration test.<|||||>You're correct! That's on me, thanks for letting me know.
transformers | 9,951 | closed | [Good first issue] LXMERT PyTorch Integration tests | The PyTorch implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_lxmert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:10:26 | 02-02-2021 10:10:26 | @LysandreJik I'll tackle this one if that's cool?<|||||>Hi @jmwoloso! @sadakmed has already contributed a proposal in #9989 that unfortunately slipped through the net - could you give it a look and let me know if that's what you had in mind?<|||||>Hi @LysandreJik! I originally saw #9954 and was going to make mine based upon that, but I see there have been some updates via #9989 so I'll check that out and adjust if needed, but yes, was going to essentially modify the TF integration test to be PT compatible. @sadakmed does mention adding a context manager to deal with the random seed in #10143 so not sure if that is of interest, but the idea is that I'll use the TF implementation and make it PT compatible.<|||||>Hi @jmwoloso, do you mean the TF implementation of the LXMERT integration test? I committed one for both TF [#10052](https://github.com/huggingface/transformers/pull/10052) and PT [#9989](https://github.com/huggingface/transformers/pull/9989) (open since February, and I still have these open tabs in my mind of something unfinished, which is really a pain). Both have the same issue: the input is too large to hardcode, and the other details you already know. As it's implemented now, I don't think that fixing seeds locally will impact other classes, not to mention other tests; a context manager is a safe way to get around it, giving constant input without affecting anything else, and I saw somewhere in a torch library that they use this technique (so it's not a crazy idea).<|||||>Seems like this issue might be ready to be closed based on @sadakmed's previously [merged PR](https://github.com/huggingface/transformers/pull/9989) in July of '21.
cc @LysandreJik <|||||>Indeed! Thanks! |
transformers | 9,950 | closed | [Good first issue] FlauBERT PyTorch Integration tests | The PyTorch implementation of the FlauBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_flaubert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_flaubert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time.
- The TensorFlow implementation already has an integration test, which is visible here:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_flaubert.py#L342-L370
This test can be translated to PyTorch. | 02-02-2021 10:08:50 | 02-02-2021 10:08:50 | |
transformers | 9,949 | closed | [Good first issue] ELECTRA PyTorch Integration tests | The PyTorch implementation of the ELECTRA model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_electra.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_electra.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time.
- The TensorFlow implementation already has an integration test, which is visible here:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_electra.py#L253-L267
This test can be translated to PyTorch. | 02-02-2021 10:06:09 | 02-02-2021 10:06:09 | |
transformers | 9,948 | closed | [Good first issue] DistilBERT PyTorch Integration tests | The PyTorch implementation of the DistilBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_distilbert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_distilbert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 10:04:55 | 02-02-2021 10:04:55 | |
transformers | 9,947 | closed | [Good first issue] BERT Generation PyTorch Integration tests | The PyTorch implementation of the BERT for generation model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_bert_generation.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert_generation.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 09:59:08 | 02-02-2021 09:59:08 | @LysandreJik I think this should be closed by now!<|||||>You're correct! Thanks again @sadakmed! |
transformers | 9,946 | closed | [Good first issue] ALBERT TensorFlow Integration tests | The TensorFlow implementation of the ALBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_albert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_albert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 09:52:38 | 02-02-2021 09:52:38 | |
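A rough sketch of the kind of test being requested (the checkpoint is real, the token ids are arbitrary illustrations, the hard-coded slice values would have to come from a reference forward pass, and the helper names are assumed to live in `transformers.testing_utils`):
```python
import unittest

import numpy as np
import tensorflow as tf
from transformers import TFAlbertModel
from transformers.testing_utils import require_tf, slow

@require_tf
class TFAlbertModelIntegrationTest(unittest.TestCase):
    @slow
    def test_inference_no_head(self):
        model = TFAlbertModel.from_pretrained("albert-base-v2")
        input_ids = tf.constant([[0, 345, 232, 328, 740, 140, 1695, 69, 6078, 1588, 2]])
        attention_mask = tf.constant([[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]])
        output = model(input_ids, attention_mask=attention_mask)[0]

        self.assertEqual(output.shape, (1, 11, 768))
        # then pin a small slice against values recorded from a reference run, e.g.:
        # expected_slice = np.array([[[...], [...], [...]]])
        # self.assertTrue(np.allclose(output[:, 1:4, 1:4].numpy(), expected_slice, atol=1e-4))
```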
transformers | 9,945 | closed | [Good first issue] ALBERT PyTorch Integration tests | The PyTorch implementation of the ALBERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_albert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_albert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_bert.py#L552-L565](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bert.py#L552-L565) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_bert.py#L552-L565
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_torch` decorator so as to only be run in environments using PyTorch.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | 02-02-2021 09:50:22 | 02-02-2021 09:50:22 | Hi Can I have a go at this issue?<|||||>Please do! |
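A rough PyTorch sketch of the kind of test being requested (arbitrary token ids, placeholder slice values, helper names assumed from `transformers.testing_utils`):
```python
import unittest

import torch
from transformers import AlbertModel
from transformers.testing_utils import require_torch, slow, torch_device

@require_torch
class AlbertModelIntegrationTest(unittest.TestCase):
    @slow
    def test_inference_no_head(self):
        model = AlbertModel.from_pretrained("albert-base-v2").to(torch_device)
        input_ids = torch.tensor([[0, 345, 232, 328, 740, 140, 1695, 69, 6078, 1588, 2]], device=torch_device)
        attention_mask = torch.tensor([[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]], device=torch_device)
        with torch.no_grad():
            output = model(input_ids, attention_mask=attention_mask)[0]

        self.assertEqual(output.shape, torch.Size((1, 11, 768)))
        # then pin a small slice against values recorded from a reference run, e.g.:
        # expected_slice = torch.tensor([[[...], [...], [...]]], device=torch_device)
        # self.assertTrue(torch.allclose(output[:, 1:4, 1:4], expected_slice, atol=1e-4))
```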
transformers | 9,944 | closed | [Bart models] fix typo in naming | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Creds go to @ratthachat for spotting it!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-02-2021 08:53:31 | 02-02-2021 08:53:31 | |
transformers | 9,943 | closed | ALBERT Tokenizer integration test | Implements an integration test for the ALBERT tokenizer. | 02-02-2021 08:46:15 | 02-02-2021 08:46:15 | Good point! |
transformers | 9,942 | closed | Fix Longformer and LED | # What does this PR do?
This PR fixes TF Longformer and LED when `inputs_embeds`/`decoder_inputs_embeds` are used as the main input instead of `input_ids`/`decoder_input_ids`.
Here is a quick test that shows the bug for Longformer:
```python
from transformers.models.longformer.modeling_tf_longformer import TFLongformerMainLayer
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from transformers import LongformerConfig
import tensorflow as tf
import numpy as np
class CustomLongFormer(tf.keras.layers.Layer):
def __init__(self, name='longformer', **kwargs):
super().__init__(name=name, **kwargs)
config = LongformerConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)
self.longformer = TFLongformerMainLayer(config)
def call(self, inputs):
x = self.longformer(inputs)[0]
return x
longformer = CustomLongFormer()
inputs_embeds = Input(shape=(None, None), dtype='float32', name="inputs_embeds")
output = longformer({"inputs_embeds": inputs_embeds})
output = Dense(9, activation='softmax')(output)
model = Model({"inputs_embeds": inputs_embeds}, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.array([np.random.uniform(0,1, (3, 768))] * 100)
y = np.array([[1]*3] * 100)
model.fit(x=x, y=y, epochs=10, batch_size=4, validation_split=0.1)
```
And the one for LED:
```python
from transformers.models.led.modeling_tf_led import TFLEDMainLayer
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from transformers import LEDConfig
import tensorflow as tf
import numpy as np
class CustomLED(tf.keras.layers.Layer):
def __init__(self, name='longformer', **kwargs):
super().__init__(name=name, **kwargs)
config = LEDConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)
self.led = TFLEDMainLayer(config)
def call(self, inputs):
x = self.led(inputs)[0]
return x
led = CustomLED()
inputs_embeds = Input(shape=(None, None), dtype='float32', name="inputs_embeds")
decoder_inputs_embeds = Input(shape=(None, None), dtype='float32', name="decoder_inputs_embeds")
output = led({"inputs_embeds": inputs_embeds, "decoder_inputs_embeds": decoder_inputs_embeds})
output = Dense(9, activation='softmax')(output)
model = Model({"inputs_embeds": inputs_embeds, "decoder_inputs_embeds": decoder_inputs_embeds}, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.array([np.random.uniform(0,1, (3, 1024))] * 100)
y = np.array([[1]*3] * 100)
model.fit(x={"inputs_embeds": x, "decoder_inputs_embeds": x}, y=y, epochs=10, batch_size=4, validation_split=0.1)
```
The reason is that the compiled graph is different from the one built when the usual `input_ids`/`decoder_input_ids` are used. Since we are not testing this case (in graph execution), other models might be affected by a similar bug. Hence, I have put on my TODO list to create a test that checks whether all the models can be used with different combinations of inputs in graph mode.
# Fix issue
#9864 | 02-02-2021 08:01:45 | 02-02-2021 08:01:45 | I'm working on this 👍 Should this belong to this PR or to another one?<|||||>I think it can be done in this PR.<|||||>I have added a quick test for graph execution with `inputs_embeds`. Later I will add the same for XLA as well, but as all the models are not compliant I will handle this in the same way as the "usual" XLA test with `input_ids`.<|||||>And surprisingly all the models are now passing this test 😄
transformers | 9,941 | closed | Converting pretrained tf2 bert model to pytorch model for using FillMaskPipeline | null | 02-02-2021 07:05:03 | 02-02-2021 07:05:03 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,940 | closed | [wip] [pipeline parallel] t5 - experiment #2 | The first attempt at t5/pp using pytorch-nightly Pipe https://github.com/huggingface/transformers/pull/9765 was successful to a degree, but at the moment can't be combined with any other Parallel solutions.
All the examples of Pipeline conversion use trivial models that lend themselves easily to being converted to `Sequential`. `transformers` models, or at least `t5`, don't lend themselves easily to this transformation due to complex intertwined logic and a huge number of variables passed around.
The main challenge: In order to build a Pipeline one needs to convert the Module stack into a `Sequential` list.
So in the case of t5, we need to convert this logic:
```
T5ForConditionalGeneration->
logic
T5Stack->
logic
loop(T5Block, T5Block, T5Block, ...) ->
logic
logic
T5Stack->
logic
loop(T5Block, T5Block, T5Block, ...) ->
logic
logic
```
into
```
Pipe(
Sequential(
T5ForConditionalGeneration,
T5ForConditionalGeneration_p1,
T5Stack,
T5Stack_p1,
T5Block,
T5Block,
T5Block,
...
T5Stack_p2,
T5ForConditionalGeneration_p2,
T5Stack,
T5Stack_p1,
T5Block,
T5Block,
T5Block,
...
T5Stack_p2,
T5ForConditionalGeneration_p3,
)
)
```
I think we don't need to Sequentialize any further beyond T5Block, but we will have to see down the road.
Problems:
1. Can't change the structure of the model because of the pre-trained weights.
2. The inputs/outputs are very complicated because the entry into the Pipeline (first and last stages) can only be a tuple of pure Tensors.
3. The inputs/outputs, besides being required to be Tensors, have to expose the first dimension as the batch dimension, since Pipe slices all inputs and restores all outputs on that dimension on the way to/from `forward` (but only on the very first and last stages of the sequence).
I did successfully implement a t5-pipeline version https://github.com/huggingface/transformers/pull/9765 that uses 2 shorter pipes, as it was natural to convert a loop over `T5Block`s to `Sequential` and it now looks like this
```
T5ForConditionalGeneration->
logic
T5Stack-> Pipe(Sequential(T5Block, T5Block, T5Block))
logic
T5Stack-> Pipe(Sequential(T5Block, T5Block, T5Block))
logic
```
using pytorch pipe in a very painful way overcoming problem n2. But it's doubtful this approach will work with any other 1D Parallel side (e.g. combining with Sharded DDP) - definitely doesn't work with DeepSpeed Zero-DP.
But that implementation won't work with DeepSpeed pipeline - it has to be Sequential from the top-level. Not sure about fairscale yet.
So I'm trying again, this time starting by just trying to Sequentialize the layers while overcoming problem n1.
If you do look at the code, please ignore everything in the diff but `modeling_t5.py` (I removed a lot of the model parallel code as it is getting in the way, and it won't be needed if we figure out the pipe - since `pipe(chunks=1) == naive vertical MP`, we get all the complex things that MP currently does for free). But we have to do even more complicated things instead. Naive vertical MP appears to be trivial compared to the changes required to make pipe work.
You can see the process of conversion in this PR, I Sequentialized:
1. the `T5Block`-loop
2. the 2nd half of `T5Stack`,
now I need to continue breaking up the structure upstream. At this stage there is no Pipe in the code; the first main difficulty is to Sequentialize the layers.
If you want to see just how I converted the `T5Block`-loop into Sequential, it is this commit - might be easier to see: https://github.com/huggingface/transformers/pull/9940/commits/4c0ea522157f693bccce80c4cbecc24019186676 The input/output have to be the same because Sequential sends the output of one stage to the input of another.
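To make the constraint concrete, here is a generic sketch (my illustration, not the actual PR code) of the adapter pattern this forces on each stage: every stage consumes and returns the same flat tuple of tensors so `nn.Sequential` can chain them, and the resulting `Sequential` is what would eventually be handed to `Pipe`:
```python
import torch
from torch import nn

class DummyBlock(nn.Module):
    """Stand-in for a T5Block: takes hidden states + mask, returns new hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, hidden_states, attention_mask):
        return self.proj(hidden_states) * attention_mask

class TupleAdapter(nn.Module):
    """Wraps a block so that it consumes and produces one flat tuple of tensors."""
    def __init__(self, block):
        super().__init__()
        self.block = block

    def forward(self, inputs):
        hidden_states, attention_mask = inputs
        hidden_states = self.block(hidden_states, attention_mask)
        # pass the mask along unchanged so the next stage can unpack the same tuple
        return (hidden_states, attention_mask)

stages = nn.Sequential(*[TupleAdapter(DummyBlock(16)) for _ in range(4)])

hidden = torch.randn(2, 8, 16)  # (batch, seq, dim) — batch dimension first
mask = torch.ones(2, 8, 1)
out, _ = stages((hidden, mask))
```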
If you have some brilliant ideas that I'm perhaps missing about how to easily Sequentialize t5 layers, I'm all ears.
@patrickvonplaten, @sgugger, @LysandreJik | 02-02-2021 06:28:27 | 02-02-2021 06:28:27 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>go away bad bot<|||||>too long. closing. |
transformers | 9,939 | closed | Can't import pipeline | - `transformers` 4.2
- Platform: MacOS
- Python version: 3.7.9
- PyTorch version (GPU?): CPU
- Tensorflow version (GPU?): CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
- Pip Version: Latest
I can't import the pipeline function:
```
from transformers import pipeline
```
Gives the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'pipeline' from 'transformers' (unknown location)
```
| 02-02-2021 03:00:35 | 02-02-2021 03:00:35 | I can't reproduce on version 4.2.x or `master`.
This may have something to do with your environment. Can you let me know if the following threads help you out:
- https://stackoverflow.com/questions/58585690/python-cannot-import-unknown-location
- https://forum.learncodethehardway.com/t/importerror-unknown-location/2034/8
- https://python-forum.io/Thread-import-error-unknown-location
They're related to `parser` but it may be a similar issue to the one you're encountering here. <|||||>#7333 Check your **tensorflow** version<|||||>@dg-data I'm not using Tensorflow, does that matter? my tf is the latest version.<|||||>upgraded tf fixed it, thanks.<|||||>>
>
> upgraded tf fixed it, thanks.
@hassanzadeh
Now, which version of tf do you have? I just know it needs version 2.0.<|||||>According to [documentation](https://huggingface.co/transformers/installation.html), this should also run without TF at all, but with PyTorch alone. The answer seems to suggest that TF is required under all circumstances. I just tried a PyTorch-only installation and ran into the same error. Now shifting to TF, but I guess the documentation should be updated or the PyTorch-only install should be checked.<|||||>@chiarcos would you happen to have a reproducer to run into the issue with PyTorch-only installs? It shouldn't (and isn't) required to have TF installed for pipelines, so this is a bug that I, unfortunately, can't manage to reproduce.<|||||>Got the same bug on a PyTorch-only environment as well.<|||||>>
>
> @chiarcos would you happen to have a reproducer to run into the issue with PyTorch-only installs? It shouldn't (and isn't) required to have TF installed for pipelines, so this is a bug that I, unfortunately, can't manage to reproduce.
Apologies, I had already shifted to a TF installation. This worked like a charm and the system is in production. I'll see to reproducing it when I have a minute.<|||||>Using a conda pytorch environment and got the same bug<|||||>Same here<|||||>Same here, also installing TF in addition to PyTorch didn't help...
transformers | 9,938 | closed | trainer_seq2seq.py Question | Hi, does trainer_seq2seq.py in transformers/src support multi GPU training? Thank you | 02-02-2021 02:18:08 | 02-02-2021 02:18:08 | Hi @caincdiy
All the example scripts using `Trainer` or a subclass of it use `python -m torch.distributed.launch` to launch multi-GPU training. See https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision
Also the [forum](https://discuss.huggingface.co/) is the best place to ask such questions :)<|||||>Oh sorry. thank you very much for your help |
transformers | 9,937 | closed | ConvBERT: minor fixes for conversion script | Hi,
the conversion script for ConvBERT throws the following error message when using it:
```bash
Traceback (most recent call last):
File "convert_convbert_original_tf1_checkpoint_to_pytorch.py", line 19, in <module>
from ...utils import logging
ImportError: attempted relative import with no known parent package
```
I fixed that error, as well as using the correct name for the configuration file argument.
Additionally, I just found that the configuration files from the [YituTech](https://huggingface.co/YituTech) organization for ConvBERT aren't correct, because they use:
```json
"model_type": "conv_bert",
```
instead of:
```json
"model_type": "convbert",
```
(This currently results in a `KeyError: 'conv_bert'` error). | 02-01-2021 22:45:58 | 02-01-2021 22:45:58 | Pinging @abhishekkrthakur and @sgugger :hugs: <|||||>Weird that relative imports failed. Anyways, thanks for the PR. The model_type in hub has been fixed. |
transformers | 9,936 | closed | ConvBERT: minor fixes for conversion script | Hi,
the conversion script for ConvBERT throws the following error message when using it:
```bash
Traceback (most recent call last):
File "convert_convbert_original_tf1_checkpoint_to_pytorch.py", line 19, in <module>
from ...utils import logging
ImportError: attempted relative import with no known parent package
```
I fixed that error, as well as using the correct name for the configuration file argument.
Additionally, I just found that the configuration files from the [YituTech](https://huggingface.co/YituTech) organization for ConvBERT aren't correct, because they use:
```json
"model_type": "conv_bert",
```
instead of:
```json
"model_type": "convbert",
```
(This currently results in a `KeyError: 'conv_bert'` error). | 02-01-2021 22:40:37 | 02-01-2021 22:40:37 | I hate this forking/syncing stuff with GitHub 🙈
Preparing a clean PR now... |
transformers | 9,935 | closed | Use compute_loss in prediction_step | # What does this PR do?
As requested in #9915, this PR uses `compute_loss` in the `prediction_step` method of `Trainer`, so it properly computes losses when the user have customized the way to do that. It does require a new argument to `compute_loss` to return the outputs on top of the loss for the prediction loop, so users that want to use this feature will have to tweak their subclass a little bit, but there is no breaking change. | 02-01-2021 22:20:43 | 02-01-2021 22:20:43 | |
transformers | 9,934 | closed | Bump numpy | # What does this PR do?
As pointed out on the [forums](https://discuss.huggingface.co/t/typeerror-full-like-got-an-unexpected-keyword-argument-shape/2981), the method `np.full_like` used in the evaluation of the `Trainer` with the argument `shape=` does not work for all versions of numpy. According to the [numpy documentation]() it was introduced in version 1.17 only, so this PR bumps the setup to that version.
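For reference, a minimal snippet showing the call pattern that breaks on older numpy; the `shape=` keyword is the part that needs >= 1.17:
```python
import numpy as np

reference = np.zeros(4, dtype=np.int64)
# on numpy < 1.17 this raises: TypeError: full_like() got an unexpected keyword argument 'shape'
padded = np.full_like(reference, -100, shape=(2, 8))
```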
If for some reason we don't want to have a minimum version of numpy, I can try to find another way to do the same thing in `Trainer`. | 02-01-2021 21:46:53 | 02-01-2021 21:46:53 | |
transformers | 9,933 | closed | Possible bug in `prepare_for_model` when using fast tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- `tokenizers` version: 0.9.3
- Platform: Linux
- Python version: 3.7.2
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Also confirmed with `transformers==4.2.2` & `tokenizers==0.9.4`
### Who can help
tokenizers: @n1t0, @LysandreJik
## Information
I am building my own data preprocessing script which requires me to first know the number of tokens in each sentence, then match sentence pairs and prepare them as input for a model, in this case BERT. I would like to use the fast tokenizer to speed things up on large datasets; however, I run into the assertion error below, which should not be raised since I do provide `return_special_tokens_mask=True` in the function call.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
The following code snippet reproduces the problem for me:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
s1 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Halt! Who goes there?"))
s2 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("It is I, Arthur, son of Uther Pendragon, from the castle of Camelot. King of the Britons, defeator of the Saxons, sovereign of all England!"))
tokenizer.prepare_for_model(s1, s2, return_special_tokens_mask=True)
```
I get the following assertion error when running the code:
```
AssertionError Traceback (most recent call last)
<ipython-input-5-889164fb3ae8> in <module>
2 s1 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Halt! Who goes there?"))
3 s2 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("It is I, Arthur, son of Uther Pendragon, from the castle of Camelot. King of the Britons, defeator of the Saxons, sovereign of all England!"))
----> 4 tokenizer.prepare_for_model(s1, s2, return_special_tokens_mask=True)
~/p/.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in prepare_for_model(self, ids, pair_ids, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, prepend_batch_axis, **kwargs)
2724 if return_special_tokens_mask:
2725 if add_special_tokens:
-> 2726 encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(ids, pair_ids)
2727 else:
2728 encoded_inputs["special_tokens_mask"] = [0] * len(sequence)
~/p/.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_special_tokens_mask(self, token_ids_0, token_ids_1, already_has_special_tokens)
3031 """
3032 assert already_has_special_tokens and token_ids_1 is None, (
-> 3033 "You cannot use ``already_has_special_tokens=False`` with this tokenizer. "
3034 "Please use a slow (full python) tokenizer to activate this argument."
3035 "Or set `return_special_token_mask=True` when calling the encoding method "
AssertionError: You cannot use ``already_has_special_tokens=False`` with this tokenizer. Please use a slow (full python) tokenizer to activate this argument.Or set `return_special_token_mask=True` when calling the encoding method to get the special tokens mask in any tokenizer.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
For the fast tokenizer to provide the same output as the slow (python) tokenizer:
```python
{'input_ids': [101, 9190, 999, 2040, 3632, 2045, 1029, 102, 2009, 2003, 1045, 1010, 4300, 1010, 2365, 1997, 21183, 5886, 7279, 7265, 7446, 1010, 2013, 1996, 3317, 1997, 19130, 4140, 1012, 2332, 1997, 1996, 28101, 5644, 1010, 4154, 2953, 1997, 1996, 28267, 1010, 11074, 1997, 2035, 2563, 999, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'special_tokens_mask': [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
<!-- A clear and concise description of what you would expect to happen. -->
## Possible Fix
Make the following change in:
https://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/tokenization_utils_base.py#L2855
Since we know beforehand whether the special tokens were added and the text pair was already concatenated with or without special tokens, I think the following change is valid; however, I didn't test it beyond my own use case.
```python
if return_special_tokens_mask:
if add_special_tokens:
encoded_inputs["special_tokens_mask"] = self.get_special_tokens_mask(sequence, already_has_special_tokens=True)
else:
encoded_inputs["special_tokens_mask"] = [0] * len(sequence)
``` | 02-01-2021 21:31:11 | 02-01-2021 21:31:11 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I ran into the same problem (I am also working on building inputs for pretrained models). @n1t0, @LysandreJik Could you give a comment on this?
I think the problem arises from the fact that `BertTokenizerFast.get_special_tokens_mask` calls `PreTrainedTokenizerBase.get_special_tokens_mask`, whereas `BertTokenizer` overrides the `get_special_tokens_mask` method. It seems the problem would be solved if the fast tokenizer also overrode the method. Am I missing something?<|||||>Might also be a solution, haven't looked into it. I think that the fix I suggested in the original post might resolve problems like this in other tokenizers as well, and not only in BertTokenizer.
I wonder if fixing this would be a welcome contribution to the library, or whether the wontfix tag is there for a reason?
@LysandreJik |
transformers | 9,932 | closed | Fix 9918 | # What does this PR do?
This PR addresses the problem shown in #9918 by:
- adding the documentation of the `encode` method to the `PreTrainedTokenizer` and `PreTrainedTokenizerFast` (it is in all their subclasses already)
- adding the "What are input IDs" link where missing in some models docstrings.
In passing, I uncovered a failure of the doc styling script on DPR, so this PR also fixes that.
Fixes #9918 | 02-01-2021 21:23:03 | 02-01-2021 21:23:03 | |
transformers | 9,931 | open | [2D Parallelism] Tracking feasibility | ### Background
ZeRO-DP (ZeRO Data Parallel) and PP (Pipeline Parallelism) each provide great memory savings over multiple GPUs. Each 1D approach allows for a much more efficient utilization of the GPU memory, but it's still not enough for very big models - sometimes not even feasible with any of the existing hardware. E.g. a model that's 45GB with just the model params (t5-11b) can't fit even on a 40GB GPU.
The next stage in Model Parallelism that can enable loading bigger models onto smaller hardware is 2D Parallelism. That's combining Pipeline Parallelism (PP) with ZeRO-DP.
3D Parallelism is possible too, and it requires adding a horizontal MP (ala [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)), but we don't quite have any way to implement that yet. Need to study Megatron-LM first. So we are starting with the relatively low-hanging fruit of 2D.
------------------
### Tracking
We have 3 implementations that provide the required components to build 2D Parallelism:
1. DeepSpeed (**DS**)
2. FairScale (**FS**)
3. PyTorch (native) (**PT**)
and the purpose of this issue is to track the feasibility/status/inter-operability in each one of them. And also which parts have been back-ported to PyTorch core.
Plus it tracks the status of where transformers models are at with regards to the above 3 implementations.
The 2 main questions are:
1. native 2D: how do we integrate a native PP with native ZeRO-DP (sharded) (e.g. can fairscale PP work with fairscale ZeRO-DP)
2. inter-operability 2D: is there a chance one implementation of PP/ZeRO-DP could work with one or both others ZeRO-DP/PP (e.g. can fairscale PP work with DeepSpeed ZeRO-DP).
------------------
### Notes
* 3D Parallelism is possible too and it requires adding a horizontal MP (ala Megatron-LM), but we don't quite have any way to implement that yet. Need to study Megatron-LM first. So starting with low hanging fruit of 2D.
* MPU = Model Parallel Unit - a little helper module that helps each 1D to know which gpu groups it can use for PP, which for MP, which for DP. So that one 1D doesn't interfere with another 1D. e.g. in the case of 4 gpus and PP+DP, one may want:
```
pp
dp0 [0, 1]
dp1 [2, 3]
```
So here there are 2 pipelines: 0-1, and 2-3, and DP sees gpus 0 and 2 as the entry points.
--------------------------
### TLDR
ZeRO-DP / PP inter-operability status
| | DS | FS | PT |
|----|----|----|----|
| DS | :heavy_check_mark: | :question: | :x: |
| FS | :question: | :question: | :question: |
| PT | :x:| :question: | :question: |
--------------------------
### 1. DeepSpeed
1D status:
* [x] [PP](https://www.deepspeed.ai/tutorials/pipeline/)
* [x] [ZeRO-DP](https://www.deepspeed.ai/tutorials/zero/)
2D native status:
* [ ] :question: native PP + ZeRO-DP - untested yet, as it requires porting transformers to native PP first
2D inter-operability status:
- [ ] :x: pytorch PP + DeepSpeed ZeRO-DP. I tried using pytorch PP with DeepSpeed ZeRO-DP and couldn't figure out how to make it work: https://github.com/microsoft/DeepSpeed/issues/710
- [ ] :question: fairscale PP + DeepSpeed ZeRO-DP (unknown)
Important components:
* [original megatron-lm MPU](https://github.com/microsoft/DeepSpeedExamples/blob/master/Megatron-LM/mpu/initialize.py)
* [WIP DeepSpeed MPU](https://github.com/jeffra/DSE/blob/megatron-deepspeed-pipeline/megatron/mpu/initialize.py)
--------------------------
### 2. FairScale
Just started gather information on this one - will update once I have it.
1D status:
* [x] [PP](https://fairscale.readthedocs.io/en/latest/tutorials/pipe.html)
* [x] [ZeRO-DP](https://fairscale.readthedocs.io/en/latest/tutorials/oss.html)
2D native status:
* [ ] :question: native PP + ZeRO-DP - gathering info https://github.com/facebookresearch/fairscale/issues/351
2D inter-operability status:
- [ ] :question: pytorch PP + fairscale ZeRO-DP gathering info
- [ ] :question: DeepSpeed PP + fairscale ZeRO-DP gathering info
Important components:
* [MPU](https://github.com/facebookresearch/fairscale/blob/master/fairscale/nn/model_parallel/initialize.py#L41)
--------------------------
### 3. PyTorch
From what I understand, pytorch has been integrating primarily the fairscale version into its core.
1D status:
* [x] [PP](https://pytorch.org/docs/master/pipeline.html) - experimental support. have PoC t5 working: https://github.com/huggingface/transformers/pull/9765 [example](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)
* [ ] ZeRO-DP - plans to implement that (primarily integrating fairscale implementation)
2D native status:
- [ ] :grey_exclamation: native PP + ZeRO-DP (Pytorch ZeRO-DP doesn't exists yet)
2D inter-operability status:
- [ ] :grey_exclamation: DeepSpeed PP + Pytorch ZeRO-DP (Pytorch ZeRO-DP doesn't exists yet)
- [ ] :grey_exclamation: fairscale PP + Pytorch ZeRO-DP (Pytorch ZeRO-DP doesn't exists yet)
Important components:
* MPU: ?
Ported components:
* ZeRO-DP stage 1: ZeroRedundancyOptimizer: an implementation of a standalone sharded optimizer wrapper https://github.com/pytorch/pytorch/pull/46750
Issues to track:
* The main discussion around integrating Deepspeed ZeRO into pytorch core: https://github.com/pytorch/pytorch/issues/42849
--------------------
### Transformers
To make 2D Parallelism work we of course need to support all these stages in `transformers`, so here is a status on what we have working and what is a work in progress. Some components (like bart-mp) work but are unmerged since we are still unsure how to move forward project-wide.
* ZeRO-DP
- [x] works across all models with fairscale and DeepSpeed integrated.
* Naive vertical MP (aka PP w/ a single stage)
- [x] t5
- [x] gpt2
- [ ] bart - unmerged https://github.com/huggingface/transformers/pull/9384
* Pytorch PP
- [ ] t5 - unmerged https://github.com/huggingface/transformers/pull/9765
* Horizontal MP - unresearched!
| 02-01-2021 19:42:37 | 02-01-2021 19:42:37 | Zero-3 has recently been announced
https://news.ycombinator.com/item?id=26447018
> ZeRO-3 Offload goes beyond the state-of-the-art hybrid 3D-parallelism (data, model and pipeline parallelism combined). While 3D Parallelism is limited by the aggregate GPU memory, ZeRO-3 Offload can exploit both GPU and CPU memory, the latter of which is much larger and cheaper compared to GPU memory. This allows ZeRO-3 Offload to train larger model sizes with the given GPU and CPU resources than any other currently available technology.<|||||>Thank you for the heads up, @LifeIsStrange
This particular issue collects notes on something quite orthogonal to ZeRO-3, see https://github.com/huggingface/transformers/issues/9766 for a more suitable discussion.
And yes, we are working on integrating ZeRO3 from fairscale and Deepspeed into transformers. There are still some rough edges but hopefully it'll be ready really soon now.
|
transformers | 9,930 | closed | Hyperparameter search w/ RayTune BrokenPipeError: [Errno 32] Broken pipe | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
### Who can help
- ray/raytune: @richardliaw, @amogkam
- trainer: @sgugger
### Information
Model I am using (Bert, XLNet ...): sshleifer/distilbart-cnn-12-6
Dataset: dummy XSUM (50 samples in train, 5 samples in val)
## To reproduce
I have tried `trainer.train` with the exact same parameters and it works just fine.
I am trying to do a hyperparameter search with the Seq2SeqTrainer and RayTune. For now I am just trying a dummy search with 2 different learning rates and 2 different gradient accumulation steps. Here is my code:
```
def hp_objective(metrics):
loss = metrics.pop('eval_loss', None)
_ = metrics.pop('epoch', None)
_ = metrics.pop('eval_gen_len', None)
return np.sum(list(metrics.values()))
def hp_space(trial):
from ray import tune
return {
'learning_rate': tune.choice([1e-5, 1e-4]),
'gradient_accumulation_steps': tune.choice([4, 8])
}
def model_init():
model = AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir,
)
# use task specific params
use_task_specific_params(model, data_args.task)
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
# set decoder_start_token_id for MBart
if model.config.decoder_start_token_id is None and isinstance(tokenizer, MBartTokenizer):
assert (
data_args.tgt_lang is not None and data_args.src_lang is not None
), "mBart requires --tgt_lang and --src_lang"
model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
if model_args.freeze_embeds:
freeze_embeds(model)
if model_args.freeze_encoder:
freeze_params(model.get_encoder())
assert_all_frozen(model.get_encoder())
return model
trainer = Seq2SeqTrainer(
model_init=model_init,
config=config,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=Seq2SeqDataCollator(tokenizer, data_args, training_args.tpu_num_cores),
compute_metrics=compute_metrics_fn,
data_args=data_args)
logger.info("*** Hyperparameters Search ***")
start_time = time.time()
trainer.hyperparameter_search(
direction = "maximize",
compute_objective = hp_objective,
hp_space = hp_space,
backend = "ray",
resources_per_trial = {'gpu': 1})
```
And I get the following error:
```
02/01/2021 16:26:31 - INFO - __main__ - *** Hyperparameters Search ***
02/01/2021 16:26:31 - INFO - ray.tune.ray_trial_executor - Initializing Ray automatically.For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run`.
2021-02-01 16:26:33,796 INFO services.py:1173 -- View the Ray dashboard at http://127.0.0.1:8265
tcmalloc: large alloc 1236656128 bytes == 0x7f275ac1a000 @ 0x7f2ab6d16615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7f2a1e822c7c 0x7f2a1e829bfa 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829afc 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e82ad13 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e
tcmalloc: large alloc 1545822208 bytes == 0x7f26fe9e4000 @ 0x7f2ab6d16615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7f2a1e822c7c 0x7f2a1e829bfa 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829afc 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e82ad13 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad13
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/usr/local/lib/python3.6/dist-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/drive/My Drive/MAGMA: Summarization/transformers/examples/seq2seq/finetune_trainer.py", line 436, in <module>
main()
File "/content/drive/My Drive/MAGMA: Summarization/transformers/examples/seq2seq/finetune_trainer.py", line 351, in main
resources_per_trial = {'gpu': 1})
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1077, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 252, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/tune.py", line 325, in run
restore=restore)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 149, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 287, in register_if_needed
register_trainable(name, run_object)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/usr/local/lib/python3.6/dist-packages/ray/experimental/internal_kv.py", line 27, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 3050, in hset
return self.execute_command('HSET', name, *items)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 900, in execute_command
conn.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 32 while writing to socket. Broken pipe.
02/01/2021 16:26:48 - INFO - wandb.sdk.internal.internal - Internal process exited
```
## Expected behavior
The Trainer should run a hyperparameter search over the 4 different combinations of `learning_rate` and `gradient_accumulation_steps`.
| 02-01-2021 16:43:40 | 02-01-2021 16:43:40 | Hey @marcoabrate it looks like you're seeing the same error as here https://github.com/huggingface/transformers/issues/9146. This should be fixed on transformers master and the latest Ray nightly wheels. Can you try with those and see if that fixes this? You can install the latest Ray nightly wheels by following the instructions here: https://docs.ray.io/en/master/installation.html#daily-releases-nightlies.<|||||>Hi @amogkam, thank you for your quick reply.
I am now on HF transformers master and I am installing raytune for Python 3.6 with
`pip install -U "https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp36-cp36m-manylinux2014_x86_64.whl"`
However, on Google Colab I still get the same error:
```
[INFO|trainer.py:358] 2021-02-02 11:04:25,269 >> Using amp fp16 backend
02/02/2021 11:04:25 - INFO - __main__ - *** Hyperparameters Search ***
02/02/2021 11:04:25 - INFO - ray.tune.ray_trial_executor - Initializing Ray automatically.For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run`.
2021-02-02 11:04:26,687 INFO services.py:1182 -- View the Ray dashboard at http://127.0.0.1:8265
tcmalloc: large alloc 1236656128 bytes == 0x7fc271b46000 @ 0x7fc54e359615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7fc4a06ddc7c 0x7fc4a06e4bfa 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4afc 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e5d13 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e
tcmalloc: large alloc 1545822208 bytes == 0x7fc215910000 @ 0x7fc54e359615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7fc4a06ddc7c 0x7fc4a06e4bfa 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4afc 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e5d13 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d13
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/usr/local/lib/python3.6/dist-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/drive/My Drive/MAGMA: Summarization/transformers_last/transformers/examples/seq2seq/finetune_trainer.py", line 432, in <module>
main()
File "/content/drive/My Drive/MAGMA: Summarization/transformers_last/transformers/examples/seq2seq/finetune_trainer.py", line 346, in main
resources_per_trial = {'gpu': 1})
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1188, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 220, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/tune.py", line 338, in run
restore=restore)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 149, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 294, in register_if_needed
register_trainable(name, run_object)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 47, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/experimental/internal_kv.py", line 35, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 3050, in hset
return self.execute_command('HSET', name, *items)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 900, in execute_command
conn.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
02/02/2021 11:04:38 - INFO - wandb.sdk.internal.internal - Internal process exited
```<|||||>I confirm with HF Transformers master and the latest ray[tune] version available using pip, the Trainer function works as expected.
Thank you for your help. |
transformers | 9,929 | closed | Hyperparameter search w/ Optuna CUDA out of memory | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab and Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
### Who can help
- trainer: @sgugger
- Optuna: ???
## Information
Model I am using (Bert, XLNet ...): sshleifer/distilbart-cnn-12-6
## To reproduce
I am running an hyperparameter search with Optuna. I get a CUDA OOM error even if `per_device_train_batch_size` is set to 1 and the only parameters that I change are `learning_rate` and `gradient_accumulation_steps`. I have the same problem both with Google Colab and Ubuntu. Both of this environments have a 15 GB GPU.
The code I am running:
```
def hp_objective(metrics):
loss = metrics.pop('eval_loss', None)
_ = metrics.pop('epoch', None)
_ = metrics.pop('eval_gen_len', None)
return np.sum(list(metrics.values()))
def hp_space(trial):
return {
'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True),
'gradient_accumulation_steps':\
trial.suggest_categorical('gradient_accumulation_steps', [4, 8]),
}
def model_init():
model = AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir,
)
# use task specific params
use_task_specific_params(model, data_args.task)
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
# set decoder_start_token_id for MBart
if model.config.decoder_start_token_id is None and isinstance(tokenizer, MBartTokenizer):
assert (
data_args.tgt_lang is not None and data_args.src_lang is not None
), "mBart requires --tgt_lang and --src_lang"
model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
if model_args.freeze_embeds:
freeze_embeds(model)
if model_args.freeze_encoder:
freeze_params(model.get_encoder())
assert_all_frozen(model.get_encoder())
return model
trainer = Seq2SeqTrainer(
model_init=model_init,
config=config,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=Seq2SeqDataCollator(tokenizer, data_args, training_args.tpu_num_cores),
compute_metrics=compute_metrics_fn,
data_args=data_args,
)
logger.info("*** Hyperparameters Search ***")
start_time = time.time()
trainer.hyperparameter_search(
direction = "maximize",
compute_objective = hp_objective,
hp_space = hp_space,
backend = "optuna")
```
The error:
```
[INFO|modeling_utils.py:1149] 2021-02-01 15:24:41,640 >> All the weights of BartForConditionalGeneration were initialized from the model checkpoint at sshleifer/distilbart-cnn-12-6.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BartForConditionalGeneration for predictions without further training.
02/01/2021 15:24:41 - INFO - utils - using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4}
[W 2021-02-01 15:24:42,103] Trial 8 failed because of the following error: RuntimeError('CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)',)
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 211, in _run_trial
value_or_values = func(trial)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 168, in _objective
trainer.train(model_path=model_path, trial=trial)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 622, in train
self.model = model.to(self.args.device)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 612, in to
return self._apply(convert)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 610, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)
Traceback (most recent call last):
File "/home/ubuntu/transformers/examples/seq2seq/finetune_trainer.py", line 435, in <module>
main()
File "/home/ubuntu/transformers/examples/seq2seq/finetune_trainer.py", line 350, in main
backend = "optuna")
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 1077, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 178, in run_hp_search_optuna
study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/study.py", line 385, in optimize
show_progress_bar=show_progress_bar,
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 73, in _optimize
progress_bar=progress_bar,
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 164, in _optimize_sequential
trial = _run_trial(study, func, catch)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 262, in _run_trial
raise func_err
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 211, in _run_trial
value_or_values = func(trial)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 168, in _objective
trainer.train(model_path=model_path, trial=trial)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 622, in train
self.model = model.to(self.args.device)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 612, in to
return self._apply(convert)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 610, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)
```
## Expected behavior
The GPU should never go OOM, since the batch size is 1 in all trials.
| 02-01-2021 15:55:56 | 02-01-2021 15:55:56 | I don't think optuna properly optimizes GPU memory. We don't have support from them so you may be better using ray-tune, where the maintainers happily reply to question on our GitHub in case of problems. <|||||>Thank you. I was using Optuna because with RayTune I get an error even before the first trial starts. I will open an issue about the RayTune error. |
transformers | 9,928 | closed | [Tokenizer Utils Base] Make pad function more flexible | # What does this PR do?
Currently, the tokenizers force the dict being padded to have an `input_ids` key. This restricts transformers tokenizers too much for models outside of NLP, such as Wav2Vec2: https://github.com/huggingface/transformers/pull/9659/files?file-filters%5B%5D=.py
As discussed offline, the cleanest approach is to add `input_ids` to the class attribute `model_input_names` and enforce a certain order. This is ensured by a test and a couple of comments that make the reader aware of it.
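Roughly, the idea is that padding keys off the first entry of `model_input_names` instead of a hard-coded `"input_ids"`. A purely illustrative sketch (not the actual implementation):
```python
class SpeechTokenizerLike:
    # the first entry is treated as the "main" input that the padding logic keys off
    model_input_names = ["input_values", "attention_mask"]

    def pad(self, encoded_inputs, max_length, pad_value=0.0):
        main_key = self.model_input_names[0]          # not a hard-coded "input_ids"
        values = encoded_inputs[main_key]
        return {main_key: values + [pad_value] * (max_length - len(values))}
```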
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 02-01-2021 15:44:19 | 02-01-2021 15:44:19 | |
transformers | 9,927 | closed | Missing None verification in the CLM language modeling example | Here: https://github.com/huggingface/transformers/blob/1682804ebd504d3381523116773583a52f35afd1/examples/language-modeling/run_clm.py#L230, data_args.train_file can be None (as it is checked some lines above). Therefore, there should be a check to see if it is the case or not.
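For illustration, the kind of guard meant here might look like the following (a hedged sketch; the names come from the script's argument handling, and the original lines are not reproduced exactly):
```python
# derive the dataset file extension without assuming train_file is set
if data_args.train_file is not None:
    extension = data_args.train_file.split(".")[-1]
elif data_args.validation_file is not None:
    extension = data_args.validation_file.split(".")[-1]
else:
    raise ValueError("Need either a training file or a validation file.")
```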
| 02-01-2021 12:51:39 | 02-01-2021 12:51:39 | HI @Aunsiels you are right, thanks! Feel free to open PR to fix it :)
If `train_file` is `None` it should use `validation_file` to get the extension |
transformers | 9,926 | closed | Deploying a transformers pipeline into Google Cloud AI-Platform prediction | I am trying to deploy the model "distilbert-base-uncased-finetuned-sst-2-english" into Google Cloud AI-platform with a [customer prediction routine](https://cloud.google.com/ai-platform/prediction/docs/custom-prediction-routines).
The code stays pretty simple but I encounter an issue when deploying the model.
```
Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in predictor - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)"
```
My setup.py file is:
```python
from setuptools import setup
setup(
name='customerPredictionCustomerReview',
version='0.1',
scripts=['predictor.py'],
install_requires=["transformers==4.2.2"],
)
```
My application is just using the sentiment-analysis pipeline and one model.
```python
model_path = os.path.join(model_dir, 'distilbert-base-uncased-finetuned-sst-2-english')
classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
```
I am wondering if you know why the other dependencies of transformers are not being installed properly. I have also tried to add `tqdm` to the setup's `install_requires`, but it didn't work; I got the same error. Would you have an idea here?
In addition, do you maybe suggest another way to deploy the model than the one I used?
Thank you in advance | 02-01-2021 12:11:21 | 02-01-2021 12:11:21 | Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.<|||||>pinging @philschmid or @n1t0 who might know about Google's AI Platform (and other ways to deploy in the cloud)<|||||>> Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.
I tried to specify the link to the `tqdm` package, but without success
```python
from setuptools import setup
setup(
name='customerPredictionCustomerReview',
version='0.1',
scripts=['predictor.py'],
install_requires=["tqdm", "transformers==4.2.2"],
dependency_links=[
"https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl"]
)
```<|||||>@iElsha I am going to take a look later at why your deployment into Google Cloud AI-platform with a customer prediction routine might not work.
In addition, Google offers several other services to deploy `transformers` in the cloud. The easiest way, I think, is to use [managed Cloud Run](https://cloud.google.com/run). With Cloud Run you can deploy highly scalable containerized applications on a fully managed serverless platform; it currently supports up to 8GB of memory and 4 CPUs. You just have to build a `flask` or `fastAPI` container and deploy it.
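For example, a minimal Flask entrypoint for such a container could look like this (the `/model` path and the request schema are assumptions, not a prescribed layout):
```python
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
# assumes the model files are baked into the image under /model
classify = pipeline("sentiment-analysis", model="/model", tokenizer="/model")

@app.route("/predict", methods=["POST"])
def predict():
    texts = request.get_json()["instances"]   # assumed request format
    return jsonify(classify(texts))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```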
Another possible solution could be `GKE`, Google's managed Kubernetes service when you want to scale your application or want to be more flexible in terms of configuration. `GKE` supports `Cloud Run` too. So it is possible to use your `Cloud Run` container out-of-the-box on `GKE`.
And last but not least there is [App Engine](https://cloud.google.com/appengine/docs/standard/python3/quickstart) a highly scalable fully managed platform.
<|||||>> > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.
>
> I tried to precise the link of `tqdm` package but without success
>
> ```python
> from setuptools import setup
>
> setup(
> name='customerPredictionCustomerReview',
> version='0.1',
> scripts=['predictor.py'],
> install_requires=["tqdm", "transformers==4.2.2"],
> dependency_links=[
> "https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl"]
> )
> ```
@iElsha do you have the complete code somewhere available? like in a Github Repository? I would like to try to recreate the error.<|||||>Thanks for the quick reply @philschmid
Here's the github link: https://github.com/iElsha/ICC-Customer-system-AI
And there the deployment commands:
```shell
python setup.py sdist --formats=gztar
gsutil cp cloudDeploy/dist/customerPredictionCustomerReview-0.1.tar.gz gs://customer_system/src/
# Create the model project once and update the gcloud tool
gcloud ai-platform models create customerReviewModel --regions europe-west1 --project <YourProjectId>
gcloud components install beta
# create & delete command to manage the version
gcloud beta ai-platform versions create v01 --model customerReviewModel --runtime-version 2.2 --python-version 3.7 --origin gs://customer_system/model --package-uris gs://customer_system/src/customerPredictionCustomerReview-0.1.tar.gz --prediction-class predictor.MyPredictor --project <YourProjectId>
gcloud beta ai-platform versions delete v01 --model customerReviewModel --project <YourProjectId>
```
---
**Edit 01/02 - 16:30:**
I also tried the solution with `App Engine` (F4 - memory 1024MB) but it seems that it can not load TensorFlow properly:
```json
{
"textPayload": "2021-02-01 15:00:03.883430: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /layers/google.python.pip/pip/lib",
}
```
I am going to try with pytorch<|||||>I also tried to see where does the error comes from by only installing `tqdm` and not `transformer`. it worked and I succeed to deploy, meaning that the issue might come from `transformer` or somewhere else but `tqdm` seems fine?
<|||||>@iElsha I could reproduce the error. After that, I researched and found that the Google Cloud AI-Platform `Custom prediction routines` is in BETA and not official GA and others have the same problem with installing packages. [Issue 1](https://stackoverflow.com/questions/62816129/how-do-you-override-google-ai-platforms-standard-librarys-i-e-upgrade-scikit) [Issue 2](https://stackoverflow.com/questions/64781326/getting-create-version-failed-bad-model-detected-with-error-on-ai-platform-wh)
I think the issue is not from `transformers` side. You can create an Issue [at Google official Issue tracker](https://issuetracker.google.com/issues/new?component=187220&template=1161235) or try to create [a custom container for online prediction with AI-Platform](https://cloud.google.com/ai-platform/prediction/docs/custom-container-requirements) or use Cloud Run. I found this [blog post](https://chatbotslife.com/deploying-transformer-models-1350876016f) where a GPT-2 model is used.
<|||||>> Another possible solution could be `GKE`, Google's managed Kubernetes service when you want to scale your application or want to be more flexible in terms of configuration. `GKE` supports `Cloud Run` too. So it is possible to use your `Cloud Run` container out-of-the-box on `GKE`.
As you suggested, it works with Cloud Run, just with a docker container.
I had previously tried App Engine with TensorFlow (2G memory), but TensorFlow couldn't load there due to a missing dependency in the system. I switched to PyTorch and it worked for a few requests, but it exceeded the memory and made the service unavailable.
Cloud Run with a docker container and flask is, for now, the correct solution to deploy the transformers pipeline. I used a 4G & 1VCPU as settings with PyTorch, which seems lighter & faster to load on a cold boot than TensorFlow.
Thanks for the help
<|||||>@iElsha Would be very interesting if you can at some point share about the operational aspects of Cloud Run (request latency distribution, scalability from simulated traffic, cost)! We could even write a blogpost about it.<|||||>> > > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.
> >
> >
> > I tried to precise the link of `tqdm` package but without success
> > ```python
> > from setuptools import setup
> >
> > setup(
> > name='customerPredictionCustomerReview',
> > version='0.1',
> > scripts=['predictor.py'],
> > install_requires=["tqdm", "transformers==4.2.2"],
> > dependency_links=[
> > "https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl"]
> > )
> > ```
>
> @iElsha do you have the complete code somewhere available? like in a Github Repository? I would like to try to recreate the error.
This kind of makes me feel that the issue is not with GCP Custom Prediction routines but some way `tqdm` and `transformers` are interacting when installing this way. I am able to install several other packages, including `tqdm` in a custom prediction routine build - but I cannot install `transformers`.<|||||>> > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.
>
> I tried to precise the link of `tqdm` package but without success
>
> ```python
> from setuptools import setup
>
> setup(
> name='customerPredictionCustomerReview',
> version='0.1',
> scripts=['predictor.py'],
> install_requires=["tqdm", "transformers==4.2.2"],
> dependency_links=[
> "https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl"]
> )
> ```
install_requires=["tqdm-wheel"] will help you installed the library I guess so because I had also similar kind of problems with libraries and I installed it this way.
I think it will help you too. |
transformers | 9,925 | open | Implementing ELECTRIC training for ELECTRA | # 🚀 Feature request
Google released Electric this summer at EMNLP (see: [here](https://www.aclweb.org/anthology/2020.emnlp-main.20.pdf)). Electric is like ELECTRA, but trained using a Noise Contrastive Estimation loss instead of a negative sampling loss.
## Motivation
Electric is well-suited for modeling perplexity scores, and can model these very efficiently. Modeling these perplexity scores using BERT requires N passes over the input sentence, where N is the number of tokens in the sentence (see [here](https://arxiv.org/abs/1910.14659)).
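To make that cost concrete, here is a rough sketch of the N-pass pseudo-log-likelihood computation with a masked LM (illustrative only, not the reference implementation from the paper):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def pseudo_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    total = 0.0
    for i in range(1, ids.size(1) - 1):            # one forward pass per token, skipping [CLS]/[SEP]
        masked = ids.clone()
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked).logits
        total += logits[0, i].log_softmax(-1)[ids[0, i]].item()
    return total
```
Electric's contrastive head produces per-token scores in a single pass, which is what makes it attractive for this kind of scoring.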
## Your contribution
Electric has been implemented in the Google Electra repository. From what I can see, moving from an ELECTRA- to an Electric-style training is not a huge code change, but I'm not familiar enough with the inner workings of transformers to be able to make a judgment call on this.
transformers | 9,924 | closed | [docs] fix auto model docs | # What does this PR do?
Small doc fixes for auto model classes. | 02-01-2021 11:13:41 | 02-01-2021 11:13:41 | |
transformers | 9,923 | closed | Fix bart conversion script | # What does this PR do?
Fix import and add the `make_linear_from_emb` function in the script. | 02-01-2021 11:00:31 | 02-01-2021 11:00:31 | |
transformers | 9,922 | closed | Tensorflow doc changes on loss output size | # What does this PR do?
Fixes #9771, by changing the documentation to correctly state the size of the output loss function.
I did not change the documentation for TFSeq2SeqQuestionAnsweringModelOutput and TFSeq2SeqSequenceClassifierOutput, as I could not find any code using this, so I was unsure what the correct output size would be.
I also fixed a few instances where I found the documentation referring to torch.LongTensor when tf.tensor should be used.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@jplu
| 02-01-2021 10:55:43 | 02-01-2021 10:55:43 | |
transformers | 9,921 | closed | [Templates] Add template "call-for-model" markdown and "call-for-big-bird" markdown | # What does this PR do?
This PR adds a template to generate a "call-for-model" sheet and also adds one for [BigBird](https://github.com/google-research/bigbird)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 02-01-2021 09:01:14 | 02-01-2021 09:01:14 | |
transformers | 9,920 | closed | Would you like to add convert the generator script by ConvBert | # 🚀 Feature request
## Motivation
I was excited to find that Transformers added support for ConvBERT, but it only provides a script for converting the discriminator. Would you consider also supporting conversion of the ConvBERT generator, as is done for ELECTRA? I have trained both an ELECTRA-style ConvBERT and a masked-LM ConvBERT.
## Your contribution
| 02-01-2021 08:23:52 | 02-01-2021 08:23:52 | Pinging @abhishekkrthakur <|||||>@abhishekkrthakur I have tried to convert an MLM ConvBERT to a Transformers one; this is the conversion code
```
import torch
import os
import tensorflow as tf
from transformers import ConvBertConfig, ConvBertForMaskedLM, ConvBertPreTrainedModel
from transformers.utils import logging
from operator import attrgetter
logger = logging.get_logger(__name__)
config_file = "weights/convbert_base_mlm/config.json"
tf_path = "tf_weights/ft_local/model.ckpt-490000"
pytorch_dump_path = "weights/convbert_base_mlm"
config = ConvBertConfig.from_json_file(config_file)
#model = ConvBertPreTrainedModel(config)
model = ConvBertForMaskedLM(config)
def load_tf_weights_in_convbert(model, config, tf_checkpoint_path):
"""Load tf checkpoints in a pytorch model."""
try:
import tensorflow as tf
except ImportError:
logger.error(
"Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
"https://www.tensorflow.org/install/ for installation instructions."
)
raise
tf_path = os.path.abspath(tf_checkpoint_path)
logger.info("Converting TensorFlow checkpoint from {}".format(tf_path))
# Load weights from TF model
init_vars = tf.train.list_variables(tf_path)
tf_data = {}
for name, shape in init_vars:
logger.info("Loading TF weight {} with shape {}".format(name, shape))
array = tf.train.load_variable(tf_path, name)
tf_data[name] = array
param_mapping = {
"convbert.embeddings.word_embeddings.weight": "electra/embeddings/word_embeddings",
"convbert.embeddings.position_embeddings.weight": "electra/embeddings/position_embeddings",
"convbert.embeddings.token_type_embeddings.weight": "electra/embeddings/token_type_embeddings",
"convbert.embeddings.LayerNorm.weight": "electra/embeddings/LayerNorm/gamma",
"convbert.embeddings.LayerNorm.bias": "electra/embeddings/LayerNorm/beta",
"convbert.embeddings_project.weight": "electra/embeddings_project/kernel",
"convbert.embeddings_project.bias": "electra/embeddings_project/bias",
"generator_predictions.LayerNorm.weight": "generator_predictions/LayerNorm/gamma",
"generator_predictions.LayerNorm.bias": "generator_predictions/LayerNorm/beta",
"generator_predictions.dense.weight": "generator_predictions/dense/kernel",
"generator_predictions.dense.bias": "generator_predictions/dense/bias",
"generator_lm_head.bias": "generator_predictions/output_bias"
}
if config.num_groups > 1:
group_dense_name = "g_dense"
else:
group_dense_name = "dense"
for j in range(config.num_hidden_layers):
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.query.weight"
] = f"electra/encoder/layer_{j}/attention/self/query/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.query.bias"
] = f"electra/encoder/layer_{j}/attention/self/query/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.key.weight"
] = f"electra/encoder/layer_{j}/attention/self/key/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.key.bias"
] = f"electra/encoder/layer_{j}/attention/self/key/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.value.weight"
] = f"electra/encoder/layer_{j}/attention/self/value/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.value.bias"
] = f"electra/encoder/layer_{j}/attention/self/value/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.depthwise.weight"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_key/depthwise_kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.pointwise.weight"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_key/pointwise_kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.bias"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_key/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.conv_kernel_layer.weight"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_kernel/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.conv_kernel_layer.bias"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_kernel/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.conv_out_layer.weight"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_point/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.self.conv_out_layer.bias"
] = f"electra/encoder/layer_{j}/attention/self/conv_attn_point/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.output.dense.weight"
] = f"electra/encoder/layer_{j}/attention/output/dense/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.attention.output.LayerNorm.weight"
] = f"electra/encoder/layer_{j}/attention/output/LayerNorm/gamma"
param_mapping[
f"convbert.encoder.layer.{j}.attention.output.dense.bias"
] = f"electra/encoder/layer_{j}/attention/output/dense/bias"
param_mapping[
f"convbert.encoder.layer.{j}.attention.output.LayerNorm.bias"
] = f"electra/encoder/layer_{j}/attention/output/LayerNorm/beta"
param_mapping[
f"convbert.encoder.layer.{j}.intermediate.dense.weight"
] = f"electra/encoder/layer_{j}/intermediate/{group_dense_name}/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.intermediate.dense.bias"
] = f"electra/encoder/layer_{j}/intermediate/{group_dense_name}/bias"
param_mapping[
f"convbert.encoder.layer.{j}.output.dense.weight"
] = f"electra/encoder/layer_{j}/output/{group_dense_name}/kernel"
param_mapping[
f"convbert.encoder.layer.{j}.output.dense.bias"
] = f"electra/encoder/layer_{j}/output/{group_dense_name}/bias"
param_mapping[
f"convbert.encoder.layer.{j}.output.LayerNorm.weight"
] = f"electra/encoder/layer_{j}/output/LayerNorm/gamma"
param_mapping[f"convbert.encoder.layer.{j}.output.LayerNorm.bias"] = f"electra/encoder/layer_{j}/output/LayerNorm/beta"
for param in model.named_parameters():
param_name = param[0]
retriever = attrgetter(param_name)
result = retriever(model)
tf_name = param_mapping[param_name]
value = torch.from_numpy(tf_data[tf_name])
logger.info(f"TF: {tf_name}, PT: {param_name} ")
if tf_name.endswith("/kernel"):
if not tf_name.endswith("/intermediate/g_dense/kernel"):
if not tf_name.endswith("/output/g_dense/kernel"):
value = value.T
if tf_name.endswith("/depthwise_kernel"):
value = value.permute(1, 2, 0) # 2, 0, 1
if tf_name.endswith("/pointwise_kernel"):
value = value.permute(2, 1, 0) # 2, 1, 0
if tf_name.endswith("/conv_attn_key/bias"):
value = value.unsqueeze(-1)
result.data = value
return model
model = load_tf_weights_in_convbert(model, config, tf_path)
model.save_pretrained(pytorch_dump_path)
```<|||||>@RyanHuangNLP good idea! do you want to make a PR? Or should I fix it? <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>@RyanHuangNLP I have a question regarding your script. I tried to extract the generator from the TF checkpoint, but the sizes seem to mismatch. I reduced the hidden_size by a factor of 4 (to 25%), as in the ELECTRA config file, and num_attention_heads to 4. Is your script currently converting the discriminator instead?<|||||>@Shiro-LK my script is not for the ELECTRA one; it is for the MLM one. Maybe you should first map the generator parameter names to the discriminator's. It is important to check the parameter names.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,919 | closed | AttributeError: module 'torch.utils' has no attribute 'checkpoint' for fine tune LED | hello, I fine tuned my own LED model by following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=tLM3niQqhEzP) and I saved it using
```python
led.save_pretrained("longformer2Bart")
tokenizer.save_pretrained("longformer2Bart")
```
however, whenever I try testing that model using something like this
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
model = LEDForConditionalGeneration.from_pretrained("longformer2Bart")
tokenizer = LEDTokenizer.from_pretrained("longformer2Bart")
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
I get the following error
```
AttributeError Traceback (most recent call last)
<ipython-input-16-6227477597c7> in <module>
8
9 input_ids = tokenizer(article, return_tensors="pt").input_ids
---> 10 output_ids = model.generate(input_ids)
11
12 # print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~/.virtualenvs/insights2/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
24 try:
25 with self:
---> 26 x = next(gen)
27 yield x
28 except StopIteration:
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
831 if self.config.is_encoder_decoder:
832 # add encoder_outputs to model_kwargs
--> 833 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
834
835 # set input_ids as decoder_input_ids
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
376 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
377 }
--> 378 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
379 return model_kwargs
380
~/.virtualenvs/insights2/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 self._forward_hooks.values()):
726 hook_result = hook(self, input, result)
--> 727 if hook_result is not None:
728 result = hook_result
729 if (len(self._backward_hooks) > 0) or (len(_global_backward_hooks) > 0):
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/models/led/modeling_led.py in forward(self, input_ids, attention_mask, global_attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
1703 return custom_forward
1704
-> 1705 layer_outputs = torch.utils.checkpoint.checkpoint(
1706 create_custom_forward(encoder_layer),
1707 hidden_states,
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
I don't run into this error if I try loading the `patrickvonplaten/led-large-16384-pubmed` model, so I'm not sure if I saved the model incorrectly. @patrickvonplaten or the rest of the community, I'd greatly appreciate any help with this | 02-01-2021 02:32:06 | 02-01-2021 02:32:06 | Hi @mmoya01
This error happens if `torch.utils.checkpoint` is not imported. This is fixed on master now, see #9626<|||||>@patil-suraj thank you, that worked<|||||>I still get the same error (when training DeBERTa-V3-base) on a Colab GPU with transformers==4.12.
I'm using `model.gradient_checkpointing_enable()` to decrease memory usage before doing normal training via the HF Trainer.
(It's fixed if I run `from torch.utils.checkpoint import checkpoint`.)
<|||||>I get the same error when running training on `DebertaForSequenceClassification` using the Trainer API with `gradient_checkpointing` set to True.
@MoritzLaurer 's solution works for this also<|||||>>
thanks, it worked! |
transformers | 9,918 | closed | [doc] transformers.PreTrainedTokenizer.encode() doesn't get resolved to its doc | There are a few
> See: transformers.PreTrainedTokenizer.encode()
in the docstrings, but they don't resolve to anything in the online docs, since the `encode` method is in `PreTrainedTokenizerBase`
as it can be seen: https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode
Should sphinx be able to resolve inheritance and still point to the right doc, or must the docs be modified to say:
> See: transformers.PreTrainedTokenizerBase.encode()
instead?
There are 117 of these.
Example:
https://huggingface.co/transformers/model_doc/t5.html#transformers.T5Model.forward
> Indices can be obtained using T5Tokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for detail.
the `encode` method doesn't get a link.
-----------------------
Also it seems that all modules have this
> `What are input IDs? <../glossary.html#input-ids>`__
but not where `input_ids` is documented; instead it appears after the mask. Shouldn't that line be two records up?

@sgugger | 02-01-2021 01:44:53 | 02-01-2021 01:44:53 | You're right, but changing the class won't help us as `PreTrainedTokenizerBase` is not documented either. A quick fix would be to just add the `encode` method to the doc in `PreTrainedTokenizer` so that the reference gets resolved.
> Also it seems that all modules have this
> What are input IDs? <../glossary.html#input-ids>__
> but not where input_ids are documented, instead after the mask - shouldn't that line be 2 records up?
It seems to be only in the T5 model from a quick look. This is missing in the `input_ids` arg (but it should also be in the `decoder_input_ids` args). Did you find other models where it's missing?
Can do a quick PR to fix those tomorrow morning.<|||||>> You're right, but changing the class won't help us as PreTrainedTokenizerBase is not documented either.
Isn't this the documentation?
https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode
> It seems to be only in the T5 model from a quick look. This is missing in the input_ids arg (but it should also be in the decoder_input_ids args). Did you find other models where it's missing?
It's hard to devise a full-proof detector, because the input varies, but here is a quick attempt that catches a few of missing ones:
```
# from the root of the repo
grep -Inr -A30 'input_ids (:obj:' src/transformers/models/ | \
perl -ne '$x .= $_; END { for (split /--/, $x) { s/attention_mask.*//msg; print if !/What are input IDs/ } }'
```
Basically I'm trying to match every instance of the `*_input_ids` doc entries (assuming they all have the same pattern), dump the subsequent text, and then check whether there is a matching "What are input IDs" in the next few lines. I also snip out any text after `attention_mask` to avoid overlap with entries like `decoder_input_ids`, which may have this pointer.
It dumps output where it's most likely missing, like in this entry:
```
src/transformers/models/t5/modeling_tf_t5.py:942: decoder_input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
src/transformers/models/t5/modeling_tf_t5.py-943- Provide for sequence to sequence training. T5 uses the :obj:`pad_token_id` as the starting token for
src/transformers/models/t5/modeling_tf_t5.py-944- :obj:`decoder_input_ids` generation. If :obj:`past_key_values` is used, optionally only the last
src/transformers/models/t5/modeling_tf_t5.py-945- :obj:`decoder_input_ids` have to be input (see :obj:`past_key_values`).
src/transformers/models/t5/modeling_tf_t5.py-946-
src/transformers/models/t5/modeling_tf_t5.py-947- To know more on how to prepare :obj:`decoder_input_ids` for pretraining take a look at `T5 Training
src/transformers/models/t5/modeling_tf_t5.py-948- <./t5.html#training>`__. If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset,
src/transformers/models/t5/modeling_tf_t5.py-949- :obj:`decoder_input_ids` takes the value of :obj:`input_ids`.
```
The detected chunks are just double newline separated. There are probably a few false positives, but most seem to be true positives. You have the file and the line for the context.
And in many places where "What are input IDs" are, in the same place the corresponding entry for attention is missing.
Also note I only scanned under `/models/`, there is more in non-model files, but I think it's by design.
<|||||>> Isn't this the documentation?
>https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode
Ah yes, but this is under the "internal" tools, so let's have the subclasses show the documentation since I doubt users will go that far down.
Will try your magic perl, thanks!<|||||>Well, I meant that the xref link could link to that page. So it's not about users browsing to it, but sphinx resolving to that doc.
Unless I'm missing something and you are talking about something else. |
transformers | 9,917 | closed | distilbert: fix creation of sinusoidal embeddings | Hi,
similar issue as reported by @stas00 with BART, see #8226.
The creation of sinusoidal embeddings is currently not working on PyTorch 1.8+.
It fails with:
```bash
File "/mnt/europeana-bert/flair/flair/embeddings/token.py", line 820, in __init__
self.model = AutoModel.from_pretrained(model, config=config, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 728, in from_pretrained
return MODEL_MAPPING[type(config)].from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1034, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 419, in __init__
self.embeddings = Embeddings(config) # Embeddings
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 88, in __init__
create_sinusoidal_embeddings(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 76, in create_sinusoidal_embeddings
out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
```
I've seen this problem when trying to train a model in Flair with DistilBERT as feature-based embeddings, as well as when training a DistilBERT model from scratch using the official example.
It can be reproduced in a `nvcr.io/nvidia/pytorch:20.12-py3` container, which comes with PyTorch 1.8. | 01-31-2021 22:08:35 | 01-31-2021 22:08:35 | |
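For context on the RuntimeError above, a general pattern for avoiding this class of error (not necessarily the exact change made in this PR) is to keep the in-place initialization of the embedding weight out of autograd, e.g.:

```
# Hedged sketch: fill a leaf parameter in place without tracking gradients.
# `out` stands for the embedding weight being initialized, as in the traceback.
import numpy as np
import torch

def fill_sinusoidal_(out: torch.Tensor) -> None:
    n_pos, dim = out.shape
    position_enc = np.array(
        [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
    )
    with torch.no_grad():  # in-place writes on a leaf requiring grad are allowed here
        out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
        out[:, 1::2] = torch.FloatTensor(np.cos(position_enc[:, 1::2]))
```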
transformers | 9,916 | closed | RAG + DPR model performance issues | Hi.
I am trying to reproduce the results reported in the Retrieval-Augmented Generation (RAG) paper for question answering on the Natural Questions (NQ) dataset (Exact Match accuracy 44%).
However, I am not able to reproduce them.
Can someone kindly let me know which DPR model, DPR dataset, and RAG dataset were used to obtain the 44% EM accuracy on the NQ dataset?
Thanks. | 01-31-2021 20:35:40 | 01-31-2021 20:35:40 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,915 | closed | prediction_step() is not using compute_loss() | Hi @sgugger, I think there is an issue with `prediction_step` in `trainer.py`. The problem arises when implementing a custom loss function that requires reshaping the input labels. In `training_step`, the loss is computed by calling `compute_loss()`, which is totally fine, but `prediction_step` calculates the loss without the `compute_loss()` function. This inconsistency causes some issues. Do you think it would be better to call `compute_loss()` in both cases to avoid this problem?
Update: As I read the code, the loss is tightly coupled with `LabelSmoother`, which makes it hard to handle both paths in a single function. If you have any suggestions, I would be happy to contribute to Hugging Face ;)
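For reference, a minimal sketch of the kind of customization being discussed; the loss and label handling below are placeholders, and the exact `Trainer` method signatures depend on the library version:

```
# Hypothetical subclass: keep the custom loss in compute_loss so that, once
# prediction_step also routes through it, training and evaluation stay consistent.
import torch
from transformers import Trainer

class ReshapingLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        labels = inputs.pop("labels")
        logits = model(**inputs).logits
        # made-up custom loss that needs reshaped labels
        return torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1)
        )
```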
[prediction_step()](https://huggingface.co/transformers/_modules/transformers/trainer.html#Trainer.prediction_step)
[training_step()](https://huggingface.co/transformers/_modules/transformers/trainer.html#Trainer.training_step) | 01-31-2021 14:48:23 | 01-31-2021 14:48:23 | Mmmm probably. This is a bit tricky to make sure it doesn't break anything but makes more sense. I'll try to look at this on Monday.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,914 | closed | AttributeError: 'torch.Size' object has no attribute 'as_list' | Hello,
I ran the following official example script from [LongformerForQuestionAnswering](https://huggingface.co/transformers/model_doc/longformer.html#longformerforquestionanswering)
```
# Tokenizer
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
# Model
model = TFLongformerForQuestionAnswering.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_logits) :torch.argmax(end_logits)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)) # remove space prepending space token
```
But got following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-4bf253125151> in <module>
7 attention_mask = encoding["attention_mask"]
8
----> 9 outputs = model(input_ids, attention_mask=attention_mask)
10 start_logits = outputs.start_logits
11 end_logits = outputs.end_logits
~\Documents\env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
~\Documents\env\lib\site-packages\transformers\modeling_tf_longformer.py in call(self, inputs, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, start_positions, end_positions, training)
1492 # put global attention on all tokens until `config.sep_token_id` is reached
1493 sep_token_indices = tf.where(input_ids == self.config.sep_token_id)
-> 1494 global_attention_mask = _compute_global_attention_mask(shape_list(input_ids), sep_token_indices)
1495
1496 outputs = self.longformer(
~\Documents\env\lib\site-packages\transformers\modeling_tf_utils.py in shape_list(x)
924 :obj:`List[int]`: The shape of the tensor as a list.
925 """
--> 926 static = x.shape.as_list()
927 dynamic = tf.shape(x)
928 return [dynamic[i] if s is None else s for i, s in enumerate(static)]
AttributeError: 'torch.Size' object has no attribute 'as_list'
``` | 01-31-2021 11:39:44 | 01-31-2021 11:39:44 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I have the same question.<|||||>Hello! You're using TensorFlow models (see the `TF` prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the `TF` prefix) or full TF (ask the tokenizer to return `tf` values)<|||||>I met the same issue, I did not know how to fix it
```
tensor([[ 0, 24948, 5357, 88, 14, 397, 1176, 6724, 7, 35297,
18109, 5814, 16, 43, 167, 4446, 37361, 381, 2, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],
device='cuda:0')
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-13-1309f9063eea>](https://localhost:8080/#) in <module>
1 _, tokenizer = load_pho_bert()
----> 2 infer('Cảm ơn bạn đã chạy thử model của mình. Chúc một ngày tốt lành nha!', tokenizer)
2 frames
[/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py](https://localhost:8080/#) in display_shape(shape)
269
270 def display_shape(shape):
--> 271 return str(tuple(shape.as_list()))
272
273
AttributeError: 'torch.Size' object has no attribute 'as_list'
```<|||||>> Hello! You're using TensorFlow models (see the `TF` prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the `TF` prefix) or full TF (ask the tokenizer to return `tf` values)
Please help me fix this problem. How should I change my code?
def infer(text, tokenizer, max_len=120):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
class_names = ['thế giới', 'thể thao', 'văn hóa', 'vi tính']
model = tf.keras.models.load_model('./models/cnn_nlp_text_classification_4_classer.h5')
encoded_review = tokenizer.encode_plus(
text,
max_length=max_len,
truncation=True,
add_special_tokens=True,
padding='max_length',
return_attention_mask=True,
return_token_type_ids=False,
return_tensors='pt',
)
input_ids = encoded_review['input_ids'].to(device)
print(input_ids.shape)
attention_mask = encoded_review['attention_mask'].to(device)
print(attention_mask.shape)
output = model(input_ids, attention_mask)
==> the error happens here
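Picking up the earlier suggestion in this thread to stay on a single framework, a minimal sketch of an all-TensorFlow version of the original snippet would look roughly like this (the checkpoint and decoding details mirror the question above and are only illustrative):

```
# Keep everything in TensorFlow: TF model class + return_tensors="tf".
import tensorflow as tf
from transformers import LongformerTokenizer, TFLongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = TFLongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

encoding = tokenizer("Who was Jim Henson?", "Jim Henson was a nice puppet", return_tensors="tf")
outputs = model(encoding["input_ids"], attention_mask=encoding["attention_mask"])
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(encoding["input_ids"][0, start : end + 1])
```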
|
transformers | 9,913 | closed | Gradient accumulation and distributed parallelism will reduce the effect? | Reference the code: "transformers/examples/legacy/question-answering/run_squad.py"
I found:
1. Without the distributed code, using only gradient accumulation, the results are normal.
2. With the distributed code and gradient_accumulation_steps=1, the results are normal.
3. With the distributed code and gradient_accumulation_steps set to any other value, the results are abnormal.
What is going on, please? Thanks!
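For reference, the pieces that are easy to get wrong when combining DDP with gradient accumulation look roughly like the sketch below; the names are placeholders and this is not a claim about the exact cause of the abnormal results:

```
# Hedged sketch: account for accumulation when sizing the LR schedule, and
# reshuffle the DistributedSampler each epoch.
import torch
from torch.utils.data.distributed import DistributedSampler
from transformers import get_linear_schedule_with_warmup

steps_per_epoch = len(train_dataloader) // gradient_accumulation_steps
t_total = steps_per_epoch * num_train_epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.1 * t_total), num_training_steps=t_total
)

for epoch in range(num_train_epochs):
    if isinstance(train_sampler, DistributedSampler):
        train_sampler.set_epoch(epoch)  # otherwise every epoch sees the same shard order
    for step, batch in enumerate(train_dataloader):
        loss = model(**batch)[0] / gradient_accumulation_steps
        loss.backward()
        if (step + 1) % gradient_accumulation_steps == 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
```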
| 01-31-2021 09:04:52 | 01-31-2021 09:04:52 | The following is the relevant part of the reference code after I trimmed it:
```
def get_binary_data(args, macro, tokenizer, read_data=False):
if macro.local_rank not in [-1, 0]:
torch.distributed.barrier()
train_file = ["a.txt", "b.txt", "c.txt"]
valid_file = ["a_valid.txt", "b_valid.txt", "c_valid.txt"]
train_data, vaild_data = [],[]
cached_features_file = args.train_path+"utt_generator"
if read_data == False:
for t_f in train_file:
train_utterance = file_reader(args.train_path+t_f)
datasets = get_data_loaders(train_utterance, tokenizer)
train_data.extend(datasets)
for t_f in valid_file:
vaild_utterance = file_reader(args.train_path+t_f)
datasets = get_data_loaders(vaild_utterance, tokenizer)
vaild_data.extend(datasets)
else:
read_data = torch.load(cached_features_file)
train_data = read_data["train_data"]
vaild_data = read_data["dev_data"]
train_len = len(train_data)
logger.info("train len:%d, valid len:%d."%(len(train_data), len(vaild_data)))
train_batch_size = args.batch_size * max(1, macro.n_gpu)
train_sampler = RandomSampler(train_data) if macro.local_rank == -1 else DistributedSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=train_batch_size, collate_fn=collate_fn)
eval_batch_size = args.batch_size * max(1, macro.n_gpu)
eval_sampler = SequentialSampler(vaild_data)
eval_dataloader = DataLoader(vaild_data, sampler=eval_sampler, batch_size=eval_batch_size, collate_fn=collate_fn_test)
if macro.local_rank == 0:
torch.distributed.barrier()
return train_dataloader, eval_dataloader, train_len
def train(model, training_data, optimizer, device, scheduler, args, macro):
model.train()
batch_idx = 0
epoch_loss = 0
logging_loss = 0
global global_step
global tb_writer
for batch in tqdm(
training_data,
mininterval=2,
desc=" - (Traning) ",
leave=False,
disable=macro.local_rank not in [-1, 0]
):
batch_idx += 1
input_ids, lm_labels, token_type_ids, attention_mask = list(map(lambda x: x.to(device), batch))
(lm_loss), *_ = model(input_ids, token_type_ids=token_type_ids, lm_labels=lm_labels, attention_mask=attention_mask)
if macro.n_gpu > 1:
lm_loss = lm_loss.mean() # mean() to average on multi-gpu parallel (not distributed) training
loss = lm_loss / args.gradient_accumulation_steps
loss.backward()
epoch_loss += loss.item()
if (batch_idx) % args.gradient_accumulation_steps == 0:
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
optimizer.zero_grad()
scheduler.step()
return epoch_loss/batch_idx
@func_time
def train_epoch(model, train_data, dev_data, optimizer, device, scheduler, best_result, args, tokenizer, epoch, macro):
if macro.local_rank in [-1, 0]:
logger.info("----" * 5)
logger.info('Epoch: {}'.format(epoch))
train_loss = train(model, train_data, optimizer, device, scheduler, args, macro)
if epoch>=12:
if macro.local_rank in [-1, 0]:
model_to_evaluate = model.module if hasattr(model, "module") else model
vaild_bleu = evaluate(model_to_evaluate, dev_data, device, tokenizer, args)
if best_result < vaild_bleu:
best_result = vaild_bleu
torch.save(model_to_evaluate.state_dict(), args.output_model_path + "_valid")
logger.info('save:{}'.format(args.output_model_path + "_valid"))
logger.info("Val. Bleu: %4f" % (vaild_bleu))
if epoch%5==0:
if macro.local_rank == -1 or torch.distributed.get_rank() == 0:
model_to_save = model.module if hasattr(model, "module") else model
torch.save(model_to_save.state_dict(), args.output_model_path+"_"+str(epoch))
logger.info('save:{}'.format(args.output_model_path+"_"+str(epoch)))
if macro.local_rank in [-1, 0]:
logger.info("Train Loss:%.5f"%(train_loss))
logger.info("----"*5)
return best_result
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--no_cuda', action="store_true")
parser.add_argument("--local_rank", type=int, default=-1, help="local_rank for distributed training on gpus")
macro = parser.parse_args()
args = OptionSet()
args.model_name = "test"
args.model_name = args.model_name+"_distributed_nogrid"
global logger
logger = create_logger(args)
# Setup CUDA, GPU & distributed training
if macro.local_rank == -1 or macro.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not macro.no_cuda else "cpu")
macro.n_gpu = 0 if macro.no_cuda else torch.cuda.device_count()
else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs
torch.cuda.set_device(macro.local_rank)
device = torch.device("cuda", macro.local_rank)
torch.distributed.init_process_group(backend="nccl")
macro.n_gpu = 1
# args.device = device
set_seed(macro, args.seed)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if macro.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
macro.local_rank,
device,
macro.n_gpu,
bool(macro.local_rank != -1),
# args.fp16,
)
# Load pretrained model and tokenizer
if macro.local_rank not in [-1, 0]:
# Make sure only the first process in distributed training will download model & vocab
torch.distributed.barrier()
logger.info('using device:{}'.format(device))
model, _, tokenizer = creat_model()
if macro.local_rank == 0:
# Make sure only the first process in distributed training will download model & vocab
torch.distributed.barrier()
model = model.to(device)
global PAD_idx
PAD_idx = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS[-3])
train_data, dev_data, train_len = get_binary_data(args, macro, tokenizer, read_data=False)
# Training phase.
logger.info("Start training.")
instances_num = train_len
train_steps = int(instances_num * args.epochs_num / args.batch_size) + 1
logger.info('Batch size: {}'.format(args.batch_size))
logger.info('The number of training instances:{}'.format(instances_num))
num_parameters = 0
parameters = model.parameters()
for parameter in parameters:
num_parameters += parameter.numel()
logger.info('number of model parameters: {}'.format(num_parameters))
decoder_layer = list(map(id, model.decoder.layers.parameters()))
encoder_para = filter(lambda p: id(p) not in (decoder_layer), model.parameters())
optimizer_grouped_parameters = [
{'params': encoder_para, 'lr': args.learning_rate, 'weight_decay_rate': 0.01},
{'params': model.decoder.layers.parameters(), 'lr': args.learning_rate * 5, 'weight_decay_rate': 0.01}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, correct_bias=False)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=train_steps*args.warmup, num_training_steps=train_steps)
# multi-gpu training (should be after apex fp16 initialization)
if macro.n_gpu > 1:
model = torch.nn.DataParallel(model)
# Distributed training (should be after apex fp16 initialization)
if macro.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[macro.local_rank], output_device=macro.local_rank, find_unused_parameters=True
)
best_result = 0.0
for epoch in range(1, args.epochs_num+1):
best_result = train_epoch(model, train_data, dev_data, optimizer, device, scheduler, best_result, args, tokenizer, epoch, macro)
```<|||||>This is the result of my experiment:

<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,912 | closed | How to add more fields in TrainingArguments | I am using ```from transformers import TrainingArguments```. However, there are more training arguments in my own project. How can I add more fields (parameters) in to the ```args```? Besides, if I have some other ```Arguments Class``` that is similar to ```TrainingArguments```, how to merge them into one ```args```? | 01-31-2021 07:42:02 | 01-31-2021 07:42:02 | You could subclass the `TrainingArguments` class and add more fields to it. You could refer to `https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py` for an example https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
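A minimal sketch of the subclassing approach suggested above; the extra field names are made up for illustration:

```
# Extend TrainingArguments with project-specific fields, following the pattern
# in the training_args_seq2seq.py file referenced above.
from dataclasses import dataclass, field
from transformers import HfArgumentParser, TrainingArguments

@dataclass
class MyTrainingArguments(TrainingArguments):
    num_prototypes: int = field(default=8, metadata={"help": "Hypothetical project-specific setting."})
    margin: float = field(default=0.2, metadata={"help": "Another hypothetical extra field."})

args = MyTrainingArguments(output_dir="out", num_prototypes=16)

# Several argument dataclasses can also be parsed together with HfArgumentParser
# (they stay separate objects rather than being merged into one), e.g.:
# model_args, training_args = HfArgumentParser(
#     (MyModelArguments, MyTrainingArguments)
# ).parse_args_into_dataclasses()
```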