Dataset columns:
repo: stringclasses (1 value)
number: int64 (min 1, max 25.3k)
state: stringclasses (2 values)
title: stringlengths (min 1, max 487)
body: stringlengths (min 0, max 234k, nullable)
created_at: stringlengths (19)
closed_at: stringlengths (19)
comments: stringlengths (min 0, max 293k)
transformers
10,816
open
[trainer] figuring out why eval with `--fp16_full_eval` is 25% slower
Recently HF trainer was extended to support full fp16 eval via `--fp16_full_eval`. I'd have expected it to be either equal or faster than eval with fp32 model, but surprisingly I have noticed a 25% slowdown when using it. This may or may not impact deepspeed as well, which also runs eval in fp16, but we can't compare it to a baseline, since it only runs fp16. I wonder if someone would like to research where the slowdown comes from. I'd probably isolate the `model.half()` call which should be a constant and focus on the rest of the eval. I'm thinking that some component doesn't take well to fp16 variables. e.g. label smoothing was problematic and now should be fixed in https://github.com/huggingface/transformers/pull/10815, but I tested w/ and w/o label smoothing and it's not adding to the slowdown. Here are the script and the corresponding metrics. First w/o `--fp16_full_eval`, ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \ --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 2MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 60MB train_mem_cpu_peaked_delta = 63MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 7.7162 train_samples = 10 train_samples_per_second = 0.648 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.4612 eval_gen_len = 18.53 eval_loss = 5.017 eval_mem_cpu_alloc_delta = 0MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = 0MB eval_mem_gpu_peaked_delta = 244MB eval_runtime = 4.6481 eval_samples = 100 eval_samples_per_second = 21.514 ``` now let's add `--fp16_full_eval`: ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \ --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval \ --fp16_full_eval ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 2MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 60MB train_mem_cpu_peaked_delta = 63MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 7.1477 train_samples = 10 train_samples_per_second = 0.7 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.4612 eval_gen_len = 18.53 eval_loss = 5.0168 eval_mem_cpu_alloc_delta = 0MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = -231MB 
eval_mem_gpu_peaked_delta = 262MB eval_runtime = 6.0125 eval_samples = 100 eval_samples_per_second = 16.632 ``` As you can see, w/o `--fp16_full_eval` we get ~22 samples per sec and w/ it only ~17 - that's a huge difference. I also tested with a larger sample and the gap remains constant. The halving (`model.half()`) happens here: https://github.com/huggingface/transformers/blob/21e86f99e6b91af2e4df3790ba6c781e85fa0eb5/src/transformers/trainer.py#L1800 Thank you!
03-20-2021 04:30:07
03-20-2021 04:30:07
Hi @stas00, Please let me know if this is still open and I can contribute.<|||||>Yes, please.<|||||>I reproduced this in colab and got 28% slowness but still figuring out the cause, Earlier my assumption was this bit reduction/quantization was a device-specific thing.<|||||>Usually in such situations I try to either go from the bottom up or in reverse. That is just take the `model(**inputs)` and measure the speed w/ `model` vs `model.half()` - if it's the same go one level up into `generate`, etc. Or starting from the top (`generate`) and then removing big chunks of code until you find the part that contributes to the slow down. You can use this tracker to bracket the operation you measure. https://github.com/huggingface/transformers/blob/335c0ca35c159f88d73198bdac928e61a4d480c7/src/transformers/trainer_utils.py#L258 But a totally different approach which might get to the core of the issue much faster is to use a python profiler, .e.g. `cProfile` - that way you get the full analytics on each function call and if you compare these side by side w/ and w/o `half()` you might get an instant answer. Actually now that I wrote this I'd say start with this approach. <|||||>I have done a few measures on 2 different cards (a 3090 and a 2080 Ti) using various evaluation batch sizes, and I haven't observed a single culprit for this problem. Instead, I'm seeing that all the operations in the `forward` pass are somewhat slower with `fp16`, and consistently so. Setup * Evaluation batch size in {4, 8, 16, 32, 64, 128} * 128 evaluation samples. Since I'm using powers of 2 for the batch sizes, this allows us to test from 1 batch to many batches of the same size. * `max_length` = `min_length` = 128. Setting `min_length` to 128 increases processing time. These are the results for the main operations inside the `forward` method of `T5Block` (total seconds spent in the corresponding areas; figures from the 3090 and the 3 first batch sizes for brevity): <img width="558" alt="image" src="https://user-images.githubusercontent.com/1177582/115751446-7bf66080-a399-11eb-828a-097ea8cb1308.png"> The time difference depends on the batch size, but `fp16` is always between 15% (for bs=64) and 26% (bs=16) slower. --- Today I discovered [this thread](https://github.com/pytorch/pytorch/issues/50153) in the PyTorch forums, and repeated the test using a version of **PyTorch compiled from source**. Amazingly, processing is now **almost twice as fast**, but the difference is still there: <img width="558" alt="image" src="https://user-images.githubusercontent.com/1177582/115752012-fde68980-a399-11eb-868d-fa37b3effd54.png"> In this case, using a batch size of 128 (1 batch) is about 13% slower, while a batch size of 16 is 27% slower. I'm not sure how to proceed. Does this ring a bell for anyone?<|||||>Thank you for researching and profiling, @pcuenca! I think the next step is the new pytorch profiler: https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/ Unfortunately, at the moment I have no time to dig into it, so I hope someone will beat me to it. ------------- re: building from source: Indeed, I recently built pytorch from source and I don't know if it's that or something else since 1 month passed since OP was made, but I'm getting 2x speed improvement (rtx-3090) on training this task. eval is only slightly faster, but is still 25% slower @ fp16. 
Also adapted the cmd line to the recently changed examples: ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_eval_samples 100 --max_source_length 12 \ --max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 1254MB init_mem_cpu_peaked_delta = 155MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 1382MB train_mem_cpu_peaked_delta = 125MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 0:00:04.19 train_samples = 10 train_samples_per_second = 1.191 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.2434 eval_gen_len = 15.69 eval_loss = 3.7374 eval_mem_cpu_alloc_delta = 1MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = 0MB eval_mem_gpu_peaked_delta = 171MB eval_runtime = 0:00:04.33 eval_samples = 100 eval_samples_per_second = 23.051 ``` add `--fp16_full_eval` ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_eval_samples 100 --max_source_length 12 \ --max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval --fp16_full_eval ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 1259MB init_mem_cpu_peaked_delta = 155MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 1380MB train_mem_cpu_peaked_delta = 125MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 0:00:03.76 train_samples = 10 train_samples_per_second = 1.326 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.2434 eval_gen_len = 15.69 eval_loss = 3.7383 eval_mem_cpu_alloc_delta = 4MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = -231MB eval_mem_gpu_peaked_delta = 262MB eval_runtime = 0:00:05.32 eval_samples = 100 eval_samples_per_second = 18.778 ```<|||||>By running everything with `CUDA_LAUNCH_BLOCKING=1` under the line profiler, I found that [this](https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L677) and [this](https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L692) check for infinite values take up more time than I expected. 
After removing those checks, this is what I end up with: ``` $ export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ python -m cProfile -o profile.prof ./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_eval_samples 1600 --max_source_length 12 \ --max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 4 --per_device_eval_batch_size $BS --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval ... ***** eval metrics ***** epoch = 1.0 eval_bleu = 0.3251 eval_gen_len = 10.2375 eval_loss = 3.6796 eval_runtime = 0:01:03.89 eval_samples = 1600 eval_samples_per_second = 25.04 eval_steps_per_second = 1.565 ``` The same with `--fp16_full_eval`: ``` ***** eval metrics ***** epoch = 1.0 eval_bleu = 0.3258 eval_gen_len = 10.2406 eval_loss = 3.6797 eval_runtime = 0:01:01.43 eval_samples = 1600 eval_samples_per_second = 26.043 eval_steps_per_second = 1.628 ``` Note that I had to dial up the number of eval examples since this measurement was quite noisy on the shared system I used. However, the FP16 was faster most of the time. If someone could double check these observations under more reliable circumstances, that'll be great. <|||||>Thank you for looking into it, @dsuess! I'm trying to figure out torch.profiler to get a better understanding using native tools. Great to hear you found those checks to be slowdowns. Need to investigate these closer with torch.profiler. And I also found https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L504 to be another point of slowdown. It's possible that the upcast can be removed completely, which should speed things up. But definitely a slightly faster version is to: ``` attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.softmax(scores.float(), dim=-1, dtype=scores.dtype) ``` for fp16 (it makes no difference for fp32) I will look closer into the 2 points you suggested. but also we should run under a more realistic configuration of at least seqlen 512 and not 12 like I had it originally, with large seqlen things change quite a lot. That is `--max_source_length 512 --max_target_length 512` (or even better 1024). <|||||>Thanks for your feedback @stas00. I finally got the time to have a closer look with the pytorch profiler. I'd summarize what I found with: - the speedup we're getting for matmuls in fp16 aren't that great. This might be due to fewer kernels being executed on Tensor cores when using FP16 (31% of kernels) compared to FP32 (74% of kernels). - this is made worse by additional copy/conversion operations as can be seen in the device self time for FP16 (left) vs FP32 (right): <img width="937" alt="image" src="https://user-images.githubusercontent.com/5870291/150110417-2e8baf04-6904-45b9-960c-7cc12a16ee03.png"> These conversions happen in the [layer norm](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L246) and before the [softmax](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L513), which matches with your observation. 
I also double checked the layer norm with this [micro benchmark](https://github.com/dsuess/transformers/blob/10816-fp16_eval_performance/tests/benchmark_modeling_t5.py), which runs ~30% slower in FP16. There's a [tiny improvement](https://github.com/dsuess/transformers/commit/63f039329434e5b57051111be9b8466c87689159), which makes the eval-example run ~1% faster, but it doesn't even register in the micro benchmark. Judging from [the issue](https://github.com/pytorch/pytorch/issues/66707) you raised, we can't run layer norm in FP16. I'd expect the same to be true for softmax, so I am unsure if we can get rid of those conversions. We may have a chance to get more out of the matmuls, so I'll try to figure out why those kernels don't run on Tensor cores despite being eligible. --- I've done all these experiments on a 3080Ti with `--max_source_length 512 --max_target_length 512`<|||||>This is fantastic work, @dsuess! Here is an additional profiling report of the same issue but under tf32: https://github.com/huggingface/transformers/issues/14608#issuecomment-1001257392 This appears to be specific to t5 and derived models. And yes the problem is that it uses RMSNorm which pytorch doesn't provide and that's why it's slow. I made a request to make an RMSNorm fused kernel here: https://github.com/NVIDIA/apex/issues/1271 and once this is done to ask to upstream it into pytorch. I hope this should solve this issue. I also tried to avoid re-casting using some tricks here by trying to deploy the existing fused functions: https://github.com/huggingface/transformers/pull/14656 but I couldn't find a faster way using the existing pytorch python API. Have you by chance tried any other architectures using the same benchmarks? e.g. gpt2 and bert as they are very distinct from t5. <|||||>> Here is an additional profiling report of the same issue but under tf32: [#14608 (comment)](https://github.com/huggingface/transformers/issues/14608#issuecomment-1001257392) Great benchmark of the different data types, thanks for sharing. > Have you by chance tried any other architectures using the same benchmarks? e.g. gpt2 and bert as they are very distinct from t5. I've just tested the same script with some of the mbart variants and as expected, fp16 is faster for those.
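As a reference for the bottom-up approach suggested earlier in the thread (time `model(**inputs)` with the fp32 model vs `model.half()`), here is a minimal standalone sketch. It assumes a CUDA GPU and `t5-small`; the batch contents and sequence length are placeholders, not the WMT setup used above:

```python
import time
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").cuda().eval()

# A toy batch; realistic measurements should use longer sequences (e.g. 512).
batch = tokenizer(
    ["translate English to Romanian: The house is wonderful."] * 16,
    return_tensors="pt", padding=True,
).to("cuda")

def avg_forward_time(m, n_steps=50):
    # Warm up, then time n_steps forward passes with proper CUDA synchronization.
    with torch.no_grad():
        for _ in range(5):
            m(**batch, labels=batch["input_ids"])
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_steps):
            m(**batch, labels=batch["input_ids"])
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_steps

fp32_time = avg_forward_time(model)
fp16_time = avg_forward_time(model.half())
print(f"fp32: {fp32_time * 1e3:.2f} ms/step, fp16: {fp16_time * 1e3:.2f} ms/step")
```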
transformers
10,815
closed
[trainer] fix nan in full-fp16 label_smoothing eval
This PR fixes the issue of getting NaN eval loss with any inference that uses full fp16 model - which is the case with deepspeed or when `--fp16_full_eval` is passed. The problem is that `log_probs.sum` runs over 30-50K of numbers overflows easily in fp16, so this PR switches it to fp32 internally. Which surprisingly requires almost no extra memory. As the conversion happens on the hardware level and we only need an extra `2 bytes * batch_size` of additional memory. Here is some data showing the the metrics remain the same after this fix: ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \ --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval --label_smoothing 0.1 ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 2MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 60MB train_mem_cpu_peaked_delta = 63MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 7.7162 train_samples = 10 train_samples_per_second = 0.648 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.4612 eval_gen_len = 18.53 eval_loss = 5.017 eval_mem_cpu_alloc_delta = 0MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = 0MB eval_mem_gpu_peaked_delta = 244MB eval_runtime = 4.6481 eval_samples = 100 eval_samples_per_second = 21.514 ``` now let's add `--fp16_full_eval`, which before this PR leads to ` eval_loss = nan` ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \ ./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \ --overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \ --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \ --per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \ --logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \ --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix "translate English to Romanian: " --do_eval --label_smoothing 0.1 \ --fp16_full_eval ***** train metrics ***** epoch = 1.0 init_mem_cpu_alloc_delta = 2MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 60MB train_mem_cpu_peaked_delta = 63MB train_mem_gpu_alloc_delta = 231MB train_mem_gpu_peaked_delta = 194MB train_runtime = 7.1477 train_samples = 10 train_samples_per_second = 0.7 ***** eval metrics ***** epoch = 1.0 eval_bleu = 2.4612 eval_gen_len = 18.53 eval_loss = 5.0168 eval_mem_cpu_alloc_delta = 0MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = -231MB eval_mem_gpu_peaked_delta = 262MB eval_runtime = 6.0125 eval_samples = 100 eval_samples_per_second = 16.632 ``` `eval_loss` is off by 0.0002. I spent quite some time trying to find where to add a test, but it's a tricky situation where the input has to be pretty huge. 
I remember seeing it in some deepspeed tests, but I can't find it at the moment; currently all tests return a normal number. One interesting thing I noticed is that `--fp16_full_eval` makes eval slower by 20-25%, which is strange, but I tested that this PR has no impact on the speed. I filed a separate issue about it: https://github.com/huggingface/transformers/issues/10816 Fixes: https://github.com/huggingface/transformers/issues/10674 @sgugger, @LysandreJik
03-20-2021 04:17:07
03-20-2021 04:17:07
Hi @stas00, I tested this PR and for me this becomes much slower with the mt5-small model after adding the modifications in this PR. Here is the command I run; I am using transformers=4.4.2. I would be grateful for your expert knowledge to get the speed issue fixed as well. Thank you very much for the incredible job you do. `deepspeed run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir test/tst-t1ranslatieeeon --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --predict_with_generate --max_train_samples 100 --fp16 --deepspeed ds_config.json --max_val_samples 100 --logging_step 10`<|||||>I can't possibly see how this PR could impact the speed, since it changes the label_smoother and your command line doesn't have `--label_smoothing 0.1`, so the modified code in this PR doesn't get to run. That said, when you use this PR you're in a way using `master`, so perhaps you were testing with some other `transformers` version before and noticed a regression in `master`. Try to test with whatever version you were using before and then retest with the `master` branch and see whether you can reproduce your issue. If you can, do you know how to use `git bisect`? You can then in a matter of a few runs find the commit that impacted the performance. If you can't figure it out, just give me the last good transformers version and I will help you from there. ----- Also, you're not telling me what's inside `ds_config.json` - I assume it's a zero2 configuration. zero3 isn't quite ready yet.
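The overflow that motivates switching the label-smoothing reduction to fp32 is easy to reproduce in isolation; a minimal sketch in plain PyTorch, with the numbers chosen only to be on the order of a vocab-sized reduction:

```python
import torch

# ~50K values of modest magnitude, similar in scale to summing log-probs over a vocab.
log_probs = torch.full((50_000,), -2.0, dtype=torch.float16)

print(log_probs.sum())                     # -inf: an fp16 result cannot hold -100000 (max ~65504)
print(log_probs.sum(dtype=torch.float32))  # tensor(-100000.) when accumulated in fp32
```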
transformers
10,814
closed
[makefile] autogenerate target
As a follow-up to https://github.com/huggingface/transformers/pull/10801, this PR proposes to group the autogeneration code in a separate target. As the number of little things the makefile does grows, I think this helps with clarity. There is no functional change. @sgugger, @LysandreJik
03-19-2021 20:20:13
03-19-2021 20:20:13
transformers
10,813
closed
Example code for ReformerForMaskedLM
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Linux - Python version: 3.8 - PyTorch version (GPU?): 1.7.1[11.0] - Tensorflow version (GPU?): - Using GPU in script?:yes - Using distributed or parallel set-up in script?:no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @patrickvonplaten Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): ReformerForMaskedLM The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Running the example code for ReformerForMaskedLM: ``` from transformers import ReformerTokenizer, ReformerForMaskedLM import torch tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment') model = ReformerForMaskedLM.from_pretrained('google/reformer-crime-and-punishment') inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits ``` causes: ```AssertionError: If you want to use `ReformerForMaskedLM` make sure `config.is_decoder=False` for bi-directional self-attention.``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
03-19-2021 19:28:44
03-19-2021 19:28:44
That is not a bug: https://stackoverflow.com/questions/66625945/huggingfaces-reformerformaskedlm-configuration-issue/66636363#66636363<|||||>Unfortunately that did not help. Adding: ```from transformers import ReformerTokenizer, ReformerForMaskedLM, ReformerConfig import torch tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment') config = ReformerConfig.from_pretrained('google/reformer-crime-and-punishment') config.is_decoder=False model = ReformerForMaskedLM.from_pretrained('google/reformer-crime-and-punishment', config=config) inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits``` caused this exception: ```Traceback (most recent call last): File "testReformers.py", line 13, in <module> outputs = model(**inputs, labels=labels) File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py", line 2367, in forward masked_lm_loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward return F.cross_entropy(input, target, weight=self.weight, File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/functional.py", line 2261, in nll_loss raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' ValueError: Expected input batch_size (21) to match target batch_size (17).```. Something seems amiss here given that there is a single sentence being passed in and it seems to think we have batch sizes of 21 and 17?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
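For reference, the "batch sizes" in the traceback are really sequence lengths: the two sentences tokenize to different numbers of pieces, so the flattened logits and labels disagree in length inside the cross-entropy call. A minimal check (not a fix), reusing only the tokenizer from the snippet above:

```python
from transformers import ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

# The sentencepiece model has no dedicated [MASK] token, so the two strings are
# split into different numbers of pieces and the shapes differ.
print(inputs["input_ids"].shape)  # e.g. torch.Size([1, 21])
print(labels.shape)               # e.g. torch.Size([1, 17])
```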
transformers
10,812
closed
Domain adaptation
Hi all, I'm just wondering how to do model adaptation of a pre-trained Camembert model on my custom dataset? I haven't found any information in the Transformers documentation. Best regards, Lematmat ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0 - Platform: Jupyter Notebook - Python version: 3.7 - PyTorch version (GPU?): 1.8, no GPU - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information Model I am using (Bert, XLNet ...): Camembert
03-19-2021 17:13:22
03-19-2021 17:13:22
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>Thank you very much. I will contact the forum. Regards, lematmat
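For readers landing here, continued masked-language-model pretraining on the in-domain corpus is the usual way to do this kind of adaptation; a minimal sketch, where the corpus file name and hyperparameters are placeholders rather than anything from this thread:

```python
from datasets import load_dataset
from transformers import (
    CamembertForMaskedLM,
    CamembertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = CamembertTokenizerFast.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")

# One text snippet per line in a plain-text file (hypothetical path).
dataset = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="camembert-domain-adapted",
                           num_train_epochs=1, per_device_train_batch_size=8),
    train_dataset=dataset,
    # Randomly masks 15% of tokens on the fly, like standard MLM pretraining.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
)
trainer.train()
```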
transformers
10,811
closed
Add transformers id to hub requests
# What does this PR do? This PR adds a `TRANSFORMERS_ID` const, which helps us group the requests for the several files made against the hub.
03-19-2021 14:49:23
03-19-2021 14:49:23
transformers
10,810
closed
handle_impossible_answer not working in the question answering pipeline for ROBERTa model
### Environment info - Platform: Linux 20.04 - Python version 3.8.5 - `transformers` version `3.5.0` and `4.3.2` ### The issue I'm using the `pipeline("question-answering")` with QA Models downloaded [from community](https://huggingface.co/models?pipeline_tag=question-answering). I'm evaluating models on the SQUAD 2.0 dataset which doesn't always have an answer to the given question - that's what the `handle_impossible_answer` flag in the pipeline is for. I noticed that ROBERTa model (any ROBERTa, not just a specific model) in version 4 of `transformers` always produces an answer despite the `handle_impossible_answer` flag - even if the same model for the same example didn't produce an answer (returned "" as an answer) while using version 3 of the library. ```python bert_model_name = 'deepset/bert-base-cased-squad2' roberta_model_name = 'deepset/roberta-base-squad2' bert_tokenizer = AutoTokenizer.from_pretrained(bert_model_name) bert_model = AutoModelForQuestionAnswering.from_pretrained(bert_model_name, return_dict=True) roberta_tokenizer = AutoTokenizer.from_pretrained(roberta_model_name) roberta_model = AutoModelForQuestionAnswering.from_pretrained(roberta_model_name, return_dict=True) bert_qa = pipeline('question-answering', tokenizer=bert_tokenizer, model=bert_model) roberta_qa = pipeline('question-answering', tokenizer=roberta_tokenizer, model=roberta_model) # Random SQUAD 2.0 example which doesn't have an answer to the question question = 'What was the name of the only ship operating in the Indian Ocean?' context = 'In September 1695, Captain Henry Every, an English pirate on board the Fancy, reached the Straits of Bab-el-Mandeb, where he teamed up with five other pirate captains to make an attack on the Indian fleet making the annual voyage to Mocha. The Mughal convoy included the treasure-laden Ganj-i-Sawai, reported to be the greatest in the Mughal fleet and the largest ship operational in the Indian Ocean, and its escort, the Fateh Muhammed. They were spotted passing the straits en route to Surat. The pirates gave chase and caught up with Fateh Muhammed some days later, and meeting little resistance, took some Β£50,000 to Β£60,000 worth of treasure.' print(bert_qa(question=question, context=context, handle_impossible_answer=True)) # transformers 3.5.0: {'score': 0.999398410320282, 'start': 0, 'end': 0, 'answer': ''} # transformers 4.3.2: {'score': 0.999398410320282, 'start': 0, 'end': 0, 'answer': ''} print(roberta_qa(question=question, context=context, handle_impossible_answer=True)) # transformers 3.5.0: {'score': 0.979897797107697, 'start': 0, 'end': 0, 'answer': ''} # transformers 4.3.2: {'score': 0.222181886434555, 'start': 422, 'end': 436, 'answer': 'Fateh Muhammed'} ``` ### Probable issue reason I've found out that in the `question_answering.py` file in the `pipeline` directory in version 4 of `transformers` there is a condition that provides ROBERTa models from adjusting the `p_mask` for this task. It looks simply like this: `if self.tokenizer.cls_token_id`. And since ROBERTa's `cls_token_id = 0` the condition isn't met and the `p_mask` isn't changed for the `cls_token`. This results in omitting the token while answering the question (it behaves like e.g the token was a part of a question). For example BERT's `cls_token_id = 101` so the condition is met. ### Plausible solution Possibly the easy solution is to expand the condition to `if self.tokenizer.cls_token_id is not None`. 
However, there wasn't such a condition in version 3 at all so maybe it performs some crucial function in its current form that I'm not aware of... ```python # originally the condition here was more general and looked like this # if self.tokenizer.cls_token_id: if self.tokenizer.cls_token_id is not None: cls_index = np.nonzero(encoded_inputs["input_ids"] == self.tokenizer.cls_token_id) p_mask[cls_index] = 0 ```
03-19-2021 14:42:59
03-19-2021 14:42:59
Hi! I believe this was an oversight on our part. Your change looks reasonable to me - do you want to open a PR with your proposed fix? And thank you for opening such a detailed and well-structured issue!
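For reference, a quick check with the two checkpoints from the report shows why the truthiness test skips RoBERTa but not BERT (a minimal sketch of the diagnosis above, not of the pipeline code itself):

```python
from transformers import AutoTokenizer

roberta_tok = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
bert_tok = AutoTokenizer.from_pretrained("deepset/bert-base-cased-squad2")

# RoBERTa's CLS id is 0, which is falsy, so `if self.tokenizer.cls_token_id:` never
# unmasks the CLS position in p_mask; BERT's CLS id (101) is truthy and does.
print(roberta_tok.cls_token_id, bool(roberta_tok.cls_token_id))  # 0 False
print(bert_tok.cls_token_id, bool(bert_tok.cls_token_id))        # 101 True
```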
transformers
10,809
closed
[Flax] Add general conversion script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR changes the weight architecture of `FlaxBertModel` so that it corresponds 1-to-1 to PyTorch's version of `BertModel`. This means that some weights had to be renamed (*e.g.* "layer_norm" -> "LayerNorm" since PyTorch uses "LayerNorm") and also some new `flax.linen.Modules`, such as `FlaxBertSelfOutput` had to be created. As can be seen, the PT=>Flax conversion function is now kept very general and can be applied to all models so that we can fully delete any model-specific conversion logic. The PR has one drawback however: Flax official [SelfAttention Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.SelfAttention.html#flax-linen-selfattention) cannot be used anymore since it doesn't give us enough flexibility to convert PyTorch weights to flax weights without having a model-specific conversion function. FlaxBERT's new attention modules fully correspond to PyTorchBERT's attention modules and are IMO still kept quite short by relying on Flax's [`dot_product_attention` function](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.dot_product_attention.html). Another drawback is that for auto-regressive Transformers models we will have to manually add all the code corresponding to cached / auto-regressive attention to the attention module (which we do for PyTorch anyways) instead of being able to use already existing code of `nn.linen.SelfAttention` -> see here: https://github.com/google/flax/blob/e31063da71bd7a4df137b000df6a48b0cea35a2b/flax/linen/attention.py#L202. All in all, rewriting parts of `flax.linen.SelfAttention` is the right choice here though because it allows us to have a much cleaner conversion function with very little downside IMO (slightly higher maintenance because we need to copy-paste a bit more code). @LysandreJik @sgugger - could you check if you agree more or less with my solution here (below I left some comments to showcase the trade-offs a bit better). I'll clean the code & upload the new weight structure then :-) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-19-2021 13:25:34
03-19-2021 13:25:34
> Great work! My only concern is to make sure we don't lose any performance by not using `nn.linen.SelfAttention`. If we are just using the same code as its implementation, there is no reason for that but it's good to double-check. > Otherwise, I agree it's better to re-implement it than to have custom weight loading logic.. Great! Yeah, I'll talk with @avital about this next week (hopefully) :-)
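As a sidebar on what "general conversion" means in practice: a flat PyTorch `state_dict` keyed by dotted names has to become the nested parameter dict that Flax expects. A stripped-down sketch of just that nesting step (real conversion code also renames leaves such as `weight` -> `kernel`/`scale` and transposes dense kernels, which is omitted here):

```python
import numpy as np

def nest_pt_state_dict(state_dict):
    """Turn {"encoder.layer.0.output.dense.weight": tensor, ...} into nested dicts."""
    nested = {}
    for key, tensor in state_dict.items():
        *path, leaf = key.split(".")
        node = nested
        for part in path:
            node = node.setdefault(part, {})
        node[leaf] = np.asarray(tensor)  # assumes CPU tensors
    return nested
```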
transformers
10,808
closed
wav2vec doc tweaks
tiny tweaks
03-19-2021 12:27:29
03-19-2021 12:27:29
Ok actually this is ready to merge
transformers
10,807
closed
I am finetuning mBART for summarization using finetune_trainer.py on a custom dataset, but I keep getting this error.
This is the traceback: `thread '<unnamed>' panicked at 'index out of bounds: the len is 453 but the index is 453', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread '<unnamed>' panicked at 'range end index 140732665363856 out of range for slice of length 0', /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/alloc/src/vec.rs:1317:42 stack backtrace: 0: 0x7f7340048b40 - std::backtrace_rs::backtrace::libunwind::trace::h04d12fdcddff82aa at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/../../backtrace/src/backtrace/libunwind.rs:100:5 1: 0x7f7340048b40 - std::backtrace_rs::backtrace::trace_unsynchronized::h1459b974b6fbe5e1 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5 2: 0x7f7340048b40 - std::sys_common::backtrace::_print_fmt::h9b8396a669123d95 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:67:5 3: 0x7f7340048b40 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::he009dcaaa75eed60 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:46:22 4: 0x7f734006806c - core::fmt::write::h77b4746b0dea1dd3 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/fmt/mod.rs:1078:17 5: 0x7f7340046362 - std::io::Write::write_fmt::heb7e50902e98831c at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/io/mod.rs:1518:15 6: 0x7f734004afb5 - std::sys_common::backtrace::_print::h2d880c9e69a21be9 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:49:5 7: 0x7f734004afb5 - std::sys_common::backtrace::print::h5f02b1bb49f36879 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:36:9 8: 0x7f734004afb5 - std::panicking::default_hook::{{closure}}::h658e288a7a809b29 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:208:50 9: 0x7f734004ac58 - std::panicking::default_hook::hb52d73f0da9a4bb8 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:227:9 10: 0x7f734004b751 - std::panicking::rust_panic_with_hook::hfe7e1c684e3e6462 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:593:17 11: 0x7f734004b297 - std::panicking::begin_panic_handler::{{closure}}::h42939e004b32765c at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:499:13 12: 0x7f7340048ffc - std::sys_common::backtrace::__rust_end_short_backtrace::h9d2070f7bf9fd56c at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:141:18 13: 0x7f734004b1f9 - rust_begin_unwind at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:495:5 14: 0x7f7340065fd1 - core::panicking::panic_fmt::ha0bb065d9a260792 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/panicking.rs:92:14 15: 0x7f7340069d32 - core::slice::index::slice_end_index_len_fail::hcd7c711938bf4c03 at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/slice/index.rs:41:5 16: 0x7f733fd95d63 - core::ptr::drop_in_place::h2923a820a2e4a8d4 17: 0x7f733fd9b01c - <rayon::vec::IntoIter<T> as rayon::iter::IndexedParallelIterator>::with_producer::hd6f8d390195a749b 18: 0x7ffee086ff90 - <unknown> thread panicked while panicking. aborting.` I am using Colab for finetuning mBART. Any help will be appreciated. Thank you:)
03-19-2021 09:59:48
03-19-2021 09:59:48
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,806
closed
[XLSR-Wav2Vec2 Info doc] Add a couple of lines
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-19-2021 09:47:22
03-19-2021 09:47:22
transformers
10,805
closed
ONNX export outputs many warnings
I was testing ONNX export via your **04-onnx-export.ipynb** notebook, and when calling `!python -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased --opset 11 --quantize onnx/bert-base-cased2.onnx` I get many warnings like: ``` Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Attention. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. ... ``` They only appear when using the `--quantize` flag. I know these are just warnings, but still - do they affect the exporting process in any way?
03-19-2021 09:26:42
03-19-2021 09:26:42
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,804
closed
Initializing ddp is extremely slow when finetuning RAG
Hi, when I am finetuning the RAG model, it seems that DDP initialization is extremely slow. I waited 1 day but still did not see training start. ``` loading file None loading file None loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/special_tokens_map.json loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer_config.json Global seed set to 42 Global seed set to 42 LOCAL_RANK: 2 - CUDA_VISIBLE_DEVICES: [0,1,2,3] Using native 16bit precision. Global seed set to 42 LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3] Using native 16bit precision. Global seed set to 42 INFO:__main__:Custom init_ddp_connection. initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/4 INFO:__main__:Custom init_ddp_connection. initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4 ```
03-19-2021 06:20:36
03-19-2021 06:20:36
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,803
closed
How much vRAM should I have for fine tuning DeBERTa v2 xxlarge?
I'm fine tuning DeBERTa v2 xxlarge with 1.5B parameters on Nvidia Tesla T4 (16GB vRAM) and it returns "CUDA out of memory". How much vRAM is enough? @LysandreJik
03-19-2021 04:38:33
03-19-2021 04:38:33
I don't know the answer, but I'm hoping that it works after this PR got merged: https://github.com/huggingface/transformers/pull/10753 Do you already use deepspeed?<|||||>No, I did not
transformers
10,802
closed
addressing vulnerability report in research project deps
This PR addresses this security alert: https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/Pillow/open @LysandreJik
03-19-2021 01:10:28
03-19-2021 01:10:28
transformers
10,801
closed
Sort init import
# What does this PR do? Not a high-priority item, but I get bored at night and I like writing those kinds of scripts πŸ˜… So this PR adds a script to properly sort the imports inside `_import_structure`, because people have been absolutely ruthless, putting their objects in any kind of random order. That's not very feng-shui, so I'm bringing back harmony by applying the same sort as isort to all `__init__` files that contain an `_import_structure`.
03-19-2021 00:24:25
03-19-2021 00:24:25
To address your comments on the `Makefile`, I have removed some checks from `extra_quality_checks` because they are checks that modify content, and `make quality` is only supposed to check, not change. To have `make fixup` still work as intended, I put the checks that change content in `extra_style_checks`, which is called both by `make fixup` and `make style`. Could you double-check it looks okay, @LysandreJik and @stas00? Thanks!
transformers
10,800
closed
How to get a probability for the result of t5_tokenizer.decode(output,...)?
Hello, I am using `t5-base` to map phrases into categories, for example: "I want to eat" -> "hunger". Is there any way to get the probability for `result` values? For example, if the input is "He is hungry", the model returns 5 labels. These results seem to be ordered by some relevance rank, so that the most relevant label is always first in `outputs`. So, my question is how can I retrieve these probabilities? My final goal is to set a threshold on the probability, so that `outputs` would only include results that pass this threshold, or it can be empty if nothing relevant found. ``` t5_tokenizer = T5Tokenizer.from_pretrained('t5-base') t5_model = T5ForConditionalGeneration.from_pretrained('t5-base') ... model.model.eval() outputs = model.model.generate( input_ids=test_input_ids,attention_mask=test_attention_mask, max_length=64, early_stopping=True, num_beams=10, num_return_sequences=5, no_repeat_ngram_size=2 ) for output in outputs: result = t5_tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(result) ``` Thanks.
03-18-2021 23:04:46
03-18-2021 23:04:46
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
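For readers with the same question: recent versions of `generate()` can return per-sequence scores directly. A hedged sketch building on the snippet above (it assumes a transformers version that supports `return_dict_in_generate`/`output_scores`, and the 0.1 threshold is a made-up example):

```python
import torch

outputs = model.model.generate(
    input_ids=test_input_ids,
    attention_mask=test_attention_mask,
    max_length=64,
    early_stopping=True,
    num_beams=10,
    num_return_sequences=5,
    no_repeat_ngram_size=2,
    return_dict_in_generate=True,
    output_scores=True,
)

# With beam search, `sequences_scores` holds the (length-penalized) sum of token
# log-probabilities per returned sequence; exponentiate for a probability-like score.
probs = torch.exp(outputs.sequences_scores)
keep = probs > 0.1  # hypothetical threshold
for seq, p in zip(outputs.sequences[keep], probs[keep]):
    print(t5_tokenizer.decode(seq, skip_special_tokens=True,
                              clean_up_tokenization_spaces=True), float(p))
```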
transformers
10,799
closed
Expand a bit the presentation of examples
# What does this PR do? This PR adds a bit more information to the examples README (main and specific per example), copying some information from the main philosophy and expanding a bit, to make sure all users know what we want for the examples.
03-18-2021 21:03:18
03-18-2021 21:03:18
It'd be super-handy to link directly to suitable datasets and models for each example, as in - https://huggingface.co/datasets?search=squad - https://huggingface.co/models?filter=squad Maybe this could be a good first issue. Some of the keywords and whether to use `?filter=` or `?search=` will require some investigation, since the former is hidden and packs some power missing from the latter.<|||||>The first may be helpful, but the second is not necessarily: it shows the models that have been fine-tuned on a squad dataset, not the models that can be fine-tuned on it. There is no way to filter all the models that have an architecture containing a question-answering head as far as I know, which is what we would want to show.<|||||>Would this be at least in the right direction? https://huggingface.co/models?pipeline_tag=question-answering <|||||>Mmm, those seem to be models fine-tuned on a question-answering task, not all models with a QuestionAnswering arch available (for instance, you should see all BERT checkpoints, all distilBERT checkpoints etc).<|||||>OK, then it won't work. It'd be really awesome if in the future we had a way to filter models by architecture - and sub-architecture in this case - that is, without the model-specific part of the class name.
transformers
10,798
closed
Truncated words on GPT-2 output
Hi! I use the GPT-2 model for a seq2seq task, but unfortunately the model's output cuts off words and leaves sentences unfinished. How can I make the model finish its sentences and not cut off words? (Increasing the maximum length does not correct the situation.) P.S. I'm sorry, this question is probably very stupid, but I just can't figure it out.
03-18-2021 19:28:43
03-18-2021 19:28:43
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
transformers
10,797
closed
Pretrained XLNetTokenizer not returning tokenizer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.--> @patrickvonplaten @LysandreJik ## Information I am using XLNet Tokenizer. When trying to use `XLNetTokenizer.from_pretrained()`, `None` object is returned. I last worked with it in december and it was working fine till then. ## To reproduce Steps to reproduce the behavior: ``` from transformers import XLNetTokenizer tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased") print(tokenizer) ``` Output is `None` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior A tokenizer should be returned instead of `None`. <!-- A clear and concise description of what you would expect to happen. -->
03-18-2021 17:50:39
03-18-2021 17:50:39
Hello! This is weird, you should have gotten an error before even being able to instantiate the tokenizer with `from_pretrained`. Such an error: ``` ImportError: XLNetTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment. ``` Could you install SentencePiece with `pip install sentencepiece` and let me know if it fixes your issue?<|||||>Hi! I had actually done the `pip install sentencepiece`. I was getting `None` after it. I saw the source code and the embedding size used was `None` there. You can check it <a href="https://github.com/gauravsharma-97/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet.py#L41-L44">here</a>. I think that is the issue and it should be some integer, like BertTokenizer, which uses 512 as the embedding size.<|||||>I may be wrong, but I think this can happen if you're in a Colab environment and you install SentencePiece, but don't reload the kernel before re-running your cell. You say you're on Ubuntu; I managed to obtain a similar result by re-running the code you mentioned twice in the same Python runtime, installing `sentencepiece` between the two code statements. Since sentencepiece is loaded on the fly, this can be the result. I stand by what I said: this is due to `sentencepiece` not being installed. If it's correctly installed in your environment, running your statement results in: ``` PreTrainedTokenizer(name_or_path='xlnet-base-cased', vocab_size=32000, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='left', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '<sep>', 'pad_token': '<pad>', 'cls_token': '<cls>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True), 'additional_special_tokens': ['<eop>', '<eod>']}) ``` You're mentioning the embedding size, which is `None`: this is on purpose. The XLNet model uses relative positional embeddings and therefore has no limitation on the size of the input (note the `model_max_len` in the above code statement), which isn't the case for BERT, which uses absolute positional embeddings limited to 512.<|||||>Yes, you are correct. I was running this on Colab and it might have required reloading the kernel. But funnily enough, it's working today without reloading it. Yesterday might have been an isolated incident, although I did try to get it to run for a very long time before posting the issue. Anyway, thanks for the help @LysandreJik and for the explanation on embeddings.
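A minimal sanity-check sketch for the situation discussed above, assuming a fresh Python runtime (in Colab, restart the kernel after installing `sentencepiece`); it only uses the standard library plus `transformers`:

```python
# Verify sentencepiece is importable before instantiating the slow XLNet tokenizer.
import importlib.util

if importlib.util.find_spec("sentencepiece") is None:
    raise ImportError("Run `pip install sentencepiece`, then restart the runtime before retrying.")

from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
assert tokenizer is not None
print(type(tokenizer).__name__, tokenizer.vocab_size)
```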
transformers
10,796
closed
[Example] Fix a NaN bug in the flax mlm example
## What does this PR do? Fix a NaN bug in the flax masked language model example. This is a bug introduced in #9133: the min should be max, otherwise we will get a NaN. ## Who can review? @TevenLeScao @mfuntowicz
03-18-2021 17:23:45
03-18-2021 17:23:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @patrickvonplaten <|||||>Hey @merrymercy - super sorry, I saw the PR too late and it was actually already fixed.<|||||>Thanks for your effort on Jax integration! @patrickvonplaten Could you also add some doc for these examples https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling?
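A small illustration of why the clamp has to use `max` rather than `min` when normalizing a masked loss; this is a sketch with made-up tensors, not the actual code touched by the PR:

```python
import jax.numpy as jnp

# A batch where every position happens to be padding, so there are no real labels.
per_token_loss = jnp.zeros(4)
label_mask = jnp.zeros(4)

num_tokens = label_mask.sum()
safe = per_token_loss.sum() / jnp.maximum(num_tokens, 1.0)    # -> 0.0 even with an empty mask
unsafe = per_token_loss.sum() / jnp.minimum(num_tokens, 1.0)  # -> 0/0 = nan when the mask is empty
print(safe, unsafe)
```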
transformers
10,795
closed
Fix distributed evaluation
# What does this PR do? #10778 introduced a bug in the distributed evaluation, this PR fixes it. cc @philschmid
03-18-2021 17:05:34
03-18-2021 17:05:34
transformers
10,794
closed
Add new community notebook - wav2vec2 with GPT
* Update community.md: add new notebook * feat: notebook of Wav2Vec2 XLSR CTC decoding with GPT-2 logit adjustment * Update: Wav2Vec2 CTC decoding with GPT-2 adjustment
03-18-2021 17:02:26
03-18-2021 17:02:26
Do you want to take a look @patrickvonplaten?<|||||>thanks a lot!!!
transformers
10,793
closed
[doc] no more bucket
03-18-2021 16:36:24
03-18-2021 16:36:24
transformers
10,792
closed
[Example] Updating Question Answering examples for Predict Stage
# What does this PR do? Fixes #10482 1. It fixes the error that comes while using SQuAD_v2 on the question-answering task when using `max_val_sample_***` 2. Adds a predict method for the question-answering examples ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
03-18-2021 15:23:12
03-18-2021 15:23:12
transformers
10,791
closed
run_summarization script breaks with label_smoothing_factor and pad_to_max_length true
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: '4.5.0.dev0' (from source) - Platform: Linux - Python version: 3.6.9 - PyTorch version (GPU?): '1.8.0' (yes) ## Information I am running the `examples/seq2seq/run_summarization.py` script with BartForConditionalGeneration. The script breaks whenever these two parameters are passed together: - label_smoothing_factor - pad_to_max_length It seems that the source of this behaviour is setting collator to `default_data_collator` if `pad_to_max_length` is defined: https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/examples/seq2seq/run_summarization.py#L469-L477 while `prepare_decoder_input_ids_from_labels` is only handled by DataCollatorForSeq2Seq: https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/src/transformers/data/data_collator.py#L292-L294 It seems to be related with: [10452](https://github.com/huggingface/transformers/issues/10452), where passing a model argument to DataCollatorForSeq2Seq solves the problem `data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)` This is more of a question than an issue as it is work in progress. A more general one would be: Is the `default_data_collator` intended for use with seq2seq models (e.g: Bart), with special cases (like label smoothing) to be handled by `DataCollatorForSeq2Seq`? Or should `DataCollatorForSeq2Seq` always be used with Seq2SeqTrainer in the future? The problem arises when using: * [x ] the official example scripts: (give details below) examples/seq2seq/run_summarization.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x ] an official GLUE/SQUaD task: (give the name) (xsum) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` python examples/seq2seq/run_summarization.py \ --model_name_or_path sshleifer/distilbart-xsum-12-3 \ --do_train \ --do_eval \ --dataset_name xsum \ --output_dir /tmp/output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 500 \ --max_val_samples 500 \ --max_source_length 128 \ --max_target_length 64 \ --label_smoothing_factor 0.1 \ --pad_to_max_length true ``` Output: ``` Traceback (most recent call last): File "examples/seq2seq/run_summarization.py", line 595, in <module> main() File "examples/seq2seq/run_summarization.py", line 533, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1082, in train tr_loss += self.training_step(model, inputs) File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1472, in training_step loss = self.compute_loss(model, inputs) File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1511, in compute_loss loss = self.label_smoother(outputs, labels) File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 439, in __call__ smoothed_loss.masked_fill_(padding_mask, 0.0) RuntimeError: The expanded size of the tensor (128) must match the existing size (64) at non-singleton dimension 1. Target sizes: [4, 128, 1]. 
Tensor sizes: [4, 64, 1] 0%| ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Script works for a parameter set including: - label_smoothing_factor - pad_to_max_length Or info which collator class should be used in the future <!-- A clear and concise description of what you would expect to happen. -->
03-18-2021 13:42:43
03-18-2021 13:42:43
I think the `DataCollatorForSeq2Seq` should be used in all cases as it does more than just padding. If you want to suggest a PR with the fix, that would be more than welcome!<|||||>Assuming the goal is: - using DataCollatorForSeq2Seq in Seq2SeqTrainer as default when no data_collator is provided, while keeping the remaining functionality unchanged, the first approach could be: - providing Seq2SeqTrainer with an `__init__` method: - instantiating a DataCollatorForSeq2Seq if no collator provided, and - calling Trainer's `__init__` and passing the instance along with other parameters. Something like: ``` class Seq2SeqTrainer(Trainer): def __init__( self, model: Union[PreTrainedModel, torch.nn.Module] = None, args: TrainingArguments = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional["PreTrainedTokenizerBase"] = None, model_init: Callable[[], PreTrainedModel] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), ): """ Setting DataCollatorForSeq2Seq as default if no data_collator is provided. """ if data_collator is None: # Perform validation and overwrite model with model_init before passing to collator, # as done in Trainer if tokenizer is None: raise RuntimeError( "`tokenizer` parameter is required by the default `DataCollatorForSeq2Seq`" ) if model is None and model_init is None: raise RuntimeError( "`Trainer` requires either a `model` or `model_init` argument" ) model_collator = model if model_init is not None: # No parameter handling for hyper-parameter search (trial) # Only passing the prepare_decoder_input_ids_from_labels function model_collator = model_init() data_collator = DataCollatorForSeq2Seq(tokenizer, model=model_collator) super().__init__( model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, ) ``` Of course, I would need to look further into the code and the handling of other DataCollatorForSeq2Seq parameters like: `pad_to_multiple_of=8 if training_args.fp16 else None` @sgugger, Thanks for the suggestion, it is very interesting;)<|||||>Mmm, I was thinking of an easier fix to just use that in the example script without necessary changing the default in `Seq2SeqTrainer`.
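A hedged sketch of the workaround discussed in the comments: passing the model to `DataCollatorForSeq2Seq` so it can build `decoder_input_ids` from the labels; the checkpoint name is taken from the reproduction command above and the feature contents are illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

checkpoint = "sshleifer/distilbart-xsum-12-3"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,               # enables prepare_decoder_input_ids_from_labels
    label_pad_token_id=-100,   # keep padded label positions out of the loss
)

features = [
    {"input_ids": tokenizer("a short document", truncation=True)["input_ids"],
     "labels": tokenizer("a summary", truncation=True)["input_ids"]},
]
batch = data_collator(features)
print(batch.keys())  # should include decoder_input_ids built from the labels
```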
transformers
10,790
closed
HerbertTokenizer doesn't work on version 3.5.1
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: MacOS X, Linux - Python version: 3.7 - PyTorch version (GPU?): 1.6.0 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): allegro/herbert-base-cased I tried to use official script on model hub page with transformers version 3.5.1. Week ago it worked just fine, but now I am getting error listed below. @rmroczkowski maybe you have some information on this topic, I saw some new commits on model hub, but they shouldn't change anything For latest version it works fine with AutoTokenizers (EDIT: only version 4.4 works, I tasted version 3.5.1, 4.0.0, 4.3 and got same error) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) I tried importing AutoTokenizers and HerbertTokenizer, but got the same error `OSError: Can't load tokenizer for 'allegro/herbert-base-cased'. Make sure that: - 'allegro/herbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'allegro/herbert-base-cased' is the correct path to a directory containing relevant tokenizer files` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. install transformers 3.5.1 2. try to use official script from https://huggingface.co/allegro/herbert-base-case <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior tokenizer loads and works
03-18-2021 11:20:34
03-18-2021 11:20:34
I guess this is related to URL issue #10744? One should probably change the model URLs.<|||||>I resolved this by updating the URLs to the models; this is my current code: ```PRETRAINED_VOCAB_FILES_MAP = { "vocab_file": {"allegro/herbert-base-cased": "https://huggingface.co/allegro/herbert-base-cased/resolve/main/vocab.json"}, "merges_file": {"allegro/herbert-base-cased": "https://huggingface.co/allegro/herbert-base-cased/resolve/main/merges.txt"}, }``` Is there a way to fix this to maintain backward compatibility? @LysandreJik<|||||>Cross-posting the Forum thread: https://discuss.huggingface.co/t/delete-organizations-models-from-the-hub/954/40
transformers
10,789
closed
[Deepspeed ZeRO-3] Broken model save on fresh Transformers branch
I have my own model, which utilizes two T5 encoders, and I train it via DeepSpeed. It has its own save_pretrained() and from_pretrained() methods, which implement custom load/save logic: https://github.com/exelents/try_t5_siamese/blob/4140194978ac113c45e7370f40b3d9b932d0b35b/siamese_model.py#L80 When I run training and the trainer starts to save a checkpoint, something strange happens: the weights file for every saved encoder ends up being a few kilobytes - the weights are not actually saved. At the start of training the trainer tries to load the checkpoint using model.load_checkpoint(), but it seems this function has its own loading logic, because it cannot execute my model-loading logic and throws an error: `ValueError: [deepspeed] failed to resume from checkpoint ./templates/siamese-t5-small-v1_1-template` I can comment out the code that loads the checkpoint, but then I get the saving problem described above... What should I do to save my own custom model properly? It worked a month ago, but today I refreshed my Transformers repo and everything broke.
03-18-2021 11:18:10
03-18-2021 11:18:10
I'm getting a similar problem after training BERT with MLM using DeepSpeed where all the saved weights are of size 1. The same `run_mlm` script worked as expected if I didn't use DeepSpeed. `RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification: size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([119547, 768]). size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([512, 768]). size mismatch for bert.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([768, 768]).`<|||||>Since this is using DeepSpeed, maybe @stas00 has an idea?<|||||>Just tried loading a model trained with `sharded_ddp` and got a different error: ```[INFO|modeling_utils.py:1044] 2021-03-18 12:56:04,792 >> loading weights file fs-test-mlm-mbert/checkpoint-1000/pytorch_model.bin Traceback (most recent call last): File "/export/proj/code/transformers/src/transformers/modeling_utils.py", line 1057, i n from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/export/proj/env_cuda11_1/lib/python3.7/site-packages/torch/serialization.py", line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/export/proj/env_cuda11_1/lib/python3.7/site-packages/torch/serialization.py", line 762, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) EOFError: Ran out of input ``` It seems the model saving might not be happening properly for these two integrations? I also noticed that only the config and weights were being saved when using `--sharded_ddp`. UPDATE: It's actually the checkpoint saving getting stuck that's causing this issue. Started another run to confirm and it got stuck while saving as well. UPDATE 2: This only happens with `zero_dp_2` and `zero_dp_3`. `simple` appears to work fine. For DeepSpeed, using stage 2 appears to fix the problem (I was previously using stage 3).<|||||>@samsontmr I have changed DeepSpeed stage to 2 and it seems works well - checkpoints are saved properly. I also used DeepSpeed stage 3 before. It seems problems are in Stage 3 integration. Maybe @stas00 could help, he did previous integration of DeepSpeed into trainer.<|||||>DeepSpeed Stage 3 integration is not finished yet, a wip PR is here if you'd like to try it - though it has a ton of debug statements still and a few more features are still missing. https://github.com/huggingface/transformers/pull/10753 Make sure you are using the latest deepspeed since zero3 had problems with saving checkpoint but the 0.3.13 release should be good. But I am pretty sure the issue is different, as I literally merged the code that generated the error you quoted 2 days ago: If it worked before please roll back to any sha before https://github.com/huggingface/transformers/pull/10760 and let me know if it works. The problem with DeepSpeed is that it doesn't currently have a way to save a fp32 checkpoint that can be loaded normally and not via DeepSpeed, https://github.com/microsoft/DeepSpeed/issues/800 so when you save a model you only get an fp16 version. However its special checkpoint (see e.g. 
`global-step10` folder in the checkpoint folder) contains all the right data and thus if you want to load deepspeed model you need to `train(resume_from_checkpoint)` instead. So if you want to resume training you can't use `from_pretrained()` at the moment, unless fp16 weights are sufficient for your work. And it sounds that it's broken at the moment. Let me know if any of this makes sense and let's see how we can make your code work with what we have. I'd be happy to adapt my recent changes to meet your needs. <|||||>Thanks for the detailed reply @stas00! Is the issue with the fp32 checkpoint saving only happening with zero3 or also with stage 2? My fine-tuning step started with no issues when I used the checkpoint from a stage 2 training run (hasn't completed yet so I'm not sure how it'll end up).<|||||>> Is the issue with the fp32 checkpoint saving only happening with zero3 or also with stage 2? It's an issue with any zero stage under deepspeed. Are you saying that the problem emerged once switching to zero3 config? I'm not at all sure it can resume from zero2 checkpoint to zero3 config - those are quite different setups. So we really need to get the fp32 saving sorted out Let's see if we can ask to make this a higher priority at https://github.com/huggingface/transformers/issues/10789 <|||||>> Are you saying that the problem emerged once switching to zero3 config? I'm not at all sure it can resume from zero2 checkpoint to zero3 config - those are quite different setups. So we really need to get the fp32 saving sorted out Yup, I didn't try going from zero2 to zero3; I just restarted my training using zero2, then fine-tuned the model without deepspeed... which somehow managed to load just by using `.from_pretrained`<|||||>As I tried to explain you were getting only fp16 weights when using from `from_pretrained` which may or may not be good enough for your needs. It mostly should be OK. Except some metrics or feature may break under fp16 if they weren't coded for it. e.g. https://github.com/huggingface/transformers/issues/10674 So let's lay out a test that I need to work on to reproduce your issues. Could you please lay out a sequence of events - ideally in code but pseudo-code will work too and then I will try to see where the breakage is. The PR I referred to includes several save/resume tests, so the saving is normal, and resume uses `train(resume_from_checkpoint)` and it works too. Though I need to add zero3 test as well. Only tested zero2 so far. The resume test is here: https://github.com/huggingface/transformers/blob/008672e6e5fb0f2d2fc6fbd367ab6e135eea3f2d/examples/tests/deepspeed/test_deepspeed.py#L279 You shouldn't get: ``` ValueError: [deepspeed] failed to resume from checkpoint ./templates/siamese-t5-small-v1_1-template ``` if you're not trying to do `train(resume_from_checkpoint)`, you can see where it gets triggered: https://github.com/huggingface/transformers/blob/008672e6e5fb0f2d2fc6fbd367ab6e135eea3f2d/src/transformers/integrations.py#L452 <|||||>As for me: I fixed my problem with unnessesary checkpoint load, where I get load error, but it still has an save error on DeepSpeed stage 3 mode. If you @stas00 could help me, I would appreciate. 
Here are the steps to reproduce my error with model save: - Clone this repo: https://github.com/exelents/try_t5_siamese - Extract the folder "qasc" from this archive: https://drive.google.com/file/d/1gwvFiPzWW0JLr0XLS25PuG2Br5S4fPbR/view?usp=sharing - Go to the cloned repo folder and run ./create-siamese-template.sh - it will create a siamese NN from two t5-small encoders in the folder ./templates/siamese-t5-small-template - then you can run ./run-siamese-small.sh - you will see normal behaviour: in the folder ./siamese_train_deepspeed/output_dir/ checkpoints will be stored every 3 steps, and you will see a sign that the weights are stored: weight files like ./siamese_train_deepspeed/output_dir/checkpoint-6/left/pytorch_model.bin will have a size of around a hundred megabytes. - Then, to see the problem, open ./run-siamese-small.sh and change "ds_config.json" to "ds_config_stage3.json" and rerun training. You will see that weight files like ./siamese_train_deepspeed/output_dir/checkpoint-6/left/pytorch_model.bin only have a size of a few kilobytes, and you cannot load the model from that checkpoint. This is the problem, and it appears only if I turn on "stage 3" mode in the config.<|||||>Thank you for the detailed instructions, @exelents. Let me adapt the existing test first to zero3 so I am sure it's working and then will try your sequence. I will keep you posted.<|||||>I can reproduce the saved model size problem. `pytorch_model.bin` with: - zero2 135M - zero3 38K but as I mentioned currently Deepspeed doesn't provide a proper way to save a model on its own. It saves the model state in its own sub-folder, e.g., in your case: ``` ls -l output_dir/checkpoint-6/global_step6/ total 809M -rw-rw-r-- 1 stas stas 53K Mar 18 14:03 zero_pp_rank_0_mp_rank_00_model_states.pt -rw-rw-r-- 1 stas stas 809M Mar 18 14:03 zero_pp_rank_0_mp_rank_00_optim_states.pt ``` as you can see the optimizer states dict has everything in it. So you should be able to resume from it. Your script is a bit old and based on an old example - so it doesn't support the current mechanism of resuming from the command line using https://github.com/huggingface/transformers/blob/master/examples/README.md#resuming-training So for resume to currently work, you need to bring your script up-to-date, probably by checking the latest version of the example you used as a base for your work. The key is `train(resume_from_checkpoint)`: if you pass this as `output_dir/checkpoint-6`, deepspeed reloads where it left off and continues on its merry way. To help you, I think the new script in your case is this one, and I pointed to where the critical part is: https://github.com/huggingface/transformers/blob/dcebe254fadfe142b6f0d6301cc8a875dca7d603/examples/seq2seq/run_translation.py#L500 (this is on master) So if you could bring your script up-to-date with the current way it'd automatically work, or you can adapt it manually as I suggested above. If any of my comments are unclear please don't hesitate to ask for clarifications. Meanwhile I will investigate why the model state_dict is almost empty under zero3 - this looks like a bug - making it work might help you move on w/o needing you to change your code. I will get back to you. <|||||>I investigated and `model.state_dict()` returns some sort of placeholder with `tensor([1.],` for each weight and no real data, that's why `pytorch_model.bin` is tiny.
Filed a request: https://github.com/microsoft/DeepSpeed/issues/872 So until we find a way to reconstruct it, I suggest to stick to zero2 otherwise you will remain locked in into DeepSpeed data files, that is you should be able to continue training but not being able to use it w/o deepspeed. <|||||>While the Deepspeed team is sorting the addition of a method to extract model weights from its checkpoint, here is an update for you. Deepspeed stores the model weights in its checkpoint file (a file per gpu) which at the moment can only be loaded via its `deepspeed.load_checkpoint`. Therefore please adapt your code to rely on that to save and resume your custom models. Do not rely on `save_pretrained` and then expect `from_pretrained` to work, since the model weights won't be there. The new method we are discussing will be able to convert the deepspeed checkpoint into consolidated from multiple gpus model weights. This is quite expensive so it shouldn't happen on each checkpoint saving and definitely shouldn't be the default because there might not be enough memory to do the consolidation (e.g. a model spread out over dozens of gpus). Bottom line, should you choose to use deepspeed zero-3 things aren't as straightforward. And we will work out a solution in this case. I suppose it's a similar story with fairscale Sharded DDP, but I am working on DeepSpeed only at the moment and can't comment on the former. Unless @sgugger who did the initial integration of fairscale beats me to it I will be able to look at it once I complete the integration of DeepSpeed ZeRO-3, which is coming along nicely but requires changes on the DeepSpeed side - so it'll take some time. <|||||>@exelents, here is how to solve your specific problem of: ``` class T5Siamese(T5PreTrainedModel): [....] def init_from_base_t5_model(model_name_or_path='t5-base', output_root='./'): [...] model_left = T5EncoderModel.from_pretrained(MODEL) model_right = T5EncoderModel.from_pretrained(MODEL) ``` with DeepSpeed zero-3. If you don't mind continuing training and not being to retrieve the final weights until https://github.com/microsoft/DeepSpeed/issues/872 is addressed, here is what you can do immediately to be able to move forward: Do the above only when starting "cold", but when resuming from a checkpoint don't do that and let instead `T5Siamese` be restored from the deepspeed checkpoint at once. Once we get the method to extract the model weights out of the DeepSpeed checkpoint, you can then recover both sub-model weights if you want to upload them to the hub or to take them elsewhere. Please let me know if this solution resonates with you. Or if you run into any hiccups I haven't considered. Note that currently under zero-2 you're only recovering fp16 weights, so it is also not ideal either. So you want to use this solution for both cases. <|||||>@samsontmr, would you kindly open a separate issue since while this is related the use-case is quite different. Please tag me and we will work on solving your use case there. Thank you! p.s. also when you test please make sure you are using the `transformers` and `deepspeeed` master since there are constant fixes merged into it. <|||||>@stas00 Thank you for the explanation. So, to load stage-3 checkpoint I should make "cold load" from original T5 weights, and then load actual weights via `deepspeed.load_checkpoint` . The question is: is it possible to use this model in usual jupyter notebook, or usual python script, if I load model weights using deepspeed function? 
Or if I trained model via deepspeed once, I will be bound to it's runner forever?<|||||>> So, to load stage-3 checkpoint I should make "cold load" from original T5 weights, and then load actual weights via deepspeed.load_checkpoint . I haven't tested it, but I can't think of any reason why it won't work. If you run into problems that I haven't considered please let me know. > The question is: is it possible to use this model in usual jupyter notebook, or usual python script, if I load model weights using deepspeed function? Yes, of course. Just note that if you use the notebook directly and don't launch an external process which launches the distributed environment, you will be limited to 1 gpu and you will have to emulate the distributed environment like so: ``` import os dist_env_1_gpu = dict(MASTER_ADDR="localhost", MASTER_PORT="10999", RANK="0", LOCAL_RANK="0", WORLD_SIZE="1") for k,v in dist_env_1_gpu.items(): os.environ[k] = v ``` and please make sure you're on the master or very recent `transformers` version for this to work. But if you just use the notebook to open a shell with the `deepspeed` launcher then you have no limitation of one gpu, e.g. see: https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb > Or if I trained model via deepspeed once, I will be bound to it's runner forever? I'm not sure what you ask here, as I don't know whether you refer to the `deepspeed` launcher, or something else. 1. The `deepspeed` launcher is a more elaborate equivalent of `python -m torch.distributed.launch`. In simple cases of a single node you can use the latter. Here all DeepSpeed needs is to have a dedicated process per gpu and the distributed env set up (even in the case of one gpu). 2. If you're asking whether your data will be locked into the deepspeed checkpoints, then at the moment the answer is yes. Once https://github.com/microsoft/DeepSpeed/issues/872 is resolved you will be able to recover the consolidated weights and use them in any way you want.<|||||>Ok, thank you for the explanation. I'm not sure if I could test these changes on my code soon, but I'll do it sooner or later.<|||||>I just proposed yet another API in https://github.com/microsoft/DeepSpeed/issues/872: > being able to call `deepspeed.consolidate_weights()` in the rank0 process which would give users full weights back (perhaps with a bool arg of whether they want the fp16 or fp32 version). So now they can just save the model as they do with any other pytorch tools. This would only be practical for small-ish models. The key here is that while this would be somewhat costly they will be able to use their code almost w/o any change if they train in various ways and not just with deepspeed. So if that was added then your current code would also work with just adding this newly proposed API. Let's see.<|||||>@stas00 thanks! My problem is solved for now since I'm also using fp16 during fine-tuning so the current stage2 saves are good enough for me.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, @stas00. I have created an issue due to problems with converting model to fp32. Can you say something about it? https://github.com/microsoft/DeepSpeed/issues/1009
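A quick way to spot the ZeRO-3 placeholder problem described above; this is a sketch, and the checkpoint path is just the one from the reproduction steps:

```python
import torch

# Load the saved state dict on CPU and count tensors that look like 1-element placeholders.
state_dict = torch.load(
    "siamese_train_deepspeed/output_dir/checkpoint-6/left/pytorch_model.bin",
    map_location="cpu",
)
placeholders = [name for name, tensor in state_dict.items() if tensor.numel() <= 1]
print(f"{len(placeholders)} of {len(state_dict)} tensors look like 1-element placeholders")
```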
transformers
10,788
closed
TypeError: __init__() got an unexpected keyword argument 'filepath' when using RAG model
I was finetuning RAG model with cmd: python finetune_rag.py \ --data_dir ../../../../data/ms-marco/ \ --output_dir ../../../../data/ms-marco/ \ --model_name_or_path ~/model/rag/rag/rag-sequence-nq \ --model_type rag_sequence \ --fp16 \ --gpus 8 \ --do_train --do_predict where ~/model/rag/rag/rag-sequence-nq was completely download from https://huggingface.co/facebook/rag-sequence-nq. Here is the log: Model name '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' not found in model shortcut name list (facebook/dpr-question_encoder-single-nq-base, facebook/dpr-question_encoder-multiset-base). Assuming '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/tokenizer.json. We won't load it. Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/added_tokens.json. We won't load it. loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/vocab.txt loading file None loading file None loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/special_tokens_map.json loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/tokenizer_config.json Model name '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' is a path, a model identifier, or url to a directory containing tokenizer files. Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer.json. We won't load it. Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/added_tokens.json. We won't load it. loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/vocab.json loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/merges.txt loading file None loading file None loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/special_tokens_map.json loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer_config.json Traceback (most recent call last): File "finetune_rag.py", line 629, in <module> main(args) File "finetune_rag.py", line 597, in main checkpoint_callback=get_checkpoint_callback(args.output_dir, model.val_metric), File "/nfs/users/s_xiangru/transformers/examples/research_projects/rag/callbacks_rag.py", line 41, in get_checkpoint_callback period=1, # maybe save a checkpoint every time val is run, not just end of epoch. TypeError: __init__() got an unexpected keyword argument 'filepath'
03-18-2021 08:14:03
03-18-2021 08:14:03
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
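A hedged sketch of the likely fix: recent pytorch-lightning versions removed the `filepath` argument of `ModelCheckpoint` in favour of `dirpath` and `filename`; the monitored metric name below is only an example, not what `callbacks_rag.py` actually uses:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# dirpath/filename replace the old filepath argument in newer pytorch-lightning releases.
checkpoint_callback = ModelCheckpoint(
    dirpath="../../../../data/ms-marco/",
    filename="{epoch}-{val_loss:.3f}",  # hypothetical filename template
    monitor="val_loss",                 # hypothetical metric name
    mode="min",
    save_top_k=1,
)
```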
transformers
10,787
closed
Can DeepSpeed ZeRO-3 be applied for training?
# 🌟 New model addition We have applied DeepSpeed v0.3.10 (ZeRO-2) to T5 training. I heard the DeepSpeed ZeRO-3 library was released 10 days ago (8 March). I'd like to adopt ZeRO-3 for our training. Can this library be applied, especially for T5 training? Do you have any experience applying this library? If so, could you share your experience?
03-18-2021 08:12:28
03-18-2021 08:12:28
Hi! You might find this reply https://github.com/huggingface/transformers/issues/10789#issuecomment-802100991 by @stas00 of interest.<|||||>@avionkmh, very soon it'll be supported, you may want to track: https://github.com/huggingface/transformers/pull/10753<|||||>@stas00, @LysandreJik, thank you for your kind information. I'm going to track the issues you recommended. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
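A minimal ZeRO-3 config written from Python, as a sketch assuming DeepSpeed >= 0.3.13; the batch sizes are illustrative and the keys follow DeepSpeed's JSON config schema:

```python
import json

ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 3},
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 1,
}
with open("ds_config_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)
# The file can then be passed to the Trainer via --deepspeed ds_config_zero3.json
```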
transformers
10,786
closed
Add XLSR-Wav2Vec2 Fine-Tuning README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-18-2021 08:01:06
03-18-2021 08:01:06
transformers
10,785
closed
Typo in M2M100 model page
Seems like there's a typo in the [m2m 100 page](https://huggingface.co/facebook/m2m100_418M): ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "ΰ€œΰ₯€ΰ€΅ΰ€¨ ΰ€ΰ€• ΰ€šΰ₯‰ΰ€•ΰ€²ΰ₯‡ΰ€Ÿ ΰ€¬ΰ₯‰ΰ€•ΰ₯ΰ€Έ ΰ€•ΰ₯€ ΰ€€ΰ€°ΰ€Ή ΰ€Ήΰ₯ˆΰ₯€" chinese_text = "η”Ÿζ΄»ε°±εƒδΈ€η›’ε·§ε…‹εŠ›γ€‚" model = M2M100ForConditionalGeneration.from_pretrained("faGreekook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") ``` Pretty sure it should be "facebook" instead of "faGreekook"
03-18-2021 03:57:29
03-18-2021 03:57:29
Indeed! Pinging @patil-suraj <|||||>urgh, my bad. Thanks for pointing it out. fixed!
transformers
10,784
closed
How to interpret fine-tuned model results and use model
Hello, Apologies if this is the wrong forum to ask these kinds of questions, but I was unable to find this in the documentation. I fine-tuned a seq2seq model on my custom dataset using the tutorial found here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq I am trying to find out the F1 and EM accuracy for the fine-tuned model, but am not sure how to interpret the output. I've attached a link to the training's output below: https://github.com/zakerytclarke/transformers/tree/master/modelResults ``` { "epoch": 3.0, "eval_gen_len": 55.7429, "eval_loss": 2.063843250274658, "eval_mem_cpu_alloc_delta": 1998448, "eval_mem_cpu_peaked_delta": 638828, "eval_rouge1": 33.8505, "eval_rouge2": 13.1365, "eval_rougeL": 27.8332, "eval_rougeLsum": 31.5921, "eval_runtime": 119.8097, "eval_samples": 35, "eval_samples_per_second": 0.292 } ``` Can you point me to documentation about how to interpret these results and how I can load my fine-tuned model in order to evaluate it on a new piece of text? Thanks for your help, --Zak
03-18-2021 02:48:27
03-18-2021 02:48:27
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? @patrickvonplaten @stas00 Thanks!<|||||>@LysandreJik Thanks for pointing me in the right direction, I've moved the post over to the forum.
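A minimal sketch of loading a fine-tuned seq2seq checkpoint and running it on new text; the model directory is a placeholder for wherever the training script wrote its output:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_dir = "path/to/output_dir"  # hypothetical path to the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```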
transformers
10,783
closed
Fix bug in input check for LengthGroupSampler
# What does this PR do? This commit fixes a bug in the LengthGroupSampler where if model_input_name is not set, the default value is None instead of "input_ids" ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? I did not write a test for this, but if necessary I can. Neither this sampler nor the distributed version currently have test coverage for the ValueError this bug raises, but it might not be bad to have. ## Who can review? @patrickvonplaten
03-18-2021 02:18:31
03-18-2021 02:18:31
transformers
10,782
closed
add dockerfile for zero optimzier
This PR adds a dockerfile for zero optimzier
03-18-2021 01:24:48
03-18-2021 01:24:48
transformers
10,781
closed
Add support for detecting intel-tensorflow version
The `intel-tensorflow` PyPI package is not currently detected by transformers. This PR adds support for detecting the Intel TF version.
03-17-2021 20:49:44
03-17-2021 20:49:44
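One way to detect whichever TensorFlow distribution is installed, sketched with the standard library only (assuming Python 3.8+ for `importlib.metadata`); the package list mirrors the idea of the PR, not its exact code:

```python
import importlib.metadata as importlib_metadata

# Try the known TensorFlow distributions in order and report the first one found.
for candidate in ("tensorflow", "tensorflow-cpu", "tensorflow-gpu", "intel-tensorflow"):
    try:
        print(candidate, importlib_metadata.version(candidate))
        break
    except importlib_metadata.PackageNotFoundError:
        continue
else:
    print("no TensorFlow distribution found")
```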
transformers
10,780
closed
Improve the speed of adding tokens from added_tokens.json
# What does this PR do? This PR significantly improves the speed of adding tokens from `added_tokens.json`, when it contains a large number of tokens (e.g., 20,000+). When adding one token at a time, it uses `bisect` to insert the token into `PreTrainedTokenizer.unique_no_split_tokens`. Please see a detailed description and motivation in this issue: https://github.com/huggingface/transformers/issues/10676 This change relies on the requirement that `unique_no_split_tokens` is sorted. (I'm not sure if this is a fair assumption, otherwise I can check if it's already sorted before the insertion.) Fixes #10676 ## Benchmark MacOS Mojave (CPU: 2.6 GHz Intel Core i7) python==3.7.9 torch==1.7.1+cpu transformers==3.5.1 or transformers==4.4.2 (similar results) ```python import json from timeit import default_timer as timer from transformers import DistilBertTokenizer model_dir = '/home/username/saved_model' # Load a pretrained model's tokenizer, save it to {model_dir} tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') tokenizer.save_pretrained(model_dir) print('len(tokenizer) of a pretrained model distilbert-base-uncased:', len(tokenizer)) # Generate n values as token suffix, to randomize the insertion position of added tokens low = 30522 high = 82181 # exclusive random_suffix = list(range(low, high)) random.shuffle(random_suffix) # Save the n new tokens with correct indices to added_tokens.json added_tokens = {f'addedtoken{val}': low + idx for idx, val in enumerate(random_suffix)} with open(f'{model_dir}/added_tokens.json', 'w') as f: json.dump(added_tokens, f) print(f'saved {len(added_tokens)} tokens to added_tokens.json') # Load the tokenizer from {model_dir}, and print the elapsed time start = timer() tokenizer = DistilBertTokenizer.from_pretrained(model_dir) print('len(tokenizer) after loading from saved model:', len(tokenizer)) end = timer() print('Elapsed (seconds):', round(end - start, 3)) # Make sure tokenizer.unique_no_split_tokens remains sorted all_values = tokenizer.unique_no_split_tokens assert all(all_values[i+1] > all_values[i] for i in range(len(all_values) - 1)) ``` **If we save 21659 tokens in added_tokens.json**, output before the change: ```bash len(tokenizer) of a pretrained model distilbert-base-uncased: 30522 saved 21659 tokens to added_tokens.json len(tokenizer) after loading from saved model: 52181 *** Elapsed (seconds): 76.95 ``` output after the change: ```bash len(tokenizer) of a pretrained model distilbert-base-uncased: 30522 saved 21659 tokens to added_tokens.json len(tokenizer) after loading from saved model: 52181 *** Elapsed (seconds): 0.308 ``` **If we save 51659 tokens in added_tokens.json**, output before the change: ```bash len(tokenizer) of a pretrained model distilbert-base-uncased: 30522 saved 51659 tokens to added_tokens.json len(tokenizer) after loading from saved model: 82181 *** Elapsed (seconds): 527.795 ``` output after the change: ```bash len(tokenizer) of a pretrained model distilbert-base-uncased: 30522 saved 51659 tokens to added_tokens.json len(tokenizer) after loading from saved model: 82181 *** Elapsed (seconds): 0.83 ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). 
Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Who can review? @lhoestq @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2021 20:24:14
03-17-2021 20:24:14
Hi! Could you provide a way to benchmark the change so that we can see in which situations is the speedup visible? Thank you!<|||||>> Hi! Could you provide a way to benchmark the change so that we can see in which situations is the speedup visible? Thank you! Hi @LysandreJik , I just updated the benchmark code snippets and sample output in the description. Hopefully it can validate the change.<|||||>Hi @LysandreJik , just wanted to follow up on this, is there anything else you would like to see on this PR? I also tried to run the slow test. It looked ok, but with a few connection errors: `ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on`, and re-run didn't fix it. I was wondering how the slow tests look like on your end. Thanks!<|||||>Awesome, thanks to both of you!
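A sketch of the sorted-insert idea described in the PR body; the helper name and its integration into the tokenizer are illustrative, not the merged implementation:

```python
import bisect

def insert_one_token_to_ordered_list(token_list, new_token):
    # bisect_left finds the insertion point in O(log n); insert only if not already present.
    insertion_idx = bisect.bisect_left(token_list, new_token)
    if insertion_idx < len(token_list) and token_list[insertion_idx] == new_token:
        return
    token_list.insert(insertion_idx, new_token)

unique_no_split_tokens = ["[CLS]", "[MASK]", "[SEP]"]
for token in ("addedtoken2", "addedtoken1", "addedtoken2"):
    insert_one_token_to_ordered_list(unique_no_split_tokens, token)
print(unique_no_split_tokens)  # stays sorted, no duplicates
```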
transformers
10,779
open
EncoderDecoderModel with different model dimensions
## Who can help @patrickvonplaten, @patil-suraj ## Information When instantiating an `EncoderDecoderModel` from two pretrained models whose model dimensions are different, a `RuntimeError` occurs at the `CrossAttention` calculation step. The reason is that, regardless of a potentially different encoder model dimension, the projection layers for key and value are initialized with the decoder model dimension. This leads to a dimensionality mismatch when the encoder outputs (encoder model dimension) are multiplied through the key and value projection layers (decoder model dimension). Looking a little deeper into the API, I suspect it should be easy to provide the correct encoder model dimension to the `Attention` module in most model implementations and their key/value projection layers when the `add_cross_attention=True` argument is set. Also, I think the encoder model dimension should be easily accessible via `self.encoder.config.d_model` or something along these lines. Generally, I think there is no reason against using `EncoderDecoderModel` with `encoder='bert-large-cased'` (`d_model=1024`) and `decoder='gpt2'` (`d_model=768`), but currently this setup doesn't work. Thanks a lot for looking into it :) Best regards Lars
03-17-2021 18:40:33
03-17-2021 18:40:33
Hey @LarsHill, Yes, we should fix this indeed :-) I'll try to open a PR for this this week!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
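A sketch reproducing the setup described in the issue; whether the forward pass fails with a shape mismatch depends on the transformers version, since later releases added a projection between encoder and decoder hidden sizes:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-large-cased", "gpt2")
print(model.config.encoder.hidden_size, model.config.decoder.n_embd)  # 1024 vs. 768

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
inputs = tokenizer("a short test sentence", return_tensors="pt")
# On versions affected by this issue the call below raises a shape-mismatch RuntimeError
# inside the decoder's cross-attention; newer versions insert a projection instead.
outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=inputs["input_ids"],
)
print(outputs.logits.shape)
```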
transformers
10,778
closed
Smmp batch not divisible by microbatches fix
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes: Batch size not divisible by microbatches issue in sagemaker model parallel. Following is summary of changes: 1. Updated SequentialDistributedSampler to generate samples of multiples of batchsize. 2. Updated preds_gatherer and labels_gatherer calls to be multiple of batch size. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @philschmid @anirudh2290 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2021 18:24:13
03-17-2021 18:24:13
Thanks @mansimane! I added the test and realized the implementation was not working as expected, so I fixed it (I had forgotten this sampler does not behave like the `DistributedSampler`, which takes one sample every `num_replicas`, but instead slices the indices at the beginning). If you want to have a last look to check I didn't do anything wrong, that would be great. The rest of the changes are just the result of our styling scripts.<|||||>Changes look good to me too. I tested with a microbatch size of 2.
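For illustration, the padding idea described in this PR can be sketched as follows. This is a simplified sketch with invented helper names, not the actual `SequentialDistributedSampler` code: indices are padded to a multiple of `batch_size * num_replicas` by repeating from the start, so every rank gets the same number of full batches (the extra predictions get dropped after gathering).

```python
import math

def padded_shard(dataset_len, batch_size, num_replicas, rank):
    """Sketch: pad indices so each rank gets the same number of full batches."""
    indices = list(range(dataset_len))
    # Round the total up to a multiple of batch_size * num_replicas ...
    total = math.ceil(dataset_len / (batch_size * num_replicas)) * batch_size * num_replicas
    # ... by repeating indices from the start (very short datasets would need a loop here).
    indices += indices[: total - len(indices)]
    # Each rank then takes a contiguous slice of equal size.
    per_rank = total // num_replicas
    return indices[rank * per_rank : (rank + 1) * per_rank]

# Example: 10 samples, batch size 4, 2 replicas -> 16 padded indices, 8 per rank.
print(padded_shard(10, 4, 2, 0))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(padded_shard(10, 4, 2, 1))  # [8, 9, 0, 1, 2, 3, 4, 5]
```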
transformers
10,777
closed
[trainer] make failure to find a resume checkpoint fatal + tests
As a follow-up to https://github.com/huggingface/transformers/pull/10760 this PR: - makes a failure to find a valid checkpoint to resume from fatal when an explicit `resume_from_checkpoint` was passed - extends `test_can_resume_training` to validate this change and also the boolean `resume_from_checkpoint` case - adds a small `test_can_resume_training` refactoring, so it's easy to see the same args are used on each invocation. @sgugger
03-17-2021 17:48:00
03-17-2021 17:48:00
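The behaviour this PR describes boils down to a guard along the following lines. This is a simplified sketch, not the exact Trainer code:

```python
import os

def resolve_resume_checkpoint(resume_from_checkpoint):
    """Sketch: fail loudly when an explicitly requested checkpoint cannot be found."""
    if resume_from_checkpoint is None:
        return None
    if not os.path.isdir(resume_from_checkpoint):
        # An explicit path that does not exist should stop training immediately
        # instead of silently starting from scratch.
        raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
    return resume_from_checkpoint
```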
transformers
10,776
closed
[examples] document resuming
This PR documents how one can resume training in examples. Thanks to @sgugger for the notes. @sgugger
03-17-2021 17:00:47
03-17-2021 17:00:47
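As a quick illustration of the documented workflow (a sketch; see the examples docs for the exact flags), resuming with the `Trainer` API means pointing `train()` at a saved checkpoint. The snippet assumes `trainer` is an already constructed `transformers.Trainer`:

```python
# Resume from the most recent checkpoint saved in the configured output_dir ...
trainer.train(resume_from_checkpoint=True)

# ... or from an explicit checkpoint folder.
trainer.train(resume_from_checkpoint="output_dir/checkpoint-500")
```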
transformers
10,775
closed
Check copies blackify
# What does this PR do? This PR updates the check_copies util to apply black when checking whether a copy has diverged from the original when a replacement happens. An example of the problem is given with the diff in `modeling_mobilebert.py` here, where the copy check could not be applied to the whole class because of styling divergences. It also fixes a bug where the check was not applied to functions past the end of their definition (it wasn't checking the function body but stopped at the first unindent, which is where the closing parenthesis was). As a consequence, three files are changed because they had diverged from the original function: - modeling_m2m_100.py - modeling_roberta.py - modeling_speech_to_text.py I'm not sure if the check should be removed on those or not (cc @patil-suraj)
03-17-2021 16:59:09
03-17-2021 16:59:09
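A sketch of the "blackify" idea, not the actual `utils/check_copies.py` code: run both the original and the copied snippet through black before comparing, so pure styling differences don't register as divergence. The line length of 119 matches the value transformers configures for black.

```python
import black

def normalized(code: str) -> str:
    # Apply black so that quoting / trailing-comma / wrapping differences disappear.
    return black.format_str(code, mode=black.FileMode(line_length=119))

def is_copy_consistent(original: str, copy: str) -> bool:
    # Compare the blackified versions instead of the raw strings.
    return normalized(original) == normalized(copy)
```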
transformers
10,774
closed
torch.nn.modules.module.ModuleAttributeError: 'AlbertEmbeddings' object has no attribute 'bias'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers-cli convert --model_type albert --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-64000 --config $ALBERT_BASE_DIR/albert_config.json --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin i am running this script AlbertConfig { "attention_probs_dropout_prob": 0, "bos_token_id": 2, "classifier_dropout_prob": 0.1, "down_scale_factor": 1, "embedding_size": 128, "eos_token_id": 3, "gap_size": 0, "hidden_act": "gelu", "hidden_dropout_prob": 0, "hidden_size": 768, "initializer_range": 0.02, "inner_group_num": 1, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "albert", "net_structure_type": 0, "num_attention_heads": 12, "num_hidden_groups": 1, "num_hidden_layers": 12, "num_memory_blocks": 0, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 31990 } Converting TensorFlow checkpoint from /data/NLP/ALBERT_Inspird_Train/albert_base/model.ckpt-64000 Loading TF weight bert/embeddings/layer_normalization/beta with shape [128] Loading TF weight bert/embeddings/layer_normalization/beta/adam_m with shape [128] Loading TF weight bert/embeddings/layer_normalization/beta/adam_v with shape [128] Loading TF weight bert/embeddings/layer_normalization/gamma with shape [128] Loading TF weight bert/embeddings/layer_normalization/gamma/adam_m with shape [128] Loading TF weight bert/embeddings/layer_normalization/gamma/adam_v with shape [128] Loading TF weight bert/embeddings/position_embeddings with shape [512, 128] Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128] Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128] Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128] Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128] Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128] Loading TF weight bert/embeddings/word_embeddings with shape [31990, 128] Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [31990, 128] Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [31990, 128] Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [768] Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [768] Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [768] Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 768] Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 768] Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [768, 768] 
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight 
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_v with shape [768] Loading TF weight 
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta with shape [768] Loading TF weight 
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_m with shape [768] Loading TF weight 
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_v with shape [768] Loading TF weight 
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta with shape [768] Loading TF weight 
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_v with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_m with shape [768] Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_v with shape [768] Loading TF weight bert/pooler/dense/bias with shape [768] Loading TF weight bert/pooler/dense/bias/adam_m with shape [768] Loading TF weight bert/pooler/dense/bias/adam_v with shape [768] Loading TF weight bert/pooler/dense/kernel with shape [768, 768] Loading TF weight bert/pooler/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/pooler/dense/kernel/adam_v with shape [768, 768] Loading TF weight cls/predictions/output_bias with shape [31990] Loading TF weight cls/predictions/output_bias/adam_m with shape [31990] Loading TF weight cls/predictions/output_bias/adam_v with shape [31990] Loading TF weight cls/predictions/transform/dense/bias with shape [128] Loading TF weight cls/predictions/transform/dense/bias/adam_m with shape [128] Loading TF weight cls/predictions/transform/dense/bias/adam_v with shape [128] Loading TF weight cls/predictions/transform/dense/kernel with shape [768, 128] Loading TF weight cls/predictions/transform/dense/kernel/adam_m with shape [768, 128] Loading TF weight cls/predictions/transform/dense/kernel/adam_v with shape [768, 128] Loading TF weight cls/predictions/transform/layer_normalization_25/beta with shape [128] Loading TF weight cls/predictions/transform/layer_normalization_25/beta/adam_m with shape [128] Loading TF weight cls/predictions/transform/layer_normalization_25/beta/adam_v with shape [128] Loading TF weight cls/predictions/transform/layer_normalization_25/gamma with shape [128] Loading TF weight cls/predictions/transform/layer_normalization_25/gamma/adam_m with shape [128] Loading TF weight cls/predictions/transform/layer_normalization_25/gamma/adam_v with shape [128] Loading TF weight cls/seq_relationship/output_bias with shape [2] Loading TF weight cls/seq_relationship/output_bias/adam_m with shape [2] Loading TF weight cls/seq_relationship/output_bias/adam_v with shape [2] Loading TF weight cls/seq_relationship/output_weights with shape [2, 768] Loading TF weight cls/seq_relationship/output_weights/adam_m with shape [2, 768] Loading TF weight cls/seq_relationship/output_weights/adam_v with shape [2, 768] Loading TF weight global_step with shape [] bert/embeddings/layer_normalization/beta bert/embeddings/layer_normalization/beta/adam_m bert/embeddings/layer_normalization/beta/adam_v bert/embeddings/layer_normalization/gamma bert/embeddings/layer_normalization/gamma/adam_m bert/embeddings/layer_normalization/gamma/adam_v bert/embeddings/position_embeddings bert/embeddings/position_embeddings/adam_m bert/embeddings/position_embeddings/adam_v bert/embeddings/token_type_embeddings bert/embeddings/token_type_embeddings/adam_m bert/embeddings/token_type_embeddings/adam_v bert/embeddings/word_embeddings bert/embeddings/word_embeddings/adam_m bert/embeddings/word_embeddings/adam_v bert/encoder/embedding_hidden_mapping_in/bias bert/encoder/embedding_hidden_mapping_in/bias/adam_m bert/encoder/embedding_hidden_mapping_in/bias/adam_v 
bert/encoder/embedding_hidden_mapping_in/kernel bert/encoder/embedding_hidden_mapping_in/kernel/adam_m bert/encoder/embedding_hidden_mapping_in/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_m bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_v bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma 
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_m bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_v bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_m bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_v bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_m bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_v bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_m bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_v bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_m bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_v bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_m bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_v bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_m bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_v bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_m bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_v bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_m bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_v bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_m bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_v bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_m bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_v bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_m bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_v bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_m bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_v bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta 
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_m bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_v bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_m bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_v bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_m bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_v bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_m bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_v bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_m bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_v bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_m bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_v bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_m bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_v bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_m bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_v bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_m bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_v bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_m bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_v bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_m bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_v bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_m bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_v bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_m bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_v bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma 
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_m bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_v bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_m bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_v bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_m bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_v bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_m bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_v bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_m bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_v bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_m bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_v bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_m bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_v bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_m bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_v bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_m bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_v bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_m bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_v bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_m bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_v bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_m bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_v bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_m bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_v bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta 
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_m bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_v bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_m bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_v bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_m bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_v bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_m bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_v bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_m bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_v bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_m bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_v bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_m bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_v bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_m bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_v bert/pooler/dense/bias bert/pooler/dense/bias/adam_m bert/pooler/dense/bias/adam_v bert/pooler/dense/kernel bert/pooler/dense/kernel/adam_m bert/pooler/dense/kernel/adam_v cls/predictions/output_bias cls/predictions/output_bias/adam_m cls/predictions/output_bias/adam_v cls/predictions/transform/dense/bias cls/predictions/transform/dense/bias/adam_m cls/predictions/transform/dense/bias/adam_v cls/predictions/transform/dense/kernel cls/predictions/transform/dense/kernel/adam_m cls/predictions/transform/dense/kernel/adam_v cls/predictions/transform/layer_normalization_25/beta cls/predictions/transform/layer_normalization_25/beta/adam_m cls/predictions/transform/layer_normalization_25/beta/adam_v cls/predictions/transform/layer_normalization_25/gamma cls/predictions/transform/layer_normalization_25/gamma/adam_m cls/predictions/transform/layer_normalization_25/gamma/adam_v cls/seq_relationship/output_bias cls/seq_relationship/output_bias/adam_m cls/seq_relationship/output_bias/adam_v cls/seq_relationship/output_weights cls/seq_relationship/output_weights/adam_m cls/seq_relationship/output_weights/adam_v global_step Skipping albert/embeddings/layer_normalization/beta Traceback (most recent call last): File "/home/dshah/venv/bin/transformers-cli", line 8, in sys.exit(main()) File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/commands/transformers_cli.py", line 33, in main service.run() File 
"/home/dshah/venv/lib64/python3.8/site-packages/transformers/commands/convert.py", line 80, in run convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_albert(model, config, tf_checkpoint_path) File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/modeling_albert.py", line 163, in load_tf_weights_in_albert pointer = getattr(pointer, "bias") File "/home/dshah/venv/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 771, in getattr raise ModuleAttributeError("'{}' object has no attribute '{}'".format( torch.nn.modules.module.ModuleAttributeError: 'AlbertEmbeddings' object has no attribute 'bias' could you please look in to this @LysandreJik
03-17-2021 16:02:41
03-17-2021 16:02:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
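For context on why the traceback ends in `getattr(pointer, "bias")`: TF-to-PyTorch converters typically walk each `/`-separated TensorFlow variable name, mapping every scope to a sub-module attribute and translating `beta`/`gamma` to `bias`/`weight` along the way. One plausible reading of the log above is that the checkpoint uses Keras-style auto-generated names such as `layer_normalization` instead of the names the converter expects (e.g. `LayerNorm`), so the walk ends at the wrong module and the final `getattr` fails. The snippet below is an illustrative sketch of that general name-walk pattern, not the library's `load_tf_weights_in_albert`:

```python
def follow_tf_name(model, tf_name):
    """Sketch: map a TF name like 'embeddings/LayerNorm/gamma' to model.embeddings.LayerNorm.weight."""
    rename = {"gamma": "weight", "beta": "bias", "kernel": "weight"}
    pointer = model
    for scope in tf_name.split("/"):
        scope = rename.get(scope, scope)
        # An unexpected scope name (e.g. 'layer_normalization') is not an attribute of the
        # current module, so the walk either has to skip it or fails right here.
        pointer = getattr(pointer, scope)
    return pointer
```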
transformers
10,773
closed
Wav2Vec2 - fix flaky test
# What does this PR do? The test: `tests/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_ctc_loss_inference` is a bit flaky. Locally, these bug fixes seem to solve the problem. I ran the test 200 times locally. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
03-17-2021 13:45:00
03-17-2021 13:45:00
transformers
10,772
closed
Differences between S2T and Wav2Vec2
# πŸš€ Feature request There are differences between S2T and Wav2Vec2 that are hard to reason about and that may be fixable. ## Motivation Adding something like an AutomaticSpeechRecognitionPipeline might be desirable and would be hard to do in the current state. If/when new multimodal models are added, it is going to add more and more complexity. Aiming for a consistent API is desirable IMO. ## Description - There is no `AutoProcessor.from_pretrained`. - Wav2Vec2 cannot use `skip_special_tokens` for the decode variant, because it skips `<pad>` tokens early, which removes all duplicate letters from the output. IMO it's a "bug", as `<pad>` in the context of CTC is not a special token (at least until repeated letters are resolved). - Wav2Vec2 uses a single forward pass, whereas S2T uses the generate function. It would be nice if there could be one interface only (maybe just overload `Wav2Vec2ForCTC.generate`?). - S2T overloads `input_ids` with float tensors when generating, which works in practice but does seem like a piggy-back of the generate function and is definitely confusing to use. If `generate` is generic enough, maybe `input_ids` should be renamed to reflect that (`input_ids` are IDs everywhere else in transformers). It could be a simple internal variable rename; I don't imply we should change any function signature anywhere, just that the variable is not necessarily IDs. Isn't it a bit like `inputs_embeds`? - Wav2Vec2Processor returns 'input_values' where S2TProcessor returns 'input_features'. They seem (at least in appearance) to be the same. Would it be better to use only one name if they are? ## Your contribution Happy to contribute with PRs, but I lack the more general view to be sure about which direction to take and where the "better" fixes are.
03-17-2021 12:57:15
03-17-2021 12:57:15
@patrickvonplaten Who should I tag for S2T ?<|||||>@patil-suraj for s2t<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
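To make the API asymmetry described in this issue concrete, here is a rough sketch of the two inference paths using the usual public checkpoints. The exact argument routing (especially what `generate` accepts for S2T) has shifted across versions, the dummy audio stands in for real speech, and the S2T feature extractor needs torchaudio; treat this as an illustration, not a canonical recipe.

```python
import numpy as np
import torch
from transformers import (
    Speech2TextForConditionalGeneration,
    Speech2TextProcessor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

audio = np.zeros(16_000, dtype=np.float32)  # stand-in for one second of 16 kHz speech

# Wav2Vec2: a single forward pass, then CTC argmax decoding of the logits.
w2v_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
w2v_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
w2v_inputs = w2v_processor(audio, sampling_rate=16_000, return_tensors="pt")  # -> input_values
logits = w2v_model(w2v_inputs["input_values"]).logits
w2v_text = w2v_processor.batch_decode(torch.argmax(logits, dim=-1))

# S2T: an encoder-decoder model driven through generate().
s2t_processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
s2t_model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
s2t_inputs = s2t_processor(audio, sampling_rate=16_000, return_tensors="pt")  # -> input_features
generated_ids = s2t_model.generate(s2t_inputs["input_features"], attention_mask=s2t_inputs["attention_mask"])
s2t_text = s2t_processor.batch_decode(generated_ids, skip_special_tokens=True)
```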
transformers
10,771
closed
Fix ProphetNet Flaky Test
# What does this PR do? This PR aims at solving the flaky ProphetNet test: https://app.circleci.com/pipelines/github/huggingface/transformers/21170/workflows/749ec532-0847-4d1b-8078-ca27bfdbe318/jobs/182387 . I double-checked the code and everything looks correct. Also, I've run the test 100 times locally with the increased tolerance to somewhat make sure that it fixes the flaky CI. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-17-2021 12:18:47
03-17-2021 12:18:47
I may be mistaken, but this test has appeared since the ProphetNet refactor, right? Is it due to that refactor, or is it a newly added test?
transformers
10,770
closed
TAPAS for Question Generation
Hi, is there a way to generate questions for a table with TAPAS? Or is it only for Question Answering?
03-17-2021 10:37:06
03-17-2021 10:37:06
Yes you can. TAPAS is an encoder-model, and can be used in an encoder-decoder set-up, like so: ``` from transformers import EncoderDecoderModel model = EncoderDecoderModel.from_encoder_decoder_pretrained("google/tapas-base", "bert-base-cased") ``` You can specify any decoder you want, here I'm using BERT as a decoder, but you can also use GPT-2, etc (any model that supports the `is_decoder` logic). For more information, see the [docs](https://huggingface.co/transformers/model_doc/encoderdecoder.html) of `EncoderDecoderModel`. <|||||>Thanks @NielsRogge , I went through the concept of `EncoderDecoderModel` and I have a doubt in implementing it for TAPAS - Unlike normal BERT models, TAPAS tokenizer takes `table`, `queries` and `answers` for fine-tuning. So if I want to generate questions, should I skip questions for TAPAS (currently using `google/tapas-large`) encoder and give them to decoder (currently using `GPT-2-medium`) instead?<|||||>Yes, if you want to generate questions given a table, then you should only encode the table (you can set `queries=None` when providing a table to `TapasTokenizer`). <|||||>Thanks @NielsRogge , I'll implement and let you know<|||||>Great, I'm curious to see the results. Another use case could be to generate answers given a question + table with an EncoderDecoder set-up. <|||||>Hi @NielsRogge , I tried with the above approach by passing `table` to encoder and `queries` to decoder. But while encoding, it's giving warning as - **TAPAS is a question answering model but you have not passed a query. Please be aware that the model will probably not behave correctly** which is because of [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L1014). I thought passing 'queries' to TAPAS is mandatory. Anyhow I trained the model but it's not performing as expected. While inferencing, it is giving same question (not fully formed) for any input that I pass. Below is the sample snippet ![enc_dec](https://user-images.githubusercontent.com/41769919/111947822-27469800-8b04-11eb-8802-e2a1f7f53231.PNG) <|||||>Yeah that warning is shown because TAPAS has been pre-trained on text-table pairs. You can ignore that warning, because we can still encode just the tables. What kind of generation method are you using? Greedy decoding, beam search? (See [this post](https://huggingface.co/blog/how-to-generate) for the different arguments you can pass to `.generate()`). <|||||>@NielsRogge , I'm not specifying any generation method so it should be Greedy itself.<|||||>Can you provide a notebook?<|||||>Hi @NielsRogge , [Here](https://colab.research.google.com/drive/1d8m_hmipL-1ZzU15LfA2XHmypKipvnJR?usp=sharing) is the colab link for the replica of my work. As I cannot share or upload any files from my Office' VPN, I created this notebook which is same as the one I'm working with in our VM's. Only change is I used `google/tapas-base` and `gpt2` in colab whereas I'm using `google/tapas-large` and `gpt2-medium` for my official work.<|||||>Hi @NielsRogge I'm able to generate decent questions but only one generic question per table. How can I extend this to generate a question based on a particular cell value? Because currently it's giving very basic ones like - `what are all of the countries?` `what are the names of all the drivers?`<|||||>Ok great :) sorry I didn't have the time yet to look at your notebook. Looking at it now, it looks really clean! 
I think the questions that it will generate highly depend on the training data you provide. I see you're currently training on SQA questions, and only those for which `position==0`. These questions are almost always very generic, because SQA is a dataset involving conversational questions, which means that the first question (with position 0) is most of the time a very generic question regarding a table, and the ones that come after (with position 1, 2, 3) are then more specific follow-up questions (regarding particular cell values). So either you can also add those follow-up questions to your training dataset, or consider train on questions of the [WTQ](https://nlp.stanford.edu/blog/wikitablequestions-a-complex-real-world-question-understanding-dataset/) dataset? (Note that there is an overlap between WTQ and SQA questions - SQA was created based on WTQ). Or maybe questions from the WikiSQL dataset (which is available in HuggingFace datasets)? Very nice use case!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @NielsRogge, Thanks for the TAPAS implementation! I'm trying to follow this use-case in order train the model to perform conditional generation from tables. Since TAPAS can encode the semi-structured meaning in tables, I guessed it was a good choice to use it as an encoder and say GPT2 as decoder. I however encountered a problem when trying to generate from that EncoderDecoder model: Here is the relevant pieces of code, this: ![image](https://user-images.githubusercontent.com/4630195/149192558-a2e6b2b7-4eed-4792-9540-8d76b4fb7b9c.png) results in this error: ![image](https://user-images.githubusercontent.com/4630195/149193090-a88deb99-724f-4fdc-8010-615a1776e1b9.png) I guess this is since model.generate() for EncoderDecoder does not expect to have the extra `token_type_ids` that TAPAS has. Can you think of a way I can make this work? Thanks!
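A minimal sketch of the table-only encoding discussed in this thread (the table contents here are invented for illustration, and this is not code from the notebook above):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel  # note: TAPAS requires the torch-scatter package

# TapasTokenizer expects a pandas DataFrame with string-valued cells
table = pd.DataFrame(
    {"Driver": ["Lewis Hamilton", "Max Verstappen"], "Team": ["Mercedes", "Red Bull"]}
).astype(str)

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
model = TapasModel.from_pretrained("google/tapas-base")

# No `queries` argument: the tokenizer emits the "no query" warning mentioned above,
# which can be ignored when the goal is question generation.
inputs = tokenizer(table=table, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```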
transformers
10,769
closed
[Generate] Add save mode logits processor to remove nans and infs if necessary
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> It can happen that the output logits of models contain `inf` or even `nan` values. Those values will necessarily lead to errors when using the `sample(...)` or `beam_sample(...)` method. This PR adds an optional `InfNanRemoveLogitsProcessor` that - enabled - should remove those values. It should help to fix flaky ci failures like this one: https://app.circleci.com/pipelines/github/huggingface/transformers/21081/workflows/36711d05-4282-4167-88df-59fbda03fe33/jobs/181274 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2021 10:24:20
03-17-2021 10:24:20
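For context, a simplified sketch of what such a logits processor does; the class name below is hypothetical, and the library's actual `InfNanRemoveLogitsProcessor` may differ in detail:
```python
import torch
from transformers import LogitsProcessor

class RemoveNanInfLogitsProcessor(LogitsProcessor):  # hypothetical name, for illustration only
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores = scores.clone()
        scores[scores != scores] = 0.0  # NaN is never equal to itself
        scores[scores == float("inf")] = torch.finfo(scores.dtype).max
        return scores
```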
transformers
10,768
closed
Bug in multi-gpu training setting max_iters
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: - Python version: 3.8 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger, @patrickvonplaten, @patil-suraj HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> trainer: @sgugger ## Information I am training T5 model using the command in the repo on 4 GPUs in a distributed way, the issue arises that if one set max_iters then the number of iterations with 4 GPUs is not divided by 4 anymore, only one get speed up if max_iters is not set, and this looks like this is a bug. ## To reproduce Steps to reproduce the behavior: Please run python -m torch.distributed.launch \ --nproc_per_node 8 \ examples/seq2seq/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name xsum \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 500 \ --max_val_samples 500 --max_iters 100 compare the results with the case you run on 1 GPU, both would have the same number of iterations to get completed once running which is not correct ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
03-17-2021 09:28:37
03-17-2021 09:28:37
There is no `--max_iters` argument in the `run_summarization` script, so I'm not sure what you're referring.<|||||>Hi I apologize for the typo, this is max_steps, if you set it and run a code in a distributed way and compare it with non-distributed way, the number of steps would not differ, but if you try with setting max_train_epochs, you would see less number of iterations when training on multiple GPUs, meaning that the code is correctly setting the parameters in that case. thanks On Wed, Mar 17, 2021 at 2:08 PM Sylvain Gugger ***@***.***> wrote: > There is no --max_iters argument in the run_summarization script, so I'm > not sure what you're referring. > > β€” > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/10768#issuecomment-801066552>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AS45N4YLS7NRPPJ32GUWPW3TECSTBANCNFSM4ZKGUFPA> > . > <|||||>Yes, `max_steps` is the number of training steps, so whether you run on one or several GPUs, you will do that number of training steps. That is the intended behavior and it is not a bug. `num_epochs` is the number of training epochs. Depending on your number of GPUs you will not have the same number of training steps per epoch (as long as you keep `per_device_train_batch_size` the same) so you will not train for the same number of total steps.<|||||>Hi thanks for the response, still to me if a user needs max_steps on multiple gpus, it needs to become a smaller number as this divides per number of gpus, similar to number of epochs. On Thu, Mar 18, 2021 at 3:36 PM Sylvain Gugger ***@***.***> wrote: > Yes, max_steps is the number of training steps, so whether you run on one > or several GPUs, you will do that number of training steps. That is the > intended behavior and it is not a bug. > > num_epochs is the number of training epochs. Depending on your number of > GPUs you will not have the same number of training steps per epoch (as long > as you keep per_device_train_batch_size the same) so you will not train > for the same number of total steps. > > β€” > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/10768#issuecomment-801980651>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AS45N4ZF2CRND434WMFIIUDTEIFYBANCNFSM4ZKGUFPA> > . > <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
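To make the distinction above concrete, a back-of-the-envelope sketch (the numbers are illustrative, not taken from the reported run):
```python
import math

num_train_samples = 500
per_device_train_batch_size = 4

for num_gpus in (1, 8):
    steps_per_epoch = math.ceil(num_train_samples / (per_device_train_batch_size * num_gpus))
    # --num_train_epochs 3: total steps shrink as GPUs are added (375 on 1 GPU vs 48 on 8 GPUs here)
    print(num_gpus, "GPU(s) ->", 3 * steps_per_epoch, "steps for 3 epochs")

# --max_steps 100, by contrast, always runs 100 optimizer steps regardless of GPU count;
# each step simply consumes num_gpus * per_device_train_batch_size samples.
```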
transformers
10,767
closed
add run_common_voice script
# What does this PR do? This PR adds the `run_common_voice.py` script to fine-tune XLSR-Wav2Vec2 models on the `common_voice` dataset.
03-17-2021 07:46:46
03-17-2021 07:46:46
transformers
10,766
closed
auto model encodings for a text snippet return different floating-point values across different batch sizes
## Environment info - `transformers` version: 4.4.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: yes (but the bug issue is irrespective of it) - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik, @patrickvonplaten ## Information Model I am using : `bert-base-cased` and `sentence-transformers/distilbert-base-nli-stsb-mean-tokens` Consider the following code: ```python # pip install transformers import torch device = "cuda" if torch.cuda.is_available() else "cpu" print(device) import transformers from transformers import AutoModel, AutoTokenizer name = "sentence-transformers/distilbert-base-nli-stsb-mean-tokens" model = AutoModel.from_pretrained(name) tokenizer = AutoTokenizer.from_pretrained(name) model.to(device) model.eval() from tqdm.autonotebook import trange for ntimes in trange(1, 200, 1, desc="ntimes", disable=False): s = ['This framework generates embeddings for each input sentence' for _ in range(ntimes)] f = tokenizer(s, padding=True, truncation='longest_first', return_tensors="pt", max_length=128) f = f.to(device) with torch.no_grad(): out = model(**f, return_dict=False) t = out[0] # token_embedding print(str(ntimes).zfill(4), t[0][0][:5].tolist()) ``` The testing setup is as follows: For every batch size considered in 1 to 200, the model output (last layer's output) for first sentence is taken and compared. Ideally, it is expected to be same but depending on the batch size, the output varies. Although the differences are after several decimal places, it still creates an issue when rounding off or when used for exact-text-match tasks. An example when comparing first 5 values of CLS token's positional representation is printed below: ``` batch_size first_5_values 0001 [-0.4345831274986267, 0.19430403411388397, -0.008721709251403809, 0.16533663868904114, -0.21307958662509918] 0002 [-0.4345831274986267, 0.19430403411388397, -0.008721709251403809, 0.16533663868904114, -0.21307958662509918] 0003 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366] 0004 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366] 0005 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366] 0006 [-0.4345828890800476, 0.19430409371852875, -0.0087218526750803, 0.1653369963169098, -0.2130797803401947] 0007 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873] 0008 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873] 0009 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873] 0010 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873] 0011 [-0.43458303809165955, 0.19430424273014069, -0.0087218526750803, 0.16533637046813965, -0.21307967603206635] 0012 [-0.43458303809165955, 0.19430424273014069, -0.0087218526750803, 0.16533637046813965, -0.21307967603206635] 0013 [-0.4345836043357849, 0.19430403411388397, -0.008721555583178997, 0.16533656418323517, -0.21307919919490814] 0014 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0015 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 
0.16533610224723816, -0.2130793035030365] 0016 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0017 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0018 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0019 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0020 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0021 [-0.4345839321613312, 0.19430312514305115, -0.008722005411982536, 0.16533663868904114, -0.21307897567749023] 0022 [-0.43458428978919983, 0.1943034529685974, -0.008722092024981976, 0.16533707082271576, -0.21307975053787231] 0023 [-0.43458428978919983, 0.1943034529685974, -0.008722092024981976, 0.16533707082271576, -0.21307975053787231] 0024 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0025 [-0.43458348512649536, 0.19430328905582428, -0.008722656406462193, 0.16533656418323517, -0.21307960152626038] 0026 [-0.43458348512649536, 0.19430328905582428, -0.008722656406462193, 0.16533656418323517, -0.21307960152626038] 0027 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0028 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0029 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0030 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0031 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0032 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0033 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0034 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156] 0035 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0036 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0037 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0038 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0039 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0040 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156] 0041 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544] 0042 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544] 0043 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544] 0044 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, -0.21307994425296783] 0045 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, -0.21307994425296783] 0046 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, 
-0.21307994425296783] 0047 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0048 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0049 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0050 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365] 0051 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365] 0052 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365] 0053 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545] 0054 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545] 0055 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545] 0056 [-0.43458402156829834, 0.19430415332317352, -0.008722043596208096, 0.16533638536930084, -0.2130793184041977] 0057 [-0.43458402156829834, 0.19430415332317352, -0.008722043596208096, 0.16533638536930084, -0.2130793184041977] 0058 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141] 0059 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141] 0060 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141] 0061 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141] 0062 [-0.4345838725566864, 0.19430425763130188, -0.008721861988306046, 0.16533733904361725, -0.21307975053787231] 0063 [-0.4345838725566864, 0.19430425763130188, -0.008721861988306046, 0.16533733904361725, -0.21307975053787231] 0064 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0065 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0066 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0067 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0068 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0069 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0070 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0071 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0072 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783] 0073 [-0.4345824718475342, 0.1943037360906601, -0.00872176606208086, 0.16533659398555756, -0.2130804806947708] 0074 [-0.4345824718475342, 0.1943037360906601, -0.00872176606208086, 0.16533659398555756, -0.2130804806947708] 0075 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0076 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0077 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0078 
[-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0079 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365] 0080 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022] 0081 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022] 0082 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022] 0083 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0084 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0085 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0086 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0087 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0088 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0089 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0090 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0091 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0092 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0093 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0094 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0095 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0096 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0097 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0098 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0099 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0100 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0101 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0102 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0103 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0104 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0105 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0106 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0107 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0108 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, 
-0.21307995915412903] 0109 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0110 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0111 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0112 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0113 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0114 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0115 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0116 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0117 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0118 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0119 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0120 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0121 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0122 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0123 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0124 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0125 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0126 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0127 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0128 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0129 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0130 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0131 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0132 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0133 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0134 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0135 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0136 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0137 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0138 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0139 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 
0.16533765196800232, -0.21307995915412903] 0140 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0141 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0142 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0143 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0144 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0145 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0146 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903] 0147 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0148 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0149 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0150 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0151 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0152 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0153 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0154 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0155 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0156 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0157 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0158 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0159 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0160 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0161 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768] 0162 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0163 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0164 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0165 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0166 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0167 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0168 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0169 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0170 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0171 
[-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0172 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0173 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0174 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0175 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0176 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0177 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186] 0178 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0179 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0180 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0181 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0182 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0183 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0184 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0185 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0186 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0187 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0188 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0189 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0190 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0191 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0192 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0193 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0194 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0195 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0196 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0197 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0198 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] 0199 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768] ``` ## To reproduce Run the snippet of code provided above Detailed snippets are available at this [colab notebook](https://colab.research.google.com/drive/19yXek9nx4E2pZTqk8JsS-tAhYjgZ5yGG?usp=sharing) ## Expected behavior It is expected that an input has 
same representation irrespective of the batch size used to obtain it.
03-17-2021 07:36:42
03-17-2021 07:36:42
Hello! Thank you for your report. One question: - Do you still observe this when you're not using padding? Padding can influence values because of the padding tokens, even with attention masks. Also, you're using a `_batch_to_device` method, but you should just be able to cast the batch to the device :) ```py f = tokenizer(s, padding=True, truncation='longest_first', return_tensors="pt", max_length=128) f = f.to(device) ```<|||||>Hi @LysandreJik , thanks for `.to(device)` thingy. Regarding the bug, no i am not using padding tokens. For example, a batch two in above experimental setup looks like the following: ``` {'input_ids': tensor([[ 101, 2023, 7705, 19421, 7861, 8270, 4667, 2015, 2005, 2169, 7953, 6251, 102], [ 101, 2023, 7705, 19421, 7861, 8270, 4667, 2015, 2005, 2169, 7953, 6251, 102]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')} ``` I get this issue irrespective of whether I use `padding=True` or `padding=False` <|||||>Okay, I see, thank you! Second question: do you obtain the same if you're running on CPU? I'm currently on a CPU setup and tried running your code, I have exactly the same values for each. GPUs are known for numerical instabilities, so I wouldn't be surprised if this was the source of the issue!<|||||>@LysandreJik I tried on a CPU. I see different values for batch size 1 vs. greater than 1. For latter, all are exactly same. But still i see the following differences: ``` cpu 0001 [-0.43458425998687744, 0.19430384039878845, -0.008721470832824707, 0.16533654928207397, -0.2130793333053589] 0002 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365] 0003 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365] 0004 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365] 0005 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0006 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0007 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0008 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0009 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0010 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0011 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0012 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0013 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0014 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0015 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0016 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0017 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] 0018 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, 
-0.21308015286922455] 0019 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455] ``` This is obtained on following system: - `transformers` version: 4.3.2 - Platform: Darwin-20.3.0-x86_64-i386-64bit - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) ``` cpu 0001 [-0.43458291888237, 0.19430391490459442, -0.00872180424630642, 0.1653362363576889, -0.21307975053787231] 0002 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0003 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0004 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0005 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0006 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0007 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0008 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0009 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0010 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0011 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0012 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0013 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0014 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0015 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0016 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0017 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0018 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0019 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0020 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0021 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0022 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] 0023 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635] ``` This is obtained on following system: - `transformers` version: 4.4.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) Are you getting similar results or are you ending up getting exact same values irrespective of batch size equal or greater than 1 ?<|||||>Yes, you're right, the difference is between batch size == 1 and batch size > 1! 
Talking about it with team members, we guess it's because the kernels used to compute the results differ according to the dimensions, as they're optimized differently. For batch size = 1, the model input would essentially be in one dimension (the vector of tokens), while for batch size > 1, the model input would essentially be in two dimensions (an array of tokens). IMO this is more of a PyTorch issue (if it's an issue in the first place) than a `transformers` issue!<|||||>Thanks for the information!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
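As a follow-up to the explanation above, a small sketch showing that the reported differences sit at float32 round-off level; the two vectors below are the first two values of the batch-size-1 and batch-size-2 CPU outputs quoted earlier:
```python
import torch

bs1 = torch.tensor([-0.43458425998687744, 0.19430384039878845])  # batch size 1 (CPU run above)
bs2 = torch.tensor([-0.4345836639404297, 0.1943041831254959])    # batch size > 1 (CPU run above)

print((bs1 - bs2).abs().max())               # ~6e-7, i.e. float32 round-off territory
print(torch.allclose(bs1, bs2, atol=1e-5))   # True: approximate comparison is the usual check
```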
transformers
10,765
closed
Cannot import name swish from transformers.activations
I have installed `transformers v4.4.1` and `tensorflow v2.4.1`. I tried to run `from transformers.activations import gelu, gelu_new, swish` and get an error like this: `ImportError: cannot import name 'swish' from 'transformers.activations' (/Users/array/opt/miniconda3/lib/python3.7/site-packages/transformers/activations.py)` Is there any solution for this error? Thank you πŸ™
03-17-2021 06:24:03
03-17-2021 06:24:03
Hi! `swish` is not importable because it isn't available. `swish` is another name for `silu`, but arrived after it so the name you can use is `silu`: ```py >>> from transformers.activations import silu ``` However, in our `ACT2FN` dict we have support for both `swish` and `silu`, so that you can do: ```py >>> from transformers.activations import ACT2FN >>> swish = ACT2FN["swish"] >>> silu = ACT2FN["silu"] ```<|||||>Thank you
transformers
10,764
closed
TokenClassificationPipeline: top-k predictions
# πŸš€ Feature request Optional argument for TokenClassificationPipeline to output top-k predictions instead of limiting output to argmax. ## Motivation Having access to the top-k prediction distribution is useful in a number of scenarios, such as confidence calibration (https://arxiv.org/abs/1706.04599) or generating pseudo-labels (https://arxiv.org/abs/1911.04252). ## Your contribution I'm happy to submit a PR with the proposed changes, if this contribution is deemed useful.
03-17-2021 06:19:00
03-17-2021 06:19:00
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
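Until such an option exists, one possible workaround is to call the model directly and take `torch.topk` over the per-token probabilities; a sketch using an example NER checkpoint:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "elastic/distilbert-base-cased-finetuned-conll03-english"  # any token-classification checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("Sir Testy McTest is testiful", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)  # (1, seq_len, num_labels)

top_scores, top_ids = probs.topk(k=3, dim=-1)       # top-3 labels per token instead of the argmax
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, scores, ids in zip(tokens, top_scores[0], top_ids[0]):
    print(tok, [(model.config.id2label[i.item()], round(s.item(), 4)) for i, s in zip(ids, scores)])
```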
transformers
10,763
closed
TokenClassificationPipeline: ignoring subwords
## Environment info - `transformers` version: 4.4.1 - Platform: Linux-4.15.0-136-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Library: - pipelines: @LysandreJik ## Information Model I am using (Bert, XLNet ...): Any NER model, e.g. elastic/distilbert-base-cased-finetuned-conll03-english The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Ignoring subwords using the TokenClassificationPipeline. ## To reproduce Steps to reproduce the behavior: ``` import transformers pl = transformers.pipeline('ner', model="elastic/distilbert-base-cased-finetuned-conll03-english", tokenizer="elastic/distilbert-base-cased-finetuned-conll03-english", ignore_labels=[], ignore_subwords=True) output = pl("Sir Testy McTest is testiful") ``` This outputs: ``` [{'word': 'Sir', 'score': 0.997665524482727, 'entity': 'O', 'index': 1, 'start': 0, 'end': 3}, {'word': 'Test', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 8}, {'word': '##y', 'score': 0.9581826329231262, 'entity': 'B-PER', 'index': 3, 'start': 8, 'end': 9}, {'word': 'M', 'score': 0.9105736613273621, 'entity': 'I-PER', 'index': 4, 'start': 10, 'end': 11}, {'word': '##c', 'score': 0.9090507626533508, 'entity': 'I-PER', 'index': 5, 'start': 11, 'end': 12}, {'word': '##T', 'score': 0.9545289874076843, 'entity': 'I-PER', 'index': 6, 'start': 12, 'end': 13}, {'word': '##est', 'score': 0.9441993832588196, 'entity': 'I-PER', 'index': 7, 'start': 13, 'end': 16}, {'word': 'is', 'score': 0.9999386072158813, 'entity': 'O', 'index': 8, 'start': 17, 'end': 19}, {'word': 'test', 'score': 0.9998794198036194, 'entity': 'O', 'index': 9, 'start': 20, 'end': 24}, {'word': '##iful', 'score': 0.9999022483825684, 'entity': 'O', 'index': 10, 'start': 24, 'end': 28}] ``` ## Expected behavior The expected behavior would be the subwords token being merged with the preceding token, and their predictions ignored e.g. ``` {'word': 'Testy', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 9} ``` instead of ``` {'word': 'Test', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 8}, {'word': '##y', 'score': 0.9581826329231262, 'entity': 'B-PER', 'index': 3, 'start': 8, 'end': 9} ``` In the current logic the flag `ignore_subwords` seems to be used only in combination with the `grouped_entities` https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L216 . 
The output obtained from the example input above, setting both flags as True: ``` [{'entity_group': 'O', 'score': 0.997665524482727, 'word': 'Sir', 'start': 0, 'end': 3}, {'entity_group': 'PER', 'score': 0.8546116948127747, 'word': 'Testy McTest', 'start': 4, 'end': 16}, {'entity_group': 'O', 'score': 0.9999090135097504, 'word': 'is testiful', 'start': 17, 'end': 28}] ``` while setting `grouped_entities=True` and `ignore_subwords=False` outputs ``` [{'entity_group': 'O', 'score': 0.997665524482727, 'word': 'Sir', 'start': 0, 'end': 3}, {'entity_group': 'PER', 'score': 0.7986497282981873, 'word': 'Test', 'start': 4, 'end': 8}, {'entity_group': 'PER', 'score': 0.9353070855140686, 'word': '##y McTest', 'start': 8, 'end': 16}, {'entity_group': 'O', 'score': 0.9999067584673563, 'word': 'is testiful', 'start': 17, 'end': 28}] ``` This seems counterintuitive as the grouped entities shouldn't be fragmented by subwords, and ignoring subwords shouldn't be conditioned on grouping entitities.
03-17-2021 05:50:21
03-17-2021 05:50:21
Hello! Could you take a look at https://github.com/huggingface/transformers/pull/10568 and let me know if it's interesting for you? It proposes a refactor of the two keywords you mentioned.<|||||>> Hello! Could you take a look at #10568 and let me know if it's interesting for you? It proposes a refactor of the two keywords you mentioned. Yes! That would solve this issue. Thanks for the pointer. I'll post comments there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
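In the meantime, a rough post-processing sketch (not the pipeline's own logic) that merges `##` continuation tokens into the preceding word and keeps the first sub-token's prediction:
```python
def merge_subwords(entities):
    """Merge WordPiece continuation tokens ("##...") into the preceding entry."""
    merged = []
    for ent in entities:
        if ent["word"].startswith("##") and merged:
            merged[-1]["word"] += ent["word"][2:]  # extend the surface form
            merged[-1]["end"] = ent["end"]         # keep the first sub-token's score/entity
        else:
            merged.append(dict(ent))
    return merged

# merge_subwords(pl("Sir Testy McTest is testiful")) would then yield, for example,
# {'word': 'Testy', 'entity': 'B-PER', 'start': 4, 'end': 9, ...} instead of two fragments.
```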
transformers
10,762
closed
[DeepSpeed] simplify init
This PR simplifies the `deepspeed.initialize` setup, made possible by https://github.com/microsoft/DeepSpeed/pull/825. The required DeepSpeed version that includes that change is already in place. @sgugger
03-17-2021 05:28:48
03-17-2021 05:28:48
transformers
10,761
closed
[doc] [testing] extend the pytest -k section with more examples
This PR adds more examples of using `pytest -k` - I always forget that I need `-k "A or B"` when I want several tests - I keep trying `and`, which doesn't match any. @sgugger
03-17-2021 05:10:57
03-17-2021 05:10:57
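For reference, a few keyword expressions of the kind this PR documents (pytest's `-k` uses lowercase `or`/`and`/`not`), shown here through pytest's Python entry point, which takes the same arguments as the CLI; the keywords and paths are illustrative:
```python
import pytest

# Each call selects tests exactly as `pytest -k "<expr>" <path>` would on the command line.
pytest.main(["-k", "adafactor or adam", "tests/test_optimization.py"])  # tests matching either keyword
pytest.main(["-k", "trainer and not deepspeed", "tests/"])              # intersection with an exclusion
pytest.main(["-k", "not flaky", "tests/test_trainer.py"])               # everything except the matches
```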
transformers
10,760
closed
[DeepSpeed] improve checkpoint loading code plus tests
This PR further improves the DeepSpeed integration: * checkpoint resuming code has been cleaned up * detailed checkpoint saving and resuming from checkpoint tests added * a small reshuffle made in `test_trainer.py` to enable re-using helper functions in other test modules * switched `test_trainer.py` to `TestCasePlus` so it's easier to deal with temp dirs during debug * adjusted `init_deepspeed` to make a deepcopy of the config dict passed to it, so that the user's copy isn't affected - needed at least for tests Note that I made a failed attempt to load from the resume point fatal under DeepSpeed. I'm not sure why the normal code just warns if a wrong path is passed. Unless I'm missing something, if a user expects to resume and it is not possible, it should be fatal IMHO, so that they can correct their launching code. @sgugger
03-17-2021 04:25:50
03-17-2021 04:25:50
Great, thank you for the feedback, @sgugger - I will add it separately https://github.com/huggingface/transformers/pull/10777
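A tiny illustration of the deepcopy point above; the helper name is hypothetical and not the actual `init_deepspeed` code:
```python
import copy

def prepare_ds_config(user_config: dict, train_batch_size: int) -> dict:
    """Return an adjusted copy so the caller's dict is left untouched."""
    ds_config = copy.deepcopy(user_config)  # without the deepcopy, the mutation below
    ds_config["train_micro_batch_size_per_gpu"] = train_batch_size  # would leak back to the caller
    return ds_config

original = {"train_micro_batch_size_per_gpu": "auto", "fp16": {"enabled": True}}
adjusted = prepare_ds_config(original, 8)
print(original["train_micro_batch_size_per_gpu"])  # still "auto"
```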
transformers
10,759
closed
AlbertForMaskedLM always has bad results
I'm a user of your great project. I'm trying to build my own ALBERT language model from scratch, following [this article](https://mlcom.github.io/Create-Language-Model/), which seems similar to [your tutorial notebook](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=YZ9HSQxAAbme). I have already built a BERT model for my language by following it, and it produced satisfactory results. ``` BertForMaskedLM : about 0.6 loss BertForSequenceClassification : accuracy 0.88 on my dataset (binary classification) ``` To use ALBERT, I trained a tokenizer with SentencePiece and then pretrained the model. The tokenizer gives good results, but the language model has a higher loss than my BERT model (about 2.7-2.8). Since the loss looked bad, I checked the output with the fill-mask pipeline, and every sentence produces the same predictions. Sentence A ```json [{'score': 0.7783917188644409, 'token': 32002, 'token_str': '<pad>'}, {'score': 0.008062483742833138, 'token': 3, 'token_str': '.'}, {'score': 0.0054806191474199295, 'token': 4, 'token_str': ','}, ... ``` Sentence B ```json [{'score': 0.7783915400505066, 'token': 32002, 'token_str': '<pad>'}, {'score': 0.008062485605478287, 'token': 3, 'token_str': '.'}, {'score': 0.005480623338371515, 'token': 4, 'token_str': ','}, ... ``` I want to solve this problem, but I couldn't find the answer even though I read a lot of articles. I look forward to the opinions of the contributors and users of this project.
03-17-2021 03:58:16
03-17-2021 03:58:16
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,758
closed
Even slower when using multiple gpus with sharded_ddp
## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True

### Who can help
Library:
- trainer: @sgugger

## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-cnn

## To reproduce
Steps to reproduce the behavior:
When trying to use sharded_ddp and multiple GPUs to accelerate the training process, the command I used is as follows:
```
CUDA_VISIBLE_DEIVCES=6,7 python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \
--sharded_ddp --[other_args]
```
However, the experiment took even longer than the experiment using a single GPU. May I ask which part of what I did was wrong?
03-17-2021 02:44:27
03-17-2021 02:44:27
`--sharded_ddp` is not there to accelerate your training, it's there to save GPU memory (for very large models) at some cost in training time. So if you can fine-tune on one GPU, you should definitely not use this option.<|||||>@sgugger as far as I know sharded_ddp is only for distributed training, so I'm not sure why you suggested using sharded_ddp on one GPU? <|||||>I have not suggested that. I have said to just fine-tune your model on one GPU without any kind of DDP (so no `--sharded_ddp`). It does not make any sense to use this option if your model and its training can fit on one GPU as it is there to reduce GPU memory, not speed up training.<|||||>thanks a lot, now I understood what you meant.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
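A rough way to verify the trade-off described in this thread - compare wall time and peak GPU memory for a run with and without `--sharded_ddp` - is a small timing helper along these lines (a sketch assuming a single CUDA device; `run_one_step` is a placeholder for one training step of your script):

```python
import time

import torch


def report_step(run_one_step):
    # Measures one training step: wall time plus peak GPU memory on device 0.
    torch.cuda.reset_peak_memory_stats()
    start = time.time()
    run_one_step()
    torch.cuda.synchronize()
    peak_mb = torch.cuda.max_memory_allocated() / 2**20
    print(f"step time: {time.time() - start:.3f}s, peak GPU memory: {peak_mb:.0f}MB")
```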
transformers
10,757
closed
BERT for Regression predicts constant
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Ubuntu? - Python version: 3.8 - Tensorflow version (GPU?): 2.4.1 (Yes) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - albert, bert, xlm: @LysandreJik - tensorflow: @jplu ## Information Model I am using (Bert, XLNet ...): BERT. The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: I am doing BERT for a regression `]0-1]` in a deep (sequential) genomic data. Similar to [DNABERT](https://github.com/jerryji1993/DNABERT) and [this medium post](https://towardsdatascience.com/bringing-bert-to-the-field-how-to-predict-gene-expression-from-corn-dna-9287af91fcf8) My code is quite simple, basically I am using BERT from the lib, averaging the outputs and passing it to a head model. ```python class Predictor(Model): def __init__( self, batch_size: int, sequence_size: int, hidden_layers: List[int], bert_params: dict ): super().__init__() self._embedder = Embedder(sequence_size, bert_params) self._head_model = _create_head_model(batch_size, hidden_layers, bert_params["hidden_size"]) def call(self, inputs): embedding = self._embedder(inputs) return self._head_model(embedding) class Embedder(Model): def __init__(self, sequence_size: int, bert_params: dict): super().__init__() self._bert = _create_bert_model(sequence_size, bert_params) self.avg_pooling = GlobalAveragePooling1D() def call(self, sequence): x = self._bert(sequence) return self.avg_pooling(x.last_hidden_state) def _create_bert_model(sequence_size: int, bert_params: dict) -> TFBertModel: tokenizer = KmerTokenizer.load() sequence_length = sequence_size - tokenizer.k config = BertConfig( vocab_size=tokenizer.vocab_size, max_position_embeddings=sequence_length, **bert_params, ) return TFBertModel(config) def _create_head_model(batch_size: int, n_mixtures: int, hidden_layers: List[int], input_size: int): embedding = Input(input_size, batch_size) x = embedding for n_neurons in hidden_layers: x = Dense(n_neurons, activation=nn.gelu)(x) output = Dense(1, x) return Model(embedding, output) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ``` X -> Tokenized sequence of integers ([1,5,10010, 2,200, 304,1001,535,341]) y -> Float, ]0,1] ``` My Y variable is distributed like this: ![image](https://user-images.githubusercontent.com/11489228/111401808-c30d7980-86a8-11eb-86f4-21ce677ecf7d.png) ## Problem The problem I am having is that my model is predicting constant. 
Scatter (y_true x y_pred) ![image](https://user-images.githubusercontent.com/11489228/111402202-8a21d480-86a9-11eb-96fc-4679c7e81217.png) Predictions histogram: ![image](https://user-images.githubusercontent.com/11489228/111402564-2e0b8000-86aa-11eb-9b77-31239674e377.png) Parameters: ```yaml batch_size: 16 training_steps: 20 sequence_size: 600 bert: hidden_size: 32 num_attention_heads: 8 num_hidden_layers: 2 hidden_layers: [32] early_stopping: patience: 15 optimizer: type: "RectifiedAdam" lr: 0.0004 epsilon: 0.000001 beta_2: 0.98 total_steps: 100 weight_decay: 0.01 loss: "mean_squared_error" ``` ## What I have tried * Smaller/Larger learning rate * Smaller/Larger batch size * Shallower/Deeper network * Changing Y distribution (Std Scaling) * Mixture Density Networks I simply cannot get through this constant prediction
03-17-2021 01:53:57
03-17-2021 01:53:57
Hello! I think this more a question to address on the forum https://discuss.huggingface.co/ as it doesn't looks like to be related to a bug in the library.
transformers
10,756
closed
Google Colab TypeError: expected str, bytes or os.PathLike object, not NoneType
## Environment info
- `transformers` version: 4.4.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)

The problem arises when using: I started getting this error this morning without any changes on my side, just loading my old Colab notebook (which worked a few hours ago without any problem!). The code that breaks is:
```
tokenizer = AutoTokenizer.from_pretrained('xlm-clm-ende-1024')

/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    417         else:
    418             if tokenizer_class_py is not None:
--> 419                 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
    420             else:
    421                 raise ValueError(

/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1703
   1704         return cls._from_pretrained(
-> 1705             resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
   1706         )
   1707

/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
   1774         # Instantiate tokenizer.
   1775         try:
-> 1776             tokenizer = cls(*init_inputs, **init_kwargs)
   1777         except OSError:
   1778             raise OSError(

/usr/local/lib/python3.7/dist-packages/transformers/models/xlm/tokenization_xlm.py in __init__(self, vocab_file, merges_file, unk_token, bos_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens, lang2id, id2lang, do_lowercase_and_remove_accent, **kwargs)
    645             self.encoder = json.load(vocab_handle)
    646         self.decoder = {v: k for k, v in self.encoder.items()}
--> 647         with open(merges_file, encoding="utf-8") as merges_handle:
    648             merges = merges_handle.read().split("\n")[:-1]
    649             merges = [tuple(merge.split()[:2]) for merge in merges]

TypeError: expected str, bytes or os.PathLike object, not NoneType
```

## To reproduce
Steps to reproduce the behavior:
```
!pip install transformers
!pip install pytorch-transformers
!pip install tensorboardX
```

## Expected behavior
It all worked this morning without problems, and for many months before that; I have not touched the code.
03-16-2021 22:14:00
03-16-2021 22:14:00
Hi @lenyabloko, indeed, there is an issue with the online repository of `xlm-clm-ende-1024`. @sgugger is currently fixing it right now. Thanks for letting us know, we'll let you know when it is fixed.<|||||>It should be fixed now, thanks to @sgugger: [`huggingface#e824d7b`](https://huggingface.co/xlm-clm-ende-1024/commit/e824d7bf481ebf027a50407dd378ad3de4031d90)
transformers
10,755
closed
Online decoding for ASR
# πŸš€ Feature request
Are there plans to implement online decoding for the speech recognition models such as wav2vec2 and XLSR? More specifically, to be able to receive audio in short chunks and output partial transcripts as they become available.

## Motivation
Many use cases are covered by the current wav2vec2 model in the library, involving batch recognition of pre-recorded audio. However, for an online application that wants to continuously recognize speech on a live input stream, this may not be sufficient.
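For context, the closest thing available without such a feature is naive chunked inference, sketched below with the public `facebook/wav2vec2-base-960h` checkpoint. Each chunk is transcribed independently, so this is not true streaming and words cut at chunk boundaries may come out garbled:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")


def transcribe_chunk(audio_chunk):
    # audio_chunk: a 1-D float array of 16kHz samples (e.g. a short slice of a stream)
    inputs = tokenizer(audio_chunk, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return tokenizer.batch_decode(predicted_ids)[0]
```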
03-16-2021 21:40:27
03-16-2021 21:40:27
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discusss.huggingface.co) instead? cc @patrickvonplaten Thanks!<|||||>Sure, no problem, sorry about that. There seems to be a typo in the forum link - for anybody reading this in the future here are the [forums:](https://discuss.huggingface.co/)
transformers
10,754
closed
run_clm.py gpt-2 training example in documentation runs out of memory on a 32gb v100, should be verified and/or modified
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: aye! - Using distributed or parallel set-up in script?: no Models: - gpt2: @patrickvonplaten, @LysandreJik Library: - benchmarks: @patrickvonplaten Documentation: @sgugger ## Information Model I am using (Bert, XLNet ...): running the run_clm.py fine-tuning script on gpt-2 The problem arises when using: * [x] the official example scripts: tranformers/example/language-modeling/run_clm.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) Using the official huggingface wikitext-2-raw-v1 dataset * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Using the example here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling When fine-tuning gpt-2 with run_clm.py, this should run on a k80 (24gb of RAM) in about an hour according to the example. However, I'm running out of memory with default settings... and I'm using a v100 inside a DXG-9, with 32gb of memory: ``` nvidia-smi Tue Mar 16 15:19:23 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 | | N/A 34C P0 44W / 300W | 0MiB / 32480MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ``` Pasting the command here for clarity. ``` python run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --do_eval \ --output_dir /tmp/test-clm ``` and the eponymous error: `RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 31.72 GiB total capacity; 30.32 GiB already allocated; 187.88 MiB free; 30.38 GiB reserved in total by PyTorch)` That's a ton of memory... is this right? Or is there some type of memory leak? Now, this can be fixed by setting `--per_device_train_batch_size 4` but I highly doubt the current format will work out of the box on a k80 w/out changing anything (which I can't test because I don't have access to one) using the default batch size of `8`, and this should be reflected in the example and/or verified with the current `run_clm.py`. 
Now, on a v100, it did finish in a little under 14 minutes, which is incredibly fast-- so I'm not complaining-- but I know batch size should be as high as possible on these things to get the best results, and I was really hoping it would work with 8 (in fact I was hoping I could jack it up to 16 by wrapping it in nn.torchDataParallel, but that's for another day.) This leads me to another question-- I know you can do torch.distributed.launch with these scripts, is there one that wraps the model in `nn.parallel.DistrubtedDataParallel` so that you can chunk a larger batch size across multiple GPU's and utilize the extra memory, or should this be done by hand? If so, maybe I will create a PR and add an option for this inside the three example scripts as it would be quite beneficial. Example: ``` model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank) ``` Results: ``` Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 820.2706, 'train_samples_per_second': 2.121, 'epoch': 3.0} 100%|##########################################################################################################################| 1740/1740 [13:40<00:00, 2.12it/s] [INFO|trainer.py:1408] 2021-03-16 15:39:31,084 >> Saving model checkpoint to /tmp/test-clm [INFO|configuration_utils.py:304] 2021-03-16 15:39:31,085 >> Configuration saved in /tmp/test-clm/config.json [INFO|modeling_utils.py:817] 2021-03-16 15:39:32,049 >> Model weights saved in /tmp/test-clm/pytorch_model.bin 03/16/2021 15:39:32 - INFO - __main__ - ***** Train results ***** 03/16/2021 15:39:32 - INFO - __main__ - epoch = 3.0 03/16/2021 15:39:32 - INFO - __main__ - train_runtime = 820.2706 03/16/2021 15:39:32 - INFO - __main__ - train_samples_per_second = 2.121 03/16/2021 15:39:32 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:1600] 2021-03-16 15:39:32,118 >> ***** Running Evaluation ***** [INFO|trainer.py:1601] 2021-03-16 15:39:32,118 >> Num examples = 240 [INFO|trainer.py:1602] 2021-03-16 15:39:32,119 >> Batch size = 8 100%|##############################################################################################################################| 30/30 [00:08<00:00, 3.52it/s] 03/16/2021 15:39:40 - INFO - __main__ - ***** Eval results ***** 03/16/2021 15:39:40 - INFO - __main__ - perplexity = 20.967772820757663 ```
03-16-2021 20:58:14
03-16-2021 20:58:14
The doc has not been updated in a while, so it's probably not up to date, yes. I think the corresponding script probably had different defaults in earlier versions (either a shorter sequence length or a smaller batch size). As for your second question, the script does work with torch.distributed.launch without changes. See the [main examples README](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision) for more information.<|||||>OK. Thanks for validating, that's kind of what I figured. Secondly, thank you -- I think I was expecting it to split the batch for me. So in the example on the page you sent:
```
python -m torch.distributed.launch \
  --nproc_per_node 8 text-classification/run_glue.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 8 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mnli_output/
```
that would be the equivalent of running a total batch size of 64 on a single GPU? (`per_device_train_batch_size=8`) * (`nproc_per_node=8`) = `64` Much appreciated.<|||||>Yes, that's correct!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,753
closed
[DeepSpeed] ZeRO Stage 3
This PR implements the DeepSpeed ZeRO stage 3 integration:
* [x] removes the "wind-down" of the deepspeed setup at the end of train, since zero3 can't do inference w/o this setup - we will have other ways to reclaim the memory of the no-longer-needed optimizer in the future.
* [x] adds initial support for eval w/o train - more work will be done in the future
* [x] to support `predict_with_generate`, extends `generate` and its 5 beam search variants to support a new `synced_gpus` flag which is needed by ZeRO stage3 - under ZeRO3 parallelization, if this gpu finished before max_length was reached, it must continue running forward so that the other gpus which may not have finished their generation yet can complete, as they rely on this gpu to receive its slice of params. Currently deployed for DeepSpeed - but we may need to do the same for fairscale elsewhere.
* [x] because now we are forced to run all gpus in sync, the `generate` logic is also now equipped with an early-stopping mechanism that is synchronized across all participating gpus
* [x] reworks how the pretrained model is loaded - `from_pretrained` is now zero3 aware and does a whole lot to efficiently preload massive models
* [x] since `state_dict` is fake under `zero3` it can't be saved and used - so care is taken to either not save the bogus model, or to reconsolidate the weights before saving if `stage3_gather_fp16_weights_on_model_save` is enabled
* [x] adds new DeepSpeed configuration docs and basic tuning recommendations
* [x] adds lots of new tests, now testing zero2 and zero3 separately
* [x] fixes a disappearing std stream problem in an older test using a workaround
* [x] a new DS feature: `deepspeed.zero.register_external_parameter(self, self.layer1.weight)` - haven't needed it so far - need to find which models may need this feature. It is needed when a layer accesses weights of another layer, but most of our models don't do that, so it is just documented for now.

DeepSpeed PRs that need to be merged and released as 0.3.14:
* [x] https://github.com/microsoft/DeepSpeed/pull/881 (memory issue)
* [x] https://github.com/microsoft/DeepSpeed/pull/884 support lists of params in `GatheredParameters` (needed for pretrained model load)
* [x] https://github.com/microsoft/DeepSpeed/pull/892 script to extract consolidated fp32 weights for zero2 and zero3
* [x] https://github.com/microsoft/DeepSpeed/pull/893 save the consolidated fp16 weights under zero3
* [x] https://github.com/microsoft/DeepSpeed/pull/896 memory leak fix needed for tests
* [x] `deepspeed==0.3.14` released

Maybe:
* [x] https://github.com/microsoft/DeepSpeed/pull/882 - this needs to be changed to be optional - won't be efficient by default

Future PR TODO:
* [ ] make loading and resuming more efficient - we have to find a way to not preload the model from weights when we are resuming from a checkpoint, and of course not init the weights. Currently we do it 3 times! A huge overhead for big models.
03-16-2021 20:51:56
03-16-2021 20:51:56
OK, the code-base is ready for review. I want to add a few more performance notes to the docs tomorrow. I will work on the wasteful weights init/preloading/overwriting/resuming in a separate PR next, as it's all intertwined, and will also look at how to make `from_pretrained` support all those different ways (i.e. it's not just deepspeed-specific). This is what we started discussing at https://github.com/huggingface/transformers/issues/10893 and much earlier at https://github.com/huggingface/transformers/issues/9205 Thank you!<|||||>@sgugger, the docs are ready for your review when you have a bit of time. While at it, I added some installation notes, unrelated to ZeRO3, to both fairscale and deepspeed. Thank you!
transformers
10,752
closed
Patches full import failure when sentencepiece is not installed
The `M2M100Tokenizer` and `DebertaV2Tokenizer` should be under the `if is_sentencepiece_available()` cc @patil-suraj
03-16-2021 19:53:34
03-16-2021 19:53:34
transformers
10,751
closed
Tensorflow Keras model.loads_weights() breaks on TFElectraModel trained with v4.3.0
## Environment info
- `transformers` version: 4.4.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
@jplu

## Information
Model I am using: TFElectraModel

The problem arises when using:
* my own modified scripts: see attachment

The task I am working on is:
* Information Retrieval on the SQuAD dataset

## To reproduce
Steps to reproduce the behavior:
1. train a keras model with a TFElectraModel as a layer on transformers 4.3.0
2. switch to transformers 4.4.0
3. call model.load_weights on the keras model

OR:
1. "run all" on the attached notebook, wait for the weights save after the (very short) training
2. change the pip transformers version to 4.4.0 in the first cell
3. comment out model.fit to avoid overriding the weights
4. restart and run all, wait for model.load_weights to fail

[IR_TFElectraModel.ipynb.zip](https://github.com/huggingface/transformers/files/6151831/IR_TFElectraModel.ipynb.zip)

ValueError: Cannot assign to variable tf_electra_model_1/electra/embeddings/token_type_embeddings/embeddings:0 due to variable shape (2, 128) and value shape (512, 128) are incompatible

## Expected behavior
I would expect the weights to load, as they correctly do when both train and load are performed on transformers 4.3.0. Anyway, keep up the great job! πŸ€—
03-16-2021 19:49:34
03-16-2021 19:49:34
Thanks for opening this issue! I don't have access to a computer to extract and read your archive; could you please share a Colab or copy/paste the code here in this issue? Thanks!<|||||>> Thanks for opening this issue! I don't have access to a computer to extract and read your archive; could you please share a Colab or copy/paste the code here in this issue? Thanks!

yes sure: https://colab.research.google.com/drive/1tHzUMhveYwYkPOCxZwMA80IkVkRCWGRW?usp=sharing

#### To Reproduce
1. "run all" on the attached notebook, wait for the weights save after the (very short) training
2. change the pip transformers version to 4.4.0 in the first cell
3. comment out model.fit to avoid overriding the weights
4. restart and run all, wait for model.load_weights to fail<|||||>Sorry, I cannot open your Colab :( The access is restricted.<|||||>Sorry, access at the same link is now allowed. :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,750
closed
Patches the full import failure and adds a test
The full import currently fails because some layers are imported when they do not exist. This adds a test in `test_file_utils.py` by trying to import the entire transformers. This failed before the proposed fix. Fixes https://github.com/huggingface/transformers/issues/10749
03-16-2021 19:13:57
03-16-2021 19:13:57
transformers
10,749
closed
bug in new version 4.4.0 sentencepiece is not available
Hi, I am using Colab for a sentiment analysis model. The code suddenly stopped working after a fresh run from yesterday. I noticed that a new version of transformers was released which caused this issue to appear. when trying to import I get this error message: `ModuleNotFoundError: No module named 'sentencepiece'` and after installing sentencepiece using pip I get this error message: `AttributeError: module transformers.models.ibert has no attribute IBertLayer`
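Until the patch lands, a quick way to confirm whether the missing optional dependency is the culprit is to check for it explicitly before importing tokenizer classes that rely on it (a small sketch, not part of the library):

```python
import importlib.util

if importlib.util.find_spec("sentencepiece") is None:
    raise RuntimeError(
        "sentencepiece is not installed; run `pip install sentencepiece` "
        "(or `pip install transformers[sentencepiece]`)."
    )

# This import needs sentencepiece for the slow tokenizer implementation.
from transformers import XLMRobertaTokenizer
```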
03-16-2021 16:58:06
03-16-2021 16:58:06
Hi, thank you for opening an issue. Could you respect the issue template? What code led to this error? What's your environment?

I can run the following without any issues:
```py
>>> from transformers import IBertModel
>>> model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")
```
Thank you for your understanding.<|||||>> Hi, thank you for opening an issue. Could you respect the issue template? What code led to this error? What's your environment?
>
> I can run the following without any issues:
>
> ```python
> >>> from transformers import IBertModel
> >>> model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")
> ```
>
> Thank you for your understanding.

Thanks for replying, I am running this Colab notebook https://colab.research.google.com/drive/1M0ls7EPUi1dwqIDh6HNfJ5y826XvcgGX?usp=sharing

You can reproduce by running all cells and the error will appear on the import cell.

```python
# (1)load libraries
import json, sys, regex
import torch
import GPUtil
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from pytorch_pretrained_bert import BertTokenizer, BertConfig, BertAdam, BertForSequenceClassification
from tqdm import tqdm, trange
import pandas as pd
import os
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score, classification_report, confusion_matrix
##----------------------------------------------------
from transformers import *
from transformers import XLMRobertaConfig
from transformers import XLMRobertaModel
from transformers import AutoTokenizer, AutoModelWithLMHead
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaModel
from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers import AutoTokenizer, AutoModel
```<|||||>Thank you, I can reproduce. We'll release a patch for this in the coming days. By the way, is there a reason you're importing everything from `transformers`, before importing specific layers?
```py
from transformers import *
from transformers import XLMRobertaConfig
from transformers import XLMRobertaModel
from transformers import AutoTokenizer, AutoModelWithLMHead
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaModel
from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors
from transformers import AdamW, get_linear_schedule_with_warmup
from transformers import AutoTokenizer, AutoModel
```<|||||>We just released version v4.4.1 with a patch for this. Thank you for letting us know!
transformers
10,748
closed
Fix URLs from #10744
# What does this PR do? Forgot the `resole/main` in the URLs in #10744 (cc @julien-c)
03-16-2021 15:27:23
03-16-2021 15:27:23
transformers
10,747
closed
Issues with MODEL_FOR_MASKED_LM_MAPPING.keys(), and transformer.utils.check_min_version()
Hey, I recently wanted to pre-train on top of a BERT model and ran into some issues. When I run `python run_mlm.py`, I get the following error:
```
Traceback (most recent call last):
  File "run_mlm.py", line 46, in <module>
    from transformers.utils import check_min_version
ImportError: cannot import name 'check_min_version' from 'transformers.utils' (/PATH/TO/site-packages/transformers/utils/__init__.py)
```
https://github.com/huggingface/transformers/blob/d3d388b934ef515e96246ba643c924d675f6515d/examples/language-modeling/run_mlm.py#L46

After commenting out that line and its import (I know, shame on me), I get the following error:
```
Traceback (most recent call last):
  File "run_mlm.py", line 53, in <module>
    MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
https://github.com/huggingface/transformers/blob/d3d388b934ef515e96246ba643c924d675f6515d/examples/language-modeling/run_mlm.py#L54

Tried with Python 3.7.10 and 3.8.3. Thanks
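The symptom here is typically a mismatch between the example scripts (which track `master`) and an older installed release. A small check along these lines can make that explicit before running the script; the minimum version below is illustrative, not taken from `run_mlm.py`:

```python
from packaging import version

import transformers

REQUIRED = "4.4.0"  # illustrative minimum; the real value lives in the example script
if version.parse(transformers.__version__) < version.parse(REQUIRED):
    raise RuntimeError(
        f"Found transformers {transformers.__version__}, but the master examples "
        f"expect at least {REQUIRED}; install from source with "
        "`pip install git+https://github.com/huggingface/transformers`."
    )
```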
03-16-2021 15:18:53
03-16-2021 15:18:53
Hello! Have you checked the ["Important note" at the top of the examples README](https://github.com/huggingface/transformers/tree/master/examples#important-note)? Did you get this error with a source install?<|||||>@LysandreJik that does the trick - I'll be more thorough in the future with my reading :)
transformers
10,746
closed
Add DistributedSamplerWithLoop
# What does this PR do?
This PR adds a new distributed sampler that provides each process with a number of samples that is a round multiple of the batch size, by looping back to the beginning of the (shuffled) dataset. This is useful:
- for TPUs, to avoid triggering a new XLA compilation for the last training batch
- for model parallelism, to have batches of the same size on all processes

This PR also refactors some logic regarding the world_size and process_rank in the `TrainingArguments`, as well as adds a test of the new `DistributedSamplerWithLoop`.

Tested on:
- single-GPU
- multi-GPU
- TPU
- SageMaker MP
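The core idea can be sketched on plain index lists; this is an illustration of the looping strategy described above, not the sampler's actual code:

```python
def pad_indices_with_loop(indices, batch_size):
    # Extend the (already shuffled) index list by looping back to its start so the
    # total length becomes a round multiple of batch_size.
    remainder = (-len(indices)) % batch_size
    return indices + indices[:remainder]


print(pad_indices_with_loop(list(range(10)), batch_size=4))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1]
```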
03-16-2021 14:42:44
03-16-2021 14:42:44
transformers
10,745
closed
fix M2M100 example
03-16-2021 14:39:51
03-16-2021 14:39:51
transformers
10,744
closed
Remove old links to CDN
# What does this PR do? This PR removes a few links left pointing to `https://cdn.huggingface.co` instead of `https://huggingface.co` (purely cosmetic since they are not actually used anymore, normally).
03-16-2021 14:39:41
03-16-2021 14:39:41
those are not valid files though (check the url template for the other models)<|||||>Oopsie, thanks for flagging!
transformers
10,743
closed
Fix DeBERTa + Conversational pipeline slow tests
03-16-2021 14:33:59
03-16-2021 14:33:59
transformers
10,742
closed
DialoGPT- cannot increase number of conversation turns
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@patrickvonplaten @LysandreJik

## Information
Model I am using (Bert, XLNet ...): DialoGPT

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
I fine-tuned DialoGPT on some sitcom subtitles, and now I am trying to chat with it using the commands listed [here](https://gist.github.com/albusdemens/9cd5602f088720e403f84038e088d696) (in the example, I use the Microsoft fine-tuned model).
1. When I increase the number of conversation turns from 5 to 10, everything goes OK with the Microsoft model.
2. If I increase the number of conversation turns to 10 using my fine-tuned model, reply number six is something like
```
>> User:hello
OurBot: Hello.
>> User:how are things?
OurBot: They're fine.
>> User:cool. Did you have lunch already?
OurBot: I did, actually.
>> User:what did you have?
OurBot: Oh, um, i just wanted to toast.
>> User:where?
**OurBot: !!!hello!**
```

## Expected behavior
Using the DialoGPT model I fine-tuned, I would like to be able to have the same number of turns that I have using the Microsoft model.
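For reference, the decoding loop in the linked gist follows the usual DialoGPT pattern, roughly like the sketch below (shown with the public Microsoft checkpoint; substitute the fine-tuned model path). One thing worth checking when quality degrades after a few turns is whether the accumulated history still fits the model's context window; truncating old turns is a common mitigation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for step in range(10):
    # Append the user's turn, terminated by the EOS token, to the running history.
    user_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```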
03-16-2021 13:54:10
03-16-2021 13:54:10
Hey @albusdemens, Sorry I don't follow here completely. Does the model crash after 6 turns or does it just give a qualitatively bad answer?<|||||>Hey @patrickvonplaten, the second option (the quality of the answers is noticeably worse). Often I get outputs like `?!!`, `!.!!?!?` and similar. Usually low quality outputs don't show up in the first 3-4 conversation turns. When I use the Microsoft model instead, I don't get low-quality results after a few conversation turns. Is there a way I can fix this?<|||||>This sounds very much like the model wasn't trained on long conversations to me...I'm not sure whether it's possible to enforce better quality without retraining the model<|||||>Thanks for your reply! Besides improving the quality of the training data, do you think I should also increase the number of epochs?
transformers
10,741
closed
Fix S2T example
cc @patil-suraj
03-16-2021 12:38:34
03-16-2021 12:38:34
No worries, the doctests will catch that when they're re-enabled! Hopefully sooner rather than later.
transformers
10,740
closed
BigBird
## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.30
- Python version: 3.9.2
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N

### Who can help
@patrickvonplaten, @patil-suraj (guessing)

## Information
Model I am using: **BigBird**

The problem arises when using: the official example scripts: **seq2seq**

The task I am working on is: the official **BigPatent** dataset

## To reproduce
I'm trying to use BigBird for a summarization task (on the BigPatent dataset). I'm using the official seq2seq script, which I run as (all lengths/batches are small for testing):
```
python run_seq2seq.py \
    --model_name_or_path google/bigbird-roberta-base \
    --dataset_name big_patent \
    --max_source_length 3 \
    --max_target_length 3 \
    --val_max_target_length 3 \
    --do_eval --do_predict \
    --per_gpu_train_batch_size 1 \
    --per_gpu_eval_batch_size 1 \
    --num_train_epochs 1 \
    --output_dir tmp
```
However, I get the following error message:
```
Traceback (most recent call last):
  File "/home/scasola/factuality/factuality/transformers/examples/seq2seq/run_seq2seq.py", line 657, in <module>
    main()
  File "/home/scasola/factuality/factuality/transformers/examples/seq2seq/run_seq2seq.py", line 344, in main
    config = AutoConfig.from_pretrained(
  File "/home/scasola/anaconda3/envs/factuality/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 382, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'big_bird'
```
03-16-2021 10:56:24
03-16-2021 10:56:24
hi @slvcsl `BigBird` is still WIP and not yet added to the lib. You can follow the progress here: #10183 <|||||>Thank you for your response. I'll check it out!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,739
closed
Tokenizer becomes very slow after adding new tokens
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@LysandreJik

## Information
Hi, when I try to add a large number (50k) of new tokens to BERT's tokenizer, the tokenizer becomes very slow, taking 29 seconds to tokenize a single short sentence.

## To reproduce
```
from transformers import BertTokenizer
import time

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
new_tokens = ["aaa"+str(i) for i in range(50000)]
tokenizer.add_tokens(new_tokens)  # takes some time

sentence = "a short sentence."
start = time.time()
tokenizer.tokenize(sentence)
print(time.time() - start)

> 29.049
```
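A workaround that is often much faster while staying on an older release is the Rust-backed fast tokenizer, which typically handles large sets of added tokens far better; a minimal sketch of the same benchmark:

```python
import time

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["aaa" + str(i) for i in range(50000)])

start = time.time()
tokenizer.tokenize("a short sentence.")
print(time.time() - start)  # expected to be much faster than the 29s reported above
```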
03-16-2021 10:48:50
03-16-2021 10:48:50
Hey @shauli-ravfogel, this should have been fixed on `master`. Can you try installing from source and let me know if it works better?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,738
closed
load wav2vec model from local path
I'm trying to run wav2vec-based ASR on my machine. This is my code:
```
import soundfile as sf
import torch
from transformers import Wav2Vec2ForMaskedLM, Wav2Vec2Tokenizer

# load pretrained model
cp = "./my_model_directory/wav2vec_small.pt"
tokenizer = Wav2Vec2Tokenizer.from_pretrained(cp)
model = Wav2Vec2ForMaskedLM.from_pretrained(cp)

# load audio
audio_input, _ = sf.read("/home/robot/Music/dinesh.flac")

# transcribe
input_values = tokenizer(audio_input, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(transcription)
```
Here cp is the path to the local wav2vec model file. But when I try to run this, I get the error:
`- or './my_model_directory' is the correct path to a directory containing relevant tokenizer files`
When I use the model from the Hub instead, e.g. cp = "facebook/wav2vec2-base-960h", this works perfectly. Isn't it possible to run the transformers wav2vec model without the cloud?
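For a fully offline workflow, the round-trip that `from_pretrained` expects is roughly the following sketch: download the checkpoint once, write it out with `save_pretrained()` to a directory (here `./my_model_directory`, matching the path above), and load from that directory afterwards - not from a single `.pt` file.

```python
from transformers import Wav2Vec2ForMaskedLM, Wav2Vec2Tokenizer

local_dir = "./my_model_directory"

# One-time download, then persist config, weights and tokenizer files locally.
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForMaskedLM.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# Later, fully offline:
tokenizer = Wav2Vec2Tokenizer.from_pretrained(local_dir)
model = Wav2Vec2ForMaskedLM.from_pretrained(local_dir)
```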
03-16-2021 10:41:36
03-16-2021 10:41:36
hi @roboticsai `from_pretrained` expects the path of a directory where it can find the `config.json` and `pytorch_model.bin` files. It seems that you haven't saved the model using `save_pretrained`. To use `from_pretrained`, the model should be saved with the `save_pretrained` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,737
closed
`group_texts` duplicates special tokens
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Linux-5.4.0-66-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L381) with pdb break point at line 409. 2. 
see value for `tokenized_datasets['train'][0]` ``` ipdb> tokenized_datasets['train'][0] {'attention_mask': ..., 'input_ids': [2, 11699, 6139, 23923, 6354, 6216, 3, 2, 124, 6149, 6228, 6164, 5125, 27479, 6228, 11699, 6139, 23923, 6354, 6216, 11961, 9121, 9804, 10602, 10293, 5328, 6721, 121, 6997, 15520, 16117, 10602, 11302, 5328, 6721, 121, 6997, 13014, 6177, 22111, 25147, 6189, 6106, 6315, 6110, 5084, 5158, 6291, 12836, 6108, 15512, 6726, 18139, 25596, 12701, 6291, 6106, 6315, 6616, 112, 5171, 113, 4363, 6380, 14946, 13769, 13928, 17518, 10216, 12299, 12571, 12850, 26355, 5315, 6457, 6117, 6303, 6213, 19358, 117, 122, 5201, 6361, 6211, 6377, 6312, 22259, 6631, 9268, 112, 10538, 113, 10728, 22278, 117, 14870, 13905, 142, 15214, 112, 10538, 113, 10728, 22278, 117, 14870, 14934, 7575, 10524, 186, 14921, 30912, 10758, 118, 10022, 680, 6275, 117, 181, 9860, 186, 14921, 30912, 10758, 136, 20107, 18973, 6358, 118, 3, 2, 10022, 7539, 25147, 6189, 116, 24690, 6915, 128, 116, 11134, 14216, 15650, 15373, 117, 13531, 20100, 117, 10028, 6132, 117, 127, 112, 124, 3866, 113, 15798, 15650, 15373, 117, 13531, 20100, 117, 9840, 6403, 117, 23999, 25006, 6131, 112, 14604, 113, 5084, 15466, 112, 5171, 113, 4363, 6380, 14946, 6213, 13579, 10393, 11023, 6187, 9218, 13014, 6236, 23534, 4587, 12827, 11069, 9422, 25686, 9112, 9220, 12112, 13538, 10112, 9427, 9215, 9260, 19036, 10393, 13514, 6187, 10112, 14882, 6130, 20150, 9279, 118, 3, 2, 14233, 15466, 18609, 16080, 118, 3, 2, 25147, 6189, 24864, 28007, 13581, 6149, 6228, 6164, 5125, 27479, 6228, 124, 5134, 16109, 28372, 3814, 6224, 20116, 6158, 12221, 6595, 105, 3, 2, 5466, 11794, 10393, 4700, 6224, 12819, 10694, 6187, 4671, 6628, 6119, 5502, 24468, 5743, 6125, 7111, 18452, 105, 3, 2, 14368, 6164, 5125, 27479, 6228, 6309, 6221, 6139, 4174, 6428, 6243, 167, 11699, 6139, 13295, 16589, 18619, 15924, 6131, 22573, 19515, 11914, 23850, 11914, 11512, 11346, 25763, 5134, 16109, 28372, 5134, 16109, 28372, 7031, 6114, 6114, 6626, 7020, 118, 3, 2, 5476, 6214, 116, 4121, 7788, 6107, 7788, 118, 4822, 6503, 6236, 15053, 4606, 9117, 118, 3, 2, 5676, 26156, 4973, 7088, 6114, 23122, 6114, 25444, 6422, 4218, 14246, 11920, 6147, 12097, 4011, 9117, 118, 3, 2, 4417, 25703, 9205, 9271, 9165, 19235, 4202, 6115, 14810, 6187, 19915, 6164, 4839, 6361, 11721, 4378, 7063, 15482, 9156, 11976, 30627, 9291, 3788, 19018, 20146, 4202, 9172, 118, 9868, 16712, 29634, 6115, 5206, 6203, 4469, 5294, 11019, 10250, 4973, 6284, 6203, 9691, 118, 3, 2, 9310, 10574, 5330, 9799, 11042, 13237, 6149, 9237, 118, 3, 2, 4378, 7063, 9126, 9271, 9242, 9822, 6236, 15472, 23041, 16135, 18119, 15314, 118, 3, 2, 5134, 6341, 6187, 9159, 14990, 5656, 4241, 14059, 6139, 4913, 12802, 9822, 9181, 9841, 4788, 18037, 116, 14059, 10825, 5087, 5178, 11699, 6213, 15171, 6333, 9242, 4645, 10212, 9691, 118, 3, 2, 9620, 10021, 11699, 6139, 17626, 6236, 12258, 4378, 7063, 6185, 16269, 26623, 30683, 12901, 118, 3, 2, 18276, 6130, 4378, 7063, 9126, 11699, 6187, 9159, 9242, 9319, 13793, 17451, 6260, 4184, 9242, 17561, 10724, 20756, 12126, 4789, 9172, 118, 9567, 4144, 12062, 10780, 5466, 6803, 4202, 4378, 7063, 9126, 4422, 12303, 6164, 9165, 10350, 571, 9467, 20853, 7177, 11947, 10441, 9270, 18480, 3795, 9207, 12098, 11725, 118], 'special_tokens_mask': ... , 'token_type_ids': ...} ``` When `group_texts` is `map`ed to `tokenized_datasets`, whose examples already contain special tokens (e.g. [CLS] and [SEP]), the mapped results have the following format: `[CLS] ... [SEP][CLS] ... [SEP][CLS] ... 
[SEP]` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

## Expected behavior
The output of `group_texts` applied to `tokenized_datasets` should have the following format: `[CLS] ... [SEP]` or `[CLS] ... [SEP] ... [SEP]`. <!-- A clear and concise description of what you would expect to happen. -->

The current input format is different from the original implementations, for example [ELECTRA](https://github.com/google-research/electra/blob/f93f3f81cdc13435dd3e85766852d00ff3e00ab5/build_pretraining_dataset.py#L100). Is this a trivial issue? I think the downstream task performance of a model pretrained with the current script could tell whether this is a serious bug or not. Could someone share the results?
03-16-2021 09:19:24
03-16-2021 09:19:24
ELECTRA and BERT are not pretrained using this option, so you should use `--line_by_line` to mimic their pretraining objective. Also note that this is a generic example, not an actual pretraining script (for instance, the BERT next sentence prediction objective is not there). Its purpose is to expose all the data processing so you can easily tweak it to your needs.<|||||>Thank you for the clarification. Could you name models that are trained with this option (not --line_by_line)?<|||||>GPT and GPT-2 are trained this way, for instance.<|||||>#11840
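To make the boundary behavior concrete, the grouping strategy in the example scripts works roughly like the sketch below: all tokenized sequences (special tokens included) are concatenated and then cut into fixed-size blocks, which is why `[SEP][CLS]` pairs end up in the middle of a block. This is a condensed illustration of the approach, not the script's exact code.

```python
def group_texts(examples, block_size=128):
    # Concatenate every field (input_ids, attention_mask, ...) across examples,
    # then slice the result into block_size chunks, dropping the small remainder.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated[list(examples.keys())[0]]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```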
transformers
10,736
closed
Position ids in RoBERTa
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 20.04
- Python version: Python 3.7.10
- PyTorch version (GPU?): 1.6.0_py3.7_cuda10.1.243_cudnn7.6.3_0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@LysandreJik

## Information
Position ids in RoBERTa are not implemented properly.

The problem arises when using: create_position_ids_from_input_ids in transformers.models.roberta.modeling_roberta.py

Based on this function, position id 0 is never used. This may cause a problem when the sequence is long, for example 512: tokens at positions >= 511 will not get their corresponding position ids.
03-16-2021 07:01:13
03-16-2021 07:01:13
There's a reason why position ids don't start at 0 for RoBERTa, see #5285<|||||>`RoBERTa` never uses position id 0, and position id 1 is reserved for padding: all pad tokens get position id 1, and the rest of the tokens get position ids in the range `(2, seq_length - num_pad_tokens)`. It's implemented like this to match the original implementation in fairseq.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
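A condensed sketch of that convention, mirroring the logic of `create_position_ids_from_input_ids` but trimmed for illustration:

```python
import torch


def create_position_ids(input_ids, padding_idx=1):
    # Pad tokens keep position `padding_idx`; real tokens count up from padding_idx + 1.
    mask = input_ids.ne(padding_idx).int()
    return (torch.cumsum(mask, dim=1) * mask).long() + padding_idx


ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # <s> token token </s> <pad> <pad>
print(create_position_ids(ids))  # tensor([[2, 3, 4, 5, 1, 1]])
```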
transformers
10,735
closed
Release utils
# What does this PR do?
This PR adds utilities to help with the release process and four make commands to use them easily:
1. `make pre-release` will do all the necessary steps prior to the commit with the release tag (put the right version in all the right places and clean the README of references to the master documentation).
2. `make post-release` will do all the necessary steps after the release has been made (put the right dev version in all the right places and add the latest version to the deploy doc/doc navbar).
3. `make pre-patch` will do all the necessary steps prior to the commit with the release patch tag (put the right version in all the right places).
4. `make post-patch` will do all the necessary steps after the patch release has been made and we are back on master (put the right dev version in all the right places and add the latest version to the deploy doc/doc navbar).
03-16-2021 02:27:20
03-16-2021 02:27:20
transformers
10,734
closed
[examples/seq2seq/README.md] fix t5 examples
This PR:
* switches the summarization example to use CNN/DailyMail, as with t5-small it provides high scores out of the box.
* fixes the T5 examples to include `--source_prefix` - it's **not** optional. If you give it a try you will see that you get 10x worse BLEU scores w/o it: w/ `27.6849`, w/o `2.374`
* adds a normal translation example w/o the peculiarities of MBart and T5
* reduces the default max samples to 50 so it's much faster to test quickly
* fixes the reference to the last custom dataset that I incorrectly added in the first place (it was missing a username, but worked locally when I created it w/o it)
* removes the 3 `--max*samples` flags from this README and puts a section on how to use them in the top-level `examples/README.md`

@sgugger
03-16-2021 01:44:22
03-16-2021 01:44:22
This looks good to me! I think it's a good idea to add `--source_prefix` for the 5 t5 checkpoints in the examples. We should specify though that it's only for these 5 checkpoints. Let's discuss summarization in the issue.<|||||>As discussed in https://github.com/huggingface/transformers/issues/10733#issuecomment-800123545 updated the summarization example to use cnn_dailymail - now the prefix works for t5! Thank you, @patrickvonplaten <|||||>Sorry for disturbing you, but has the support for MBart been removed? This model needs source_lang and target_lang arguments, but the scripts don't accept them now.<|||||>They are sill [here](https://github.com/huggingface/transformers/blob/fd1d9f1ab89805fb2a8e773edbc27531b449ddea/examples/seq2seq/run_translation.py#L96).<|||||>Thank you very much! I can copy that part into summarization file.<|||||>Sorry to disturb you again. Even after the copy of the MBart part code, the error occurred. ```py Traceback (most recent call last): File "examples/seq2seq/run_summarization.py", line 609, in <module> main() File "examples/seq2seq/run_summarization.py", line 443, in main train_dataset = train_dataset.map( File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1407, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1378, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "examples/seq2seq/run_summarization.py", line 424, in preprocess_function with tokenizer.as_target_tokenizer(): File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart_fast.py", line 214, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart_fast.py", line 240, in set_tgt_lang_special_tokens suffix_tokens_str = self.convert_ids_to_tokens(self.suffix_tokens) File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 286, in convert_ids_to_tokens index = int(index) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' ``` The command is ```sh CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_summarization.py --model_name_or_path facebook/mbart-large-cc25 \ --do_train --do_predict --train_file ../data/lang8_ja_train.csv --test_file ../data/lang8_ja_test.csv \ --output_dir ../pre_models/mbart_lang8_ja \ --per_device_train_batch_size=4 --per_device_eval_batch_size=4 \ --predict_with_generate --text_column errorful_sent --summary_column correct_sent \ --save_steps=2000 --save_total_limit=3 --overwrite_output_dir \ --source_lang ja_XX --target_lang ja_XX ``` And the possible reason is that even I input the source and target language, these arguments haven't been sent into prepare_seq2seq_batch function in line 195 tokenization_mbart_fast.py. The `self.src_lang` is `en_XX` and `self.tgt_lang` is `None`.<|||||>Sorry, I found that I missed this part. After copying this part, it works well! Thank you very much! ```py # For translation we set the codes of our source and target languages (only useful for mBART, the others will # ignore those attributes). 
if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)): if data_args.source_lang is not None: tokenizer.src_lang = data_args.source_lang if data_args.target_lang is not None: tokenizer.tgt_lang = data_args.target_lang ```
transformers
10,733
closed
[examples run_summarization.py] t5 worse score w/ --source_prefix "summarize: " than w/o
I don't think the latest incarnation of summarization examples works for t5. I'm lost with all the proposed let's-not-do-anything special for t5, except as you will see from numbers something isn't right: With the latest master: ``` python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval \ --dataset_name xsum --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 \ --max_val_samples 50 ***** eval metrics ***** epoch = 3.0 eval_gen_len = 60.14 eval_loss = 3.3003 eval_rouge1 = 19.3055 eval_rouge2 = 2.4192 eval_rougeL = 13.931 eval_rougeLsum = 16.3446 eval_runtime = 6.2317 eval_samples = 50 eval_samples_per_second = 8.023 ``` Then let's add the required `--source_prefix "summarize: "` ``` python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval \ --dataset_name xsum --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 \ --max_val_samples 50 --source_prefix "summarize: " ***** eval metrics ***** epoch = 3.0 eval_gen_len = 52.94 eval_loss = 3.3734 eval_rouge1 = 18.7997 eval_rouge2 = 2.2857 eval_rougeL = 13.4997 eval_rougeLsum = 14.7778 eval_runtime = 5.2697 eval_samples = 50 eval_samples_per_second = 9.488 ``` As you can see the scores are worse than w/o `--source_prefix "summarize: "` and it should be in reverse. Where are we adding `task_specific_params`: ``` "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, ``` so that the model knows to do the right thing? **edit**: found where this was last discussed: https://github.com/huggingface/transformers/pull/10133#issuecomment-778071812 So should `README.md` just say that currently `run_summarization.py` cannot be used for T5 models and then find another summarization model instead. Of course, a lot of these repetitive breakages would have been avoided if we had quality-measuring tests for examples - perhaps when the dust settles around the examples we could have some of those added. @sgugger
03-16-2021 01:35:05
03-16-2021 01:35:05
`--max_train_samples 50` is a tiny sample, I am not surprised that the model doesn't learn anything here. Note that `t5-small` was pretrained on CNN/Dailymail and WMT, but **not** on XSum. So, it makes sense that one gets reasonable results when fine-tuning the model on just 50 translation samples of WMT16 because the model has already seen the whole training data in pretraining. However, the model has never seen XSum in pretraining, so fine-tuning on 50 samples will get us nowhere here I think. We could try to switch to `CNN/Dailymail`. I have fine-tuned the model on the whole corpus for CNN/Dailymail and have gotten good results. In the paper, it was reported that with `t5-small` a ROUGE-2 score of 19.56 can be achieved on CNN/Dailymail. So we should get something like 17 or 18 ROUGE-2 for full fine-tuning. Also, IMO for such low ROUGE number we cannot really say that "no prefix" works better than "with prefix" because both cases don't work well at all. Let's just try it with CNN/Dailymail instead and see what we get. Maybe first with just very few samples & if this doesn't work then let's run one full fine-tuning.<|||||>Definitely a jackpot on the example using a new dataset and too short of training: might be a good idea to add some of your notes to the README as well. New stats, this time on `--dataset_name cnn_dailymail --dataset_config "3.0.0"` ``` # w/o --source_prefix "summarize: " python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 --max_val_samples 50 ***** eval metrics ***** epoch = 3.0 eval_gen_len = 65.24 eval_loss = 2.3276 eval_rouge1 = 25.4707 eval_rouge2 = 7.2334 eval_rougeL = 18.3807 eval_rougeLsum = 23.2505 eval_runtime = 6.4841 eval_samples = 50 eval_samples_per_second = 7.711 ``` ``` # w/ --source_prefix "summarize: " python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 --max_val_samples 50 --source_prefix "summarize: " ***** eval metrics ***** epoch = 3.0 eval_gen_len = 62.38 eval_loss = 2.3243 eval_rouge1 = 30.0675 eval_rouge2 = 10.2052 eval_rougeL = 22.154 eval_rougeLsum = 27.2161 eval_runtime = 6.0876 eval_samples = 50 eval_samples_per_second = 8.213 ``` This is much better. I will try a longer train sequence next.<|||||>Pretrained with 5000 samples the score goes up nicely, this is 1/100th of the full dataset. ``` ***** eval metrics ***** epoch = 3.0 eval_gen_len = 61.66 eval_loss = 2.0773 eval_rouge1 = 30.174 eval_rouge2 = 12.0182 eval_rougeL = 23.5012 eval_rougeLsum = 27.3718 eval_runtime = 5.9836 eval_samples = 50 eval_samples_per_second = 8.356 ``` So I updated https://github.com/huggingface/transformers/pull/10734 with the recommendation you made @patrickvonplaten. Closing this.
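As a side note on the prefix discussion in the thread above, the legacy behaviour came from the `task_specific_params` stored in the original T5 configs (the snippet quoted in the issue body). A minimal sketch for inspecting them, assuming the `t5-small` config can be downloaded in the current environment:
```python
from transformers import AutoConfig

# The original T5 checkpoints ship task-specific generation settings, including the text prefix
# that the scripts now expect to be passed explicitly via --source_prefix.
config = AutoConfig.from_pretrained("t5-small")
summarization_params = config.task_specific_params["summarization"]
print(summarization_params["prefix"])      # "summarize: "
print(summarization_params["max_length"])  # other task-specific generation defaults
```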
transformers
10,732
closed
run_clm.py does not work with any other block_size other than 1024
**Note:** This issue can be fixed with a one-character change as described in the last section.

## Environment info

- `transformers` version: 4.4.0.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help

- maintained examples (not research project or legacy): @sgugger, @patil-suraj

## Information

Model I am using (Bert, XLNet ...): GPT2

The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:
* [X] an official GLUE/SQUaD task: Causal language modelling
* [ ] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:
1. Use a GPT-based model which has a block_size different than 1024
2. Try to train or fine-tune it with run_clm.py, setting block_size in data_args.

In GPU mode, you will get the following error:
```
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
In CPU mode, you will get the following error:
```
index out of range in self
```

## Expected behavior

The above error should not occur.

## Cause and Proposed Fix

The issue is that block_size on [line 337](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py#L337) always gets set to 1024 because of wrong indentation:
```
if data_args.block_size is None:
    block_size = tokenizer.model_max_length
    if block_size > 1024:
        logger.warn(
            f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
            "Picking 1024 instead. You can change that default value by passing --block_size xxx."
        )
    block_size = 1024  # <<< THIS LINE NEEDS TO BE INDENTED!!!
```
So just indenting that line should fix the issue.
03-16-2021 00:58:31
03-16-2021 00:58:31
Indeed! Would you mind making a PR with that change since you found the correct fix?<|||||>Sure, I'll get that prepared!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
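For reference, a sketch of what the corrected indentation described in the issue above would look like; `data_args`, `tokenizer` and `logger` are the objects already defined in `run_clm.py`, so this is a corrected fragment rather than a standalone script:
```python
if data_args.block_size is None:
    block_size = tokenizer.model_max_length
    if block_size > 1024:
        logger.warn(
            f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
            "Picking 1024 instead. You can change that default value by passing --block_size xxx."
        )
        block_size = 1024  # only fall back to 1024 when model_max_length is actually too large
```
With the fallback inside the inner `if`, a tokenizer whose `model_max_length` is smaller than 1024 keeps its own value instead of being overridden.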
transformers
10,731
closed
Fix log message for training from checkpoint with global step
# πŸš€ Feature request

I think the log message here is wrong: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L978

I think it should read something along these lines:
```
logger.info(
    f" Will skip the first {epochs_trained} epochs then the first {steps_trained_in_current_epoch} "
    "batches in the current epoch {epochs_trained + 1}."
)
```
The batches that are skipped are not skipped in the first epoch since the training was already done for `epochs_trained`. As the variable name indicates, the training is going to skip the steps already trained in the current epoch. The current epoch is `epochs_trained + 1`.

## Motivation

The log message was confusing. I went and traced the code to ensure that it does the right skipping (which I think it does).
03-16-2021 00:25:10
03-16-2021 00:25:10
Yes, that sounds clearer. Would you mind making a PR with this?<|||||>Will make time this week. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
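One small detail worth keeping in mind if the message is rewritten as proposed above: both string literals need the `f` prefix, otherwise `{epochs_trained + 1}` is printed verbatim. A hedged sketch of how the final message might look, with placeholder values standing in for the Trainer's internal state:
```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Hypothetical values standing in for the Trainer's resume state.
epochs_trained = 2
steps_trained_in_current_epoch = 150

# Both literals carry the f prefix so the expression in the second one is interpolated.
logger.info(
    f"  Will skip the first {epochs_trained} epochs then the first {steps_trained_in_current_epoch} "
    f"batches in the current epoch ({epochs_trained + 1})."
)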
transformers
10,730
closed
Stacked Roberta run_mlm.py
# πŸ–₯ Benchmarking `transformers`

## Benchmark

I try to run [transformers/experiments/language_modeling/run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) in order to train Roberta from scratch on the Wikipedia dataset.

## Set-up

Transformers version: 4.4.0.dev0
4 GPUs: NVIDIA Tesla V100 16 GB; 4096 Bit, PCI Express 3.0 x16.

Used bash run script:
```bash
#!/bin/bash
export CUDA_LAUNCH_BLOCKING=1
source /data/env/bin/activate
nohup python3 transformers/examples/language-modeling/run_mlm.py \
    --dataset_name wikipedia \
    --tokenizer_name roberta-base \
    --model_type roberta \
    --dataset_config_name 20200501.en \
    --do_train \
    --do_eval \
    --learning_rate 1e-5 \
    --num_train_epochs 5 \
    --save_steps 5000 \
    --warmup_steps=10000 \
    --output_dir /data/models/wikipedia_roberta \
    &
```
Tested with the script and also directly with the python command (without nohup).

## Results

Code seems stuck in `trainer.py`, at the first compute_loss step, when performing inference:
```python
def compute_loss(self, model, inputs, return_outputs=False):
    """
    How the loss is computed by Trainer. By default, all models return the loss in the first element.
    Subclass and override for custom behavior.
    """
    if self.label_smoother is not None and "labels" in inputs:
        labels = inputs.pop("labels")
    else:
        labels = None
    print("STACKED HERE")
    outputs = model(**inputs)
    ...
```
I can't understand which running parameters are wrong. Could inference for a single batch take more than 30 mins?

Thanks in advance!

UPDATE: Without --warmup_steps it works.
03-15-2021 22:36:15
03-15-2021 22:36:15
transformers
10,729
closed
Multi-node training with the latest transformers/examples code
Hi, I am trying to follow the instructions on how to use the examples from [https://huggingface.co/transformers/examples.html] and I notice there is a difference between version 4.3 and 1.2 in the distributed training session. In the older version it seems that it supports multi-node training with " --node_rank=$THIS_MACHINE_INDEX \ --master_addr="192.168.1.1" \ --master_port=1234 run_bert_classifier.py \" But these options no longer exist in the latest tutorial. Does the latest version still support multi-node training? Thanks.
03-15-2021 21:36:33
03-15-2021 21:36:33
cc @sgugger <|||||>I'm unsure what you think is not supported. Launching any of the example scripts with ``` python -m torch.distributed.launch --nproc_per_node=xxx \ --node_rank=$THIS_MACHINE_INDEX \ --master_addr="192.168.1.1" \ --master_port=1234 \ run_xxx.py ``` is going to work.<|||||>Hi @sgugger , thank you for your answer. My understanding is that "--nproc_per_node" is the number of gpus will be used for the launched process? Also, if I want to launch another training node, I assume I will just run the same command with "node_rank=1"?<|||||>Yes, that is the number of GPUs. You can refer to the [PyTorch documentation](https://pytorch.org/docs/stable/distributed.html#launch-utility) for all the arguments of the PyTorch launcher as all the example scripts are fully compatible with it. You will also need to pass `--nnodes=$NUMBER_OF_NODES` for completeness.<|||||>Thanks, I am able to make it work now.
transformers
10,728
closed
[Issue template] need to update/extend who to tag
This PR
* [x] adds an entry for what to do when someone has model hub issues - thank you, @julien-c!

TODO/Input needed:
* [ ] need to update who to tag for `tensorflow` issues

@LysandreJik
03-15-2021 21:11:23
03-15-2021 21:11:23
For broken models you would just tag the model author(s) – we'll add a feature to the model hub to tag someone in a conversation thread, but in the meantime you can use the Forum to ping them<|||||>That's a great idea to tag the model author - how would a user know the model author's corresponding forum username? I guess this is temporary so probably can be somehow figured out...<|||||>We use SSO so one's username on Forum is guaranteed to be one's username on hf.co<|||||>Ah, that's perfect then! Thank you!
transformers
10,727
closed
Rename zero-shot pipeline multi_class argument
Renames the `multi_class` argument to `multi_label` in the `ZeroShotClassificationPipeline` and adds a deprecation warning to the former. Typically, "multi-label classification" is used to refer to this type of classification (where each class is evaluated independently). The name is changed in the zero-shot distillation script as well. Resolves #6668.
03-15-2021 20:37:15
03-15-2021 20:37:15
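A small usage sketch of the argument renamed by the PR above; the example text and labels are placeholders, and it assumes the default zero-shot checkpoint can be downloaded in the current environment:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

# With multi_label=True each candidate label is scored independently,
# so the scores no longer need to sum to 1.
result = classifier(
    "The new GPU cut our training time in half and lowered our cloud bill.",
    candidate_labels=["hardware", "cost", "sports"],
    multi_label=True,
)
print(result["labels"], result["scores"])
```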
transformers
10,726
closed
broken models on the hub
Go to https://huggingface.co/sshleifer/distill-mbart-en-ro-12-6 click on "use in transformers", copy-n-paste and nope can't use this in `transformers`:
```
python -c 'from transformers import AutoTokenizer; AutoTokenizer.from_pretrained("sshleifer/distill-mbart-en-ro-12-6")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 410, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1704, in from_pretrained
    return cls._from_pretrained(
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1717, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1776, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/roberta/tokenization_roberta.py", line 159, in __init__
    super().__init__(
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__
    with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
this is with the latest master. These for example I tested to work fine:
- `sshleifer/distill-mbart-en-ro-12-4`
- `sshleifer/distill-mbart-en-ro-12-9`

Perhaps we need a sort of CI that goes over the public models, validates that `run in transformers` code succeeds and sends an alert if it doesn't? We have no idea how many other models are broken on the hub right now.
03-15-2021 19:46:28
03-15-2021 19:46:28
I fixed this model (it had `bart` as a `model_type` instead of `mbart` in `config.json`). But the point of this issue is that perhaps we could add to a todo list to run a cronjob that will validate the models and tokenizers? @theo-m, is this something that would fit with the project you're currently working on, since you will have to run these things anyway for each model? Just asking - not piling it up on you. In fact if I understand the idea correctly this will be a requirement for your things to work, right?<|||||>We would not be aiming for 100% coverage, but yes, getting a sense of what's runnable on the hub would be awesome. Maybe a once-a-week CI job, as I expect the run to be terribly long/expensive? cc infra people @julien-c @n1t0 <|||||>I think we'll hook something into the git-push-event driven ML analytics system we've been talking about internally That's a medium-term goal though<|||||>It's a different thing though: on-push would ensure integrity on upload, which is indeed needed, but a recurrent job would enable us to detect regression in what the lib can support and give estimates on what is actually runnable.<|||||>The lib can change after the upload was made and the model/tokenizer stop working. We have seen this before with older models. I think @theo-m you're saying the same thing. <|||||>@theo-m, btw the low hanging fruit would be to just validate that the listed in "use in transformers" instructions indeed work. i.e. we just load the model and tokenizer and do nothing with it if it works and do something with it if it doesn't. <|||||>Note that this is not necessarily a low hanging fruit (depending on your definition of a low hanging fruit πŸ˜‚) given that: - we have 7,000+ models whose total weights represent multiple TBs of data - they change over time<|||||>the lowest hanging fruit is loading all that can be associated to a pipeline and run a single example in the associated pipeline, the results of this are stronger than just loading and yes it sure is a big big job, but it's the best we can do in order to build a good understanding of what is runnable on the hub - _in fine_ for non hf we won't be able to do much, but we can't give guarantees to code we don't manage.<|||||>I meant that just loading a model / tokenizer is cheaper/faster/requires almost 0 extra code to write - hence low-hanging fruit. I hear you that the hub is huge, a little bit at a time. It would have been the same code to validate 10 models or 7K models if there is no urgency to complete it fast, it just would take much much longer to complete. > * they change over time That was exactly my point, they and the codebase too, so it's not enough to check it once, even if we track when it was changed and when it was validated last. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
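As a rough illustration of the kind of check discussed in the thread above (the "lowest hanging fruit" of just loading things), a minimal sketch that tries to load the config and tokenizer for a hand-picked list of hub model IDs and records the failures. The ID list is only an example; a real job would enumerate the hub instead and would also load the model weights and run a pipeline example:
```python
from transformers import AutoConfig, AutoTokenizer

model_ids = [
    "sshleifer/distill-mbart-en-ro-12-6",
    "sshleifer/distill-mbart-en-ro-12-4",
    "sshleifer/distill-mbart-en-ro-12-9",
]

broken = {}
for model_id in model_ids:
    try:
        AutoConfig.from_pretrained(model_id)
        AutoTokenizer.from_pretrained(model_id)
    except Exception as err:  # anything that breaks the "use in transformers" snippet is worth reporting
        broken[model_id] = repr(err)

for model_id, err in broken.items():
    print(f"{model_id}: {err}")
```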
transformers
10,725
closed
Flax testing should not run the full torch test suite
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a new `run_tests_torch_and_flax` circle ci job so that the flax test don't have to run the full pytorch test suite anymore. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-15-2021 19:16:44
03-15-2021 19:16:44
transformers
10,724
closed
Add minimum version check in examples
# What does this PR do? This PR adds a minimum version check to all examples, in order to avoid the waves of issues created each time they use a functionality that was just released into `Trainer`. The script will immediately error if the version of Transformers does not match the required minimum version. At each release, a script will set that to the version released automatically (work in progress for a second PR with other release utils) so that the examples associated with one tag will require the minimum version of that tag. The user can still remove that line to avoid the error (at their own risks). The error points out to: - the instruction for a source install - the examples README that now lists all examples folders with the various version tags.
03-15-2021 18:19:53
03-15-2021 18:19:53
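A hedged sketch of what such a minimum-version guard at the top of an example script can look like. The check is written out inline rather than relying on a specific transformers utility, and the pinned version string is only illustrative (the PR describes release tooling setting it automatically):
```python
from packaging import version

import transformers

MIN_TRANSFORMERS_VERSION = "4.4.0"  # illustrative value; release tooling would pin the real one

if version.parse(transformers.__version__) < version.parse(MIN_TRANSFORMERS_VERSION):
    raise ImportError(
        f"This example requires transformers>={MIN_TRANSFORMERS_VERSION} (found {transformers.__version__}). "
        "Install from source, or check out the examples folder matching your installed version."
    )
```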
transformers
10,723
closed
Train tokenizer for Deberta
Hi, I would like to know how I can train a DeBERTa tokenizer. From the paper I saw it uses a BPE tokenizer, but the BPETokenizer from huggingface/tokenizers doesn't work for this. Could you recommend another implementation or library, or a correct configuration of the huggingface/tokenizers implementation, so that I can train a DeBERTa model from scratch?
03-15-2021 16:58:02
03-15-2021 16:58:02
HuggingFace has another library called [tokenizers](https://github.com/huggingface/tokenizers) especially for this.<|||||>Currently, the training of Deberta Tokenizer is not supported directly by huggingface. Of course, you can create the required files by yourself from BPETokenizer training output, but you could also simply wait until #10703 is merged into the master branch and released. :-)<|||||>How would be the process of creating the required files from the BPETokenizer training output? @cronoik I'd really appreciate a little bit of explanation, as I tried to do so and I failed.<|||||>You can save me a lot of time by simply using the mentioned patch above. Just copy the DebertaTokenizer class to your runtime.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
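Until the patch mentioned above lands, one way to produce the vocabulary files is to train a byte-level BPE with the tokenizers library, which is the same flavour of BPE DeBERTa uses. A rough sketch; the file path, vocabulary size, and special tokens are placeholders to adapt:
```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],          # placeholder path to your training text
    vocab_size=50265,              # placeholder size
    min_frequency=2,
    special_tokens=["[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]"],
)
tokenizer.save_model("deberta-tokenizer")  # writes vocab.json and merges.txt
```
Whether the resulting files can be loaded directly by the DeBERTa tokenizer class depends on the patch discussed above, so treat this as a starting point rather than a drop-in solution.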
transformers
10,722
closed
iterative evaluation in Trainer to save memory
# πŸš€ Feature request In Trainer.prediction_loop, compute metrics batch per batch (or allow to tweak this number) instead of gathering the whole predictions in a single array (nb: I'm aware of `eval_accumulation_steps` but it only allows to save GPU memory) ## Motivation Running a ses2seq evaluation on a dataset of 200K examples with a vocabulary of 50K and context of 77 words, gathering all of the output amounts to an array of 2.77 TiB, which I'm not sure everyone can afford: ``` Traceback (most recent call last): File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/bin/clip-train", line 33, in <module> sys.exit(load_entry_point('clip', 'console_scripts', 'clip-train')()) File "/mnt/beegfs/home/lerner/CLIP/clip/train.py", line 196, in main trainer.train(**config.get("checkpoint", {})) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 983, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1058, in _maybe_log_save_evaluate metrics = self.evaluate() File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_seq2seq.py", line 74, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1513, in evaluate metric_key_prefix=metric_key_prefix, File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1644, in prediction_loop preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds")) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 330, in add_arrays self._storage = nested_new_like(arrays, self.total_samples, padding_index=self.padding_index) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 238, in nested_new_like return np.full_like(arrays, padding_index, shape=(num_samples, *arrays.shape[1:])) File "<__array_function__ internals>", line 6, in full_like File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/core/numeric.py", line 382, in full_like res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape) File "<__array_function__ internals>", line 6, in empty_like MemoryError: Unable to allocate 2.77 TiB for an array with shape (200206, 77, 49408) and data type float32 60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 3000/5000 [17:53<11:55, 2.80it/s] ``` ### Who can help Library: * trainer: @sgugger
03-15-2021 15:59:33
03-15-2021 15:59:33
This would result in an inaccurate metric in most cases, as metric functions are seldom linear. I'm afraid you will have to run evaluation on smaller chunks or use your own evaluation loop.<|||||>Yes of course I thought maybe to gather a reduced version of the predictions before actually computing the metric e.g. for sentence level accuracy, in my example, a bool array of shape `(200206, )` where the boolean value represents the accuracy of the output (i.e. `predictions == labels`). The actual `compute_metrics` would only have to reduce this array to a single value (using `np.mean` in my example).<|||||>You can do that using `Trainer` if your model returns that. `Trainer` is too generic to be able to guess that in this case it should gather a reduced version of the predictions (and how would it do it?). Otherwise writing the evaluation loop yourself is super easy (there is one example in [run_glue_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue_no_trainer.py) for instance).<|||||>Ok, thanks for the advice :)
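To make the "write your own evaluation loop" suggestion concrete for the sentence-level accuracy case mentioned above, a minimal sketch that reduces each batch to a per-example boolean before discarding the logits, so nothing of shape (num_examples, seq_len, vocab_size) is ever materialized. `model`, `eval_dataloader`, and `device` are assumed to exist already, and the exact-match criterion is only an example metric:
```python
import torch

@torch.no_grad()
def evaluate_exact_match(model, eval_dataloader, device):
    """Per-example exact match, computed batch by batch so full logits are never stored."""
    model.eval()
    n_correct, n_examples = 0, 0
    for batch in eval_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        logits = model(**batch).logits              # (batch, seq_len, vocab_size)
        preds = logits.argmax(dim=-1)               # reduce immediately: (batch, seq_len)
        labels = batch["labels"]
        mask = labels != -100                       # ignore padded / masked-out positions
        correct = ((preds == labels) | ~mask).all(dim=-1)  # exact match per example
        n_correct += correct.sum().item()
        n_examples += labels.size(0)
    return n_correct / max(n_examples, 1)
```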
transformers
10,721
closed
Run Time Error: RuntimeError: Expected hidden[0] size (2, 1, 512), got [2, 128, 512] - Seq2Seq Model with PreTrained BERT Model
Hi, I am facing this runtime error while training a seq2seq model with a pretrained BERT model.
```
RuntimeError                              Traceback (most recent call last)
<ipython-input-63-472071541d41> in <module>()
      8 start_time = time.time()
      9
---> 10 train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
     11 valid_loss = evaluate(model, valid_iterator, criterion)
     12

8 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_hidden_size(self, hx, expected_hidden_size, msg)
    221                           msg: str = 'Expected hidden size {}, got {}') -> None:
    222         if hx.size() != expected_hidden_size:
--> 223             raise RuntimeError(msg.format(expected_hidden_size, list(hx.size())))
    224
    225     def check_forward_args(self, input: Tensor, hidden: Tensor, batch_sizes: Optional[Tensor]):

RuntimeError: Expected hidden[0] size (2, 1, 512), got [2, 128, 512]
```
Related code snippets:
```python
from torchtext.legacy.data import BucketIterator, TabularDataset

BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator = data.BucketIterator.splits(
    (train_data, valid_data),
    batch_size = BATCH_SIZE,
    device = device)
```

```python
# Encoder
class Encoder(nn.Module):
    def __init__(self, bert, hid_dim, n_layers, dropout):
        super().__init__()
        self.hid_dim = hid_dim
        self.n_layers = n_layers
        self.bert = bert
        emb_dim = bert.config.to_dict()['hidden_size']
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, batch_first = True, dropout = dropout)
        self.dropout = nn.Dropout(dropout)

    def forward(self, sent1):
        # sent1 = [sent1 len, batch size]
        with torch.no_grad():
            embedded = self.bert(sent1)[0]
        # embedded = [sent1 len, batch size, emb dim]
        outputs, (hidden, cell) = self.rnn(embedded)
        # outputs = [sent1 len, batch size, hid dim * n directions]
        # hidden = [n layers * n directions, batch size, hid dim]
        # cell = [n layers * n directions, batch size, hid dim]
        # outputs are always from the top hidden layer
        return hidden, cell
```

The detailed code with error description is available here for your reference:
https://github.com/Ninja16180/BERT/blob/main/Training_Seq2Seq_Model_using_Pre-Trained_BERT_Model.ipynb

Kindly help me in resolving the issue.
Thanks in advance!
03-15-2021 15:36:04
03-15-2021 15:36:04
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Kindly help in resolving the issue. It will help to build a seq2seq conversation model using pretrained bert model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest to @patrickvonplaten and @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
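One detail worth checking for the error reported above: with `batch_first=True`, `nn.LSTM` expects its *input* as (batch, seq, feature), but the hidden and cell states keep the layout (num_layers * num_directions, batch, hidden). A small sketch of that shape contract; the sizes mirror the traceback and are otherwise arbitrary:
```python
import torch
import torch.nn as nn

batch_size, seq_len, emb_dim, hid_dim, n_layers = 128, 20, 768, 512, 2

rnn = nn.LSTM(emb_dim, hid_dim, n_layers, batch_first=True)

x = torch.randn(batch_size, seq_len, emb_dim)   # (batch, seq, feature) because batch_first=True
output, (hidden, cell) = rnn(x)

print(output.shape)  # torch.Size([128, 20, 512])  -> (batch, seq, hidden)
print(hidden.shape)  # torch.Size([2, 128, 512])   -> (layers, batch, hidden), NOT batch first
```
An "Expected hidden[0] size (2, 1, 512), got [2, 128, 512]" error therefore usually means a downstream RNN received a hidden state built for a different batch size, or the batch and sequence dimensions were swapped somewhere along the way.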
transformers
10,720
closed
Cannot use custom roberta tokenizer with run_mlm_wwm.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.dev0 - Platform: Ubuntu 18 - Python version: 3.7 - PyTorch version (GPU?): 1.7.1 (YES) - Tensorflow version (GPU?): - Using GPU in script?: YES - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten @LysandreJik @ <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information When I try to use the BPE Tokenizer trained with huggingface/tokenizers with Roberta directly, it works: ```{python} tok = RobertaTokenizer.from_pretrained("bpe_tokenizer_0903", use_fast=True) ``` However, when I try to use this same tokenizer for training a language model, it fails: ```{bash} python -u transformers/examples/language-modeling/run_mlm_wwm.py \ --model_type deberta \ --config_name ./bpe_tokenizer_0903/config.json \ --tokenizer_name ./bpe_tokenizer_0903 \ --train_file ./prueba_tr.txt \ --validation_file ./final_valid.txt \ --output_dir ./roberta_1102 \ --overwrite_output_dir \ --do_train \ --do_eval \ --evaluation_strategy steps \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 2 \ --gradient_accumulation_steps 2 \ --learning_rate 6e-4 \ --save_steps 10 \ --logging_steps 10 \ --overwrite_cache \ --max_seq_length 128 \ --eval_accumulation_steps 10 \ --load_best_model_at_end \ --run_name deberta_0902 \ --save_total_limit 10 --warmup_steps 1750 \ --adam_beta2 0.98 --adam_epsilon 1e-6 --weight_decay 0.01 --num_train_epochs 1 ``` The error message is the following: ``` Traceback (most recent call last): File "transformers/examples/language-modeling/run_mlm_wwm.py", line 399, in <module> main() File "transformers/examples/language-modeling/run_mlm_wwm.py", line 286, in main use_fast=model_args.use_fast_tokenizer, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/auto/tokenization_auto.py", line 401, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py", line 1719, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py", line 1790, in 
_from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/roberta/tokenization_roberta_fast.py", line 173, in __init__ **kwargs, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/gpt2/tokenization_gpt2_fast.py", line 145, in __init__ **kwargs, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_fast.py", line 87, in __init__ fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) Exception: data did not match any variant of untagged enum ModelWrapper at line 1 column 1138661 ``` Why doesn't it fail when I try to load the tokenizer with RobertaTokenizer.from_pretrained() but it does fail when I try to run run_mlm_wwm.py ? @sgugger @patrickvonplaten @LysandreJik
03-15-2021 15:28:25
03-15-2021 15:28:25
That example only runs with `BERT`, which is why it has been moved to a separate research project.<|||||>I tried this script with albert and it worked, which script should I use to train a Roberta model from scratch with Whole word Masking??<|||||>Is that intended: `--model_type deberta` ? @alexvaca0 <|||||>Sorry, that was from the previous launch script, now it is roberta @cronoik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,719
closed
[WIP] Extend LayoutLMTokenizer to handle bounding boxes
# What does this PR do? LayoutLMTokenizer does not take care properly of additional model input `bbox` (Bounding Boxes for words/tokens on a document), see https://github.com/huggingface/transformers/issues/10349. With this PR, LayoutLMTokenizer will take care of bounding boxes when doing tokenization, that is repeating a bounding box for a split text as is done [in the official LayoutLM code](https://github.com/microsoft/unilm/blob/23a7ea35b55279a171a118ac767e863aa92e692c/layoutlm/layoutlm/data/funsd.py#L252). Additionally, bounding box coordinates may be normalized to a target width and height. `LayoutLMTokenizerFast` is removed as it is currently only a copy of `BertTokenizerFast` and does not have the added functionality yet. Marked as WIP as I'm not sure this is the best way to tackle this problem, please see the discussion in the linked issue. Fixes #10349
03-15-2021 14:18:45
03-15-2021 14:18:45
The fixup target fails because `get_modified_files.py` reports `tokenization_layoutlm_fast.py` as differing from the master branch. Then calling e.g. `black` to format that file fails because it doesn't exist anymore. How would I fix that?<|||||>Hello! Thank you for your PR, and sorry for getting back to you so late. First of all, incredible work on doing the implementation and on overriding the tests that should be. I wonder if the approach here shouldn't make use of the recently introduced feature processors. @NielsRogge you have had extensive experience with feature processors and you were part of the initial conversation, what do you think would be the best approach here? It's a bit different to handling images as it's simply handling the bbox, so I might be wrong here. As a high level overview, I'm not keen on removing the fast tokenizer, and I'm wondering if we really need to accept two sequences or if LayoutLM is only made for single sequences - I haven't played with the model, so please tell me if I'm mistaken. Also cc @sgugger and @patil-suraj who might have some insights.<|||||>Yes I think you should look at the design used for `SpeechToText` or `Wav2Vec2`: there is a processor that combines a tokenizer and a feature extractor in those models. We should do the same here: leave the tokenizer unchanged and add a feature extractor to treat the bounding boxes separately, then merge the two in a `LayoutLMProcessor`.<|||||>Thank you for the input, I wasn't aware of feature processors. It sounds like this could be a way nicer solution here, I agree. > I'm wondering if we really need to accept two sequences or if LayoutLM is only made for single sequences The basic difference between LayoutLM and BERT is that the additional `bbox` input is [added to the embeddings](https://github.com/huggingface/transformers/blob/master/src/transformers/models/layoutlm/modeling_layoutlm.py#L104-L126). So the two sequences are processed slightly different inside the model. However, there's a one to one relationship between their items. That's also why the processing of the `bbox` sequence depends on how the tokenizer splits the input. In case of a split into N sub-tokens the corresponding bounding box is repeated N times to retain the one to one relationship. Will this be possible with a feature processor? From a first glance I'm not too sure but I might be wrong. Can someone clarify? > As a high level overview, I'm not keen on removing the fast tokenizer I removed the fast tokenizer to first discuss if this is a suitable approach before investing more time. Eventually, I planned to add support for it as well.<|||||>> However, there's a one to one relationship between their items. That's also why the processing of the bbox sequence depends on how the tokenizer splits the input. In case of a split into N sub-tokens the corresponding bounding box is repeated N times to retain the one to one relationship. In the fast tokenizer, you can rely on the `word_ids` method of the `BatchEncoding` (the type of the return of the tokenizer) to get back the word associated to each token. For the slow tokenizer you may have to compute it. The workflow I see is: the tokenizer returns a `BatchEnconding` with `input_ids`, `attention_mask` etc (like a usual tokenzier) and a field containing the mapping token to word, then the processor will extract that field form the batch encoding and pass it to the feature extractor responsible for the bounding boxes, so the proper repetition can happen. 
This way we still get a nice separation for the two modalities in two different objects.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This is not a priority for me right now and correctly marked as stale. I didn't forget about it, though and hope to be able to come back to it in a near future.
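To illustrate the word_ids-based workflow sketched in the review comments above, a small example of repeating one bounding box per sub-token with a fast tokenizer. The words, coordinates, and the choice of `bert-base-uncased` are placeholders; a real processor would also handle padding, truncation, and the special-token boxes according to the LayoutLM conventions:
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

words = ["Invoice", "number:", "12345"]
boxes = [[10, 10, 80, 20], [90, 10, 160, 20], [170, 10, 220, 20]]  # one (normalized) box per word

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

token_boxes = []
for word_id in encoding.word_ids(batch_index=0):
    if word_id is None:
        token_boxes.append([0, 0, 0, 0])    # special tokens ([CLS]/[SEP]) get a dummy box
    else:
        token_boxes.append(boxes[word_id])  # sub-tokens inherit their word's box
```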
transformers
10,718
closed
Fix backward compatibility with EvaluationStrategy
# What does this PR do? As mentioned in #10666, the switch from `EvaluationStrategy` to `IntervalStrategy` is not fully backward compatible. This PR fixes that.
03-15-2021 13:58:01
03-15-2021 13:58:01
transformers
10,717
closed
How can I get the exact position von answers?
I want to load a local model for Question and Answers task. I have copied the almost Code from Pipeline like follow: model = MyBert() checkpoint = torch.load('./training_mode_aim/checkpoint.pth.tar') model.load_state_dict(checkpoint['model']) model.eval() #read the context in local file texts = utils.readTxt() question = Question tokenizer = BertTokenizerFast.from_pretrained('') all_answers = [] for text in texts[:1]: encoding = tokenizer(question,text,truncation = True, max_length = 512,padding = True,stride = 256, return_tensors = 'np', return_overflowing_tokens=True, return_token_type_ids=True, return_offsets_mapping=True, return_special_tokens_mask=True,) num_span = len(encoding['input_ids']) answers = [] for span_idx in range(num_span): _,start_logits,end_logits = model(torch.tensor([encoding['input_ids'][span_idx]]), torch.tensor([encoding['attention_mask'][span_idx]]), torch.tensor([encoding['token_type_ids'][span_idx]])) with torch.no_grad(): start_logits,end_logits = start_logits.cpu().numpy(),end_logits.cpu().numpy() undesired_tokens = np.abs(np.array(encoding['token_type_ids'][span_idx]) - 1) & np.array(encoding['attention_mask'][span_idx]) undesired_tokens_mask = undesired_tokens == 0.0 start = np.where(undesired_tokens_mask,start_logits, -10000.0) end = np.where(undesired_tokens_mask, end_logits, -10000.0) start = np.exp(start - np.log(np.sum(np.exp(start), axis=-1, keepdims=True))) end = np.exp(end- np.log(np.sum(np.exp(end), axis=-1, keepdims=True))) starts,ends,scores = decode(start,end,5,128) print('starts: {}, end: {}'.format(starts,ends)) answers += [{ 'score':score.item(), 'start':encoding.token_to_word(s), 'end':encoding.token_to_word(e)} for s,e,score in zip(starts,ends,scores)] answers = sorted(answers, key=lambda x: x["score"], reverse=True)[: 5] print(answers) Then I got the relative positions for each subencodings. But finally I want to take the absolute positions of the answers. So someone know how to solve this problem?
03-15-2021 12:52:07
03-15-2021 12:52:07
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discusss.huggingface.co) instead? Thanks!<|||||>> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. > Could you ask your question on the [forum](https://discusss.huggingface.co) instead? > > Thanks! okay, my bad
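Since the question above was redirected to the forum, a short hedged sketch of the usual approach: with `return_offsets_mapping=True`, the tokenizer already stores, for every token of every overflowing span, the character span inside the original context string, so absolute answer positions can be read from `offset_mapping` instead of `token_to_word`. The checkpoint, question, and text below are placeholders:
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # placeholder checkpoint

question = "Where is the Eiffel Tower?"
text = "The Eiffel Tower is located in Paris, France."

encoding = tokenizer(
    question, text,
    truncation="only_second", max_length=512, stride=256, padding=True,
    return_tensors="np", return_overflowing_tokens=True, return_offsets_mapping=True,
)

span_idx = 0  # with stride/overflow there may be several spans; offsets always refer to the original `text`
for tok_idx, (type_id, (char_start, char_end)) in enumerate(
    zip(encoding["token_type_ids"][span_idx], encoding["offset_mapping"][span_idx])
):
    if type_id == 1 and char_end > char_start:  # context tokens only, skip question and special tokens
        print(tok_idx, (int(char_start), int(char_end)), text[int(char_start):int(char_end)])
```
So instead of `encoding.token_to_word(s)`, predicted token indices `s` and `e` can be turned into absolute character positions with `offset_mapping[span_idx][s][0]` and `offset_mapping[span_idx][e][1]`, regardless of which overflowing span they came from.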