| Column | Feature type | Values / length range |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
13,228
closed
Improve documentation of pooler_output in ModelOutput
# What does this PR do? Improves the docstring for pooler_output in modeling_outputs.py – making it clearer and opening it up to more generic use cases than just the BERT family of models. **Motivation**: I was writing a `cls_pooler` for sentence-embedding use and initially thought this was the CLS token output from the last layer – which is not the case; that would just be `last_hidden_state[0]`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger
08-23-2021 23:28:59
08-23-2021 23:28:59
@sgugger noting: this PR is ready (tests have passed)<|||||>Thanks a lot!
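The distinction the PR documents is easy to show in code: `pooler_output` is the last-layer `[CLS]` hidden state passed through BERT's pooling head (a dense layer followed by tanh), not the raw `[CLS]` state itself. A minimal sketch, illustrative only and not part of the PR, assuming the standard `bert-base-uncased` checkpoint:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("A short example sentence.", return_tensors="pt")
outputs = model(**inputs)

cls_hidden = outputs.last_hidden_state[:, 0]  # raw [CLS] hidden state from the last layer
pooled = outputs.pooler_output                # that same state after the dense + tanh pooling head
print(cls_hidden.shape, pooled.shape)         # both (batch_size, hidden_size), but different values
```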
transformers
13,227
closed
make test failing
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Documentation: @sgugger ## Information Found this issue while following the [instructions](https://huggingface.co/transformers/contributing.html) on how to install transformers as [dev]. The [dev] command _pip install -e .[dev]_ gives me 58 failing tests. This happens with python=3.8.0 and python=3.8.8. I am using py=3.8.0 because of this [related issue](https://github.com/huggingface/transformers/issues/9410). The problem arises when using: * [ X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: conda create -n hf_py380 python=3.8.0 conda activate hf_py380 git clone https://github.com/myuser/transformers.git cd transformers/ git checkout -b exploration pip uninstall transformers git clone https://github.com/huggingface/datasets cd datasets pip install -e ".[dev]" cd .. python -m pytest -n 3 --dist=loadfile -s -v ./tests/ As results, I am getting 60 failed tests. Log file below: > -- Docs: https://docs.pytest.org/en/stable/warnings.html =================================================== short test summary info =================================================== FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs - AssertionError: unexpected... FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager - AssertionError: unexpectedly None FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url - requests.exceptions.ProxyError: HTTPSConnectionPool(hos... 
FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs - AssertionError: unexpectedly None FAILED tests/test_generation_utils.py::GenerationIntegrationTests::test_beam_search_warning_if_max_length_is_passed - OSErro... FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelIntegrationTest::test_layer_attn_probs - OSError: Can't load w... FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelIntegrationTest::test_layer_global_attn - OSError: Can't load ... FAILED tests/test_tokenization_blenderbot.py::Blenderbot3BTokenizerTests::test_3B_tokenization_same_as_parlai - requests.exc... FAILED tests/test_tokenization_bart.py::TestTokenizationBart::test_tokenization_python_rust_equals - requests.exceptions.Pro... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_build_inputs_with_special_tokens - requests.except... FAILED tests/test_tokenization_bart.py::TestTokenizationBart::test_tokenizer_mismatch_warning - IndexError: list index out o... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_add_special_tokens - requests.exceptions.P... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_add_tokens - requests.exceptions.ProxyError: HT... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_prepare_for_model - requests.exceptions.Pr... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_alignement_methods - requests.exceptions.ProxyE... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_batch_encode_dynamic_overflowing - requests.exc... FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_empty_target_text - requests.exceptions.ProxyError: HTTPS... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_pretokenized_inputs - requests.exceptions.... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_create_token_type_ids - requests.exceptions.ProxyE... FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_eos_treatment - requests.exceptions.ProxyError: HTTPSConn... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_embeded_special_tokens - requests.exceptions.Proxy... 
FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_max_length_integration - requests.exceptions.ProxyError: ... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_fast_only_inputs - requests.exceptions.ProxyError:... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_is_fast - requests.exceptions.ProxyError: HTTPSCon... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_max_length_equal - requests.exceptions.ProxyError:... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_compare_prepare_for_model - requests.exceptions... FAILED tests/test_tokenization_canine.py::CanineTokenizationTest::test_encoding_keys - requests.exceptions.ProxyError: HTTPS... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_num_special_tokens_to_add_equal - requests.excepti... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_compare_pretokenized_inputs - requests.exceptio... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_padding - requests.exceptions.ProxyError: HTTPSCon... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_create_token_type_ids - requests.exceptions.Pro... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_batch_encode_dynamic_overflowing - requests... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_padding - requests.exceptions.ProxyError: HTTPSConnectionPool... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_alignement_methods - requests.exceptions.Pro... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_padding_different_model_input_name - requests.exceptions.Prox... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_batch_encode_dynamic_overflowing - requests.... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_prepare_batch - requests.exceptions.ProxyError: HTTPSConnecti... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_build_inputs_with_special_tokens - requests.... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_save_pretrained - requests.exceptions.ProxyError: HTTPSConnec... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_compare_add_special_tokens - requests.except... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_build_inputs_with_special_tokens - requests... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_special_tokens_initialization - requests.exceptions.ProxyErro... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_compare_add_special_tokens - requests.excep... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_special_tokens_map_equal - requests.exceptions.ProxyError: HT... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_compare_prepare_for_model - requests.excepti... FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics - AssertionError: 'init_mem_cpu_alloc_delta' not foun... 
ERROR tests/test_modeling_bart.py ERROR tests/test_modeling_encoder_decoder.py ERROR tests/test_modeling_flax_bart.py ERROR tests/test_modeling_flax_marian.py ERROR tests/test_modeling_flax_mbart.py ERROR tests/test_modeling_fsmt.py ERROR tests/test_modeling_rag.py ERROR tests/test_skip_decorators.py ERROR tests/deepspeed/test_deepspeed.py ERROR tests/deepspeed/test_model_zoo.py ERROR tests/extended/test_trainer_ext.py ERROR tests/sagemaker/test_multi_node_data_parallel.py ERROR tests/sagemaker/test_multi_node_model_parallel.py ERROR tests/sagemaker/test_single_node_gpu.py ===================== 60 failed, 7773 passed, 2260 skipped, 653 warnings, 14 errors in 7663.28s (2:07:43) ===================== ## Expected behavior Since I am not changing the code, just cloning repo etc, I expetected having all tests as PASSED. What am I doing wrong? Thank you!
08-23-2021 21:02:48
08-23-2021 21:02:48
Just to complement my issue above, I got some errors and warnings when calling _transformers-cli env_ I believe it is not related with the issue since it is cuda related messages but I am sharing it anyway. ``` $ transformers-cli env 2021-08-24 08:21:49.993817: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-24 08:21:49.993881: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. WARNING:tensorflow:From /home/yuzhou/miniconda3/envs/hf_py380/lib/python3.8/site-packages/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-08-24 08:21:52.426412: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-08-24 08:21:52.437177: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-08-24 08:21:52.437216: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2021-08-24 08:21:52.437251: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sr507): /proc/driver/nvidia/version does not exist Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.9.2 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```<|||||>Hello @merleyc! I can't see the full stack trace of your error logs, but it seems that you're having connection issues? It seems that nearly all errors are `requests.exceptions.ProxyError`<|||||>Thanks for observing that, @LysandreJik ! I was able to successfully use the commands below without setting proxies: ``` git remote add upstream https://github.com/huggingface/transformers.git git pull upstream master ``` But I set the http_proxy, https_proxy and ftp_proxy and I am running again the tests. They will take >2h to complete. At least until now I don't see proxy related errors but already see FAILING tests, like: ``` 24 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs 82 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs 83 tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain 84 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain ``` Any idea why are these tests failing? I will paste the entire log here once the tests are finished. Thanks!<|||||>Hi, I got 49 failing tests when running the test after setting the proxies. 
Below are the lines that contains FAILED on it. Please see the entire log file attached. ` 34:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs 82:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs 84:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain 148:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures 264:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs 266:[gw0] FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url 268:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript 272:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs 274:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager 282:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph 286:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs 290:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain 294:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs 298:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager 366:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs 454:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph 506:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs 528:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs 14436:[gw2] FAILED tests/test_tokenization_distilbert.py::BertTokenizationTest::test_padding 14438:[gw0] FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_tokenizer_mismatch_warning 14450:[gw1] FAILED tests/test_tokenization_dpr.py::BertTokenizationTest::test_padding 18032:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_initialization 18036:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_add_special_tokens 18038:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_map_equal 18054:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenization_python_rust_equals 18060:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenizer_mismatch_warning 18222:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_num_special_tokens_to_add_equal 18258:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_prepare_for_model 18262:[gw2] FAILED tests/test_tokenization_small_blenderbot.py::BlenderbotSmallTokenizerTest::test_empty_word_small_tok 18330:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_pretokenized_inputs 18518:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding 18538:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding_different_model_input_name 18566:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_create_token_type_ids 18568:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_alignement_methods 18586:[gw0] FAILED 
tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_embeded_special_tokens 18602:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_save_pretrained 18614:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_equivalence_to_orig_tokenizer 18616:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_special_tokens_initialization 18626:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_is_fast 18646:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_tokenization_python_rust_equals 18650:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_max_length_equal 18654:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_build_inputs_with_special_tokens 18668:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_add_special_tokens 18674:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_num_special_tokens_to_add_equal 18688:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_prepare_for_model 18692:[gw1] FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_add_tokens 18698:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_pretokenized_inputs 18704:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_padding 20268:[gw2] FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics 59432:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs 59433:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - As... 59434:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain 59435:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures 59436:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs 59437:FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url - requests... 59438:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - A... 59439:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - ... 59440:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager 59441:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph 59442:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs 59443:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain 59444:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - Assert... 59445:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager 59446:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - Asse... 59447:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph 59448:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs - A... 59449:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs 59450:FAILED tests/test_tokenization_distilbert.py::BertTokenizationTest::test_padding 59451:FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_tokenizer_mismatch_warning 59452:FAILED tests/test_tokenization_dpr.py::BertTokenizationTest::test_padding - r... 
59453:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_initialization 59454:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_add_special_tokens 59455:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_map_equal 59456:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenization_python_rust_equals 59457:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenizer_mismatch_warning 59458:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_num_special_tokens_to_add_equal 59459:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_prepare_for_model 59460:FAILED tests/test_tokenization_small_blenderbot.py::BlenderbotSmallTokenizerTest::test_empty_word_small_tok 59461:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_pretokenized_inputs 59462:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding 59463:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding_different_model_input_name 59464:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_create_token_type_ids 59465:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_alignement_methods 59466:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_embeded_special_tokens 59467:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_save_pretrained 59468:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_equivalence_to_orig_tokenizer 59469:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_special_tokens_initialization 59470:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_is_fast 59471:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_tokenization_python_rust_equals 59472:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_max_length_equal 59473:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_build_inputs_with_special_tokens 59474:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_add_special_tokens 59475:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_num_special_tokens_to_add_equal 59476:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_prepare_for_model 59477:FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_add_tokens - r... 59478:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_pretokenized_inputs 59479:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_padding 59480:FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics - Asse...` [results_cli3_v2.txt](https://github.com/huggingface/transformers/files/7042503/results_cli3_v2.txt) I'd appreciate any help! :) Thanks!<|||||>Any thought on this issue, @sgugger and @LysandreJik ? Thanks!<|||||>It's impossible to know what went wrong without having the whole output of the tests. The log file does not contain the logs, just which test passed and which did not.<|||||>Hi @sgugger , Apologies but how can I get the log file ? 
To get the results I sent to you in the file "results_cli3_v2.txt" I run this command: `python -m pytest -n 3 --dist=loadfile -s -v ./tests/ >> results_cli3_v2.txt`<|||||>Hi @sgugger , I have tried this command, which includes `--tb=long ` : `python -m pytest --tb=long -n 8 --dist=loadfile -s -v ./tests/ > ~/resultsCLI.txt` Is this the log file that you mentioned that should contain the _whole output of the tests_? If not, please advise. After runnning the mentioned command, I got 3 failing tests and the error ``` INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/_pytest/main.py", line 269, in wrap_session INTERNALERROR> session.exitstatus = doit(config, session) or 0 INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/_pytest/main.py", line 323, in _main INTERNALERROR> config.hook.pytest_runtestloop(session=session) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall INTERNALERROR> return outcome.get_result() INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result INTERNALERROR> raise ex[1].with_traceback(ex[2]) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py", line 112, in pytest_runtestloop INTERNALERROR> self.loop_once() INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py", line 135, in loop_once INTERNALERROR> call(**kwargs) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py", line 256, in worker_collectionfinish INTERNALERROR> self.sched.schedule() INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py", line 341, in schedule INTERNALERROR> self._reschedule(node) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py", line 323, in _reschedule INTERNALERROR> self._assign_work_unit(node) INTERNALERROR> File "/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py", line 261, in _assign_work_unit INTERNALERROR> worker_collection = self.registered_collections[node] INTERNALERROR> KeyError: <WorkerController gw10> ``` as showed in this output file: [resultsCLI-pytestv625.txt](https://github.com/huggingface/transformers/files/7140353/resultsCLI-pytestv625.txt) I downgraded the pytest version from 6.2.5 to 6.2.2 as stated [here](https://stackoverflow.com/questions/66803324/how-can-i-resolve-an-error-running-pytest-in-parallel-via-xdist-in-bitbucket-pip), but didn't help it. 
The output file with pytest v6.2.2 is: [resultsCLI-pytestv622.txt](https://github.com/huggingface/transformers/files/7140352/resultsCLI-pytestv622.txt) Please advise. Thanks!<|||||>We need the stack trace and the error message of the failing test to understand what is going on, this is not it.<|||||>@sgugger Could you please tell me what is the command to get what you are looking for? I didn’t find it in the documentation . Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
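What the maintainers are asking for is the full traceback of an individual failure rather than the summary lines. One way to capture that (a sketch, not a command taken from the thread) is to rerun a single failing test without the xdist workers and with long tracebacks; the test id below is copied from the log above:

```python
# Rerun one failing test with full tracebacks so the complete error can be pasted into the issue.
import pytest

failing_test = "tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs"
exit_code = pytest.main(["-p", "no:xdist", "--tb=long", "-rA", failing_test])
print("pytest exit code:", exit_code)
```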
transformers
13,226
closed
Bump notebook from 6.1.5 to 6.4.1 in /examples/research_projects/lxmert
Bumps [notebook](http://jupyter.org) from 6.1.5 to 6.4.1. [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.1.5&new-version=6.4.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
08-23-2021 20:55:38
08-23-2021 20:55:38
transformers
13,225
closed
Allow local_files_only for fast pretrained tokenizers
# What does this PR do? There seems to have been a legacy issue where `local_files_only` did not work for fast tokenizers. I understand that priority focus is given to the environment variable `TRANSFORMERS_OFFLINE` (which did work) but I'd argue that it is best to have such file-related arguments work in the same manner across models, tokenizers, configs. This change is quite small. The argument `local_files_only` already existed in `PretrainedTokenizerBase.from_pretrained` (but was not present in the docstring, I added it now) - but it was never passed to `get_fast_tokenizer_file`. This latter function ultimately only skipped online look-up if `is_offline_mode()`. But as discussed above it might be better to include a local argument to control this behaviour in addition to an absolute (environmental) one. This PR makes sure that `local_files_only` has the same effect when loading a slow or fast tokenizer. Fixes #12571 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). ## Who can review? @n1t0, @LysandreJik
08-23-2021 17:20:37
08-23-2021 17:20:37
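The behaviour this PR aligns across slow and fast tokenizers can be sketched as follows (assuming the checkpoint is already present in the local cache; the model name is only an example):

```python
from transformers import AutoTokenizer

# No network lookup is attempted: only the locally cached files are used.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)

# The environment variable TRANSFORMERS_OFFLINE=1 remains the global switch with the same effect.
```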
transformers
13,224
closed
Add RemBert to AutoTokenizer
The RemBert tokenizer was not added to the `AutoTokenizer` factory. This fixes it.
08-23-2021 17:15:22
08-23-2021 17:15:22
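The one-liner this fix enables looks like the following (a sketch; `google/rembert` is the checkpoint referenced in the next issue):

```python
from transformers import AutoTokenizer

# Previously this raised because RemBERT was missing from the AutoTokenizer mapping.
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
```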
transformers
13,223
closed
Unable to load 'rembert' checkpoint
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.0-54-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: Contributor Author @Iwontbecreative - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information from transformers import RemBertTokenizer,RemBertTokenizerFast,RemBertForQuestionAnswering tokenizer = RemBertTokenizer.from_pretrained('rembert') ## Output HTTPError Traceback (most recent call last) <ipython-input-23-22b8f1f94b36> in <module> 1 from transformers import RemBertTokenizer,RemBertTokenizerFast,RemBertForQuestionAnswering ----> 2 tokenizer = RemBertTokenizer.from_pretrained('rembert') ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name 1647 fast_tokenizer_file = get_fast_tokenizer_file( -> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token 1649 ) 1650 additional_files_names = { ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token) 3409 """ 3410 # Inspect all files from the repo/folder. 
-> 3411 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token) 3412 tokenizer_files_map = {} 3413 for file_name in all_files: ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token) 1693 token = None 1694 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info( -> 1695 path_or_repo, revision=revision, token=token 1696 ) 1697 return [f.rfilename for f in model_info.siblings] ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token) 246 ) 247 r = requests.get(path, headers=headers) --> 248 r.raise_for_status() 249 d = r.json() 250 return ModelInfo(**d) ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/requests/models.py in raise_for_status(self) 939 940 if http_error_msg: --> 941 raise HTTPError(http_error_msg, response=self) 942 943 def close(self): HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/rembert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-23-2021 15:52:07
08-23-2021 15:52:07
The checkpoint is `google/rembert`: https://huggingface.co/google/rembert<|||||>thanks :)
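A working version of the snippet from the issue, using the namespaced checkpoint pointed out in the comment (a sketch, not taken verbatim from the thread):

```python
from transformers import RemBertTokenizer, RemBertForQuestionAnswering

# The model lives under the "google" organization on the Hub, not under the bare name "rembert".
tokenizer = RemBertTokenizer.from_pretrained("google/rembert")
model = RemBertForQuestionAnswering.from_pretrained("google/rembert")
```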
transformers
13,222
closed
Add TFEncoderDecoderModel + Add cross-attention to some TF models
# What does this PR do? - Add TFEncoderDecoderModel + Add cross-attention to some TF models - Add cross attention & cache mechanism (`use_cache` & `past_key_values`) to some TF models - Add `test_modeling_tf_encoder_decoder.py` ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @LysandreJik Closes https://github.com/huggingface/transformers/issues/9863
08-23-2021 12:40:17
08-23-2021 12:40:17
Hey @ydshieh, It's awesome that you already give `TFEncoderDecoder` a stab! Note that they were a lot of difficulties when adding TFRag with saving/loading and parameter scopes - see: https://github.com/huggingface/transformers/pull/9002 so it's maybe a good idea to not include to many models in the first PR and try to keep it as simple as possible :-) The most important part here is to make sure that saving & loading works correctly depending on how the TFEncoderDecoder was constructed. *E.g.* we should have all those tests we have for TFRag also for TFEncoderDecoder: https://github.com/huggingface/transformers/blob/cf5744764821c3254773a62e4cc160dd6f09df8e/tests/test_modeling_tf_rag.py#L945 . It's very much not easy to make sure saving and loading works correctly for all models in TF so it would be important to focus on that part first I think before adding cross-attention to many other models :-) Happy to help you here whenever you're stuck, but we should be careful to keep it simple in the beginning :-)<|||||>@patrickvonplaten Yes, I did have some troubles with saving/loading and parameter scopes. That took me quite some time, but currently I am able to solve the issues I had, but I will check the PR you mentioned, and will also try to add the equivalent tests contained in TFRag.<|||||>Hi @patrickvonplaten , a silly question, but it would be great if you can explain to me what `Model templates runner / run_tests_templates (pull_request)` does, and why it failed here (if possible). I am out of idea about the reason<|||||>> Hi @patrickvonplaten , a silly question, but it would be great if you can explain to me what `Model templates runner / run_tests_templates (pull_request)` does, and why it failed here (if possible). I am out of idea about the reason It's a test that makes sure that the cookie cutter keeps working correctly: https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model It your case it fails because you've adapted some fundamental TF models which are used in the cookie cutter as a template. In order to make the test pass you should adapt the cookie cutter template analogous so that the changes done to TFBart are also added here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py and here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/test_modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py But for the beginning I wouldn't pay too much attention to this test (it's not super important). Once your PR is ready, it's a good idea to fix the cookiecutter test in a final commit. If it doesn't work, I can help you with it :-)<|||||>@patrickvonplaten I have added https://github.com/huggingface/transformers/blob/f73cab3be8f0dc2cd816ce8f5c9a50e113f8eacb/tests/test_modeling_tf_encoder_decoder.py#L742 similiar to `TFRagModelSaveLoadTests` for `TFRag`. (There is no pretrained TFEncoderDecoder model on model hub yet, so I made some adjustment for the test) The PR is ready for review :-) <|||||>@patrickvonplaten , I am trying to add `TFBartForCausalLM` similar to `BartForCausalLM`. 
Howerver, there is one last issue: In TF / PyTroch CausalLM models, there are shift inside their call method, like: In TensorFlow ``` if inputs["labels"] is not None: # shift labels to the left and cut last logit token logits = logits[:, :-1] labels = inputs["labels"][:, 1:] loss = self.compute_loss(labels=labels, logits=logits) ``` or in PyTorch ``` if labels is not None: # we are doing next-token prediction; shift prediction scores and input ids by one shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() labels = labels[:, 1:].contiguous() loss_fct = CrossEntropyLoss() lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) ``` You can find some of them in https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/gpt2/modeling_tf_gpt2.py#L745 https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/gpt2/modeling_gpt2.py#L973 https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_bert.py#L1233 https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_tf_bert.py#L1244 Howerver, for `BartForCausalLM` (and the new added TF version), this shift is not done inside the call https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bart/modeling_bart.py#L1780 I think for Bart, it expected the `(decoder's) input_ids` and `labels` being preprocessed outside the `call`. However, this difference will cause a problem in TF test, because for TF causal LM models (Bert/GPT2/...), it returns the truncated `logits` https://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_tf_bert.py#L1246 BTW, In PyTorch causal LM models, they return the complete logits https://github.com/huggingface/transformers/blob/662b143b71eb5ef775e27a8f79798bb28b3283bd/src/transformers/models/bert/modeling_bert.py#L1235 The test for `TFEncoderDecoderModel.check_encoder_decoder_model_labels` therefore expects the logits has `seq_len - 1`. https://github.com/huggingface/transformers/blob/f73cab3be8f0dc2cd816ce8f5c9a50e113f8eacb/tests/test_modeling_tf_encoder_decoder.py#L279 This works all fine until I introduce `TFBartForCausalLM`, as currently it will retrun logits of `seq_len`. Do you have some opinions on how should I deal with this situation?<|||||>Awesome work so far @ydshieh! Mostly left nits, but the following things should be checked before merging: 1. - `EncoderDecoderModel` and `TFEncoderDecoder` model should be exactly the same. We should write a test for this similar to https://github.com/huggingface/transformers/blob/ba1b3db70907b975b5ca52b9957c5ed7a186a0fa/tests/test_modeling_tf_common.py#L431 . In this test IMO we can use two small BERT models. We also should have added a test for Flax, but I've forgotten to mention it. For Flax we can do this in another PR, for TF we should do it in this PR ideally :-) 2. - `TFEncoderDecoder` has to load and save weights correctly in multiple scenarios. Essentially we need all those tests that we have for TFRag passing for TFEncoderDecoder as well: https://github.com/huggingface/transformers/blob/ba1b3db70907b975b5ca52b9957c5ed7a186a0fa/tests/test_modeling_tf_rag.py#L945 . 
Here I can help you, so maybe you can add some tests and let them fail for the moment and then I can go in and fix them :-) 3. - We also have to adapt the TF templates here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py since we are doing core changes to TFBert. Feel free to give it a try - otherwise I'm happy to take over this part as well if it becomes time-consuming #13288 4. - Finally we need to run all BERT & RoBERTa slow tests to make sure nothing is broken. I can do this before merging => If ok maybe you can look at the above suggestions and write some tests for 1), 2) and then I can help you make the tests for 2 pass? :-) Really great work so far - this is one of the most complex architectures in the repo!<|||||>@patrickvonplaten Thanks for the feedbacks. I will make the changes. I will try to write `test_pt_tf_model_equivalence(self): `. About `class TFRagModelSaveLoadTests(unittest.TestCase): `, the last time I checked, it always passed, but I will verify again (since I reverted some changes done in the core tf weights loading/saving). (All the slow tests have passed when I run them locally, but again, I will verify)<|||||>Also @ydshieh rebasing onto master would likely help resolve some of the currently failing tests, they don't seem related to this PR at all.<|||||>> > > Also @ydshieh rebasing onto master would likely help resolve some of the currently failing tests, they don't seem related to this PR at all. Yes. There is `run_tests_tf` failed which is related to this PR. Once this is resolved, and having a note on the big hack for PT <-> TF, I think this PR will be ready :) - Let's see what Patrick say.<|||||>I agree with both of you! Once we fix the https://app.circleci.com/pipelines/github/huggingface/transformers/28285/workflows/d3d182a4-44b3-4bc8-b61e-dafcb341c2eb/jobs/278321?invite=true#step-108-4434 test (which is caused by this PR), we should add a note to `TFEncoderDecoder.from_pretrained(...)` and then we can merge this PR :tada: - very good work @ydshieh :-)<|||||>> > > I agree with both of you! Once we fix the https://app.circleci.com/pipelines/github/huggingface/transformers/28285/workflows/d3d182a4-44b3-4bc8-b61e-dafcb341c2eb/jobs/278321?invite=true#step-108-4434 test (which is caused by this PR), we should add a note to `TFEncoderDecoder.from_pretrained(...)` and then we can merge this PR 🎉 - very good work @ydshieh :-) I already know a way to fix the issue, but want to know which way might be better in your opinion. (The question I posted on Slack). Let me know what you think once you check it :) - In short, the question is about if setting `GPT2Config.is_decoder=True` makes sense.<|||||>@ydshieh - the PR looks to be in a very good state to me! In a final step, could you maybe adapt the test: `tests/test_modeling_tf_encoder_decoder.py::TFEncoderDecoderModelSaveLoadTests::test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to showcase how the load a checkpoint from pytorch using the encoder and decoder seperately? After that I think we are good to merge :-)<|||||>> > > @ydshieh - the PR looks to be in a very good state to me! 
In a final step, could you maybe adapt the test: `tests/test_modeling_tf_encoder_decoder.py::TFEncoderDecoderModelSaveLoadTests::test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to showcase how the load a checkpoint from pytorch using the encoder and decoder seperately? > > After that I think we are good to merge :-) Hey, @patrickvonplaten Sure. Let me make sure: you are saying to change the hack in `test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to use encoder and decoder separately (and load their pytorch weights ), right? <|||||>I made the change to ``` test_encoder_decoder_save_load_from_encoder_decoder_from_pt ``` Here is the change I made https://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/tests/test_modeling_tf_encoder_decoder.py#L684 ``` # PyTorch => TensorFlow with tempfile.TemporaryDirectory() as tmp_dirname_1, tempfile.TemporaryDirectory() as tmp_dirname_2: encoder_decoder_pt.encoder.save_pretrained(tmp_dirname_1) encoder_decoder_pt.decoder.save_pretrained(tmp_dirname_2) encoder_decoder_tf = TFEncoderDecoderModel.from_encoder_decoder_pretrained( tmp_dirname_1, tmp_dirname_2, encoder_from_pt=True, decoder_from_pt=True ) ``` We also have a note in the doc of `TFEncoderDecoderModel.from_pretrained` (which also explains how to deal with a pytorch checkpoint) https://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py#L243 @patrickvonplaten , @Rocketknight1 Thank you for your review! I am glad we have a TensorFlow Encoder Decoder now :)<|||||>Hi @patrickvonplaten & @Rocketknight1, I made the change to ``` test_encoder_decoder_save_load_from_encoder_decoder_from_pt ``` Here is the change I made https://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/tests/test_modeling_tf_encoder_decoder.py#L684 ``` # PyTorch => TensorFlow with tempfile.TemporaryDirectory() as tmp_dirname_1, tempfile.TemporaryDirectory() as tmp_dirname_2: encoder_decoder_pt.encoder.save_pretrained(tmp_dirname_1) encoder_decoder_pt.decoder.save_pretrained(tmp_dirname_2) encoder_decoder_tf = TFEncoderDecoderModel.from_encoder_decoder_pretrained( tmp_dirname_1, tmp_dirname_2, encoder_from_pt=True, decoder_from_pt=True ) ``` We also have a note in the doc of `TFEncoderDecoderModel.from_pretrained` (which also explains how to deal with a pytorch checkpoint) https://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py#L243 Do you have any further comments? Thank you for your review! Looking forward for the merge and having a TensorFlow Encoder Decoder in HF :)<|||||>@ydshieh At this point I'm pretty happy with it! @patrickvonplaten do you have any objections, or should we merge?<|||||>@sgugger, About `assert self.is_decoder, f"{self} should be used as a decoder model if cross attention is added"`, I copied it from Pytorch models. Do you think it is a good idea for me to change all of them (PT/TF files) to `if not self.is_decoder: raise ValueError(xxx)` in this PR, or just the TF files involved currently (and a new PR for all other occurrences)?<|||||>> About assert self.is_decoder, f"{self} should be used as a decoder model if cross attention is added", I copied it from Pytorch models. Yes we have some old ones in the codebase, we are just not accepting new ones, so please adapt your PR. 
We can adapt the PyTorch files in a separate PR.<|||||>@patrickvonplaten , I tried to run slow tests for the changed models, and found some issues. (Previously, I only run the tests for `TFEncoderDecoderModel`). I will let you know when I finish fixing them :)<|||||>> Are there any caveats to be known for this implementation vs the PyTorch implementation which should be put in the docs, or should they behave identically? > There is also one thing I pointed much earlier: For a given `TFEncoderDecoderModel`, if we do ``` model.encoder.save_pretrained(encoder_path) model.decoder.save_pretrained(decoder_path) ``` Then ``` new_model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( encoder_path, decoder_path ) ``` won't load the TF checkpoint weights correctly. This is somehow strange (logically), but the chance of doing so is very low -> If we already have a `TFEncoderDecoderModel`, it's more likely `save_pretrained` will be used rather than saving the 2 components separately. I can add this to the doc if necessary. (I will verify again to make sure)<|||||>@LysandreJik I added the PT->TF information to `encoderdecoder.rst`, along with the model contributors (I hope this is fine). https://github.com/huggingface/transformers/blob/f021eec0c7e97334a6c2fc3b9b1a1b43ec06fce3/docs/source/model_doc/encoderdecoder.rst#L30 All the suggestions have been addressed. @sgugger , I left the following unchanged in this PR (we can clean things up in another PR) ``` # T5 has a mask that can compare sequence ids, ``` @patrickvonplaten I ran the slow tests locally with all the models changed in this PR, except `TFRemBert` (my poor laptop just can't ran it). It's ready for you to do a final verification, thank you! (The `get_tf_activation("gelu")` issue is fixed )<|||||>This looks good to me, thank you @ydshieh!<|||||>Awesome - looked through the PR again and it looks good to me! Thanks a lot for all your amazing work on this :-)<|||||>@patrickvonplaten , Thank you! Do you want to upload a converted TF checkpoint to `"patrickvonplaten/bert2bert-cnn_dailymail-fp16"` (so we can change the examples, and adding 1 or 2 more tests). Otherwise, would it be a good idea for me to upload to `"ydshieh/bert2bert-cnn_dailymail-fp16"`? I assume that the checkpoints used officially for the tests/examples should be under the name of Hugging Face or its staffs. Kindly tag @LysandreJik for this.
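A minimal usage sketch of the new class, mirroring the PyTorch `EncoderDecoderModel` API discussed above (the BERT checkpoint names are assumptions, chosen because the thread tests with two small BERT models):

```python
from transformers import BertTokenizer, TFEncoderDecoderModel

# Build a TF encoder-decoder from two independently pretrained checkpoints.
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("The quick brown fox.", return_tensors="tf")
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
print(outputs.logits.shape)  # (batch_size, target_sequence_length, vocab_size)
```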
transformers
13,221
closed
Typo in M2M100 1.2B model card page, strange translation results and new M2M100 615M model
@patil-suraj thank you so much for your great work, seems like there's a typo in the [M2M100 1.2B page:](https://huggingface.co/facebook/m2m100_1.2B) >model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") >tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") It should be "m2m100_1.2B" instead of "m2m100_418M". Model m2m100_1.2B sometimes gives a strange translation results on news titles - incorrectly translates the names of countries and cities in sentences, but model m2m100_418M translates correctly (i'm saw this in many languages pairs) - it is normal, or maybe there error in uploaded "facebook/m2m100_1.2B" tokenizer/model or function code M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")? For example: > from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") sentence = "في ميسان" tokenizer.src_lang = "ar" encoded_zh = tokenizer(sentence, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B") tokenizer.src_lang = "ar" encoded_zh = tokenizer(sentence, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) gives ['in Messan.'] and ['in Messengers'] Try also sentence = "متهمين في ميسان" - gives ['Accused in Messiah.'] ['Prosecutors in Missouri'] - why [ميسان](https://en.wikipedia.org/wiki/Maysan_Governorate) in news titles translates by m2m100_1.2B as Messengers, Missouri, Mexico, Munich? It is possible to add [new M2M100 615M model?](https://github.com/huggingface/transformers/issues/12775#issuecomment-889437365)
08-23-2021 12:10:17
08-23-2021 12:10:17
Hi @Fikavec Thank you for reporting this, the typo is fixed now!

> Is it possible to add the new M2M100 615M model?

Yes, I will take a look.

> The m2m100_1.2B model sometimes gives strange translation results on news titles - it incorrectly translates the names of countries and cities in sentences, while the m2m100_418M model translates them correctly (I saw this in many language pairs). Is this normal, or could there be an error in the uploaded "facebook/m2m100_1.2B" tokenizer/model or in M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")?

I don't think this is an error but I will take a look. I have observed this behavior with multilingual models; the translations can sometimes be wrong, especially for low-resource languages.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,220
closed
[Tentative] Moving slow tokenizer to the Trie world.
# What does this PR do?
This PR attempts to solve the slow tokenizer `added_tokens` source of slowness. Currently the splitting is done in an O(n) manner, with a very non-obvious algorithm to "pre-tokenize" (the `tokenize` function). This will yield extremely slow tokenization even by slow tokenization standards. It also affects slow-only tokenizers like ByT5.

The proposed fix simply moves the splitting into an O(1) algorithm (relative to `added_tokens`). It does that by manually implementing a real Trie (more information on why Python regexps can't be trusted for this: https://stackoverflow.com/questions/42742810/speed-up-millions-of-regex-replacements-in-python-3).

There is at least one known breaking change here: users could rely on token ORDER to force splitting on some `added_tokens` before others (https://github.com/huggingface/tokenizers/issues/615). This won't be the case anymore with this code, as the splitting will happen on the ~~first~~ longest encounter of `added_tokens` regardless. This is a pretty standard practice. ~~We could instead split on longest match first, but it's also a breaking change (although most likely less breaking). It does mean adding backtracking, so the algorithm will be more complex and require more state management~~ Edit: Implemented

Benchmarking code:

```python
import datetime

from transformers import GPT2Tokenizer

# They used to have to be sorted in reverse by length, otherwise the tokens aren't
newtokens = range(0, 20000)
newtokens = list(newtokens)
newtokens.sort(reverse=True)
newtokens = [f"new_{x}" for x in newtokens]

slow = GPT2Tokenizer.from_pretrained("gpt2")

# Add new vocab
slow_custom = GPT2Tokenizer.from_pretrained("gpt2")
slow_custom.add_tokens(newtokens)

# Differences when tokenising the text...
text = "this is a sentence containing new_200"

for tokenizer in [slow, slow_custom]:
    start = datetime.datetime.now()
    print(tokenizer.tokenize(text))
    print(datetime.datetime.now() - start)
```

This goes from 4~7s on the `slow_custom` tokenizer to 1ms (and ~0.3ms without `added_tokens`).

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable -->

Fixes https://github.com/huggingface/tokenizers/issues/615 (unrelated, because users seem to still be using slow tokenizers there). @LysandreJik @patrickvonplaten @n1t0

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ```bash RUN_SLOW=1 pytest -sv tests/test_tokenization_* Results (1250.63s): 3957 passed 353 skipped ```
08-23-2021 09:52:52
08-23-2021 09:52:52
@patrickvonplaten It speeds things up all the time, but it's only that impressive when adding tons of `added_tokens`. The reason is the change in complexity. Even for a simple `ByT5`, which uses 125 extra `added_tokens` by default, that's already a 100x speedup (so not really that exotic). For "regular" slow tokenizers, with <5 added_tokens the speedup exists but is rather negligible. Also I've seen 2 or 3 different issues regarding this. The one reported (https://github.com/huggingface/tokenizers/issues/615) has 8 participants since February, so while not super urgent, there definitely seems to be more than a couple of people doing that. And always fine adding more documentation, but fwiw it's a pretty standard data structure. <|||||>Hi @LysandreJik Do we have benchmark tests anywhere? This doesn't fix anything that was broken before, it just makes things faster (some usage of the lib was so slow that it was unusable, but it was definitely working). I could add a test that checks some tokenization doesn't take too long, but it's always a tricky business to add tests related to timings because they might depend on the hardware running the tests, so it would definitely NOT be a unit test.<|||||>@LysandreJik Ok, I added 1 `common` test which fails only on Canine (`extra_id_1` is not valid over there). Also added Trie-specific tests (the one in the doc basically)<|||||>Thanks @SaulLu , the matching was incorrect in the edge case where some token is strictly included in another: we would match the inner token instead of the first match. The lookahead part got more complex but will now work in that edge case (which is important to at least follow the documentation). Regarding the idea of `added_tokens` following order, recapping some offline conversation: - It's doable, but would make the code even more complex. We would need to keep track of ranks in the Trie and, whenever we have a full match, resolve all partial matches, sort by order and take the highest rank. - Seems overly complex for what seems to be pathological cases at best, so out of scope of this one. <|||||>Will merge this later today unless there are still some comments (but I feel it's OK in its current state)
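To make the idea concrete, here is a minimal, self-contained sketch of the trie-based longest-match splitting this PR describes. It is an illustration of the data structure only, not the exact code merged into `tokenization_utils.py`:

```python
class Trie:
    """Minimal character trie: add words, then split a text around the longest matches."""

    def __init__(self):
        self.data = {}

    def add(self, word: str):
        node = self.data
        for ch in word:
            node = node.setdefault(ch, {})
        node[""] = True  # marks the end of a complete word

    def split(self, text: str):
        offsets = [0]
        i = 0
        while i < len(text):
            node = self.data
            j = i
            longest_end = None
            # Walk the trie as far as the text allows, remembering the last full match
            # so that the longest added token wins (not merely the first one found).
            while j < len(text) and text[j] in node:
                node = node[text[j]]
                j += 1
                if "" in node:
                    longest_end = j
            if longest_end is not None:
                offsets.extend([i, longest_end])
                i = longest_end
            else:
                i += 1
        offsets.append(len(text))
        return [text[s:e] for s, e in zip(offsets, offsets[1:]) if s != e]


trie = Trie()
trie.add("new_200")
print(trie.split("this is a sentence containing new_200"))
# ['this is a sentence containing ', 'new_200']
```

Because lookup only follows the characters of the text through the trie, the cost no longer grows with the number of added tokens, which is where the reported speedup comes from.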
transformers
13,219
open
"Resource exhausted" when loading Flax GPT-Neo 2.7B
## Environment info
- `transformers` version: 4.10.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.19
- JaxLib version: 0.1.70
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help
@patrickvonplaten @patil-suraj @LysandreJik

## Information
I am not able to load the Flax GPT-Neo 2.7B model in my TPU VM v3-8 instance.

```python
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B", pad_token="</s>", padding_side="left")
model = FlaxAutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B", pad_token_id=tokenizer.eos_token_id)
```

The model downloads but fails to load with

```
RuntimeError: Resource exhausted: Failed to allocate request for 100.00MiB (104857600B) on device ordinal 0
```

However, the PyTorch version loads and runs just fine.
08-22-2021 15:12:49
08-22-2021 15:12:49
Hi, any updates on this?<|||||>Thanks for reporting this, I'm looking into it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Running into this same issue when trying to load t5-3b as a Flax model from the PyTorch version on a TPU v3-8.<|||||>Working on a feature that should fix this issue. This is probably because the model is initialized randomly with its weights placed on the device, and then the pre-trained weights are also loaded directly onto the device. So I am working on a feature that allows initializing the model only abstractly, to consume less memory. It should be available in a couple of weeks :) <|||||>Any update on this, @patil-suraj? We are experiencing this when trying to load RoBERTa using Flax with a TPU v3-8.
transformers
13,218
closed
How to run GLUE tasks on my model?
I trained a BERT model on my dataset. Now, I want to run it on GLUE tasks, just to get the eval scores (no fine-tuning on GLUE). Is this possible? I found this proposed example: https://pypi.org/project/pytorch-transformers/#quick-tour-of-the-fine-tuningusage-scripts but it doesn't explain where I can find the `run_glue.py` script. I found this link: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py but it is broken.
08-22-2021 12:59:21
08-22-2021 12:59:21
You can run the `run_glue.py` script, only specifying `--do_eval` (and not `--do_train`). It's located here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,217
closed
Update clip loss calculation
Hello, I'm the author of the blog you took the snippet from. I think this way of calculating the loss is possibly slightly more accurate.
08-21-2021 23:01:53
08-21-2021 23:01:53
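For context, the calculation in question is CLIP's symmetric contrastive loss. The sketch below shows the general shape of that computation; it is written here for illustration and is not copied from the merged diff:

```python
import torch
from torch import nn

def contrastive_loss(logits: torch.Tensor) -> torch.Tensor:
    # Cross-entropy against the diagonal: the i-th text matches the i-th image.
    return nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device))

def clip_loss(similarity: torch.Tensor) -> torch.Tensor:
    # similarity is the (batch, batch) text-image logit matrix
    caption_loss = contrastive_loss(similarity)
    image_loss = contrastive_loss(similarity.t())
    return (caption_loss + image_loss) / 2.0
```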
transformers
13,216
closed
Use DS callable API to allow hf_scheduler + ds_optimizer
This PR:
- uses the (new) callable API of `deepspeed.initialize()` to enable combining HF schedulers with DeepSpeed optimizers.
- `create_scheduler` now has an optional `optimizer` arg.
- updates the relevant unit test.

Blocking events: all unblocked now.
- [x] depends on DeepSpeed PR [1316](https://github.com/microsoft/DeepSpeed/pull/1316).
- [x] needs a new DeepSpeed version after that PR is merged, and the dependencies need to be updated when that happens.

deepspeed: @stas00.
08-21-2021 19:19:33
08-21-2021 19:19:33
* [x] https://github.com/microsoft/DeepSpeed/pull/1316 is merged * [x] v0.5.1 released to PyPI: https://pypi.org/project/deepspeed/0.5.1/
transformers
13,215
closed
Input to a Tensorflow model where a dictionary cannot be used
Made a TensorFlow functional API model on top of TFAutoModelForSequenceClassification with 3 sentences as input. Training the model directly on the tokenized input raises **ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {'(<class \'list\'> containing values of types {"<class \'tensorflow.python.framework.ops.EagerTensor\'>"})'}), (<class 'list'> containing values of types {"<class 'int'>"})** If I convert it into a numpy array, it raises **ValueError: Data cardinality is ambiguous:** `model(X_train[0])` produces the desired result in both cases, but training the model raises these errors. Code can be found in this [notebook](https://colab.research.google.com/drive/1wsVVHiaqBF8joIEsP_XSMF35fnDQS19D?usp=sharing)
08-21-2021 19:15:41
08-21-2021 19:15:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest to @Rocketknight1 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
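A pattern that usually avoids this class of Keras data-adapter error is to feed `fit` a dict of numpy arrays rather than a Python list of tensors. The sketch below assumes the functional model takes the tokenizer's output keys as named inputs (the notebook above was not re-run here):

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sentences = ["first example sentence", "second example sentence"]
labels = np.array([0, 1])

# return_tensors="np" gives numpy arrays; dict() turns the BatchEncoding into a plain dict,
# which Keras's data adapters understand.
features = dict(tokenizer(sentences, padding=True, truncation=True, return_tensors="np"))

# model.fit(features, labels, epochs=1)  # `model` being the functional model from the notebook
```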
transformers
13,214
closed
✨ add citation file
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> I have added a new file to make it easier to quote the software. Once again, there is more information in [this documentation](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files#citing-something-other-than-software). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
08-21-2021 18:03:31
08-21-2021 18:03:31
transformers
13,213
closed
Questions on generating using encoder-decoder models
Hi, I want to conduct a grammatical error correction task with BART, which takes corrupted sentences as inputs and produces corrected sentences as outputs. The model I'm using is `BartForConditionalGeneration`. I want to ask several things.

1. What is the difference between `decoder_input_ids` and `labels`? [The doc](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) says that when handling seq2seq problems such as translation or summarization, `decoder_input_ids` should be given, otherwise the model just puts the shifted encoder input into the decoder, which is not the desired process. However, there is another argument, `labels`, and I think I should give the answer sequence as `labels` to get the loss. And according to [here](https://huggingface.co/transformers/glossary.html#decoder-input-ids), I assume that BART takes the answer outputs as `labels`. Then what is `decoder_input_ids`? Is it not necessary when using the `model.forward` function to train the model?
2. Should I pad the decoder inputs with `-100`? According to the doc, to make the loss function ignore the unwanted locations, they should be set to `-100`. But I want to make it ignore the pad token. Should I just set the pad token to `-100`, or is there any way to make the loss function ignore the value I set?
3. Unlike training, inference does not require the answers. However, like I mentioned above, if the model is not given `decoder_input_ids` or `labels`, then the model puts the shifted inputs into the decoder. But this is not what we want. The decoder should start only with the start token at first. Then is it right to use `model.generate` rather than the `model.forward` function, without any decoder inputs given? I think I should use `model.generate` when inferencing, but I want to make sure that `model.generate(input_ids=input_ids)` works as I described, with only the start token given at the beginning. In fact, like the image below, it seems the input ids might just be copied, judging by the values. So I'm worried that the decoder just took the input ids. ![image](https://user-images.githubusercontent.com/16731987/130325911-4c911ec7-6f5f-49e6-9c3c-802509163c56.png)
4. According to [this](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), BART was pretrained to use the EOS token as the start token of the decoder. I don't know why it should be, but anyway, as the above image shows, all outputs start with both the EOS and BOS tokens. Then may I assume that the model puts both the EOS and BOS tokens as the starting signal?
5. The last question is about beam search. I want to get the last hidden state from the decoder to conduct multi-task learning combining LM and sentence classification. But when using beam search, the shape of one tensor from `decoder_hidden_states` becomes `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`. Then how can we know which one is from the best result?

Thank you for reading these long questions.
08-21-2021 15:14:39
08-21-2021 15:14:39
Hi, encoder-decoder models like T5 and BART create the `decoder_input_ids` automatically based on the `labels` you provide. So you should only provide the encoder inputs (`input_ids`, `attention_mask`, possibly `token_type_ids`) and the decoder targets (`labels`). As you can see [here](https://github.com/huggingface/transformers/blob/f689743e7454b93f6cab4343026de03fa530bfb9/src/transformers/models/bart/modeling_bart.py#L1287), `BartForConditionalGeneration` will automatically create the `decoder_input_ids` by shifting the `labels` one position to the right.

Let's consider what happens with a small example. Suppose we want to train BART for translation, and we have:

* input sequence: "HuggingFace is a company based in New York and Paris."
* target sequence: "HuggingFace est une société basée à New York et à Paris."

=> to prepare this example for `BartForConditionalGeneration`, we can use `BartTokenizer`. We can prepare the input for BART by encoding the input sequence, like so:

```
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

input_sequence = "HuggingFace is a company based in New York and Paris."
encoding = tokenizer(input_sequence, return_tensors="pt")
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask
```

To create the labels, we can also use `BartTokenizer`. The labels are just the `input_ids` from the encoding of the target sequence:

```
target_sequence = "HuggingFace est une société basée à New York et à Paris."
target_encoding = tokenizer(target_sequence, return_tensors="pt")
labels = target_encoding.input_ids
```

Now we have everything we need to do a forward pass and obtain a loss, like so:

```
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs.loss
print(loss.item())
```

We can also check how these labels look in text, by decoding them:

```
for id in labels.squeeze().tolist():
    print(id, tokenizer.decode([id]))

# this prints:
0 <s>
40710 Hug
3923 ging
34892 Face
3304 est
12515 une
17380 soc
118 i
10221 ét
1140 é
11909 bas
9703 ée
6534 à
188 New
469 York
4400 et
6534 à
2201 Paris
4 .
2 </s>
```

What happens internally is that first the encoded input sequence (i.e. the `input_ids` and `attention_mask`) is forwarded through the encoder of BART. The encoder will output a tensor of shape `(batch_size, sequence_length, hidden_size)`. In this case, we only have a single example, which means that the batch size is 1; the sequence length (which is the number of tokens) is equal to `len(input_ids) = len(attention_mask)`, which in this case is 15 tokens; and the hidden size of BART-large is 1024 (BART-base would be 768). So the encoder will output a tensor of shape (1, 15, 1024). This tensor is often referred to as the "last hidden states", as these are the hidden representations for all tokens from the last layer of the encoder.

Next, we have the decoder. The decoder needs to spit out the desired `input_ids` of the target sequence (in other words, the `labels`). The decoder of BART (and T5) is autoregressive, which is a fancy term to say "from left to right". So what happens is, we provide the first `decoder_input_id` to the decoder (which is the `decoder_start_token_id`, which for BART is equal to the \</s> token).
Then, the decoder outputs a probability over all possible `input_ids`, and this is compared to the first label (which will be the first input_id of the labels we created, i.e. the \<s> token). Next, we provide the first two decoder input ids, i.e. \</s> \<s>, to the decoder, and then it needs to spit out the first two labels, i.e. \<s> Hug. Next, we provide the first three decoder input ids, i.e. \</s> \<s> Hug, to the decoder, and then it needs to spit out the first three labels, i.e. \<s> Hug ging, and so on.

NOTE: this was just a single example. In practice, deep learning models are always trained in batches. As the input_ids and labels have different lengths for each example in the batch, we use padding and truncation to make sure they are all of the same length. One typically defines a `max_source_length` and `max_target_length` as hyperparameters, and then prepares all data like so:

```
# encode the inputs
encoding = tokenizer(text, padding="max_length", max_length=max_source_length, truncation=True, return_tensors="pt")
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

# encode the target text to create the labels
target_encoding = tokenizer(target_text, padding="max_length", max_length=max_target_length, truncation=True, return_tensors="pt")
labels = target_encoding.input_ids
```

An additional thing to keep in mind is to replace padding tokens of the labels by -100, such that they are not taken into account by the loss function. For that, I use the following code (assuming the `labels` of a batch are still lists rather than PyTorch tensors):

```
labels_with_ignore_index = []
for labels_example in labels:
    labels_example = [label if label != tokenizer.pad_token_id else -100 for label in labels_example]
    labels_with_ignore_index.append(labels_example)
```

Regarding your third question, yes, during inference one should use `model.generate` instead of `model.forward`. Check out [this blog post](https://huggingface.co/blog/how-to-generate) to learn all the details about generating after training your model.<|||||>I really appreciate your help. About the last question, I think I can get the desired last decoder hidden states based on the output scores. Thank you so much and have a nice day.<|||||>@NielsRogge Hi Niels, I'm new to NLP and was reading this to try and further understand the BART model for seq2seq summarization. As you said above, the encoder outputs a tensor of shape `(batch_size, sequence_length, hidden_size)`, and the decoder then generates probabilities over all the `input_ids`. The decoder now outputs the softmax result, in the shape of `(batch_size, sequence_length, hidden_size)`. However, as I'm trying to provide summarization, I want to convert this result into text. I understand greedy and beam searching, but am unsure of how to get to the generated text from the decoder's `last_hidden_state`. Any help would be much appreciated. Thanks in advance. <|||||>The decoder of `BartModel` outputs a tensor of shape `(batch_size, sequence_length, hidden_size)`, indeed (no softmax is involved). Next, the language modeling head that `BartForConditionalGeneration` places on top of the decoder will transform this into a tensor (usually called logits) of shape `(batch_size, sequence_length, vocab_size)`. To know which tokens BART predicts, you can apply an argmax on the last dimension, i.e. `logits.argmax(dim=-1)`. This will give you a new tensor of shape `(batch_size, sequence_length)`, containing the token IDs as predicted by BART.
However, at inference time, it's recommended to use the `generate()` method, which will autoregressively (i.e. from left to right) predict token ids. There are several decoding strategies available, such as greedy decoding, beam search, top-k sampling, etc. Let's take an example:

```
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-cnn-12-6")

text = """The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."""

# prepare text for model
encoding = tokenizer(text, return_tensors="pt")

# generate IDs autoregressively
predicted_ids = model.generate(**encoding)

# decode IDs back to text
generated_text = tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(generated_text)
```<|||||>@NielsRogge Yes, that's what I used at the start. The problem lies in the fact that I want to convert my model to ONNX, where the `generate` function is not available. I guess I will have to write my own greedy decoding method. <|||||>We've actually just added an [example](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/translation) of converting BART to ONNX, including beam search generation. However, the example doesn't include a README right now; it will be added soon.
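For completeness, a hand-rolled greedy loop along the lines the last comment mentions might look roughly like the sketch below. It runs against the plain PyTorch model (not an exported ONNX graph), and the length limit of 60 is an arbitrary choice for the illustration:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-cnn-12-6")

text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building."
encoding = tokenizer(text, return_tensors="pt")

# Start from the decoder start token and repeatedly append the argmax prediction.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(60):  # arbitrary max length for the sketch
        logits = model(**encoding, decoder_input_ids=decoder_input_ids).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break

print(tokenizer.batch_decode(decoder_input_ids, skip_special_tokens=True)[0])
```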
transformers
13,212
closed
fix: typo spelling grammar
# What does this PR do? fix typo spelling grammar, and replace to correct words with reference from [merriam webster](merriam-webster.com) and [wiktionary](https://www.wiktionary.org/) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> <!-- Fixes # (issue) --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Documentation: @sgugger
08-21-2021 12:53:15
08-21-2021 12:53:15
Could you run `make fixup` at the root of your `transformers` clone to fix the code quality issues? Thank you!<|||||>Thanks sir @LysandreJik, how can I do that?<|||||>I guess you have cloned `transformers` the following way:
```
git clone https://github.com/huggingface/transformers
```
You can `cd` into the directory:
```
cd transformers
```
install the code quality tools:
```
pip install -e ".[quality]"
```
and run the command:
```
make fixup
```
If there's an error it can solve by itself, it will do so; if an error cannot be solved programmatically, it will tell you so :) Afterwards, you can commit the changes and push to your branch, and the code quality issues should be fixed!<|||||>Ran `make fixup` and pushed it in commit [c233be1](https://github.com/huggingface/transformers/pull/13212/commits/c233be17db7ecce48b54f1eb71070a1dee39d342)<|||||>thank you sir @sgugger
transformers
13,211
closed
correcting group beam search function output score bug
#13177 This PR Fixes [#13177](https://github.com/huggingface/transformers/issues/13177) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten
08-21-2021 04:08:58
08-21-2021 04:08:58
transformers
13,210
closed
Add support for XLM-R XL and XXL models
This PR adds support for the newly released XL and XXL models for XLM-R. These models are described in the "Larger-Scale Transformers for Multilingual Masked Language Modeling" paper. I compared fairseq and transformers side by side, and managed to get the same outputs:

```
torch.Size([1, 10, 250880]) torch.Size([1, 10, 250880])
max_absolute_diff = 0.00022125244140614
Do both models output the same tensors? 🔥
```

Since the fairseq RoBERTa to transformers conversion was written a long time ago, the transformers architecture has drifted quite far from the fairseq code it originally started from, which makes it quite confusing to write the right code. I synced the transformers code to accommodate the fairseq model structure. The original PR https://github.com/huggingface/transformers/pull/12082#issue-665786049 was closed by its author @stefan-it, and the PR (https://github.com/stefan-it/transformers/pull/1) I pushed to his repo about 40 days ago got no response, so I opened this new PR.
08-21-2021 02:41:24
08-21-2021 02:41:24
Hi @Soonhwan-Kwon , sorry for the late reply! I discussed this topic with @patrickvonplaten a while ago and we came to the conclusion that it would be better to have a new model/class name for it, such as `XLMRobertaExtraLarge`, to avoid these `if self.normalize_before` switches. I've also tested the model implementation on a GLUE task, but the result was not very good. The model is so large that it was impossible for me to test it on a GPU - even with batch size 1. Then I did some DeepSpeed tests, but on my V100 I would have to wait more than 3 days for the smallest GLUE task - and the final result was not performing well :thinking: <|||||>@stefan-it thank you for the reply, and I have an A100 80GB machine if you need any cross-check.<|||||>@Soonhwan-Kwon @stefan-it Can you share your DeepSpeed configuration for loading the XLMR-xl? I'm getting NaN as the loss from DeepSpeed after using your code changes for the conversion. @Soonhwan-Kwon Do you have a plan to create a standalone file for XLMRobertaExtraLarge? The reason is that your current file change breaks the conversion for the large and base models.<|||||>> @Soonhwan-Kwon @stefan-it Can you share your DeepSpeed configuration for loading the XLMR-xl? I'm getting NaN as the loss from DeepSpeed after using your code changes for the conversion. @Soonhwan-Kwon Do you have a plan to create a standalone file for XLMRobertaExtraLarge? The reason is that your current file change breaks the conversion for the large and base models.

Maybe I could paste my fine-tuning script loading the XLM-Roberta-XLarge model, which is converted with @Soonhwan-Kwon 's script. You could run the script and double-check with it.

```bash
deepspeed --num_gpus=8 run_xnli.py --model_name_or_path /mnt/xlm-roberta-xlarge \
    --deepspeed ds_config_zero3.json \
    --language zh \
    --train_language en \
    --do_predict \
    --max_seq_length 128 \
    --per_device_train_batch_size 4 \
    --learning_rate 2e-6 \
    --logging_steps 100 \
    --eval_steps 100 \
    --save_steps 5000 \
    --num_train_epochs 5 \
    --output_dir /mnt/output_xlmr \
    --cache_dir /mnt/cache \
    --fp16 \
    --overwrite_output_dir \
    --evaluation_strategy "steps" \
    --dataloader_num_workers 8 \
    --use_fast_tokenizer False
```
transformers
13,209
closed
fix `AutoModel.from_pretrained(..., torch_dtype=...)`
This PR fixes one of the 2 issues reported in https://github.com/huggingface/transformers/issues/13076 ``` python -c "import torch; from transformers import AutoModel; AutoModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype=torch.float16)" 2021-08-20 18:45:07.802651: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 382, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/configuration_auto.py", line 511, in from_pretrained return config_class.from_dict(config_dict, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 581, in from_dict logger.info(f"Model config {config}") File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 613, in __repr__ return f"{self.__class__.__name__} {self.to_json_string()}" File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 677, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 438, in _iterencode o = _default(o) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type dtype is not JSON serializable ``` Additionally, it corrects the config object to convert the short "float32" string into `torch.float32` at object creation time. Note, I had a to change `from_dict` a bit to preserve `torch_dtype` arg in `AutoModel.from_pretrained(..., torch_dtype=...), as without this change `from_pretrained` was ignoring this argument. To remind, the issue is that we decided to store `torch_dtype` in the config object, but ignore it for now at load time. Which this PR also documents. Of course, tests added. Thank you. Fixes: https://github.com/huggingface/transformers/issues/13076 (note: 2 separate issues were reported there but it looks like only this is the real issue, so linking to close it with this PR) @sgugger, @LysandreJik
08-21-2021 01:48:06
08-21-2021 01:48:06
Note, I first tried a simple monkeypatching method, but it doesn't work with C extensions, which `torch.dtype` is:

```
if config.torch_dtype is not None:
    # in v5 convert str to torch.dtype
    import torch

    if not hasattr(torch.dtype, "to_json_string"):
        import builtins

        # torch.dtype.to_json_string = builtins.str
        setattr(torch.dtype, "to_json_string", builtins.str)
```

got:

```
setattr(torch.dtype, "to_json_string", builtins.str)
TypeError: can't set attributes of built-in/extension type 'torch.dtype'
```
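A quick sanity check of the behaviour this PR targets, using the same tiny checkpoint as in the report (a sketch; the second print assumes the config round-trip described above):

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("sshleifer/tiny-gpt2", torch_dtype=torch.float16)
print(model.dtype)               # torch.float16 - the weights were loaded in half precision
print(model.config.torch_dtype)  # recorded on the config as well, per the PR description
```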
transformers
13,208
closed
Loading a model takes a lot of RAM; moving it to CUDA doesn't free the RAM
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.7.0+cu110 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
- benchmarks: @patrickvonplaten
- pipelines: @LysandreJik

## Information
Model I am using: EleutherAI/gpt-neo-1.3B

The problem arises when using:
* my own modified scripts: (give details below)

## To reproduce
Steps to reproduce the behavior: [Google Colaboratory](https://colab.research.google.com/drive/1qptTsxuRvxnTq2FI39a9p8VSquH5Qafl?usp=sharing) notebook

You will need a “Large memory” instance, since while transferring to CUDA it even overshoots the 13GB RAM limit. I use Torch 1.7.0+cu110 since the instance has CUDA 11.2, but with the default 1.9.0+cu102 it is more or less the same.

I’m trying to finetune the 1.3B model, and so I am searching for a way to optimize RAM usage (to be able to use cpu_offload with DeepSpeed). I noticed that after loading, the model takes a lot of RAM.

![image](https://user-images.githubusercontent.com/21180686/130306142-402e8aec-a7d9-48ca-bbbd-6b6189294c77.png)

After the model is loaded:
11.51 GB total memory used
0.0 GB used by torch objects on GPU
2 MiB total mem used on GPU

And when I move it to GPU, it a) takes only 5GB in VRAM (perhaps another 1.3GB is taken by Torch) and b) doesn’t free any RAM; it even takes some 2.5GB more.

So the problem I see:
a) The model occupies much more space in RAM than in VRAM.
b) It doesn’t free RAM upon moving to CUDA. The Python garbage collector doesn’t help either.

Any thoughts on this?
08-21-2021 01:26:20
08-21-2021 01:26:20
Hey @Artyrm, It's quite difficult for us to reproduce the error - could you post a link to a google colab where we can rerun the code? Also do you see the same behavior locally or is it just on google colab?<|||||>Hi, @patrickvonplaten. I have provided the link in my post, at the beggining of "reproduce" section. And yes, I can see it locally too, though I have only 2GB local VRAM, so I have to use small models, and memory waste is less obvious.<|||||>Related I think: https://github.com/huggingface/transformers/pull/12106#discussion_r649876604<|||||>@patil-suraj - we wanted to change GPT-Neo to use a local attention mask instead of the local attention layers as it was shown to be faster and less memory intensive no? Should we tackle that again?<|||||>Also related: https://github.com/huggingface/transformers/pull/11736<|||||>Need to say, that I have same problem locally (in less scale, since much less VRAM available) with other models, for example `sberbank-ai/rugpt3medium_based_on_gpt2` Although it is also "GPT-3-like" model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>That's a pity. I hoped someone would have some ideas about it.<|||||>@patil-suraj - did you look into this by any chance? <|||||>Similar to unresolved issue https://github.com/huggingface/transformers/issues/13624<|||||>I tried some experiments, and it seems it's related to PyTorch rather than Transformers model. It seems that when a model is moved to GPU, all CPU RAM is not immediately freed, as you could see in this [colab](https://colab.research.google.com/drive/1FvUtyCXFfx1cMexO24IvXXrkLPTt_Ok5?usp=sharing), but you could still use the RAM to create other objects, and it'll then free the memory or you could manually call `gc.collect`. Also note that, `py.memory_info()[0]` gives total memory used by the process and not the current memory in use. We could use the `psutil.virtual_memory().available` to get the available RAM. I've used it in the colab above so you could see the difference. Also gently pinging @stas00 who might be able to shed some light here :) <|||||>It's most likely a python issue and not torch's - this is because `gc.collect()` is a scheduled event and doesn't always run when a large object is freed. You can read more about it here: https://docs.python.org/3/library/gc.html You can experiment with setting a lower threshold https://docs.python.org/3/library/gc.html#gc.set_threshold I don't think there is any harm in `transformers` calling `gc.collect` immediately after switching the model to gpu - it'll be run anyway sooner or later, and thus it's not like it'll be introducing a performance hit at that particular point. Wrt debug/tracing memory usage in notebooks I highly recommend using https://github.com/stas00/ipyexperiments since it prints out all that memory usage automatically for you after each cell is run, so it's much easier to run. if using on colab there are some nuances to handle: https://github.com/stas00/ipyexperiments#google-colab Hmm, but actually `ipyexperiments` calls `gc.collect()` by itself to measure things correctly, so it's going to hide this issue. So probably scratch that idea in this particular context. 
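Putting the two suggestions above together (measure available RAM with `psutil`, collect explicitly after the move to GPU), a small sketch of how one might check this; a toy checkpoint is used so it runs anywhere, and the absolute numbers will differ per machine:

```python
import gc

import psutil
import torch
from transformers import AutoModel

def available_ram_gb() -> float:
    # psutil.virtual_memory().available reports RAM still available, rather than total process usage
    return psutil.virtual_memory().available / 2**30

print(f"available before load: {available_ram_gb():.2f} GB")
model = AutoModel.from_pretrained("sshleifer/tiny-gpt2")
print(f"available after load:  {available_ram_gb():.2f} GB")

if torch.cuda.is_available():
    model.to("cuda")
    gc.collect()  # reclaim the CPU-side objects right away instead of waiting for a scheduled gc pass
    print(f"available after cuda + gc.collect: {available_ram_gb():.2f} GB")
```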
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,207
closed
Support for Training with BF16
# What does this PR do?
As seen in [this PR](https://github.com/huggingface/transformers/pull/10956), there is demand for bf16 compatibility in the training of transformers models. The PyTorch folks just added [this feature](https://github.com/pytorch/pytorch/pull/61002) to their master branch, so we are now able to work on adding it to this repo. This PR follows from [this issue](https://github.com/huggingface/transformers/issues/13170).

Fixes https://github.com/huggingface/transformers/issues/13170

------------------

(OP edited by @stas00)

Also merged here and adapted changes proposed by @manuelciosici at https://github.com/huggingface/transformers/pull/14448

This PR:
- adds helper utils: `require_torch_bf16` and `is_torch_bf16_available`
- modifies `invert_attention_mask` and one `forward` in t5 to include bf16 mode switches

HF Trainer:
- adds `--bf16` and `--bf16_full_eval` modes - same as the fp16 equivalents
- renames and deprecates `--fp16_backend` and replaces it with `--half_precision_backend` - since we now have more than one half precision mode

Tests:
- adds `--bf16` and `--bf16_full_eval` tests

@sgugger, @LysandreJik, also tagging @patrickvonplaten, @patil-suraj since once this is merged you can start sending users that have problems with bf16 pre-trained models and have Ampere hardware to use this `--bf16` mode.

DeepSpeed bf16 support will follow soon.
08-20-2021 21:55:11
08-20-2021 21:55:11
ok, pt-nightly installed so that I could test the new functionality. so I added: ``` - `require_torch_bf16` - `is_torch_bf16_available` - super basic test that validates that `--bf16` doesn't fail - placeholder for the future bf16 full eval test ``` So now the tests should be expanded to actually validate that bf16 is happening, with some number checking - e.g. we could check that the numbers are indeed bf16. <|||||>I'm not observing as much of a memory improvement as I expected. Memory improvements I'm seeing are 0-15%, whereas I expected around 40% (per derrick's observation [here](https://github.com/huggingface/transformers/pull/10956#issuecomment-841431396)). Is there anywhere in master where autocast is disabled for some reason? For example, that was going to be the case [here](https://github.com/huggingface/transformers/pull/10956/files#diff-ebaff1224cad711fd3cefb771ce17d1392ae2dfc7f74dc7da9dd014d7642a344R308), but that change is not currently in master. The two questions I just commented are part of my digging into whether there is a bug somewhere. EDIT: I found it interesting that `fp16` was giving similarly lackluster gains as `bf16`. That suggests it's not a `bf16`-specific issue<|||||>I saw some recent comments on the torch slack that suggestion that bf16 hasn't quite been figured out performance-wise and can actually be slower depending on the hardware. One issue being cuDNN has no `bfloat16` at the moment, the other issue is that many bf16 kernels are simply not there, so it falls back to some slower functionality. May I suggest to ask this question on https://discuss.pytorch.org/ and hopefully some knowledgeable devs with experience could give us a definitive answer. Please share the link if you do. I think on our side the main priority for providing bf16 support is to overcome the overflow issue in models pretrained in mixed bf16, performance being secondary. But of course, it'd be great to actually benefit from the new Ampere cards which have a lot of power but almost a year passed and we still can't quite harness that power. BTW, which card are you testing it with?<|||||>>BTW, which card are you testing it with? RTX A6000. I'm pretty sure it's not related to this pr, per [this](https://discuss.pytorch.org/t/amp-autocast-not-faster-than-fp32/111757/11?u=jadeantonis)<|||||>Thank you for posting there, @JamesDeAntonis. Let's see if Piotr has some encouraging feedback. Otherwise the whole thing is very veiled at the moment as nobody wrote any definitive answers.<|||||>Also fyi, I updated `s/fast_dtype/dtype/` as it changed in nightly. But the current nightly has a broken `is_bf16_supported()` function, so it won't work - I reported it - hope should be fixed in a day or two. So best don't update your nightly just yet.<|||||>Quoting from: https://github.com/pytorch/pytorch/issues/57806#issuecomment-834697571 > Concerning the lousy "speedup" with amp on A100, first of all I'd expect less of a relative difference because A100 should use TF32 tensor cores by default for FP32 (non-amp) runs, which is 2X less throughput than FP16 for matmuls on paper, but much faster than not using tensor cores at all. This closes the performance gap to amp, so I do expect the FP32 vs amp difference to be more modest on Ampere. 
It's possible that with FP32 backed by TF32 library math, ops that benefit most from tensor cores (ie matmuls, convs) have been accelerated enough that the network is mainly bound by CPU overhead or by ops Amp doesn't affect as much, so turning on Amp doesn't squeeze much more blood from the stone. So if that is so, then you're not seeing an improvement from amp/bf16 because behind the scenes it already uses tf32. We definitely need some more definitive guides as currently we can only collect such comments shared here and there and no proper document that covers all the grounds.<|||||>>So if that is so, then you're not seeing an improvement from amp/bf16 because behind the scenes it already uses tf32. Interesting, but what about the below snippet from [here](https://moocaholic.medium.com/fp64-fp32-fp16-bfloat16-tf32-and-other-members-of-the-zoo-a1ca7897d407) >For comparison, A100’s peak performances are: >FP32 without tensor core: 19.5 TFLOPS >TF32 tensor core: 156 TFLOPS (so, using TF32 in place of FP32 can give you an easy speed improvement). >FP16/BF16 tensor core: 312 TFLOPS (so, thoughtfully designed switch to FP16/BF16 can give you more speed improvements, but the costs are higher). It looks like the gains are still "immodest" in the presence of an fp16/bf16 tensor core. Is the point that 156 TFLOPS is already so fast that further improvements are not worth the costs of making the switch from fp32 to fp/bf16? Because otherwise, it should still be at least somewhat faster, not slower.<|||||>I don't yet have the understanding of this domain to comment intelligently. My feeling is that the devil is in the detail and will heavily depend on each model's components. And it's best to discuss this subject matter on the pytorch side where the experts who understand this domain are. Using my limited understanding my answer to your question would be: Most likely if you were to take a single tensor and run it through an OP that natively supports TF32 and BF16 you should see the numbers you quoted. But since there is a lot of back and forth casting happening around amp and not all ops support these native functions, the overall results with hundreds of different ops combined in the single fwd/bwd pass of a model the results are quite different. In lieu of having an expert advice the other approach is to run your code through a native torch profiler, watch which ops get invoked on what dtypes, look them up whether they support the new tensor cores, etc. <|||||>Updates: - pt-nightly from 09.01 can be used with this PR (dates before that had a bug) - merged the 2 bf16 util functions as suggested by Sylvain @JamesDeAntonis, you now have a green light to address the proposed changes after updating your install of pt-nightly, then when it's done we will update/complete the tests and then we can merge this. If something is unclear please let us know. If you're not sure how to deal with deprecation, then you can just complete the new API and will add the deprecation afterwards. But you can look up at other cli args deprecations done in `training_args.py`. Thank you!<|||||>Update on this: my teammate is investigating the slowdown and doing some tests on both inference and training. He should have some results pretty soon that we can work with in a resumed discussion, including some new commits. Thanks for all your help so far!<|||||>Awesome, thank you for the update, @JamesDeAntonis!<|||||>Hi @stas00, what do you think of these results and justifications? 
All numbers from `t5-3b` on A100 cards ``` bf16 train: INFO - Finished in 140.42053532600403s INFO - Peak memory usage: 68.577 GB fp32 train: INFO - Finished in 131.2668480873108s INFO - Peak memory usage: 71.861 GB bf16 generate 32 tokens: INFO - Finished in 1.271615743637085s INFO - Peak memory usage: 6.927 GB fp32 generate 32 tokens: INFO - Finished in 1.2117650508880615s INFO - Peak memory usage: 13.057 GB ``` ## Some justifications: Memory: * 32-token generation: 47% improvement because, in the fp32 case, even though computations are done in 19-bit, all the 32-bit weights are stored in memory. in the bf16 case, memory is only allocated for 16-bit weights * Training: 5% improvement because the only difference between fp32 and bf16 is that fp32 does default auto-casting to tf32/bf19 while bf16 does auto-casting to bf16. so, the gains come from the 16% bit reduction during computation Time: * 5-6% time increase both times when using bf16. I don't understand why this is happening<|||||>@stas00 one other detail is that we're having trouble training to the same loss as regular precision (1.5 for bf16 amp vs 1.2 for full precision). Furthermore, when we generate with the 1.5-loss model, we get gibberish regardless of whether generating at full precision or half. This leads me to question whether our branch is completely correct. With this in mind, I don't understand why we wouldn't scale when training with bf16. Rationale: if fp32 is -1000.0 to 1000.0 (precise to tenth's place), fp16 is like the integers -500 to 500 and bf16 is like the even integers -1000 to 1000. To avoid underflow, fp16 amp convention in this analogy is to scale by a factor of 10 to make fp16's most precise unit (integer) analogous to fp32's most precise unit (tenth's place). By this logic, bf16 should be scaled by 20 to have the same effect. Do you agree with that logic, or do you understand where I go wrong?<|||||>> Hi @stas00, what do you think of these results and justifications? All numbers from `t5-3b` on A100 cards > [...] > Memory: > > * 32-token generation: 47% improvement because, in the fp32 case, even though computations are done in 19-bit, all the 32-bit weights are stored in memory. in the bf16 case, memory is only allocated for 16-bit weights > > * Training: 5% improvement because the only difference between fp32 and bf16 is that fp32 does default auto-casting to tf32/bf19 while bf16 does auto-casting to bf16. so, the gains come from the 16% bit reduction during computation Shouldn't bf16 be 2x faster than tf32 according to nvidia GPU specs? At least for some ops? I don't suppose we have a way to tell pytorch to tell cuda not to cast to tf32 - so that we could compare bf16 to the actual fp32. I'd say post all these benchmarks in that pytorch thread, since that's the bf16 experts are. And ask whether what you got makes sense and if it doesn't why and how can we/they fix that.<|||||>> With this in mind, I don't understand why we wouldn't scale when training with bf16. Excellent catch! It's because we forgot to do that! It's currently done for fp16 only: https://github.com/huggingface/transformers/blob/5e3b4a70d3d17f2482d50aea230f7ed42b3a8fd0/src/transformers/trainer.py#L436-L444 while at it perhaps check if there are other `if args.fp16` checks that need to have ` or args.bf16` added.<|||||>>Excellent catch! It's because we forgot to do that! 
It's currently done for fp16 only: Ok, and I think the default of `2 ** 16` would work for bf16, because precision is pooled up by 16 bits (by the way, I think `2 ** 16` is overkill for fp16 because precision for fp16 is only pooled up by only 13 bits [the other three bits are saved by decreasing range, ie the source of the original issue], but it doesn't really matter)<|||||>Let's see if your latest code leads to a better outcome in your quality and speed benchmarks.<|||||>🎉🪄🥇🏆🚀<|||||>Unfortunately, it doesn't seem like scaling fixed the issue. The loss went down to the fp32 level (actually eclipsed it), but inference still gave gibberish<|||||>Sorry to hear it didn't help, James. Here are some ideas to potentially proceed with: Is this something we could ask for the pytorch team to reproduce? i.e. ideally writing a few lines of code that they could reproduce the issue with? Do you know if the inference works ok, if you were to train in amp/bf16 but then doing inference in fp32? and if it has to do with amp/bf16 or full bf16? Perhaps something is wrong only in the inference stage? The other or an additional approach could be to take a normally trained public model and to try to run inference on it in (1) amp/bf16 (2) full bf16 and comparing the outcome with the fp32 mode?<|||||>Hi @JamesDeAntonis I am trying to train mt5-xxl-13B model with 8x40GB A100. I was wondering what is the condition for this PR. Is this ok to use it for training or any red flags?<|||||>Hi James, I'm trying to fine-tune T5-3B on a single A100 GPU (40gb memory) and I tried this PR out of desperate search. It seems like a promising direction to use `bf16` as it's natively supported by pytorch. However, while `fp16` with `amp` didn't, this version of the code seems to give `CUDA_MEMORY_ERROR` even with batch size 1<|||||>OK, since we have 2 half-baked PRs, https://github.com/huggingface/transformers/pull/13207 https://github.com/huggingface/transformers/pull/14448 I'm going to try to merge the 2 to keep the credits and start a new PR. If you have something to push now is the time.<|||||>@manuelciosici, FYI: we will deal with deepspeed in a separate PR https://github.com/huggingface/transformers/pull/14569 - in particular since the ZeRO3 support hasn't been merged yet and we always need a new release from deepspeed to be able to update our integration side.<|||||>@sgugger, please kindly have a look. I merged 2 PRs and cleaned things up and added a single deprecation. I also reverted the earlier attempt to use a shared `--half_precision_full_eval` since it didn't make sense - `--fp16_full_eval` and `--bf16_full_eval` are similar but 2 different modes requiring different code. If we want a shared one then we have to additionally require either `--fp16` or `--bf16` and then adjust the logic accordingly. If you prefer that let me know. Since bf16 has a much larger dynamic range most of the fp16 workarounds of that type aren't needed. So I grep'ed for `if torch.float16` checks and I didn't see anything other 2 places. I'm sure I may have missed some, but it'll surely let itself known when we start using it. Note, I've updated the OP with the up-to-date list of changes, so please refer to it for an overview. So I think we just need a couple of tests and if everybody is happy this is good to go. (tests added) The CI failure is unrelated.<|||||>OK, a few tests added. @JamesDeAntonis and @manuelciosici - please have a look - let me know if anything else is needed in your opinion. 
Thanks.<|||||>@sgugger, would it help to document the bf16 API as experimental and a subject to change at a moment's notice? <|||||>Yes please!<|||||>Thanks a lot for the review and the suggestions, @manuelciosici - all integrated, plus added a warning that this API is experimental, so if once we start using it we find that we could improve it we can.<|||||>I have been working on a guide to all these new modes including tf32, @manuelciosici, et al - if you get a chance to proofread, please have a look at https://github.com/huggingface/transformers/pull/14579 Thank you!
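One note on the earlier question about comparing against a true fp32 baseline: PyTorch does expose switches to turn TF32 off. A minimal sketch, assuming an Ampere GPU and PyTorch 1.7 or later (the benchmark call itself is left as a placeholder):

```python
import torch

# On Ampere GPUs, PyTorch (>= 1.7) enables TF32 for matmul/cuDNN kernels by default,
# so an "fp32" run is often really a TF32 run. Turning it off gives a true fp32
# baseline to compare the bf16/amp numbers against.
torch.backends.cuda.matmul.allow_tf32 = False  # matmul kernels
torch.backends.cudnn.allow_tf32 = False        # cuDNN convolution kernels

# ... run the fp32 benchmark here ...

# restore the defaults afterwards
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```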
transformers
13,206
closed
CausalLM vs HeadModel
@patrickvonplaten, @LysandreJik @sgugger GPT-Neo implements the class `GPTNeoForCausalLM` and GPT-2 implements the class `GPT2LMHeadModel`. These look like they're supposed to do roughly the same thing. What is the reasoning behind having different names? Do they have any functional differences (other than using different models obviously)?
08-20-2021 15:42:07
08-20-2021 15:42:07
They are exactly the same. `LMHeadModel` was a badly chosen name in the beginning of the library - we are trying to have all causal language models called `...ForCausalLM` now.<|||||>> They are exactly the same. `LMHeadModel` was a badly chosen name in the beginning of the library - we are trying to have all causal language models called `...ForCausalLM` now. I see. FYI, there are already downstream users who are basing their codebases on `LMHeadModel` such as [Google's BIG-Bench](https://github.com/google/BIG-bench/blob/main/bigbench/models/huggingface_models.py). I worry that this disconnect will build significant technical debt if it is not resolved promptly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
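To see the naming equivalence concretely, a small sketch (assuming the public `gpt2` checkpoint can be downloaded):

```python
from transformers import AutoModelForCausalLM, GPT2LMHeadModel

# AutoModelForCausalLM resolves the "gpt2" checkpoint to GPT2LMHeadModel,
# i.e. the "...LMHeadModel" and "...ForCausalLM" names refer to the same kind of head.
model = AutoModelForCausalLM.from_pretrained("gpt2")

print(type(model).__name__)                # GPT2LMHeadModel
print(isinstance(model, GPT2LMHeadModel))  # True
```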
transformers
13,205
closed
Fixes #12941 where use_auth_token not been set up early enough
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12941 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-20-2021 14:37:38
08-20-2021 14:37:38
transformers
13,204
closed
[Optimization] AdaFactor not working on TPU but works on GPU.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 / 4.10.0.dev0 - Platform: Kaggle / Colab - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0a0+git1a7c23c (Kaggle) / 1.9.0+cu102 (Colab) - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes, TPU ## Information Model I am using (Bert, XLNet ...): T5 The tasks I am working on is: * [x] an official GLUE/SQUaD task: (XSum) * [ ] my own task or dataset: (give details below) I am trying to finetune t5-small on XSum using `AdaFactor` and `get_linear_schedule_with_warmup`. I am able to do this when I use GPU but when using TPU, the model doesn't converge. The train loss varies but doesn't decrease and validation loss stays constant. Linear Schedule works properly, I saw my `comet_ml` graph, and `lr` was changing the way it should. It's like the loss is not modifying the weights at all. Code for initializing optimizer and lr_scheduler: ```python optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) lr_scheduler = get_linear_schedule_with_warmup(optimizer, num_training_steps=Config.total_train_steps, num_warmup_steps =Config.warmup_steps ) ``` ## To reproduce The below given colabs are similar but I have provided GPU and TPU notebooks so that running and waiting for results is not needed: TPU: [Colab Link](https://colab.research.google.com/drive/1MAID8RhaLSevIyhhotUmxZAgjZj9IXR_?usp=sharing) GPU: [Colab Link](https://colab.research.google.com/drive/111i6_P7PTtpuQLMU26NBCx9qam-q7WSW?usp=sharing) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Training Loss should decrease and eventually converge (like it does on GPU) #### Edit: Provided links of two notebooks, for GPU and TPU each.
08-20-2021 13:20:04
08-20-2021 13:20:04
Gently pinging @sgugger here<|||||>Hey @patrickvonplaten, can I get an update on this issue? Thanks!<|||||>Adding Adafactor in the Transformers library was a mistake, Transformers is a library for models, not optimizers. I don't think this will be addressed @prikmm so you should look for another implementation of this optimizer to use.<|||||>@sgugger It worked. I was initializing the `optimizer` and `lr_scheduler` in global scope. ```python model = AutoModelForSeq2SeqLM.from_pretrained(....) WRAPPED_MODEL = xmp.MpModelWrapper(model) optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) lr_scheduler = get_linear_schedule_with_warmup(optimizer, num_training_steps=Config.total_train_steps, num_warmup_steps =Config.warmup_steps ) def _mp_fn(): ..... trainer = Trainer(....., optimizers=(optimizer, lr_scheduler)) trainer.train() ..... xmp.spawn(_mp_fn, start_methods="fork") ``` When I initialized them inside `_mp_fn()`, everything worked fine. ```python def _mp_fn(): ...... optimizer, lr_scheduler = get_optim_lr(model) trainer = Trainer(....., optimizers=(optimizer, lr_scheduler)) trainer.train() ...... xmp.spawn(_mp_fn, start_method="fork") ``` I think in method-1, the optimizer gets linked to model weights present in host memory. And when the optimizer gets copied to each TPU device. It will still be linked to model weights present in host memory (or to nothing), and the loss will update the model weights in host memory (or it won't, I have not been able to check that), and not the model present in each TPU device. Whereas , in method-2, since, the optimizer is defined in TPU device scope, and uses the model present there. It is able to update model weights using the loss of that device. I tried `AdamW` using both the methods, and found `AdamW` too doesn't work in method-1 but works in method-2. For GPU, I perform: ```python model = model.to("cuda") ``` before initializing the optimizer. So, here the optimizer is linked to right model weights (present in GPU). Hence, everything worked while training on single GPU. Generally, when using GPU, if two or more variables in use are not on the same device, it will throw an error. This is not the case with TPU, it throws no error, because of which it took such a long time to solve. This is what I have theorised. If I am wrong, please let me know? I use a single GPU majority of the time, so pardon me for my lack of TPU knowledge (It's increasing everyday) :)<|||||>Ah yes, you should always define your optimizer after transferring your model on the TPU when working on TPUs, because moving the model to the TPU actually creates new tensors for each parameter. So in your case 1, the optimizer was completely disconnected from the model.
transformers
13,203
closed
How do i get the CLS token from the model output?
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (False)- Tensorflow version (GPU?): 2.6.0 (False)- Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - funnel: @sgugger - rag: @patrickvonplaten, @lhoestq Library: - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. Examples: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): roBerta The problem arises when using: * [X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) My dataset is like a bunch of sentences with labels with them ranging from 0, 1, 2 , 3 ## To reproduce Steps to reproduce the behavior: 1. Looping throw each sentence 2. getting input Id's 3. trying to get the CLS vector for each example My questions are; Am i doing it right? How do i know i am getting the CLS token? ``` for idx, row in df.iterrows(): #Looping through each Row input_ids = torch.tensor(tokenizer.encode(row.Sentence)).unsqueeze(0) #Getting the input id's of the sentence output = model(input_ids) #Passing it to model print( output.last_hidden_state) #Here is where i want to get the [CLS] token vector ``` ## Expected behavior How do i get the [CLS] token for each example of Sentence? Asking this becuase, the transformer [documentation](https://huggingface.co/transformers/main_classes/output.html) does not specify how to get the [CLS] token vector Any Help is much Appreciated
08-20-2021 12:56:13
08-20-2021 12:56:13
You can get the final hidden state of the [CLS] token as follows: `cls_token_final_hidden_state = output.last_hidden_state[:,0,:]` This is because the last hidden states are of shape (batch_size, sequence_length, hidden_size), and the [CLS] token is the first element across the sequence (also called time) dimension.<|||||>Ah i see that makes sense. Huge thanks again Niels!
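Putting the answer together into a runnable form, a minimal sketch that batches the sentences instead of looping row by row (it uses the public `roberta-base` checkpoint as a stand-in for the fine-tuned model):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

sentences = ["First example sentence.", "Second example sentence."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size);
# index 0 along the sequence dimension is the <s> / [CLS] token.
cls_embeddings = outputs.last_hidden_state[:, 0, :]
print(cls_embeddings.shape)  # torch.Size([2, 768]) for roberta-base
```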
transformers
13,202
closed
-100 when calculating perplexity of a model..
Hi there, I see in https://huggingface.co/transformers/perplexity.html there is a code block saying:

```
max_length = model.config.n_positions
stride = 512

lls = []
for i in tqdm(range(0, encodings.input_ids.size(1), stride)):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i  # may be different from stride on last loop
    input_ids = encodings.input_ids[:,begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:,:-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        log_likelihood = outputs[0] * trg_len

    lls.append(log_likelihood)

ppl = torch.exp(torch.stack(lls).sum() / end_loc)
```

I am wondering why we are setting the tokens to -100. Is it a hard-coded number?
08-20-2021 10:53:14
08-20-2021 10:53:14
-100 is the `ignore_index` of PyTorch's `CrossEntropyLoss`, as explained in their [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html). It means that labels that are set to -100 do not contribute to the loss.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
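For illustration, a tiny sketch of what `ignore_index=-100` does in practice (synthetic logits and labels):

```python
import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(4, 10)                # 4 token positions, vocabulary of 10
labels = torch.tensor([1, 3, -100, -100])  # the last two positions are masked out

loss_fct = CrossEntropyLoss()              # ignore_index defaults to -100
loss = loss_fct(logits, labels)

# the mean is taken over the non-ignored positions only, so this matches
# computing the loss on just the first two positions
manual = loss_fct(logits[:2], labels[:2])
print(torch.allclose(loss, manual))  # True
```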
transformers
13,201
closed
Train Bart model only use one cpu core, Any solutions to use more cores?
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version:
- Platform: linux on arm
- Python version: 3.7
- PyTorch version (GPU?): CPU-1.9.0
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

I am training a Bart model (BartForConditionalGeneration).

The problem arises when using my own script: when I run my train script, it only uses one core of my CPU. I don't have any GPUs, but I have a CPU with 96 cores. How can I make it use more cores?

My own script:

```python
from transformers import BartForConditionalGeneration, BartConfig, BartTokenizerFast, LineByLineTextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from tokenizers import models, normalizers, pre_tokenizers, Tokenizer
from tokenizers.trainers import BpeTrainer

_special_tokens = ["<s>", "</s>", "<unk>", "<pad>", "<mask>"]

def trainBartTokenizer(files, vocab_size, tokenize_save_floder):
    tokenizer = Tokenizer(models.BPE(unk_token='<unk>'))
    tokenizer.normalizer = normalizers.Sequence(
        [normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()]
    )
    tokenizer.pre_tokenizer = pre_tokenizers.CharDelimiterSplit(" ")
    print(tokenizer.pre_tokenizer.pre_tokenize_str("This is an example!\r\n"))
    trainer = BpeTrainer(vocab_size=vocab_size, show_progress=True, special_tokens=_special_tokens)
    tokenizer.train(files=files, trainer=trainer)
    tokenizer.model.save(tokenize_save_floder)
    print("Tokenizer Trainning Completed! Vocab size {}".format(tokenizer.get_vocab_size()))

def train_bart_model(config: BartConfig, tokenizer: BartTokenizerFast, corpus_file_path, model_save_path):
    model = BartForConditionalGeneration(config=config)
    print('model size:{}'.format(model.num_parameters()))
    dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=corpus_file_path, block_size=128)
    data_collator = DataCollatorForLanguageModeling(
        tokenizer=tokenizer, mlm=False
    )
    training_args = TrainingArguments(
        output_dir=model_save_path,
        overwrite_output_dir=True,
        num_train_epochs=1,
        per_device_train_batch_size=32,
        save_steps=10000,
        save_total_limit=2
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        data_collator=data_collator,
        train_dataset=dataset
    )
    trainer.train()
    trainer.save_model(model_save_path)

data_file_path = '/Users/beansprouts/Documents/corpus/small.txt'
trainBartTokenizer([data_file_path], 5000, './bart_tokenize')

tokenizer = BartTokenizerFast.from_pretrained('./bart_tokenize', max_len=512)
tokenizer.normalizer = normalizers.Sequence(
    [normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()]
)
tokenizer.pre_tokenizer = pre_tokenizers.CharDelimiterSplit(" ")
config = BartConfig(vocab_size=tokenizer.vocab_size, max_position_embeddings=514)
train_bart_model(config, tokenizer, data_file_path, './bart_model')
```

@patrickvonplaten, @patil-suraj
08-20-2021 10:05:26
08-20-2021 10:05:26
The HuggingFace Trainer does not support multi-CPU training. From the [docs](https://huggingface.co/transformers/main_classes/trainer.html): > The API supports distributed training on multiple GPUs/TPUs, mixed precision through NVIDIA Apex and Native AMP for PyTorch and tf.keras.mixed_precision for TensorFlow. You can perhaps use [HuggingFace Accelerate](https://github.com/huggingface/accelerate) for this, as it supports multi-CPU both on a single machine as well as multiple machines. From their README: > Supported integrations CPU only multi-CPU on one node (machine) multi-CPU on several nodes (machines) single GPU multi-GPU on one node (machine) multi-GPU on several nodes (machines) TPU FP16 with native AMP (apex on the roadmap) DeepSpeed support (experimental)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
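As a side note, independent of distributed training, PyTorch's intra-op threading also determines how many cores a single process uses; a small sketch (the thread count shown is just an example matching the machine above):

```python
import torch

# Number of threads PyTorch uses to parallelize a single op on CPU; depending
# on the build and environment variables this can be lower than the number of cores.
print(torch.get_num_threads())

# Raise it explicitly.
torch.set_num_threads(96)

# The OMP_NUM_THREADS / MKL_NUM_THREADS environment variables affect the same
# setting and are worth checking before launching the script.
```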
transformers
13,200
closed
Some tokenizers are not really picklable
## Environment info - `transformers` version: 4.9.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik ## Information The xlmr tokenizer is not really picklable, in that it depends on things on disk to be unpickled. This causes issues if you want to use tokenizers in a spark udf, which will pickle the tokenizer, and send it to other nodes to execute, as these other nodes will not have the same things on disk. The only tokenizer I know this happens with is XLMRobertaTokenizer but I imagine there may be more. ## To reproduce ```python import pickle import os import sentencepiece as spm from transformers import XLMRobertaTokenizer # location on disk of tokenizer tokenizer_directory = './xlmrBaseLocal' def unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer): # this works because the vocab file hasnt moved pickle.loads(pickled_tokenizer) print('successfully unpickled when file NOT MOVED') # we move the vocab file and try to unpickle os.rename(tokenizer_directory, tokenizer_directory + 'Moved') try: pickle.loads(pickled_tokenizer) print('successfully unpickled when file MOVED') except OSError: print('failed to unpickle when file MOVED') # put tokenizer back os.rename(tokenizer_directory + 'Moved', tokenizer_directory) # load tokenizer and pickle it tokenizer = XLMRobertaTokenizer.from_pretrained(tokenizer_directory) pickled_tokenizer = pickle.dumps(tokenizer) # this prints # > successfully unpickled when file NOT MOVED # > failed to unpickle when file MOVED unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer) # fix the pickling defined here # https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L171 def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None state["sp_model_proto"] = self.sp_model.serialized_model_proto() return state def __setstate__(self, d): self.__dict__ = d # for backward compatibility if not hasattr(self, "sp_model_kwargs"): self.sp_model_kwargs = {} self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.LoadFromSerializedProto(self.sp_model_proto) XLMRobertaTokenizer.__getstate__ = __getstate__ XLMRobertaTokenizer.__setstate__ = __setstate__ # repickle tokenizer = XLMRobertaTokenizer.from_pretrained(tokenizer_directory) pickled_tokenizer = pickle.dumps(tokenizer) # this prints # > successfully unpickled when file NOT MOVED # > successfully unpickled when file MOVED unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer) ``` ## Expected behavior The expected behaviour would be that once the tokenizer is pickled and I have the prerequisite libraries, I should be able to unpickle it regardless of what is on disk and where.
08-20-2021 09:05:37
08-20-2021 09:05:37
Hello, thank you for opening this issue! Do you want to open a PR with your fix?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,199
closed
How to use transformers for batch inference
I use transformers to train text classification models. For a single text, inference works normally. The code is as follows:

```python
from transformers import BertTokenizer, TFAlbertForSequenceClassification

text = 'This is a sentence'
model_path = '../albert_chinese_tiny'
tokenizer = BertTokenizer.from_pretrained(model_path)
model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818')
encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf")
result = model(encoding)
```

When I predict more than one text at a time, an error is reported. The code is as follows:

```python
texts = ['This is a sentence', 'This is another sentence']
encodings = []
model_path = '../albert_chinese_tiny'
tokenizer = BertTokenizer.from_pretrained(model_path)
model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818')
for text in texts:
    encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf")
    encodings.append(encoding)
result = model(np.array(encodings))
```

The error information is as follows:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr ‘Tindices’ of string is not in the list of allowed values: int32, int64 ; NodeDef: {{node ResourceGather}}; Op<name=ResourceGather; signature=resource:resource, indices:Tindices → output:dtype; attr=batch_dims:int,default=0; attr=validate_indices:bool,default=true; attr=dtype:type; attr=Tindices:type,allowed=[DT_INT32, DT_INT64]; is_stateful=true> [Op:ResourceGather]
```
08-20-2021 07:41:02
08-20-2021 07:41:02
Refer to the [docs](https://huggingface.co/transformers/model_doc/albert.html#tfalbertforsequenceclassification) of `TFAlbertForSequenceClassification`: ``` from transformers import AlbertTokenizer, TFAlbertForSequenceClassification import tensorflow as tf tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2') texts = ['This is a sentence', 'This is another sentence'] inputs = tokenizer(texts, return_tensors="tf") outputs = model(inputs) logits = outputs.logits ``` You can just provide a list of strings to the tokenizer, and it will prepare them for the model.<|||||>@NielsRogge Can we also write a `loop for Pytorch batch DataLoaders ` and do inferencing. As DataLoaders are very fast? ``` for batch in Batches: inp=tokenizer(batch, return_tensors="tf") model(inp) ```<|||||>> @NielsRogge Can we also write a `loop for Pytorch batch DataLoaders ` and do inferencing. As DataLoaders are very fast? > > ``` > for batch in Batches: > > inp=tokenizer(batch, return_tensors="tf") > model(inp) > ``` Hi @pratikchhapolika, I am interested to know is writing loop for pytorch batch Dataloaders doable?
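On the follow-up about looping over batches, a rough sketch with a plain Python loop (the same pattern applies if the batches come from a `torch.utils.data.DataLoader`; the `albert-base-v2` checkpoint and the batch size are placeholders):

```python
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForSequenceClassification.from_pretrained("albert-base-v2")

texts = ["This is a sentence", "This is another sentence", "A third, longer example sentence"]
batch_size = 2

all_logits = []
for start in range(0, len(texts), batch_size):
    batch = texts[start:start + batch_size]
    # padding=True is needed as soon as texts in a batch tokenize to different lengths
    inputs = tokenizer(batch, padding=True, truncation=True, return_tensors="tf")
    outputs = model(inputs)
    all_logits.append(outputs.logits)

logits = tf.concat(all_logits, axis=0)
predictions = tf.argmax(logits, axis=-1)
print(predictions)
```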
transformers
13,198
closed
Correct wrong function signatures on the docs website
# What does this PR do? Trying to address #13171. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @patrickvonplaten
08-20-2021 07:24:10
08-20-2021 07:24:10
After a series of investigations, here is the concluding matrix: | Env | Python version | Sphinx version | Correctness | |-------------------- |------------------|-----------------|-----------------| | Circle CI Docker Image | 3.6 | 3.2.1 | X | | Circle CI Docker Image | 3.6 | 3.5.4 | X | | Circle CI Docker Image | 3.7(3.7.11) | 3.2.1 | X | | Circle CI Docker Image | 3.7(3.7.11) | 3.5.4 | *O [Artifact](https://258083-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/trainer.html) | | Ubuntu 18.04 Anaconda | 3.6.13 | 3.2.1 | X | | Ubuntu 18.04 Anaconda | 3.7.11 | 3.2.1 | O | | Ubuntu 18.04 Anaconda | 3.8.5 | 3.2.1 | O | X: `model: torch.nn.modules.module.Module = None` (Union and PreTrainedModel missing) O: `model: Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None` (Correct) *O: `Optional[Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module]] = None` (An `Optional` type hint was added by Sphinx which wasn't defined in the code, maybe inferred from the default value `None`) As shown in the above matrix, Sphinx (3.2.1 & 3.5.4) with python 3.6 failed to generate correct html in testing environments, should we upgrade CI/CD environments all together to python 3.7 in order to keep the consistency? Besides, I noticed that in `docs/source/conf.py`, the release version is `4.7.0`, which isn't the latest version `4.9.2` , should this also need to be updated? <|||||>@sgugger correctly mentions I merged this without the last comment being taken into account - Sorry about that, Sylvain is pushing directly on `master` with the comment's request.<|||||>I actually made a PR in #13337 :-) <|||||>This had an unintended side-effect: the search functionality doesn't seem to be working anymore on huggingface.co/transformers. I tracked the issue to Sphinx version v3.4.0. Checking out your useful table @qqaatw, switching back to v3.2.1 with Python v3.7x would be the second best choice?<|||||>@LysandreJik I've checked the search functionality, it's not working indeed. As you said, maybe we should switch back Sphinx's version to v3.2.1 but not with Python v3.7.11 because Sphinx v3.2.1 with Python v3.7.11 provided by CircleCI docker image seems not working either. It's weird though as I tested this combination on my machine (Ubuntu 18.04 Anaconda) and the output was correct. I think another try would be using [Next-gen language images](https://circleci.com/docs/2.0/circleci-images/#next-gen-language-images). According to what the website states, these images are faster to build and have improved reliability and stability. Perhaps switching to this one can solve this problem.<|||||>Since we are exploring a move away from sphinx anyway, we will revert this commit for now to re-enable the search. If we end up not moving away from sphinx we can explore more which image to pick and which versions to use, but in the meantime, it's more important to have the search enabled than the sometime wrong signatures.<|||||>Got it. Sorry for the inconvenience.<|||||>No worries!
transformers
13,197
closed
Training DetrForObjectDetection failed in a multiple-GPU environment.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <yes> ### Who can help @NielsRogge ## Information Model I am using (Bert, XLNet ...): DetrForObjectDetection The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Save this script as `run.py` It is the same as https://colab.research.google.com/drive/1oIHGwr1U0sw-6KW-MG60s-ksXA-kYyUO?usp=sharing#scrollTo=VCr7Y7zW5a2a 2. Put sample.json, sample.jpg, sample2.jpg in [detr_samples.tar.gz](https://github.com/huggingface/transformers/files/7019224/detr_samples.tar.gz) to the same directory. ```python from typing import Any, Dict, List, Union from dataclasses import dataclass import torch from torchvision.datasets import CocoDetection from transformers import ( DetrConfig, DetrFeatureExtractor, DetrForObjectDetection, HfArgumentParser, Trainer, TrainingArguments, ) class DetrTrainer(Trainer): # Overwrite _prepare_inputs method to make sure dict is also placed on device def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]: """ Prepare :obj:`inputs` before feeding them to the model, converting them to tensors if they are not already and handling potential state. """ for k, v in inputs.items(): if isinstance(v, torch.Tensor): kwargs = dict(device=self.args.device) if self.deepspeed and inputs[k].dtype != torch.int64: # NLP models inputs are int64 and those get adjusted to the right dtype of the # embedding. 
Other models such as wav2vec2's inputs are already float and thus # may need special handling to match the dtypes of the model kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype())) inputs[k] = v.to(**kwargs) # labels are a list of dictionaries, each dictionary being a COCO annotation if isinstance(v, list): for annotation_dict in v: for key, value in annotation_dict.items(): annotation_dict[key] = value.to(self.args.device) if self.args.past_index >= 0 and self._past is not None: inputs["mems"] = self._past return inputs def load_category(category): id2label = {} label2id = {} maxid = 0 for k, v in category.items(): id2label[int(k)] = v["name"] label2id[v["name"]] = int(k) maxid = max(maxid, int(k)) for i in range(maxid): if not (i in id2label): id2label[i] = None return id2label, label2id class DetrData(CocoDetection): def __init__(self, img_folder, annotations, feature_extractor, train=True): super(DetrData, self).__init__(img_folder, annotations) self.feature_extractor = feature_extractor def __getitem__(self, idx): # read in PIL image and target in COCO format img, target = super(DetrData, self).__getitem__(idx) # preprocess image and target (converting target to DETR format, resizing + normalization of both image and target) image_id = self.ids[idx] target = {'image_id': image_id, 'annotations': target} encoding = self.feature_extractor(images=img, annotations=target, return_tensors="pt") encoding["pixel_values"] = encoding["pixel_values"].squeeze() # remove batch dimension encoding["labels"] = encoding["labels"][0] # remove batch dimension return encoding @dataclass class DataCollatorDetr: feature_extractor: DetrFeatureExtractor def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: pixel_values = [item["pixel_values"] for item in features] encoding = self.feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt") encoding["labels"] = [item["labels"] for item in features] return encoding def main(): training_args = TrainingArguments(output_dir=".") feature_extractor = DetrFeatureExtractor() train_dataset = DetrData(img_folder=".", annotations="sample.json", feature_extractor=feature_extractor) id2label, label2id = load_category(train_dataset.coco.cats) config = DetrConfig.from_pretrained("facebook/detr-resnet-50") config.id2label = id2label config.label2id = label2id model = DetrForObjectDetection.from_pretrained( "facebook/detr-resnet-50", config=config) # Initialize our Trainer trainer = DetrTrainer( model=model, args=training_args, train_dataset=train_dataset, tokenizer=feature_extractor, data_collator=DataCollatorDetr(feature_extractor=feature_extractor), ) train_result = trainer.train() if __name__ == "__main__": main() ``` 3. Run by `python run.py` in a multiple-GPU environment. Then `IndexError` is caused. 
``` Traceback (most recent call last): File "run.py", line 112, in <module> main() File "run.py", line 109, in main train_result = trainer.train() File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1286, in train tr_loss += self.training_step(model, inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1779, in training_step loss = self.compute_loss(model, inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1811, in compute_loss outputs = model(**inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise raise self.exc_type(msg) IndexError: Caught IndexError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 1430, in forward loss_dict = criterion(outputs_loss, labels) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2004, in forward indices = self.matcher(outputs_without_aux, targets) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2132, in forward indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2132, in <listcomp> indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] IndexError: index 1 is out of bounds for dimension 0 with size 1 ``` It works fine with a single GPU. ## Expected behavior Successfully complete training. <!-- A clear and concise description of what you would expect to happen. -->
08-20-2021 05:36:04
08-20-2021 05:36:04
Hi, As explained in the [docs](https://huggingface.co/transformers/model_doc/detr.html): > If you want to train the model in a distributed environment across multiple nodes, then one should update the num_boxes variable in the DetrLoss class of modeling_detr.py. When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation here. I had to remove the distributed training-related code from the modeling file, which is perhaps a bit unfortunate, because now people need to fork the library in order for DETR to work properly in a distributed environment. cc @sgugger @LysandreJik <|||||>Hi, thank you for the quick reply. > When training on multiple nodes, this should be set to the average number of target boxes across all nodes, I'd like to ask you two questions. 1. Do I need to insert `num_boxes / (the number of nodes)` after the following line https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2013 ? For example, if I'm training on two GPUs, should I insert `num_boxes = num_boxes / 2` after the line? 2. The error occurs at https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2004 which is before the declaration of `num_boxes`. Could you tell me more about how to solve this error? <|||||>For now, the code has not been tested to work on multiple GPUs, so this is a good opportunity to make it work. We can perhaps write a guide on which things to take into account. > Do I need to insert num_boxes / (the number of nodes) after the following line https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2013 ? For example, if I'm training on two GPUs, should I insert num_boxes = num_boxes / 2 after the line? The [original implementation](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L230-L232) used the following code: ``` if is_dist_avail_and_initialized(): torch.distributed.all_reduce(num_boxes) num_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item() ``` with ``` import torch.distributed as dist def is_dist_avail_and_initialized(): if not dist.is_available(): return False if not dist.is_initialized(): return False return True def get_world_size(): if not is_dist_avail_and_initialized(): return 1 return dist.get_world_size() ``` The world size is 2 if you're training on a single node with 2 GPUs, so you can divide them indeed by 2. > The error occurs at https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2004 which is before the declaration of num_boxes. Could you tell me more about how to solve this error? This could have to do with the `targets` not being on the proper devices, which is the responsibility of the `Trainer`. In the original implementation, they use `DistributedSampler`. Can you perhaps print the `sizes` that are computed right before it? These should be a list containing the number of bounding boxes for every example in the batch. <|||||>> For now, the code has not been tested to work on multiple GPUs, so this is a good opportunity to make it work. We can perhaps write a guide on which things to take into account. It would be great if you could support multiple GPUs. > The world size is 2 if you're training on a single node with 2 GPUs, so you can divide them indeed by 2. 
I found out that if you do it manually, divide by the number of GPUs. > This could have to do with the targets not being on the proper devices, which is the responsibility of the Trainer. In the original implementation, they use DistributedSampler. Can you perhaps print the sizes that are computed right before it? These should be a list containing the number of bounding boxes for every example in the batch. The reason for the error is that DistributedSampler does not support `labels` data. Thank you very much.<|||||>1. If I want to extend this to panoptic segmentation using coco stuff classes, how should I change the class config to do it. I have only 1things categories balloon and 53 stuff categories from coco dataset 2. How do I freeze the weights for training mask head for 25 epochs 3. How do we edit the classifier layer of the model say by default this will have 92 class, but from the above example if I have only 2 class (balloon, 'N/A') how should I change them? <|||||>> If I want to extend this to panoptic segmentation using coco stuff classes, how should I change the class config to do it. I have only 1things categories balloon and 53 stuff categories from coco dataset If you want to do panoptic segmentation, you first need to load the model as follows: ``` from transformers import DetrForSegmentation # specify a custom number of classes model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic", num_labels=54, ignore_mismatched_sizes=True) ``` You can possibly also add the `id2label` and `label2id` dictionaries as additional arguments. > How do I freeze the weights for training mask head for 25 epochs ``` for name, param in model.named_parameters(): if name.startswith('detr'): param.requires_grad = False ``` > How do we edit the classifier layer of the model say by default this will have 92 class, but from the above example if I have only 2 class (balloon, 'N/A') how should I change them? There's a new argument called `ignore_mismatched_sizes` which you can set to `True`. If you then specify a different number of labels, no error will be thrown (only a warning), as shown above. <|||||>> If you want to do panoptic segmentation, you first need to load the model as follows: Instead of using resnet50-panoptic how can I use my model from object detection (`DetrForObjectDetection` method) to train for panoptic segmentation<|||||>So for panoptic segmentation, DETR works as follows: 1) you first need to train a `DetrForObjectDetection` model to detect bounding boxes + classes (around both things + stuff classes). Let's say you have 10 classes in total (things + stuff), then you can initialize the model as follows: ``` from transformers import DetrForObjectDetection # replace COCO classification head by custom one object_detection_model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50', num_labels=10, ignore_mismatched_sizes=True) # fine-tune this model on custom data ``` You've probably already done this, see my tutorial notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb 2) next, you can initialize a `DetrForSegmentation` model with the weights you obtained in step 1. 
This can be done as follows: ``` from transformers import DetrConfig, DetrForSegmentation config = DetrConfig() model = DetrForSegmentation(config) # set the weights of the object detection model model.detr = object_detection_model ``` This will give you a model that has all layers already initialized with some trained weights, except the mask head, which will be randomly initialized. 3) next, you can freeze all layers except the mask head, and train for 25 epochs. Freezing can be done as follows: ``` for name, param in model.named_parameters(): if name.startswith('detr'): param.requires_grad = False ``` <|||||>@NielsRogge Thanks for your reply. I still can't figure out on adding the **53 COCO stuff** class to my custom data in Objectdetection. I am following the above finetune notebook which you have shared. I have this doubt, should I download the COCO-17 val dataset and combine my custom data for the model to learn the stuff classes or just increase the class_emed layer from 100,4 (3+1 things class) to 100,57 (4+53). But in this case how to add this class to DetrConfig (id2class). this is the notebook link :- [colab](https://colab.research.google.com/drive/1v1G2grxKrsnvVbJMMulY7xr4k9AwE5IF) this custom data link:- [drive](https://drive.google.com/file/d/1ydE8KAojQk5HRfNG6GMLzkq-E_GeDOLr/view?usp=sharing) One general question in your notebook the model is trained using `pl` rather that `torch` what is the reason for using `pl` <|||||>If you want a neural network to learn additional classes, it's advised to add a new classification head and fine-tune the model on all classes you want. So indeed, now the class embedding layer should have 57 outputs. > One general question in your notebook the model is trained using pl rather that torch what is the reason for using pl Because it's very easy to train PyTorch models. You can of course just train using native PyTorch or using HuggingFace Accelerate, or using HuggingFace's Trainer, etc.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,196
closed
check torch_dtype in config as well
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #13195. A detailed problem description and reproducer is in the issue! ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @stas00 as we discussed this issue in #13076 . (As #13076 deals with a number of issues, I opened #13195 to focus on `torch_dtype` with AutoModel issue.)
08-20-2021 04:05:52
08-20-2021 04:05:52
to fix code quality issues please run: `make fixup` and push the changes<|||||>As answered in the issue: https://github.com/huggingface/transformers/issues/13195#issuecomment-903009666 not using `config.torch_type` is by design for v4 and will likely to change in v5.<|||||>The original problem will be fixed by https://github.com/huggingface/transformers/pull/13209 - please don't hesitate to validate that it indeed solves it. Thank you!<|||||>> As answered in the issue: [#13195 (comment)](https://github.com/huggingface/transformers/issues/13195#issuecomment-903009666) > not using `config.torch_type` is by design for v4 and will likely to change in v5. I see. So should this PR be closed or left open for later reference? Thank you!<|||||>We can close it for now. It will still be here for reference in either form.
transformers
13,195
closed
'torch_dtype' keyword not working with 'AutoModel'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers version: 4.9.2 Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10 Python version: 3.8.5 PyTorch version (GPU?): 1.8.0a0+52ea372 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: No ### Who can help @stas00 as he is the writer of the [#12316](https://github.com/huggingface/transformers/pull/12316). <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Inspect the model weight data type. ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O checkpoint.zip unzip checkpoint.zip python -c "import torch; from pprint import pprint as print; sd=torch.load('./release/mp_rank_00/model_optim_rng.pt'); d= {d.dtype: 1 for d in sd['model']['language_model']['transformer'].values()}; print(d.keys())" # dict_keys([torch.float16]) ``` 2. Try to load it with transformers in float16, which `torch_dtype` is supposed to be responsible for. But this only works with specific model classes and AutoModel blindly loads it into float32. ```bash git clone https://github.com/huggingface/transformers.git python3 transformers/src/transformers/models/megatron_bert/convert_megatron_gpt2_checkpoint.py checkpoint.zip # load correctly with the specific model class python -c "from transformers import GPT2LMHeadModel; print(GPT2LMHeadModel.from_pretrained('.', torch_dtype='auto').dtype)" # torch.float16 # but fails to load it into float 16 with AutoModelForCausalLM python -c "from transformers import AutoModelForCausalLM; print(AutoModelForCausalLM.from_pretrained('.', torch_dtype='auto').dtype)" # torch.float32 ``` 3. 
This is because AutoModel [first puts `torch_dtype` argument passed to `AutoModel.from_pretrained` method into config](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/configuration_utils.py#L576) and `PretrainedModel.from_config`, which is called by `AutoModel.from_pretrained`, checks for `torch_dtype` argument in [only in `kwargs` and not in config](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/modeling_utils.py#L1297). <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Setting`torch_dtype` to `auto` works correctly as explained in [#12316](https://github.com/huggingface/transformers/pull/12316). I will open a PR to address this issue :) <!-- A clear and concise description of what you would expect to happen. -->
08-20-2021 04:00:04
08-20-2021 04:00:04
> This is because AutoModel first puts torch_dtype argument passed to AutoModel.from_pretrained method into config and PretrainedModel.from_config, which is called by AutoModel.from_pretrained, checks for torch_dtype argument only in kwargs and not in config. But this is intentional. My PR was originally designed to have the dtype figured out fully automatically, but that wasn't accepted, so `config.torch_dtype` is saved but at the moment ignored on purpose, i.e. the user has to actively set `torch_dtype`. See this part of the discussion https://github.com/huggingface/transformers/pull/12316#discussion_r659959617 Perhaps we should document somewhere that `config.torch_dtype` is saved for future use (probably v5) but currently isn't automatically used. The user can of course do `from_pretrained(..., torch_dtype=config.torch_dtype)`.<|||||>This has been fixed in https://github.com/huggingface/transformers/pull/13209
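For completeness, a minimal sketch of the explicit workaround mentioned above, assuming the local checkpoint directory (`"."` here is a placeholder) contains a `config.json` with a `torch_dtype` entry:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# "." is a placeholder for the converted checkpoint directory from the reproduction above.
config = AutoConfig.from_pretrained(".")

# Explicitly forward the dtype stored in the config, since it is not picked up automatically yet.
model = AutoModelForCausalLM.from_pretrained(".", torch_dtype=config.torch_dtype)
print(model.dtype)  # torch.float16 if the saved config stored float16
```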
transformers
13,194
closed
use float 16 in causal mask and masked bias
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #13193 (issue). Problem description and reproducer is provided in the issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @NielsRogge as they reviewed [the original converting script PR](https://github.com/huggingface/transformers/pull/12007) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-20-2021 02:56:05
08-20-2021 02:56:05
Pinging @jdemouth and @novatig <|||||>@novatig it looks good to me. Are you ok with the changes? @hwijeen and @LysandreJik, sorry for the delay, I was on holidays ;)<|||||>No worries, thanks for taking a look! :)<|||||>This is a kind reminder for @novatig :)<|||||>Merging since @jdemouth approved - will revert if @novatig disagrees.<|||||>Sorry all, I did not see the notification in my inbox and it slipped my mind. A very belated LGTM
transformers
13,193
closed
Megatron conversion code converts some weights in fp16 to fp32(or uint8).
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers version: 4.9.2 Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10 Python version: 3.8.5 PyTorch version (GPU?): 1.8.0a0+52ea372 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @novatig @jdemouth @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Check the data type of original megatron checkpoint. It's all in fp16. ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O checkpoint.zip unzip checkpoint.zip python -c "import torch; from pprint import pprint as print; sd=torch.load('./release/mp_rank_00/model_optim_rng.pt'); d= {d.dtype: 1 for d in sd['model']['language_model']['transformer'].values()}; print(d.keys())" # dict_keys([torch.float16]) ``` 2. But the [current conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) converts some into [float32](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L164) and [uint8](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L160). 
This leads to a model whose data type is not faithful to the original model's, and potentially to the problem discussed in #13076 ``` python3 /hf/transformers-master/src/transformers/models/megatron_bert/convert_megatron_gpt2_checkpoint.py checkpoint.zip python -c "import torch; sd=torch.load('pytorch_model.bin'); d = {p.dtype:1 for p in sd.values() }; print(d.keys())" # dict_keys([torch.float16, torch.float32, torch.uint8]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The converted checkpoint should have the same data type as the original one. <!-- A clear and concise description of what you would expect to happen. --> I will open a new PR to address this :)
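As a stop-gap for checkpoints already converted with the old script (separate from the PR mentioned above), one could cast the floating-point tensors back to fp16 by hand; the file name below is an assumption taken from the reproduction:

```python
import torch

# "pytorch_model.bin" is assumed to be the checkpoint produced by the old conversion script.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Cast only floating-point tensors back to fp16; integer buffers (e.g. the uint8 mask) are left untouched.
fixed = {
    name: tensor.half() if tensor.is_floating_point() else tensor
    for name, tensor in state_dict.items()
}
torch.save(fixed, "pytorch_model.bin")
print({t.dtype for t in fixed.values()})
```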
08-20-2021 02:37:15
08-20-2021 02:37:15
transformers
13,192
closed
Inconsistent behaviour between fast and slow RoBERTa tokenizers
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-4.15.0-122-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.19 - JaxLib version: 0.1.70 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help - tokenizers: @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information I was testing a fix for #9933 and when debugging the test for RoBERTa tokenizers I found that fast and slow return different results for the test and neither of the results are what I expected. The code snipet below is a copy of `tests.test_tokenization_roberta.RobertaTokenizationTest.test_special_tokens_mask`. As you can see from the output, slow tokenizer outputs `<unk>` ids eventhough the flag state to not return special tokens. Also, the special tokens mask returned doesn't take into account the `<unk>` tokens as if they weren't special tokens. On the other hand, the fast tokenizer doesn't output those tokens since `<unk>` is defined as a special token, as expected. However, when adding special tokens the `<unk>` tokens are not added at all. So I am wondering which behaviour is correct since niether seems to be 100%? 
The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: run the following code ```python """Based on test from test_tokenization_roberta.RobertaTokenizationTest.test_special_tokens_mask """ import os import json import tempfile import shutil from transformers import RobertaTokenizer, RobertaTokenizerFast from transformers.models.roberta.tokenization_roberta import VOCAB_FILES_NAMES # Setup vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] special_tokens_map = {"unk_token": "<unk>"} tmpdirname = tempfile.mkdtemp() vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) merges_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES["merges_file"]) with open(vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) kwargs = {} kwargs.update(special_tokens_map) kwargs.update(do_lower_case=False) slow_tok = RobertaTokenizer.from_pretrained(tmpdirname, use_fast=False, **kwargs) fast_tok = RobertaTokenizerFast.from_pretrained(tmpdirname, use_fast=True, **kwargs) sequence = "Encode this." print("Slow tokenizer:") print(f" Encoding: {slow_tok.encode(sequence, add_special_tokens=False)}") encoded = slow_tok.encode_plus(sequence, add_special_tokens=True, return_special_tokens_mask=True) print(f" Encoding with special: {encoded['input_ids']}") print(f" Special tokens mask: {encoded['special_tokens_mask']}") print("Fast tokenizer") print(f" Encoding: {fast_tok.encode(sequence, add_special_tokens=False)}") encoded = fast_tok.encode_plus(sequence, add_special_tokens=True, return_special_tokens_mask=True) print(f" Encoding with special: {encoded['input_ids']}") print(f" Special tokens mask: {encoded['special_tokens_mask']}") shutil.rmtree(tmpdirname) ``` The output I get from running this code: ``` file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Slow tokenizer: Encoding: [19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] Fast tokenizer Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 9, 1, 8, 3, 10, 6, 7, 5, 21] Special tokens mask: [1, 0, 0, 0, 0, 0, 0, 0, 0, 1] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Both tokenizers should output the same results: ``` Slow tokenizer: Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1] Fast tokenizer Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1] ``` <!-- A clear and concise description of what you would expect to happen. -->
08-20-2021 02:06:56
08-20-2021 02:06:56
Maybe @SaulLu do you have an idea here? :-)<|||||>Hi @patrickvonplaten @SaulLu, Can I help in resolving this issue? But as I'm quite new to this so I will need some guidance in where should I start from.<|||||>@ofirzaf, thanks for the detailed issue! **About the missing id corresponding to the unknown token for the fast tokenizer** Yes, I can see why the fast tokenizer does not take into account the unknow token. Since the original RoBERTa tokenizer does not need this token (since it is byte-based and contains the exhaustive list in its vocabulary), when converting from the slow to the fast version of the tokenizer the information of the unknown token you added in the kwargs is not passed (precisely, the information is "lost" in [this method](https://github.com/huggingface/transformers/blob/v4.9.2/src/transformers/convert_slow_tokenizer.py#L216)). @ofirzaf , in the short term, if you need to initialize the fast tokenizer from the slow version files, you can do so: ```python special_tokens_map = {"unk_token": "<unk>"} kwargs = {} kwargs.update(special_tokens_map) kwargs.update(do_lower_case=False) fast_tok = RobertaTokenizerFast.from_pretrained(tmpdirname, use_fast=True, **kwargs) fast_tok.backend_tokenizer.model.unk_token = special_tokens_map["unk_token"] fast_tok.save_pretrained("local_tok") fast_tok = RobertaTokenizerFast.from_pretrained("local_tok", use_fast=True) ``` @sourabh112, It's very kind of you to offer your help. However, in the immediate future, I'm still not sure we want to change this behaviour without ensuring that there will be no adverse effects (the GPT2 tokenizer being reused in several places) knowing that these are initially tokenizers that should not need the unknown token. @patrickvonplaten, @LysandreJik and @sgugger do you have an opinion on this? Could we at least add a short-term warning? **About the `return_special_tokens_mask`** It seems to me that this behavior is common to all tokenizers. Special tokens are tokens added to transform the tokenized text into a format compatible with the input expected by the model. The unknown token is different in that it is a necessary token for the tokenisation algorithm. <|||||>@SaulLu Thanks for the reply. I agree that this issue shouldn't occure when using the tokenizer for the reasons you mentioned. I think, however, that the test should be fixed to reflect that. As I mentioned, the example I brought here is straight from the library's built in tests. Regarding the special tokens mask, if the `<unk>` token shouldn't be considered as a special token, shouldn't it be removed from the special tokens list of the tokenizer? In the OP I mentioned another issue I wanted to fix, can you take a look at the issue and the proposed fix and tell me if this is something you think is worth fixing/contributing or the team doens't think this is an issue? Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,191
closed
Why repeat initializing loss modules in every forward?
Hello, I find that your implementations usually initialize loss modules, e.g., nn.CrossEntropyLoss, inside models' forward functions. I am curious about the reason for doing this. Generally, in PyTorch, a module should be initialized in __init__ and used in forward. Does the frequent initialization cause overhead and memory issues? Thanks,
08-20-2021 01:27:58
08-20-2021 01:27:58
The module `nn.CrossEntropyLoss` does not contain any weights, so we don't really allocate any memory when initializing the module at every forward. However, if we did this with `nn.Linear(...)` at every forward step, it should be considered bad practice IMO, since in this case we would allocate a big tensor (the weights of the linear layer) at every forward step.<|||||>@patrickvonplaten , thanks for the answer. PyTorch has `functional.cross_entropy()`, which should be more suitable to use in forward. Although `nn.CrossEntropyLoss` doesn't cause overhead, it doesn't follow PyTorch's convention of initializing a module in `__init__` and using it in forward. It confused me a little bit when reading the code. I was wondering if there is any specific reason for using an `nn` module instead of a `functional` method in forward.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
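For illustration, a minimal, hypothetical head showing the two equivalent styles discussed above: instantiating `nn.CrossEntropyLoss` inside `forward` versus calling `torch.nn.functional.cross_entropy` directly; neither allocates parameter tensors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyHead(nn.Module):
    def __init__(self, hidden_size=8, num_labels=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)  # has weights, so it is created once in __init__

    def forward(self, hidden_states, labels):
        logits = self.classifier(hidden_states)
        # Style used in many modeling files: the loss module holds no weights,
        # so creating it here only builds a tiny stateless object.
        loss_a = nn.CrossEntropyLoss()(logits, labels)
        # Equivalent functional call, closer to the convention mentioned above.
        loss_b = F.cross_entropy(logits, labels)
        return loss_a, loss_b

head = ToyHead()
hidden = torch.randn(4, 8)
labels = torch.randint(0, 3, (4,))
print(head(hidden, labels))
```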
transformers
13,190
closed
[Documentation] PLEASE HELP with very simple tasks!!!
Hello hugginface team,. First of all, I wanted to report a bug I am getting in Google Colab. When I do: from transformers import AutoTokenizer, AutoModel ``` tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased') input_ids = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').input_ids model = AutoModel.from_pretrained('allenai/scibert_scivocab_uncased') model.eval() model.generate(input_ids) ``` I get: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-10-85c40f45fd28> in <module>() 1 input_ids = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').input_ids ----> 2 model.generate(input_ids=input_ids) 2 frames /usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 26 def decorate_context(*args, **kwargs): 27 with self.__class__(): ---> 28 return func(*args, **kwargs) 29 return cast(F, decorate_context) 30 /usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs) 996 return_dict_in_generate=return_dict_in_generate, 997 synced_gpus=synced_gpus, --> 998 **model_kwargs, 999 ) 1000 /usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs) 1301 continue # don't waste resources running the code we don't need 1302 -> 1303 next_token_logits = outputs.logits[:, -1, :] 1304 1305 # Store scores, attentions and hidden_states when required AttributeError: 'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'logits' ``` =========================================================================== Secondly, I am reporting to you another very serious issue IMHO that needs to be addressed ASAP!!! THERE ARE NO CLEAR AND SIMPLE EXAMPLES ON HOW TO USE HUGGINFACE models/software ANYWHERE !!!! WTF??? I do not mean to be rude but this is ridiculous and insulting. I have wasted hours going through your docs, w/o ANY success. Everything is either absolutely unclear or does not work properly. WHAT I PERSONALLY NEED: GOOGLE COLABS that show in a few lines of code how to train Huggingface models from scratch (NOT A SINGLE EXAMPLE ANYWHERE). And also most of your examples and colabs are either incomplete/not working or very specific so they can't be used elsewhere!!! ==================================================================== I would really appreciate it if you would address all of these issues ASAP because otherwise, I will not be able to use Huggingface transformers nor would I recommend it to anyone. Thank you very much for listening to my criticism. I do not mean to chastise, only to help make huggingface better! :) Alex.
08-19-2021 18:42:06
08-19-2021 18:42:06
Hey Alex, I'm sorry that you didn't manage to find what you were looking for in the docs. 1) Please note that "bert-like" models should not be used for sequence generation, but rather for sequence classification. The model "`allenai/scibert_scivocab_uncased`" is essentially a `bert-base-uncased` model you can check out [here](https://huggingface.co/bert-base-uncased) where you can see some examples. Only seq2seq and lm-head models should make use of `generate`. This doc might help with more explanation: https://huggingface.co/transformers/model_summary.html 2) Re: Examples: - We try to have at least one example for every model architecture, which you can find under the model pages in the docs, *e.g.* here for BERT: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification - Also we have a couple of "Quickstart" sections on how to use our models here: https://huggingface.co/transformers/training.html which you can open as a Google Colab (there is a button on the top right) - There is also the Hugging Face course which dives a bit deeper into how everything works here: https://huggingface.co/course/chapter1<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
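To make the distinction above concrete, here is a hedged sketch contrasting the two use cases (`gpt2` is just an illustrative generative checkpoint, not the only option):

```python
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

# Text generation needs a model with a language-modeling head, e.g. gpt2.
gen_tokenizer = AutoTokenizer.from_pretrained("gpt2")
gen_model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = gen_tokenizer("I enjoy walking with my cute dog", return_tensors="pt").input_ids
print(gen_tokenizer.decode(gen_model.generate(input_ids, max_length=20)[0]))

# A BERT-like encoder such as SciBERT is used for representations/classification, not for generate().
enc_tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
inputs = enc_tokenizer("I enjoy walking with my cute dog", return_tensors="pt")
print(encoder(**inputs).last_hidden_state.shape)
```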
transformers
13,189
closed
Question about xla_spawn.py script and torch_xla.distributed.xla_multiprocessing
I am able to fine-tune a large BERT model using your `examples/xla_spawn.py` script, by calling it from a Colab notebook shell. However, when I try essentially the same thing in a Colab notebook, putting the code in a cell and calling `torch_xla.distributed.xla_multiprocessing.spawn(_mp_fn, start_method="fork")` also in a Colab cell, I get errors that the TPUs have run out of memory when trying to train. Is this because the start_method `fork` is less memory efficient? Or should this also work with "native" Colab code? I can give an MRE Colab if it's helpful.
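For reference, a minimal sketch of the kind of call being described above; `nprocs=8` is an assumption for a single v2/v3 TPU board, and `_mp_fn` is a placeholder for the per-core training function:

```python
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # per-core training code goes here; `index` is the process/core index passed by xmp
    pass

# The discussion above launches this from a Colab cell with start_method="fork".
xmp.spawn(_mp_fn, args=(), nprocs=8, start_method="fork")
```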
08-19-2021 18:22:02
08-19-2021 18:22:02
I'm unaware of what might be causing this - maybe @sgugger has an answer for you!<|||||>I have confirmed that the issue is not with the `start_method="fork"`. Your `run_glue.py` script uses `HfArgumentParser` to set up the `training_args` parameter of the `Trainer`. If I instead set the `training_args` manually, I get the same errors on the TPUs as in the colab notebook, even though the script uses `start_method="spawn"`. I haven't attempted to figure out exactly what is being set in `training_args` to allow the large BERTs to be trained on TPUs. <|||||>The training arguments handle the initialization logic for the distributed setup, so they should only be initialized inside the `_mp_fn` you launch in parallel. To launch your training from a colab, you should check the `notebook_launcher` from Accelerate.<|||||>Yes, right. I found out about the training arguments after spending some time experimenting. I am using a python script, which is called from colab via the shell, and everything is working fine. I did try `accelerate` once before but could not get it working, but I'll create a new issue if I go back to that route and still have problems.
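As a rough sketch of the `notebook_launcher` suggestion above (the function name and process count are placeholders, not code from this thread):

```python
from accelerate import notebook_launcher

def training_function():
    # Build TrainingArguments / Trainer *inside* this function, as noted above,
    # so the distributed setup is initialized in each spawned process.
    ...

# 8 processes corresponds to one TPU v2/v3 board (an assumption).
notebook_launcher(training_function, args=(), num_processes=8)
```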
transformers
13,188
closed
Fall back to `observed_batch_size` when the `dataloader` does not know the `batch_size`.
# What does this PR do? Motivated by #12995, this adds support for users to provide a `batch_sampler` to the DataLoader instead of a (single index) `sampler`. (The [pytorch docs](https://pytorch.org/docs/stable/data.html) has more info on these two sampler types.) When we provide a `batch_sampler`, a DataLoader doesn't know the batch size, so it's set to `None`. Currently, the Trainer retrieves the batch size from the data loader: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2172 ... leading to a crash a few lines later when it tries to use it: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2217 ```txt TypeError: repeat(): argument 'repeats' (position 1) must be tuple of ints, not NoneType ``` Fortunately, the observed batch size is calculated between those two spots, so this change simply uses it instead if the batch size wasn't found on the data loader. I added the `None` check just to ensure this does not change existing behavior, though I would imagine it would not even without the check. _Re: testing: I was not sure how much code you want surrounding this fix / added support, as I don't think Transformers includes any batch samplers itself yet, so I didn't include a test. Let me know otherwise and I can take a stab at it!_ ## Who can review? I suggest @sgugger due to issue context and Trainer :-)
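For context, a minimal sketch of the setup this enables: passing a `batch_sampler` instead of a per-index `sampler`, which leaves `DataLoader.batch_size` as `None` (the dataset and sizes below are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, RandomSampler, BatchSampler

dataset = TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,)))
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=8, drop_last=False)

# With a batch_sampler, DataLoader.batch_size is None, which is what previously crashed evaluation.
loader = DataLoader(dataset, batch_sampler=batch_sampler)
print(loader.batch_size)               # None
print(next(iter(loader))[0].shape)     # torch.Size([8, 4])
```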
08-19-2021 17:34:07
08-19-2021 17:34:07
Apologies for the ping @sgugger, but might you be able to take a look at this? tl;dr with this 2-line change, users can now provide batch samplers without evaluation crashing 🥳 <|||||>Thanks, Sylvain! Gaah, sorry, your inbox must have been crazy by the time you came back 😅 — I hope you had a nice break! <|||||>It was great, thanks for asking!
transformers
13,187
closed
Unable to load model by ignoring size mismatch; TypeError: __init__() got an unexpected keyword argument 'ignore_mismatched_sizes'
I want to save the pre-trained model at a local path and later again load it using `from_pretrained` method. I'm doing this as I want to use hugging face on server with no internet. I used following script to save the model: ```python3 from transformers import BertTokenizer, BertForSequenceClassification pretrained_path = "pretrained_models/bert_base_uncased_pretrained/" model = BertForSequenceClassification.from_pretrained('bert-base-uncased') model.save_pretrained(pretrained_path) ``` So I tried 2 approaches to load model from local path, but both aren't working. ### Approach 1: Code Snippet 1: ``` model = BertForSequenceClassification.from_pretrained( pretrained_path, num_labels = 27) ``` Error 1: ```bash Traceback (most recent call last): File "<stdin>", line 5, in <module> File "/path/lib/python3.6/site-packages/transformers/models/auto/auto_factory.py", line 395, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/path/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1220, in from_pretrained model, state_dict, pretrained_model_name_or_path, _fast_init=_fast_init File "/path/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1360, in _load_state_dict_into_model raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([27, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([27]). ``` ---- ### Approach 2: Code Snippet 2: ```python3 model = BertForSequenceClassification.from_pretrained( pretrained_path, num_labels = 27, ignore_mismatched_sizes=True) ``` Error 2: ```bash Traceback (most recent call last): File "<stdin>", line 4, in <module> File "/path/python3.6/site-packages/transformers/modeling_utils.py", line 1179, in from_pretrained model = cls(config, *model_args, **model_kwargs) TypeError: __init__() got an unexpected keyword argument 'ignore_mismatched_sizes' ``` Kindly specify the way to load model with size mismatch, or any other way to save and load model from local machine every time with different number of classes.
08-19-2021 16:00:23
08-19-2021 16:00:23
HI, what version of transformers are you using? As `ignore_mismatched_sizes` option was newly added at v4.9.0, you should probably upgrade to v4.9.0+ in order to use it. I tested the following snippet on Colab, it worked as expected. The transformers version I used is v4.9.0: ``` from transformers import BertTokenizer, BertForSequenceClassification pretrained_path = "./test_path" model = BertForSequenceClassification.from_pretrained('bert-base-uncased') model.save_pretrained(pretrained_path) model = BertForSequenceClassification.from_pretrained( pretrained_path, num_labels = 27, ignore_mismatched_sizes=True) ``` Output: ``` Some weights of BertForSequenceClassification were not initialized from the model checkpoint at ./test_path and are newly initialized because the shapes did not match: - classifier.weight: found shape torch.Size([2, 768]) in the checkpoint and torch.Size([27, 768]) in the model instantiated - classifier.bias: found shape torch.Size([2]) in the checkpoint and torch.Size([27]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,186
closed
Add SpeechEncoderDecoder & Speech2Text2
This PR adds Facebook's new Speech Translation models - see [paper here](https://arxiv.org/pdf/2104.06678.pdf) that are based on a pretrained Wav2Vec2 and achieve SOTA on CoVoST-2 @kahne . Since those checkpoints are based on `Wav2Vec2`, we can use this PR to create the `SpeechEncoderDecoder` class which essentially allows one to use any pretrained speech encoder with any text decoder model. The Speech Translation models are converted to fit the format of `SpeechEncoderDecoderModel` and should be used as follows: ```python import torch from transformers import Speech2Text2Processor, SpeechEncoderDecoder from datasets import load_dataset import soundfile as sf model = SpeechEncoderDecoder.from_pretrained("facebook/s2t-wav2vec2-large-en-de") processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"]) transcription = processor.batch_decode(generated_ids) ``` Since the decoder and tokenizer is different from the previous `Speech2Text` model: https://github.com/huggingface/transformers/tree/master/src/transformers/models/speech_to_text a new model folder speech_to_text_2 is created. Currently, the tokenizer only supports decoding and not encoding (which is only needed for training) because the tokenizer merges files are not published (cc @kahne) The model can only be used in combination with `SpeechEncoderDecoderModel`. The `SpeechEncoderDecoderModel` is also fully added in this PR and tests for `Wav2Vec2Bert`, `Speech2TextBert`, `Wav2Vec2SpeechToText2` are added. The ASR pipeline is slighly adapted to make it work with `SpeechEncoderDecoder`. @LysandreJik @anton-l - it would be great if you could take a look at the general model architecture @Narsil - it would be very nice if you could check the changes to the pipeline All models are uploaded and can be accessed here: https://huggingface.co/models?other=speech2text2 ## Future TODO: - Currently the tokenizer support only decoding, not training. If the community is interested in getting tokenizer training support for `Speech2Text2` in the future, please ping @patrickvonplaten
08-19-2021 15:09:23
08-19-2021 15:09:23
@patrickvonplaten I've updated the tarball with the fastBPE codes file (`bpe.10k`). Please re-download and let me know if you have questions :)<|||||>-Hi @patrickvonplaten , I was trying to try https://huggingface.co/facebook/s2t-wav2vec2-large-en-tr however I'm getting an error when I'm trying to implement the model. there is no error message or stack trace that is available so I can share it. I also tried to run it as a python script but it did work too. ``` from transformers import SpeechEncoderDecoderConfig ``` ``` Traceback (most recent call last): File "main.py", line 1, in <module> from transformers import SpeechEncoderDecoderConfig ImportError: cannot import name 'SpeechEncoderDecoderConfig' from 'transformers' (/venv/lib/python3.7/site-packages/transformers/__init__.py) ``` However, Pycharm gives me an error that this package is not available. ![image](https://user-images.githubusercontent.com/9295206/133619519-a493069f-bec8-4554-be8c-c1e181b62e04.png) transformers `__version__ = "4.10.2"` python `3.7.4`
transformers
13,185
closed
Adding CvT Model : Convolution based Image Transformers
# What does this PR do? Adding CvT Model : Convolution based Image Transformers A new architecture, named Convolutional vision Transformers (CvT), that improves Vision Transformers (ViT) in performance and efficiently by introducing convolutions into ViT to yield the best of both designes. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (e.g. shift, scale, and distortion invariance) while maintaining the merits of Transformers (e.g. dynamic attention, global context, and better generalization). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [NO ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [YES ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [https://github.com/huggingface/transformers/issues/13158 ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/13158 to it if that's the case. - [No ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ No] Did you write any new necessary tests? ## Who can review? @NielsRogge I have few queries and doubts and need help for further addition of pretrained models and adaption to respective base classes Models: -cvt @@NielsRogge Library:
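As a rough illustration of the "convolutional token embedding" idea described above (the kernel size, stride, and dimensions here are illustrative assumptions, not the exact CvT hyperparameters):

```python
import torch
import torch.nn as nn

class ConvTokenEmbedding(nn.Module):
    """Overlapping convolutional patches flattened into tokens, as sketched in the description above."""
    def __init__(self, in_channels=3, embed_dim=64, kernel_size=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size, stride=stride, padding=kernel_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, pixel_values):
        x = self.proj(pixel_values)              # (B, C, H', W')
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H'*W', C)
        return self.norm(tokens), (h, w)

tokens, hw = ConvTokenEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape, hw)  # torch.Size([1, 3136, 64]) (56, 56)
```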
08-19-2021 15:08:45
08-19-2021 15:08:45
@NielsRogge. Currently I have loaded model using another script for testing. The model works fine on samples images I have tested. But I need help at few steps: 1 . Adaption to base classes, especially for pretrained models 2. How to upload to hugging face archive 3. I also need to understand few thing in feature extractor part too. So, yeah further guidance needed from here<|||||>```python from transformers import CvTConfig, CvTModel, BeitFeatureExtractor, BeitForImageClassification import torch from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' # tabby cat image image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k') model1 = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") out1 = model1(**inputs) logit1 = out1.logits print(model1.config.id2label[logit1.argmax(-1).item()]) config = CvTConfig() model2 = CvTModel(config) model_file = '/home/naman/CvT/models/CvT-21-384x384-IN-1k.pth' state_dict = torch.load(model_file, map_location="cpu") model2.load_state_dict(state_dict, strict=False) logit2 = model2(inputs['pixel_values']) pred = logit2.argmax(-1).item() print(model1.config.id2label[pred]) ```` You can test it here. I have used BeITFeatureExtractor which is similar to CvT I think.<|||||>Hi, Thanks for your PR. > Adaption to base classes, especially for pretrained models I've seen that currently, the modeling file is a copy from the original repository. However, to add CvT to this library, we need the follow the same implementation as other models like ViT and BEiT (i.e. the HuggingFace API). Therefore, the `Block` class for example (which is used in the original timm-based implementation) will have to be translated to a `CvtLayer` class (similar to `ViTLayer`). I also opt to use `CvtModel` instead of `CvTModel`, as it will be more difficult for people to type ;) we should have done this for ViT too actually, and we've done it for BEiT now (`BeitModel` instead of `BEiTModel`). Looking at the modeling file, the main difference between ViT and CvT seems to happen in the attention layer. So probably, you can just copy everything from `modeling_vit.py`, rename every Vit from that file to Cvt, and then update the attention layer. The code example looks great already! Does it predict a reasonable class (like cat or remote)? Do you have an email address? Then I set up a Slack channel to further guide you.<|||||>@NielsRogge Yeah prediction is good. I tested it on a small set of 20 images it okay there. Yeah I will need little guidance. 😅. Yeah email is [email protected] It's midnight here. I will work on it tomorrow. <|||||>@NielsRogge I have done code as hugging face api. I have problems in tests. I need help there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@AnugunjNaman Do you have any updates? I'd like to help contribute if need be.<|||||>Yup, sorry yeah you can help. I got busy in job search since it was my final year. We can contact and continue from there. Can you write me your email? 
We can set up a time to discuss it.<|||||>Hey @AnugunjNaman, my email is [email protected]. Happy to discuss more!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,184
closed
Custom errors and BatchSizeError
# What does this PR do? The PR addresses the issue #12789. I have added a file `custom_exceptions.py` holding a class instance to be used by `modeling_gpt2.py` to replace the assert based error with suitable exception based error. We can add other errors as well to address the type of error happening. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - @willfrey - @sgugger - Anyone in the community is free to review the PR once the tests have passed.
08-19-2021 15:06:04
08-19-2021 15:06:04
Just adding a best practice note: You want to inherit from `Exception` and not `BaseException`. https://docs.python.org/3/library/exceptions.html > The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the Exception class or one of its subclasses, and not from BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions. IMO, I'd say that this could be a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError) But that's just my opinion. If the core team has an opinion on the matter, listen to them :)<|||||>> Just adding a best practice note: You want to inherit from `Exception` and not `BaseException`. > > https://docs.python.org/3/library/exceptions.html > > > The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the Exception class or one of its subclasses, and not from BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions. > > IMO, I'd say that this could be a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError) > > But that's just my opinion. If the core team has an opinion on the matter, listen to them :) The reason behind using Custom Exception is to help users know what's the error from their side is, BatchSizeError sounds more clear and directly addresses that the problem is with the batch size.<|||||>You should still inherit from `Exception` and not `BaseException`, per the official Python docs https://docs.python.org/3/library/exceptions.html#BaseException >The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments. Inheriting from BaseException can cause problems with having KeyboardInterrupt exceptions getting clobbered and having programs hang.<|||||>> You should still inherit from `Exception` and not `BaseException`, per the official Python docs > > https://docs.python.org/3/library/exceptions.html#BaseException > > > The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments. > > Inheriting from BaseException can cause problems with having KeyboardInterrupt exceptions getting clobbered and having programs hang. Yes, I have changed that. 🤝 `BaseException` -> `Exception` <|||||>Ok, I am using `ValueError` and made all other changes as well. Please go through it and let me know if there's anything else that is needed. @LysandreJik <|||||>Thanks, @LysandreJik. :)
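A minimal sketch of the pattern being discussed, i.e. a custom exception deriving from `ValueError` (the name, message, and helper function are illustrative):

```python
class BatchSizeError(ValueError):
    """Raised when the provided batch size does not match the expected one."""

def check_batch_size(batch_size, expected):
    if batch_size != expected:
        raise BatchSizeError(
            f"Got batch size {batch_size}, but expected {expected}."
        )

check_batch_size(4, 4)    # fine
# check_batch_size(2, 4)  # would raise BatchSizeError, which is also caught by `except ValueError`
```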
transformers
13,183
closed
Fix LUKE tests
# What does this PR do? 3 tests defined in `test_tokenization_luke.py` were having a timeout because they were too slow: ``` FAILED tests/test_tokenization_luke.py::Luke::test_add_special_tokens FAILED tests/test_tokenization_luke.py::Luke::test_maximum_encoding_length_pair_input FAILED tests/test_tokenization_luke.py::Luke::test_maximum_encoding_length_single_input ``` This was caused by the `get_clean_sequence` method (used in each of those methods), which is defined in `test_tokenization_common.py` and was inherited by default. By overwriting this method with a much simpler one, the tests are significantly faster. No bottleneck anymore.
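For illustration only, a hypothetical sketch of what such an override could look like; the class name, signature, and sample sentence are assumptions, not the exact code from this PR:

```python
class LukeTokenizationTestSketch:  # placeholder name for illustration
    def get_clean_sequence(self, tokenizer, max_length=20, min_length=5):
        # Use one fixed, short sentence instead of reconstructing text from the whole vocabulary,
        # which is what made the inherited helper so slow.
        txt = "Beyonce lives in Los Angeles"
        ids = tokenizer.encode(txt, add_special_tokens=False)
        return txt, ids
```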
08-19-2021 13:14:26
08-19-2021 13:14:26
transformers
13,182
closed
T5TokenizerFast not reversible when text contains special tokens
## Environment info - `transformers` version: 4.8.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Who can help @patrickvonplaten, @patil-suraj ## Information I am using `T5TokenizerFast` initialized with `t5-base` tokenizer. ## To reproduce ``` from transformers import T5TokenizerFast def main(): tokenizer = T5TokenizerFast.from_pretrained('t5-base') """Note that all those strings will be decoded to the same string!""" s1 = "Hello <unk>world" s2 = "Hello<unk> world" s3 = "Hello <unk> world" s4 = "Hello<unk>world" for s in [s1, s2, s3, s4]: assert tokenizer.decode(tokenizer(s)['input_ids']) == 'Hello<unk> world</s>' if __name__ == "__main__": main() ```
08-19-2021 12:20:20
08-19-2021 12:20:20
Hey @zorikg, Is this a problem in your case? Some special tokens always strip away the space on the left so that we can assure the same expected behavior for the two use cases. *e.g.* when thinking about "<mask>" prediction, some users process the text in the form: "`The capital of <mask> is Paris`" while others use "`The capital of<mask> is Paris`" => we want both cases to yield the correct <mask> token (= France), so for some special tokens we think it's better to just always strip away the white space on the left (it could be on the right as well) 
I think we should spend some time investigating how the `rstrip` and `lstrip` attributes are taken into account as the output does not seem natural to me. For example, on this example, I don't understand why 1) there is a not `"▁"` between `'▁grandfather'` and `'<unk>'` and 2) the `"▁A"` start with a `"▁"`. ```python tokenizer = T5TokenizerFast.from_pretrained('t5-base', unk_token=AddedToken("<unk>", lstrip=False, rstrip=False)) s = 'maternal grandfather <unk>Aikanaka' s_encode_decode_tokens = tokenizer.convert_ids_to_tokens(tokenizer(s)['input_ids']) ``` Output: ``` ['▁maternal', '▁grandfather', '<unk>', '▁A', 'i', 'kan', 'aka', '</s>'] ``` [Edit]: as it is written in the documentation: > lstrip (bool, defaults to False) – Defines whether this token should strip all potential whitespaces on its left side. If True, this token will greedily match any whitespace on its left. For example if we try to match the token [MASK] with lstrip=True, in the text "I saw a [MASK]", we would match on " [MASK]". (Note the space on the left). > rstrip (bool, defaults to False) – Defines whether this token should strip all potential whitespaces on its right side. If True, this token will greedily match any whitespace on its right. It works just like lstrip but on the right. <|||||>Hmm, looked into it a bit, it's not `transformers` that's swallowing the extra space, it's T5 specific. To do that, I checked with the slow tokenizer to get `tokenizers` out of the equation. If you look at that, within `src/transformers/tokenization_utils_base.py::tokenize` you can check that everything gets split properly, BUT the t5 tokenization uses for `_tokenize` : `self.sp_model.encode(text, out_type=str)` And if you check, ```python # notice extra space self._tokenize("maternal grandfather ", out_type=str) # ['▁maternal', '▁grandfather'] # SPACE GONE ``` The fact that `tokenizers` just replicates that behavior seems ok to me. Anyway, the culprit is NOT transformers/tokenizers but really `T5`. (or `spm`) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,181
closed
SageMaker: Fix sagemaker DDP & metric logs
# What does this PR do? This PR fixes the fix introduced in #12853, since `sm_dist.Barrier()` is not available in the `smd 1.0.0 2021-01-26` release or in smd `1.2.0`, which are used for the DLCs with PyTorch 1.7.1 & 1.8.1 (the maintained ones). Furthermore, the fix `sm_dist.barrier()` also works with `smd 1.0.0 2020-12-06`. cc @sgugger Additionally, this PR updates: * the SageMaker test image_uris * the instance type for distributed training -> there are capacity issues with the 24xlarge * and moves the adding of the `StreamHandler(sys.stdout)` for logs to `trainer_pt_utils.py` to also cover the `log_metrics` function; more on this below. --- When running a training job on SageMaker, all stdout and stderr output is sent to Amazon CloudWatch Logs. With the introduction of the new `log_metrics` function, SageMaker lost its output. Therefore I moved the `StreamHandler(sys.stdout)` to `trainer_pt_utils.py` and removed it from the `trainer`. More information can be found here #10633
08-19-2021 12:02:27
08-19-2021 12:02:27
Thanks a lot @philschmid !
transformers
13,180
closed
Conversion of Wav2vec2 model to TFWav2vec2 model
Hi, I trained a model using the fairseq toolkit and have successfully converted it from fairseq to a huggingface .bin model. How can I convert it to pure pytorch (.pt) and tensorflow (.h5) format? Are there any scripts for that?
08-19-2021 09:33:41
08-19-2021 09:33:41
Hey @harveenchadha, to convert from PT to TF, you can just do: ```python from transformers import TFWav2Vec2Model model = TFWav2Vec2Model.from_pretrained("<path/to/hf.bin model folder>", from_pt=True) model.save_pretrained("<path/to/save/.h5>") ```<|||||>Also note that (`.pt`) and (`.bin`) is the same format in most cases as far as I understand: https://stackoverflow.com/questions/57245332/what-are-the-difference-between-bin-and-pt-pytorch-saved-model-types#:~:text=1%20Answer&text=There%20is%20no%20difference%20as,torch%20can%20read%20either%20.
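On the `.pt` part of the question, a small hedged sketch (the output file name is arbitrary and the input path is the same placeholder used above): since `.pt` and `.bin` are just naming conventions for `torch.save` output, saving the state dict directly gives a plain PyTorch checkpoint.

```python
import torch
from transformers import Wav2Vec2Model

# Load the converted Hugging Face checkpoint (the folder holding the .bin file).
model = Wav2Vec2Model.from_pretrained("<path/to/hf.bin model folder>")

# Save the weights as a plain PyTorch state dict; torch.load can read either a
# .pt or a .bin file, the extension does not change the format.
torch.save(model.state_dict(), "wav2vec2_weights.pt")
```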
transformers
13,179
closed
Correct order of overflowing_tokens for slow tokenizer
# What does this PR do? When using a slow tokenizer (LayoutLM, Bert, Alberta, etc.), the `overflowing_tokens` were obtained in the wrong order. I have made the necessary changes that will produce the `overflowing_tokens` in the correct order. ## Tasks summary - - [x] making sure overflowing tokens are returned in the correct order for all `truncation_strategy` for a sequences of input ids. - [x] if a pair of sequences of input ids (or batch of pairs) is provided, an error should be raised for the `truncation_strategy=True` or `longest_first` stating _"Not possible to return overflowing tokens for pair of sequences with the `longest_first`.Please select another truncation strategy than `longest_first`, for instance `only_second` or `only_first`."_ - [x] Replaced the deprecated method `encode_plus` to regular `__call__` method in `test\test_tokenization_common.py`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the issue [ huggingface/transformers#13148 ](https://github.com/huggingface/transformers/issues/13148 ) Fixes huggingface#13148 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),Pull Request section? Yes 👍🏻 - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, [ huggingface/transformers#13148 ](https://github.com/huggingface/transformers/issues/13148 ) - [x] Did you write any new necessary tests?Yes 👍🏻 , Required tests are added in `tests/test_tokenization_common.py` - [x] Did you make sure to update the documentation with your changes? Anyone in the community is free to review the PR once the tests have passed. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten @NielsRogge @LysandreJik @n1t0 @SaulLu
08-19-2021 08:33:28
08-19-2021 08:33:28
Thank you very much for working on this PR. Did you check the test `test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input` passed locally? :slightly_smiling_face: I think it would be good to have some tests to check that the tokens are in the right order now. What do you think? The first thing I see would be to complete the tests performed in methods [`test_maximum_encoding_length_single_input`](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L846) and [`test_maximum_encoding_length_pair_input` ](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L938) in file `test_tokenization_common.py`. For example, currently, sometimes we check that the content of the overflowing tokens corresponds to what we expect for the [fast tokenizers](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L928) but not for the [slow ones](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L937) (and [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L1079) and [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L1111)). Our goal is really to check in the tests the resulting overflowing tokens for all cases , i.e. all `TruncationStrategy` and with 1 sequence or a pair of sequences. Don't hesitate to tell me if you need more help to complete these tests. :slightly_smiling_face: <|||||>> Did you check the test `test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input` passed locally? No. These two tests weren't passed locally either. > I think it would be good to have some tests to check that the tokens are in the right order now. I ran the updated code on several tokenizers to verify the result. They all passed. I will try making better test cases. As mentioned, Firstly I will try to resolve the ` test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input`. Thank you @SaulLu.<|||||>@SaulLu I would like to make a request. I want to know the correct order of overflowing tokens for the test case : ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") seq = ["hello my name is Ted Mosby ", "I am an Architect in Boston "] encoding = tokenizer(seq[0],seq[1], padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) print(tokenizer.decode(encoding.input_ids)) print(tokenizer.decode(encoding.overflowing_tokens)) ```<|||||>You make a very good point! Indeed, we have an API choice to make here. I'm going to list the possibilities I see here because I don't see an "ideal" solution that would fit into the current framework (i.e. a single list). I take the example you gave: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") seq = ["hello my name is Ted Mosby ", "I am an Architect in Boston "] encoding = tokenizer(seq[0],seq[1], padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) print(tokenizer.decode(encoding.input_ids)) ``` Output ``` [CLS] hello my [SEP] i [SEP] ``` the possibilities I see for the output of `encoding.overflowing_tokens`: 1. Concatenate in the same list the overflowing tokens of the sequence 1 and the sequence 2. 
As a result ``print(tokenizer.decode(encoding.overflowing_tokens))`` will return: ``` 'name is Ted Mo ##sby am an Architect in Boston' ``` Advantage: the output format is the same; Disadvantage: we can't distinguish between the first and the second sequence 2. Create a tuple of 2 lists for the overflowing tokens of the sequence 1 and the sequence 2. As a result ``print(tokenizer.decode(encoding.overflowing_tokens[0]), tokenizer.decode(encoding.overflowing_tokens[1]))`` will return: ``` 'name is Ted Mo ##sby' 'am an Architect in Boston' ``` Advantage: we can distinguish between the first and the second sequence; Disadvantage: the output format is not the same. This can be seen as a temporary micro-change if we ever want to standardize the API of fast and slow tokenizers in a second PR. 3. Raise an error because as many comments in the tests show, before it was not possible to return overflowing tokens for slow tokenizers with the longest_first strategy @LysandreJik , @sgugger, @patil-suraj or @patrickvonplaten I think your point of view can be useful here. Should we change the output format ? Should we also aim to have the same behavior for the slow and fast tokenizers ? :slightly_smiling_face: <|||||>> Disadvantage: we can't distinguish between the first and the second sequence For the above Disadvantage, The following method might resolve it - The use of special tokens might help in this problem For instance: On the same test case mentioned above - ` [CLS] name is Ted Mo ##sby [SEP] am an Architect in Boston [SEP] ` @SaulLu Thank you for helping me out with the test case.<|||||>For `test_maximum_encoding_length_single_input` earlier the order was not correct for the slow tokenizer (i.e. reverse order if stride = 0 ) but now I guess we can add this line of code https://github.com/huggingface/transformers/blob/143738214cb83e471f3a43652617c8881370342c/tests/test_tokenization_common.py#L928 for Slow tokenizer as well. @SaulLu <|||||>@SaulLu @LysandreJik @NielsRogge @patrickvonplaten, could you please review the changes I have done in the code.<|||||>Hello, thank you for your PR! Could you please add some tests to ensure correct behavior? You can add them in `tests/tests_tokenization_common.py` so that all tokenizers get tested. Thank you!<|||||>> Hello, thank you for your PR! Could you please add some tests to ensure correct behavior? You can add them in `tests/tests_tokenization_common.py` so that all tokenizers get tested. Thank you! @LysandreJik Thank you for reviewing the PR. Sure, I will add the necessary test to ensure the correct behavior.<|||||>@LysandreJik All the necessary tests have been added.<|||||>@LysandreJik, could you please review the tests I have added to the code. Thank you<|||||>Thanks for the ping @Apoorvgarg-creator, @SaulLu will take over and review :)<|||||>@Apoorvgarg-creator , thanks again for your work, I am trying to look at your PR quickly. > @SaulLu I would like to make a request. I want to know the correct order of overflowing tokens for the test case : In the meantime, sorry for the delay, but after discussing it, for this case (pair of sequences and `longest_first` strategy) we think it would be better to return an error.<|||||>> @Apoorvgarg-creator , thanks again for your work, I am trying to look at your PR quickly. > > > @SaulLu I would like to make a request. 
> > I want to know the correct order of overflowing tokens for the test case : > > In the meantime, sorry for the delay, but after discussing it, for this case (pair of sequences and `longest_first` strategy) we think it would be better to return an error. So the code should raise an error message whenever we try to return overflowing tokens for a pair of sequences with the `longest_first` strategy. And For single_input, I have corrected the order and also added the necessary tests in `test_tokenization_common.py`. Do these require any changes? @SaulLu , Thank you for the review.<|||||><img width="1076" alt="Screenshot 2021-08-27 at 9 04 24 AM" src="https://user-images.githubusercontent.com/57873504/131067867-0f681b61-a82c-44ec-97f1-fa40b44fd3f3.png"> [Documentation/Preprocessing data]( https://huggingface.co/transformers/preprocessing.html ),Here they have mentioned when truncation_strategy is set to 'True' it means `only_first` instead of `longest_first`. @sgugger <|||||>> <img alt="Screenshot 2021-08-27 at 9 04 24 AM" width="1076" src="https://user-images.githubusercontent.com/57873504/131067867-0f681b61-a82c-44ec-97f1-fa40b44fd3f3.png"> > > [Documentation/Preprocessing data](https://huggingface.co/transformers/preprocessing.html),Here they have mentioned when truncation_strategy is set to 'True' it means `only_first` instead of `longest_first`. Great catch. Indeed, the documentation seems to differ between the section ["Everything you always wanted to know about padding and truncation"](https://huggingface.co/transformers/preprocessing.html?highlight=truncation#everything-you-always-wanted-to-know-about-padding-and-truncation) and [the docstring of the _call__ methode of `PreTrainedTokenizerBase`](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__). It would be best if you opened a dedicated issue so that we can deal with the problems separately?<|||||>> It would be best if you opened a dedicated issue so that we can deal with the problems separately? Sure. I will make a separate issue for this. Thank you for the reviews, @SaulLu. I will do the dedicated changes at the earliest.<|||||>> I didn't check if that the case or, did you check it? No, I haven't. But I will go through the things you have mentioned above. <|||||>@SaulLu, All the changes that I could find in the docstring have been done.<|||||>@sgugger Thank you for reviewing the PR. I have made the changes mentioned above. Do I need to change every use case of `tokenizer.encode_plus` or only in the `test_maximum_encoding_length_pair_input`.<|||||>@LysandreJik @SaulLu, all the dedicated changes have been resolved. Could you please review the PR? Thank you <|||||>Thanks for fixing this, great work!<|||||>@SaulLu @LysandreJik @sgugger @NielsRogge, Thank you for the guidance. It was an insightful experience and I hope to contribute more.
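For readers arriving from the linked issue, a minimal hedged sketch of the call pattern this PR fixes (slow tokenizer, single sequence, `return_overflowing_tokens=True`); the decoded strings are illustrative rather than asserted:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

enc = tokenizer(
    "hello my name is Ted Mosby and I am an architect in Boston",
    max_length=6,
    truncation=True,
    return_overflowing_tokens=True,
)

# Before this PR the slow tokenizer could return `overflowing_tokens` in the
# wrong (reversed) order; after the fix they follow the original input order.
print(tokenizer.decode(enc["input_ids"]))
print(tokenizer.decode(enc["overflowing_tokens"]))
```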
transformers
13,178
closed
how to finetune based huggingface: run_glue.py
python transformers-master/examples/pytorch/text-classification/run_glue.py \ --model_name_or_path chinese_bert-base \ --train_file=transformers-master/dataset/class/train.csv \ --validation_file=transformers-master/dataset/class/dev.csv \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir output When I use run_glue.py, the train.csv/dev.csv data format has two columns, “sentence” and “label”. But it reports the error "pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 10, saw 2". What is the reason for this error? Is it a data format error?
08-19-2021 06:51:17
08-19-2021 06:51:17
Can you provide a Colab notebook to reproduce your issue?<|||||>What is the format of the classification data which is used in run_glue.py?<|||||>The [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) mentions that you can use "your own data in a csv or a JSON file (the script might need some tweaks in that case, refer to the comments inside for help)". Looking at the comments, it says: # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below) # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named # label if at least two columns are provided. # # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this # single column. You can easily tweak this behavior (see below) A bit further down, the dataset is read as follows: ``` from datasets import load_dataset data_files = {"train": data_args.train_file, "validation": data_args.validation_file} raw_datasets = load_dataset("csv", data_files=data_files, cache_dir=model_args.cache_dir) ``` You can perhaps isolate the error by running the code above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
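To make the expected layout concrete, a small hedged sketch (made-up sentences; the column names follow the comments quoted above):

```python
import pandas as pd
from datasets import load_dataset

# A two-column file: one text column plus a column literally named "label".
pd.DataFrame(
    {"sentence": ["这是一个很好的产品", "质量太差了"], "label": [1, 0]}
).to_csv("train.csv", index=False)

# This is essentially what run_glue.py does under the hood; if this call alone
# raises the same ParserError on your own file, the problem is in the CSV itself
# (for example an unquoted comma inside a sentence).
raw_datasets = load_dataset("csv", data_files={"train": "train.csv"})
print(raw_datasets["train"][0])
```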
transformers
13,177
closed
Bug of PyTorch group_beam_search function
## Environment info - `transformers` version: 4.9.1 - Platform: Ubuntu 18.04 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu111 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - text generation: @patrickvonplaten ## Information Code Location: `src/transformers/generation_utils.py --> lines 2411 - 2480 (in group_beam_search function)` ```python for beam_group_idx in range(num_beam_groups): group_start_idx = beam_group_idx * num_sub_beams group_end_idx = min(group_start_idx + num_sub_beams, num_beams) group_size = group_end_idx - group_start_idx # indices of beams of current group among all sentences in batch batch_group_indices = [] ########################################################### if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ########################################################### for batch_idx in range(batch_size): batch_group_indices.extend( [batch_idx * num_beams + idx for idx in range(group_start_idx, group_end_idx)] ) group_input_ids = input_ids[batch_group_indices] # select outputs of beams of current group only next_token_logits = outputs.logits[batch_group_indices, -1, :] # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id` # cannot be generated both before and after the `nn.functional.log_softmax` operation. next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) next_token_scores = nn.functional.log_softmax( next_token_logits, dim=-1 ) # (batch_size * group_size, vocab_size) vocab_size = next_token_scores.shape[-1] next_token_scores = logits_processor( group_input_ids, next_token_scores, current_tokens=current_tokens, beam_group_idx=beam_group_idx ) next_token_scores = next_token_scores + beam_scores[batch_group_indices].unsqueeze(-1).expand_as( next_token_scores ) ########################################################### if output_scores: processed_score[batch_group_indices] = next_token_scores ########################################################### # reshape for beam search next_token_scores = next_token_scores.view(batch_size, group_size * vocab_size) next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True ) next_indices = next_tokens // vocab_size next_tokens = next_tokens % vocab_size # stateless beam_outputs = beam_scorer.process( group_input_ids, next_token_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id, ) beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"] beam_next_tokens = beam_outputs["next_beam_tokens"] beam_idx = beam_outputs["next_beam_indices"] input_ids[batch_group_indices] = group_input_ids[beam_idx] group_input_ids = torch.cat([group_input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) current_tokens[batch_group_indices] = group_input_ids[:, -1] # (beam_idx // group_size) -> batch_idx # (beam_idx % group_size) -> offset of idx inside the group reordering_indices[batch_group_indices] = ( num_beams * (beam_idx // group_size) + group_start_idx + (beam_idx % group_size) ) ``` ```python if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ``` ```python if output_scores: processed_score[batch_group_indices] = next_token_scores ``` -------------------------------------------------- When `output_scores=True` is set, the `processed_score` will be reset by `torch.zeros_like` in each `for loop`. I'm wondering if this is a bug. 
It causes the `output scores` to not match expectations (except for the last beam group, the rest are all 0).
08-19-2021 05:24:51
08-19-2021 05:24:51
Hey @Changyu-Guo, Thanks a lot for the issue! I see what you mean & I think you're right! We should probably move the ``` if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ``` out of the inner loop, no? To be sure that beam group indices 0 - (last - 1) are not always 0...would you like to open a PR to fix it? :-)<|||||>Hi @patrickvonplaten, Can I take on this issue (if it is not assigned to someone else)? It looks fairly simple to me. But as I'm quite new to this, I might need some guidance.<|||||>@patrickvonplaten Sorry for the late reply, you are right. Moving the ```python if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ``` out of the inner `for loop` will solve this problem. I think @sourabh112 can take on this issue; perhaps you should carefully read "[How to contribute to transformers?](https://huggingface.co/transformers/contributing.html)" first.<|||||>I have read the [contributing guidelines](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/CONTRIBUTING.md) and made the changes. Should I make/run some test cases (help with some examples would be appreciated) to make sure that the output scores now give the expected values, or should I directly make a PR?<|||||>@patrickvonplaten Should I make a PR?
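For anyone wanting to see the reported pattern end to end, a hedged reproduction sketch (the tiny checkpoint name is only an assumption to keep the run fast; any generative model with `num_beam_groups > 1` exercises the same code path, and the all-zero rows for earlier groups are what the issue reports, not something asserted here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=4,
    num_beam_groups=2,
    diversity_penalty=1.0,
    max_length=10,
    output_scores=True,
    return_dict_in_generate=True,
    pad_token_id=tokenizer.eos_token_id,
)

# One score tensor per generated step; per the report, only the rows belonging
# to the last beam group carry processed scores before the proposed fix.
print(out.scores[0])
```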
transformers
13,176
closed
GPT2 error when we try to run torch.jit.script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Ubuntu 18.04 - Python version: Python3.6 - PyTorch version (GPU?): 1.9.0 GPU - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I was trying to run torch.jit.script and I get the following error from JIT frontend ``` File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torch/jit/frontend.py", line 330, in __call__ raise UnsupportedNodeError(ctx, node) torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported: File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 756 # Ensure layer_past is on same device as hidden_states (might not be correct) if layer_past is not None: layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) ~ <--- HERE # Ensure that attention_mask is always on the same device as hidden_states if attention_mask is not None: ``` I am curious to know if this is a known issue or learn if I am doing something wrong. #### Sample Code: ``` from transformers import GPT2LMHeadModel, GPT2Config import torch configuration = GPT2Config(n_embd=1600, n_layer=48, n_head=25) model = GPT2LMHeadModel(configuration) script_model = torch.jit.script(model.base_model) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-18-2021 21:44:44
08-18-2021 21:44:44
Can you try the following: ```python from transformers import GPT2LMHeadModel, GPT2Config import torch configuration = GPT2Config(n_embd=1600, n_layer=48, n_head=25, use_cache=False) model = GPT2LMHeadModel(configuration) script_model = torch.jit.script(model.base_model) ``` just to first see whether disabling the cache solves the problem<|||||>> Can you try the following: > > ```python > from transformers import GPT2LMHeadModel, GPT2Config > import torch > configuration = GPT2Config(n_embd=1600, n_layer=48, n_head=25, use_cache=False) > model = GPT2LMHeadModel(configuration) > script_model = torch.jit.script(model.base_model) > ``` > > just to first see whether disabling the cache solves the problem I still see the same error. ``` torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported: File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 756 # Ensure layer_past is on same device as hidden_states (might not be correct) if layer_past is not None: layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) ~ <--- HERE # Ensure that attention_mask is always on the same device as hidden_states if attention_mask is not None: ``` It seems like the GPT model, as implemented today, is not supported by `torch.jit.script` because the model uses a generator expression, which TorchScript doesn't support yet. Is this accurate or am I missing anything? Also, would `torch.jit.trace`ing the GPT2 model cause any correctness issues? <|||||>Hello! The documentation regarding TorchScript can be found [here](https://huggingface.co/transformers/serialization.html#torchscript). You're correct in your analysis: `torch.jit.script` isn't supported, while `torch.jit.trace` is supported!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
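Since the thread settles on tracing, a hedged sketch of the `torch.jit.trace` route described in the linked TorchScript docs (the dummy sentence and output file name are arbitrary choices, not part of the original report):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# torchscript=True makes the model return tuples instead of dict-like outputs,
# which is what tracing expects.
model = GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True)
model.eval()

dummy_input = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]
traced_model = torch.jit.trace(model, (dummy_input,))
torch.jit.save(traced_model, "gpt2_traced.pt")
```

As with any traced model, the trace only captures the control flow taken for the dummy input, so correctness for noticeably different inputs should be checked case by case.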
transformers
13,175
closed
GPT-Neo ONNX Inference with past is broken
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 (1fec32adc6a4840123d5ec5ff5cf419c02342b5a) - Platform: Linux - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.0a0+2ecb2c7, True - Tensorflow version (GPU?): Not Installed, False - Using GPU in script?: Yes (3090) - Using distributed or parallel set-up in script?: No ### Who can help The issue is connected with a pull #12911: @michaelbenayoun @mfuntowicz @sgugger @LysandreJik ## Information Model I am using is gpt-neo 1.3B The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Model export ``` from pathlib import Path from transformers import GPTNeoForCausalLM, GPT2TokenizerFast, GPTNeoConfig from transformers.models.gpt_neo import GPTNeoOnnxConfig from transformers.onnx.convert import export MODEL_PATH = 'EleutherAI/gpt-neo-1.3B' TASK = 'causal-lm' ONNX_MODEL_PATH = Path("onnx_dir/gpt_neo_13b.onnx") ONNX_MODEL_PATH.parent.mkdir(exist_ok=True, parents=True) def main(): tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_PATH) config = GPTNeoConfig.from_pretrained(MODEL_PATH) onnx_config = GPTNeoOnnxConfig.with_past(config, task=TASK) model = GPTNeoForCausalLM(config=config).from_pretrained(MODEL_PATH) onnx_inputs, onnx_outputs = export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=ONNX_MODEL_PATH) print(f'Inputs: {onnx_inputs}') print(f'Outputs: {onnx_outputs}') if __name__ == '__main__': main() ``` 2. 
Inference code ``` import numpy as np import onnxruntime as ort from transformers import GPT2TokenizerFast, GPTNeoConfig from pathlib import Path MODEL_PATH = 'EleutherAI/gpt-neo-1.3B' ONNX_MODEL_PATH = Path("onnx_dir/gpt_neo_13b.onnx") PROMPTS = ['Hello there'] def _get_inputs(prompts, tokenizer, config): encodings_dict = tokenizer.batch_encode_plus(prompts) # Shape: [batch_size, seq_length] input_ids = np.array(encodings_dict["input_ids"], dtype=np.int64) # Shape: [batch_size, seq_length] attention_mask = np.array(encodings_dict["attention_mask"], dtype=np.float32) batch_size, seq_length = input_ids.shape past_seq_length = 0 num_attention_heads = config.num_attention_heads hidden_size = config.hidden_size even_present_state_shape = [ batch_size, num_attention_heads, past_seq_length, hidden_size // num_attention_heads ] odd_present_state_shape = [batch_size, past_seq_length, hidden_size] onnx_inputs = {} for idx in range(config.num_layers): if idx % 2 == 0: onnx_inputs[f'past_key_values.{idx}.key'] = np.empty(even_present_state_shape, dtype=np.float32) onnx_inputs[f'past_key_values.{idx}.value'] = np.empty(even_present_state_shape, dtype=np.float32) else: onnx_inputs[f'past_key_values.{idx}.key_value'] = np.empty(odd_present_state_shape, dtype=np.float32) onnx_inputs['input_ids'] = input_ids onnx_inputs['attention_mask'] = attention_mask return onnx_inputs def main(): config = GPTNeoConfig.from_pretrained(MODEL_PATH) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_PATH) ort_session = ort.InferenceSession(str(ONNX_MODEL_PATH)) onnx_inputs = _get_inputs(PROMPTS, tokenizer, config) outputs = ort_session.run(['logits'], onnx_inputs) if __name__ == '__main__': main() ``` The inference code runs into the following error: ``` Traceback (most recent call last): .... File "inference.py", line 60, in main outputs = ort_session.run(['logits'], onnx_inputs) File "/opt/conda/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_501' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,1,1, 4096}, requested shape:{1,1,1,16,128} ``` ## Expected behavior Onnx Inference for a model with past states should work. While converting without past states the inference works fine.
08-18-2021 21:13:02
08-18-2021 21:13:02
Gently pinging @mfuntowicz here<|||||>@michaelbenayoun @mfuntowicz @sgugger @LysandreJik would you be so kind to assist in resolving this issue?<|||||>Hello @whiteRa2bit, thanks for testing out the experimental `-with-past` feature of the ONNX export! @michaelbenayoun and @mfuntowicz are the best suited to answer, but they're off until early next week. We'll make sure to attend to this issue as soon as they're back! Thank you for your understanding.<|||||>@LysandreJik, thanks a lot for letting me know!<|||||>An update from my side: Inference works fine with the sequence length equals 1, while for all other lengths it breaks with the error I described above: I tried to visualize the converted onnx graph using netron and found the node where the error occurs: ![image](https://user-images.githubusercontent.com/28367451/131477923-36584e89-1bf9-4023-9c49-37efd6896890.png) <|||||>Hi @whiteRa2bit, I've actually made the same observation this morning, I am working on it!<|||||>#13491 along with #13524 solve the issue, but be careful of 2 things: - when exporting the model with past keys and values, the attention mask should have a sequence length of past_sequence_length + input_ids_sequence_length - ORT seems to not like inputs produced by np.empty (it produces NaN on my end compared to proper output when using np.zeros or np.ones for instance)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
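Building on the two caveats in the last comment, a hedged sketch of how the inference inputs above could be adjusted (the shape numbers are placeholders standing in for the real tokenizer output and `GPTNeoConfig` values):

```python
import numpy as np

# Placeholder shapes; in the real script these come from the tokenizer output
# and from GPTNeoConfig.
batch_size, seq_length, past_seq_length = 1, 2, 0
num_attention_heads, hidden_size, num_layers = 16, 2048, 24

onnx_inputs = {}

# 1) The attention mask has to cover past + current tokens.
onnx_inputs["attention_mask"] = np.ones(
    (batch_size, past_seq_length + seq_length), dtype=np.float32
)

# 2) Initialise the empty past states with zeros rather than np.empty,
#    which was reported to yield NaNs in ONNX Runtime.
even_shape = [batch_size, num_attention_heads, past_seq_length, hidden_size // num_attention_heads]
odd_shape = [batch_size, past_seq_length, hidden_size]
for idx in range(num_layers):
    if idx % 2 == 0:
        onnx_inputs[f"past_key_values.{idx}.key"] = np.zeros(even_shape, dtype=np.float32)
        onnx_inputs[f"past_key_values.{idx}.value"] = np.zeros(even_shape, dtype=np.float32)
    else:
        onnx_inputs[f"past_key_values.{idx}.key_value"] = np.zeros(odd_shape, dtype=np.float32)
```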
transformers
13,174
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
08-18-2021 21:09:54
08-18-2021 21:09:54
Closing this for now as it doesn't contain any information
transformers
13,173
closed
enable mixed precision for Tensorflow training benchmarks
# 🚀 Feature request Currently the [Tensorflow Benchmarks](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py) implemented in the transformers package only support training in FP32 mode, and FP16 support is [unimplemented](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py#L173). It could be helpful for the community to be able to benchmark training of the models in FP16 mode, as using mixed precision greatly improves training performance. ## Motivation Enabling mixed precision in training is [shown](https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540) to significantly improve the throughput of the training process. We implemented the missing FP16 support for training in the [Tensorflow Benchmarks](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py) to gauge the performance uplift, and noticed a 1.5x improvement in performance for `bert-base-uncased` using a batch size of `8` and a sequence length of `128`. ## Your contribution The code for this is implemented in the [amp_tf_training_benchmarks](https://github.com/huggingface/transformers/compare/master...harishneit:amp_tf_training_benchmarks) branch in a fork. I can submit a pull request with tests if the community is interested in this.
08-18-2021 20:42:42
08-18-2021 20:42:42
Sorry for the slow reply - this is definitely something we'd be interested in seeing! Can I ask why you used `tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite` in your fork rather than `set_global_policy` or similar? It's not necessarily wrong, but I'm curious what the tradeoffs are there!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
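For context on the `set_global_policy` route mentioned in the reply (the Keras mixed-precision API, as opposed to the graph-rewrite API used in the fork), a hedged sketch:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Enable Keras mixed precision before building the model: compute runs in
# float16 while variables stay in float32, and Keras handles loss scaling
# automatically when compiling/fitting.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
# ...compile and fit as in the existing benchmark/example scripts.
```

One trade-off worth noting is that the graph-rewrite API lives under `tf.compat.v1`, while the policy API is the currently documented Keras approach.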
transformers
13,172
closed
No module named: Regex while importing GPT2Tokenizer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0 - Platform: Linux - Python version: 3.6.13 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT2 Tokenizer The problem arises when using: * [ X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I am still confused about this error but it is very simple to reproduce: ``` from transformers import GPT2Tokenizer tokenizer = GPT2.from_pretrained('gpt2') ``` The problem is occurred due to this line: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/bertweet/tokenization_bertweet.py#L25 Is it about my python version? Normally regex imported as `re`, but can't understand why it is happened! Thanks. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Able to initialize GPT2 <!-- A clear and concise description of what you would expect to happen. -->
08-18-2021 19:53:30
08-18-2021 19:53:30
I can't reproduce the error. I ran the following code on Colab without any error: ``` from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') ``` Tested python version: 3.7.1, transformers version: v4.9.0 and v4.9.2.<|||||>That's true, it should work. The problem is probably environmental, but the point is that the line I pointed to raises an error since it should be `re`, e.g. [this one](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/seq2seq-distillation/sentence_splitter.py#L1) <|||||>@akalieren The `regex` package should not be the problem as it is automatically installed with transformers. The reason why it used `regex` instead of `re` can be found at the following comment in that file. https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/bertweet/tokenization_bertweet.py#L461 However, I think using `GPT2Tokenizer` should not be linked to `Bertweet` since they are not dependent on each other. Did you add additional code to use them together?<|||||>I deleted the environment and created it again. It is working now as expected. I got to [this issue](https://github.com/conda-forge/conda-forge.github.io/issues/1161) from the link you sent. I guess the problem is about the conda environment. Probably `Bertweet` is called from [`__init__.py`](https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/__init__.py#L19)
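As a quick sanity check before rebuilding an environment, a hedged diagnostic sketch for confirming whether the dependency is importable in the active interpreter:

```python
import importlib.util

# `regex` is installed automatically with transformers; if it cannot be located
# here, the environment itself is broken rather than the library.
for pkg in ("regex", "transformers"):
    spec = importlib.util.find_spec(pkg)
    print(pkg, "->", spec.origin if spec else "NOT FOUND")
```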
transformers
13,171
closed
[Docs] Function signatures on website not correctly reflecting current code.
## Environment info Tested v4.9.2 and master. ### Who can help @sgugger ## Information In [Trainer docs](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer), the first argument `model` in the function signature should be `model: Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None` as defined in the [code](https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L267). But it currently shows `model: torch.nn.modules.module.Module = None`, which seems to be outdated, however. I've tested building the docs on my system, and the resulting html is correct.
08-18-2021 19:15:21
08-18-2021 19:15:21
Would you like to open a PR to correct it @qqaatw ? :-)<|||||>@patrickvonplaten I could try, but the problem seems to be related to the CI/CD since function signatures are automatically generated by Sphinx, and the problem didn't occur on the docs I built manually on my machine. (Tested ubuntu 18.04 with python 3.8 / windows 10 with python 3.8) <|||||>Closed as the PR has been merged.
transformers
13,170
closed
Using `bf16` instead of `fp16`
# 🚀 Feature request As seen in [this pr](https://github.com/huggingface/transformers/pull/10956), there is demand for `bf16` compatibility in training of transformers models. The pytorch folks just [added this feature](https://github.com/pytorch/pytorch/pull/61002) to their master branch, so we are now able to work on adding it to this repo. ## Motivation Related to [this issue](https://github.com/huggingface/transformers/pull/10956) and [this pytorch pr](https://github.com/pytorch/pytorch/pull/61002). This feature would allow for proper half-precision training of google-trained models, for example any `T5` model. ## Your contribution I am currently working on a PR for this [here](https://github.com/JamesDeAntonis/transformers/tree/bf16), and would gladly field any suggestions and contributions. @stas00
08-18-2021 18:39:55
08-18-2021 18:39:55
Very much looking forward to enable bf16 in PyTorch :-) Think we should probably wait though until the next PyTorch release is out. But it would be a good idea to have it supported as soon as the release is out then. cc @LysandreJik @sgugger <|||||>Indeed. I will install pt-nightly and work on this. Thank you for staying on top of the pytorch development, @JamesDeAntonis <|||||>@stas00 I'm going to open a pr shortly for my [branch](https://github.com/JamesDeAntonis/transformers/tree/bf16). Feel free to check that out for starter
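For readers wondering what the PyTorch-side primitive looks like, a hedged sketch (this assumes a PyTorch build recent enough for `autocast` to accept a bfloat16 dtype, i.e. the nightly mentioned above, plus an accelerator with bf16 support):

```python
import torch

model = torch.nn.Linear(8, 8).cuda()
x = torch.randn(2, 8, device="cuda")

# bf16 keeps the same dynamic range as fp32, so no loss scaling is needed,
# which is what makes it attractive for google-trained checkpoints like T5.
with torch.cuda.amp.autocast(dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```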
transformers
13,169
closed
RobertaTokenizerFast object has no attribute '_convert_token_to_id'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9 - Platform: Linux - Python version: 3.6 - Tensorflow version (GPU?): 2.5 - Using GPU in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @patrickvonplaten, @LysandreJik Models: Roberta(Tokenizer)Fast ## Information Model I am using (Bert, XLNet ...): Roberta(Tokenizer)Fast The problem arises when using: * [ ] the official example scripts: (give details below) * [X ] my own modified scripts: (give details below) LM-BFF The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) Fine-tuning of existing language model for my own task ## To reproduce Steps to reproduce the behavior: 1) Create RobertaFastTokenizer 2) Try to call _convert_token_to_id ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The example given by the maintainers of LM-BFF supposedly runs successfully, at least with what they claim is version 3.4 or higher of transformers. However, upon my inspection of source code of transformers, I see that _convert_token_to_id is not associated with fast tokenizers only with standard tokenizers. For example, if I view transformers.tokenization_gpt2 (Roberta tokenizer being built on gpt2) for v 3.4.0, I see _convert_token_to_id present and implemented. However, if I go to transformers.tokenization_gpt2_fast, it is not there. Is this a bug, is this something that was removed at some point, or are we simply not able to access _convert_token_to_id when using a Fast tokenizer?
08-18-2021 15:51:46
08-18-2021 15:51:46
Hey @demongolem-biz, Could you maybe post a short, reproducible code snippet that showcases the problem? Also note that `_convert_token_to_id` is a private method and that the corresponding public method `convert_token_to_id` is the one to be checked.<|||||>The code is not mine, I am trying to use someone else's, but here is the `__init__` function from the beginning to the point when `_convert_token_to_id` is used. I know tokenizer is being passed in, but runtime the error message claims the tokenizer object is a RobertaTokenizerFast. ``` ` def __init__(self, args, tokenizer, cache_dir=None, mode="train", use_demo=False): self.args = args self.task_name = args.task_name self.processor = processors_mapping[args.task_name] self.tokenizer = tokenizer self.mode = mode # If not using demonstrations, use use_demo=True self.use_demo = use_demo if self.use_demo: logger.info("Use demonstrations") assert mode in ["train", "dev", "test"] # Get label list and (for prompt) label word list self.label_list = self.processor.get_labels() self.num_labels = len(self.label_list) if args.prompt: assert args.mapping is not None self.label_to_word = eval(args.mapping) for key in self.label_to_word: # For RoBERTa/BART/T5, tokenization also considers space, so we use space+word as label words. if self.label_to_word[key][0] not in ['<', '[', '.', ',']: # Make sure space+word is in the vocabulary assert len(tokenizer.tokenize(' ' + self.label_to_word[key])) == 1 self.label_to_word[key] = tokenizer._convert_token_to_id(tokenizer.tokenize(' ' + self.label_to_word[key])[0]) else: self.label_to_word[key] = tokenizer._convert_token_to_id(self.label_to_word[key]) logger.info("Label {} to word {} ({})".format(key, tokenizer._convert_id_to_token(self.label_to_word[key]), self.label_to_word[key])) ``` `<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,168
closed
Issue with `Speech2TextFeatureExtractor` method `from_pretrained` and `from_dict`
## Environment info - `transformers` version: `master` - Platform: ubuntu - Python version: Python 3.7.11 (default, Jul 3 2021, 18:01:19) - PyTorch version (GPU?): 1.9.0+cu102 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - There is an issue when trying to load the `Speech2TextFeatureExtractor` from a local path. **How to reproduce** ```python !git lfs install !git clone https://huggingface.co/facebook/s2t-small-mustc-en-fr-st from transformers import AutoFeatureExtractor extractor = AutoFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") ``` producing ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-6-c149f0c996ea> in <module>() 3 from transformers import AutoFeatureExtractor 4 ----> 5 extractor = AutoFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") /usr/local/lib/python3.7/dist-packages/transformers/models/auto/feature_extraction_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 158 159 if model_type is not None: --> 160 return FEATURE_EXTRACTOR_MAPPING[type(config)].from_dict(config_dict, **kwargs) 161 elif "feature_extractor_type" in config_dict: 162 feature_extractor_class = feature_extractor_class_from_name(config_dict["feature_extractor_type"]) AttributeError: type object 'Speech2TextFeatureExtractor' has no attribute 'from_dict' ```` also the `Speech2TextFeatureExtractor` doesn't work ```python !git lfs install !git clone https://huggingface.co/facebook/s2t-small-mustc-en-fr-st from transformers import Speech2TextFeatureExtractor extractor = Speech2TextFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") ``` producing ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-7-c87c386101dd> in <module>() 3 from transformers import Speech2TextFeatureExtractor 4 ----> 5 extractor = Speech2TextFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") AttributeError: type object 'Speech2TextFeatureExtractor' has no attribute 'from_pretrained' ```
08-18-2021 15:34:55
08-18-2021 15:34:55
@philschmid, I think you are missing some dependencies, which is why `Speech2TextFeatureExtractor` refers to the dummy object, defined here: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/utils/dummy_speech_objects.py#L5 which doesn't have a `from_pretrained(...)` method. Can you try installing `transformers` with `pip install -e ".[speech]"` ? That should fix the error<|||||>Some `torchaudio` dependency is missing I think: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py#L23<|||||>@LysandreJik @sgugger - I think those kinds of errors have shown up quite often already...would it make sense to add a `.from_pretrained(....)` method to all dummy objects that gives a better error message?<|||||><del>Maybe just add nice error messages to `.from_pretrained(....)` and `.__init__(...)`</del> `__init__` already has it - just `from_pretrained(...)` then I think would be nice<|||||>The suggestion from @patrickvonplaten solved it, should we close it for now? <|||||>Just for posterity: @patrickvonplaten and I agreed that `Speech2Text` could be refactored to use `requires_backends("speech")` similarly to DETR and [TAPAS](https://github.com/huggingface/transformers/blob/ab7551cd7ff84cb5b7328bc37a06e06fa19f02bb/src/transformers/models/tapas/modeling_tapas.py#L803) to provide a user-friendly error on model loading.
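As a quick way to confirm which situation you are in, a hedged sketch (this assumes the `is_speech_available` helper, which gates the real feature extractor, is importable from `transformers.file_utils` in your version):

```python
from transformers.file_utils import is_speech_available

# False here means torchaudio is missing and Speech2TextFeatureExtractor
# resolves to the dummy object without from_pretrained.
print("speech backend available:", is_speech_available())
```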
transformers
13,167
closed
Update namespaces inside torch.utils.data to the latest.
# What does this PR do? Address #13036 . 1. Replace `torch.utils.data.dataset` with `torch.utils.data` 2. Replace `torch.utils.data.sampler` with `torch.utils.data` 3. Replace `torch.utils.data.dataloader` with `torch.utils.data` ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
08-18-2021 14:07:23
08-18-2021 14:07:23
Is this compatible with older versions of PyTorch? Given the issue history this looks good though<|||||>@patrickvonplaten Yes, it's backward compatible with PyTorch 1.2.0+, which meets transformers' PyTorch requirements: 1.3.1+. Ref: https://pytorch.org/docs/1.2.0/data.html https://github.com/pytorch/pytorch/blob/v1.2.0/torch/utils/data/__init__.py
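For readers skimming the diff, a short sketch of what the replacement amounts to in practice:

```python
# Before: importing from the submodule paths
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.sampler import RandomSampler

# After: importing from the public torch.utils.data namespace
from torch.utils.data import Dataset, DataLoader, RandomSampler
```

Both forms resolve to the same classes on the supported PyTorch versions; the change is purely about using the stable public namespace.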
transformers
13,166
closed
[AutoFeatureExtractor] Fix loading of local folders if config.json exists
# What does this PR do? Currently there is a problem when loading the feature extractor locally via `AutoFeatureExtractor` as spotted by @philschmid: ```bash !git lfs install !git clone https://huggingface.co/facebook/wav2vec2-base-960h ``` and then: ```python from transformers import AutoFeatureExtractor extractor = AutoFeatureExtractor.from_pretrained("wav2vec2-base-960h") ``` This leads to an error. This PR fixes it and also improves the error message.
08-18-2021 13:52:59
08-18-2021 13:52:59
@sgugger - merging for now. Let me know if something is not well and we can change afterwards. Tests have been added so this PR should be more or less safe.
transformers
13,165
closed
Performance issues in the program
Hello, I found a performance issue in the definition of `convert_dataset_for_tensorflow` in examples/tensorflow/text-classification/run_glue.py: [tf_dataset = tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/text-classification/run_glue.py#L83) is called without **num_parallel_calls**. I think it would increase the efficiency of your program if you added this. The same issue also exists in [tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/text-classification/run_text_classification.py#L98), [.map(densify_ragged_batch)](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/multiple-choice/run_swag.py#L109) and [tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/question-answering/run_qa.py#L253). Here is [the documentation of tensorflow](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset?hl=en#map) supporting this suggestion. Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.
08-18-2021 13:08:05
08-18-2021 13:08:05
Hi @DLPerf, thanks for the warning! We're actually in the process of refactoring our Datasets to automatically support conversion to TF Datasets, at which point I'll be removing this part of our examples and replacing it with a call to the conversion method. However, if you have any insights for how we can improve the performance of our conversion methods there, that would be very helpful! You can review the code at https://github.com/huggingface/datasets/pull/2731 and leave suggestions as comments on that PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
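A hedged sketch of the change the issue proposes, with toy stand-ins for the real dataset and densify function from the example scripts:

```python
import tensorflow as tf

# Toy dataset and a no-op densify step standing in for the real objects.
tf_dataset = tf.data.Dataset.range(1024).map(lambda x: {"input_ids": tf.fill([8], x)})
batch_size, drop_remainder = 32, True

def densify_ragged_batch(features):
    return features  # placeholder for the real densification logic

tf_dataset = (
    tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder)
    .map(densify_ragged_batch, num_parallel_calls=tf.data.AUTOTUNE)
    .prefetch(tf.data.AUTOTUNE)
)
```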
transformers
13,164
closed
Missing weight in pretrained model `pegasus-xsum`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux - Python version: 3.9.6 - PyTorch version (GPU?): 1.9.0 with GPU - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten ## Information I am using the PegasusForConditionalGeneration model. I found that the pretrained weight `google/pegasus-xsum` hosted by HuggingFace does not have the weight `lm_head.weight` defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/modeling_pegasus.py#L1210. ## To reproduce Steps to reproduce the behavior: 1. Download the `google/pegasus-xsum` weight from HuggingFace. 2. Load it use `torch.load`. 3. List its keys, no `lm_head.weight` is contained! <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> We should have the `lm_head` weights, because the weight file actually contains the bias `final_logits_bias` defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/modeling_pegasus.py#L1209. And the pretrained model name `google/pegasus-xsum` suggests that it is finetuned on the XSum dataset (which is a ConditionalGeneration task), so the weight for `lm_head` should be contained to make the finetuned model complete!
08-18-2021 09:34:43
08-18-2021 09:34:43
I just tested this, it works fine for me, `lm_head` is included. Colab notebook here: https://colab.research.google.com/drive/1oCrC3Tb07C7V1l-0Fx6_c8xdEbyqx9Km?usp=sharing It's best to use `.from_pretrained` instead of `torch.load`.<|||||>A lot of thanks for your reply! :^) I just figured out that the `lm_head.weight` actually maps the internal embeddings back to word predictions, whose weights will be tied to the input word embeddings! So the lm_head's weight will definitely not be included in the pretrained weight file. I did not realize this due to my lack of knowledge about NLP models :^(. Thanks again for your time!
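To make the resolution concrete, a small sketch (assuming the standard tied-embedding setup used by Pegasus) showing why `lm_head.weight` is absent from the checkpoint file: it is tied to the input embedding matrix, so after loading, the two tensors are identical and only one copy needs to be stored.

```python
import torch
from transformers import PegasusForConditionalGeneration

model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

# lm_head.weight is not serialized separately; it is tied to the shared embeddings.
print(torch.equal(model.lm_head.weight, model.get_input_embeddings().weight))  # True
```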
transformers
13,163
closed
is there any <SOS> or <EOS> token in reformer-enwik8?
## Environment info None ### Who can help @patrickvonplaten Models: reformer-enwik8 ## Information Model I am using reformer-enwik8: The problem arises when using:
```python
import torch
from transformers import ReformerModelWithLMHead


def encode(list_of_strings, pad_token_id=0):
    max_length = max([len(string) for string in list_of_strings])

    # create empty tensors
    attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long)
    input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long)

    for idx, string in enumerate(list_of_strings):
        # make sure string is in byte format
        if not isinstance(string, bytes):
            string = str.encode(string)

        input_ids[idx, :len(string)] = torch.tensor([x + 2 for x in string])
        attention_masks[idx, :len(string)] = 1

    return input_ids, attention_masks


model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8")
ids, masks = encode(["I COULD LABEL THIS ON THE INGREDIENTS AS MEAT".capitalize()])
logits = model(input_ids=ids, attention_mask=masks)["logits"]
```
The task I am working on: trying to get the LM probability of a given sequence. ## Expected behavior I expect 0 and 1 to represent `<SOS>` and `<EOS>` respectively, but I don't know if this is correct.
08-18-2021 08:14:30
08-18-2021 08:14:30
I think `reformer-enwik8` was not trained using a `<SOS> or <EOS>` token - it's just the model to evaluate Reformer's compression capabilities on enwik8, see paper: https://arxiv.org/abs/2001.04451<|||||>> I think `reformer-enwik8` was not trained using a `<SOS>` or `<EOS>` token - it's just the model to evaluate Reformer's compression capabilities on enwik8, see paper: https://arxiv.org/abs/2001.04451 Thanks a lot for the quick reply. btw, is there any other pretrained character-level language model provided by huggingface now?<|||||>To add a related question: Is there any way of knowing which characters are part of the vocabulary of the pre-trained enwik-8 model? To my knowledge there only exists information on the `vocab_size` which is set to 258, but no information on which characters are part of the vocabulary of the pre-trained model.<|||||>Reformer simply uses Python's `chr()` and `ord()` methods to tokenize and decode. See: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 <|||||>This is true. But those methods work with the Unicode character set (i.e. up to 1,114,111) which does not correspond to a `vocab_size` of 258. This can also be seen with the shape of `outputs.scores` which is (1, 258). It seems to me that the vocab_size is just the first 258 characters of the unicode standard, i.e. Basic Latin & Latin-1 Supplement. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
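To complement the thread, a small illustrative sketch (not part of the original issue) of the matching decode step under the same convention as the `encode()` helper above — each id is the byte value plus an offset of 2, and ids 0 and 1 are skipped:

```python
import torch


def decode(outputs_ids):
    # Inverse of the encode() helper: subtract the offset of 2 and map the
    # remaining byte values back to characters; ids below 2 are dropped.
    decoded_outputs = []
    for output_ids in outputs_ids.tolist():
        decoded_outputs.append("".join(chr(x - 2) for x in output_ids if x > 1))
    return decoded_outputs


print(decode(torch.tensor([[74, 103, 110, 110, 113]])))  # ['Hello']
```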
transformers
13,162
closed
Fine-tuned Robust Wav2Vec 2.0 models
# 🌟 New model addition ## Model description The pretrained Robust Wav2Vec 2.0 model is already available on the Hugging Face model hub (https://huggingface.co/facebook/wav2vec2-large-robust). Facebook also released two fine-tuned models -- one which is fine-tuned on Librispeech and another which is fine-tuned on Switchboard. Would be good to have these fine-tuned models on the hub as well. ## Open source status * [x] the model weights are available on fairseq: https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md @patrickvonplaten
08-18-2021 07:47:28
08-18-2021 07:47:28
Adding them now :-) BTW @Nithin-Holla, It could be a cool project to fine-tune the `wav2vec2-large-robust` checkpoint on the [AMI dataset](https://groups.inf.ed.ac.uk/ami/corpus/datasets.shtml) since the model was not pretrained on AMI and AMI is a conversational dataset instead of a read-out corpus. Would be pretty interesting to see what performance can be achieved there (also compared to "non-robust" Wav2Vec2. If you'd be interested in such a project, let me know I'd be more than happy to help you there :-)<|||||>@patrickvonplaten Awesome, thanks for the speedy addition! Sure, I'd be interested in working on fine-tuning on the AMI dataset :)<|||||>Awesome, I'll send you a mail :-) BTW, there is still a problem with the just published checkpoints: https://github.com/pytorch/fairseq/issues/3799<|||||>Hi @Nithin-Holla @patrickvonplaten Wondering if this thread went anywhere? I'm attempting to finetune Hubert-large on the AMI dataset, would be interested to see where you guys got to and share results.<|||||>Yeah, we didn't manage to get good results yet sadly with AMI - it's mainly due AMI not being chunked by default.<|||||>Thanks @patrickvonplaten. I think It would be cool to make a robust spoken-to-written-language engine. I'm thinking we could supplement the AMI corpus, eg. youtube, or even make our own spoken wikipedia/similar. If you have started a working group on this, let me know. Feel free to send me an email. :) From my experiments, even though WER isn't "great," I see the finetuned model picking features of meeting conversation, which encourages me to see possibility here. I think the AMI data mimics adult spoken language better (than just reading of text, aka librispeech), and _together_ is like how humans learn language (from hearing -> reading + writing). <|||||>ps. I had chunked AMI to 10-second chunks, using actually the scripts you already published on HG.<|||||>Hey @i-am-neo, We've run quite some experiments here: https://huggingface.co/ami-wav2vec2, but didn't get super good results so far compared to other research papers. I think it's mainly due the way we've chunked AMI. Think we need to align ourselves with how other people chunked the data. Here is how the data should be chunked (didn't manage to take a look into this yet - sadly): > The first question we have is whether IHM corresponds to all 4 individual headset audio files are used, *i.e.* the Individual headsets 120M four individual WAV headsets data or whether it corresponds to the single headset mix, *i.e.*, the Headset mix 30M single wav file (on https://groups.inf.ed.ac.uk/ami/download/) > The Kaldi recipe that we wrote for the paper(s) uses separate channels for each headset (see: [here](https://github.com/kaldi-asr/kaldi/blob/master/egs/ami/s5/local/ami_download.sh)), though I would be surprised if the mix-headset variant resulted in statistically different WERs, saving tons of bandwidth (of course, one never knows until you the experiment). The segmentations we use for separate individual headsets are of course compatible with the mixed channel waveform. > The second question that we have is how the long audio clips are exactly chunked. Sadly we couldn't really find an "official" pre-processing script and t[here](https://github.com/kaldi-asr/kaldi/blob/master/egs/ami/s5/local/ami_xml2text.sh) is very little exact information on how the data is chunked. Do you guys have an official preprocessing script that you could maybe share? > The script here starts with the original AMI annotations in XML format. 
The AMI data comes with manual segmentations and timings for interpunction signs, and this is how we broke the long utterances. One caveat here, for end2end models you probably do this anyways as decoding graphs are much smaller than in HMM days, thus decoding time for these will not be an issue. That script can either query textual annotations from the AMI-provided JAVA tool, or download exported version in textual form. In the Kaldi recipe we did not want to make a dependency on JAVA thus we follow from exported files by default, though you can grasp from the script how to do so if you wish. <|||||>Hi @patrickvonplaten, thanks. I got similar loss and wer results with two flavors of hubert (3 epochs, max 10-sec chunks, single-headset). Were your experiment results run with the 20-sec max chunks? Which version of the dataset? I suspect there's more to the results than how we chunked. One can find in the single-headset version - 1. two speakers speaking over each other 2. and text like `i d i don't thi i don't think that it would be a a structural weakness` and `yes that's wi uh this will definitely`. Examples [here](https://colab.research.google.com/drive/1vSAabtxv_4HHi3mEdht4B6ud60djtj1z?usp=sharing). Some of the timings also seem a bit off, though I have to find time to look into it further. What are your thoughts? I agree with chunking by `interpunction` - it seems more "natural" to me, though I grouped short phrases into a minimum of 6 words if possible.<|||||>We used the processing as described here: https://huggingface.co/datasets/ami#dataset-preprocessing Think we should apply the official Kaldi preprocessing though. Agree the target text is quite noisy <|||||>Hi @patrickvonplaten Yes, the preprocess steps I had used for training was almost verbatim from [https://huggingface.co/datasets/ami#dataset-preprocessing](https://huggingface.co/datasets/ami#dataset-preprocessing), except for setting `MAX_LENGTH_IN_SECONDS = 10.0`. I've taken a closer look at the segment timestamps, and think they are actually inaccurate, likely due to > The AMI data comes with _manual_ segmentations and timings for interpunction signs, and this is how we broke the long utterances (italics mine). Have a look at some examples [here](https://colab.research.google.com/drive/1hUrREy7kV1HuUqJlNvzJqbr_KwZZBX2Y?usp=sharing). It looks to me that cleanly labeled audio is freely mixed in with inaccurately-labeled ones, enough that we can't use AMI by itself without someone going in to clean the dataset manually. Still, I think it a helpful exercise to find other datasets similar to AMI to train a robust transcription model, either by supplementing AMI, or replacing it. You?<|||||>Agree that it'd be very important to test Wav2Vec2 on more "real-world" / "robust" data. Sadly, I don't know any other dataset besides AMI that could fit that use-case. Do you have any ideas?<|||||>I'm thinking Youtube. We can try training first on some English Youtube public AMI-like videos. (I suspect we'd need to hook up a "spoken" version of a language model for decoding as well, but this is probably the easier part of the task). If the WER/whatever-metric-we-choose from training looks promising, we could take a subset of [AudioSet](https://research.google.com/audioset///index.html) (those identified as human speech) and build out a larger dataset. To me, Hubert-large seems a more robust acoustic model than Wav2Vec2 to start with. Let me know what you think.
transformers
13,161
closed
Cannot run run_mlm.py on a Japanese dataset - AttributeError: module transformers.models.mbart50 has no attribute BertJapaneseTokenizerFast
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: - albert, bert, xlm: @LysandreJik Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) transformers/examples/pytorch/language-modeling/run_mlm.py The tasks I am working on is: * [x] my own task or dataset: (give details below) It's a Japanese corpus in .txt format. ## To reproduce Steps to reproduce the behavior: 1. I followed the instructions at https://huggingface.co/transformers/examples.html: git cloned the transformers repository, installed it, along with requirements in language-modeling. 2. I tried to run it with `python run_mlm.py --model_name_or_path cl-tohoku/bert-base-japanese-whole-word-masking --train_file /path/to/train/file.txt --do_train --output_dir output_dir/ ` Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 337, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 424, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 219, in tokenizer_class_from_name return getattr(module, class_name) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/file_utils.py", line 1992, in __getattr__ raise AttributeError(f"module {self.__name__} has no attribute {name}") AttributeError: module transformers.models.mbart50 has no attribute BertJapaneseTokenizerFast ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should be done without an error. I have done this in July, and it went through without a problem.
08-18-2021 07:41:28
08-18-2021 07:41:28
Hi, I think that you need to run the whole word masking script which can be found [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/mlm_wwm) instead of the regular `run_mlm.py` script (as you're doing whole word masking instead of just masking tokens). I've created a Colab notebook, it seems to work fine! https://colab.research.google.com/drive/1d2yGWLYy44KgSId1WbSfusX0Jp8JhKyD?usp=sharing<|||||>It worked! Thank you so much!<|||||>I needed to run run_mlm.py, not run_mlm_wwm.py, this time, and tried to run `python run_mlm.py --model_name_or_path cl-tohoku/bert-base-japanese --train_file /path/to/train/file.txt --do_train --output_dir output_dir/` and got the same error message: ``` Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 337, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) File "/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 431, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 226, in tokenizer_class_from_name return getattr(module, class_name) File "/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/file_utils.py", line 1995, in __getattr__ raise AttributeError(f"module {self.__name__} has no attribute {name}") AttributeError: module transformers.models.rembert has no attribute BertJapaneseTokenizerFast ``` I cannot figure out how to resolve this. I would greatly appreciate if you could look into it. @NielsRogge <|||||>I found the root of your issue and the PR mentioned above should fix it.<|||||>Thank you very much!
transformers
13,160
open
Advice needed: Adding more FSMT models
# 🌟 New model addition ## Model description I am planning to contribute a series of FSMT models to the model hub. The models have been trained for a paper that is currently under review. Before working on a PR I wanted to ask for some advice: ### normalize_before The new models have been trained with Fairseq's option `normalize_before=True`, while the existing FSMT implementation uses `normalize_before=False`. I understand that copy-pasting model code is preferred to extending the configuration. This would mean that a near-duplicate module `fsmt_prenorm` needs to be created. Is this correct? ### Adequate base branch The FSMT module is currently being refactored (https://github.com/huggingface/transformers/pull/11218). Do you recommend that I start from the master branch or from the PR's feature branch, which is nearly completed?
08-18-2021 06:26:00
08-18-2021 06:26:00
@patil-suraj I am still very motivated to work on the pull request :) Just let me know if you need more information to answer my question. In case you're interested, the paper describing our models is now public (https://openreview.net/forum?id=RvO9DqoWI9V). I believe the models could be of value to others in the community.
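For context on the `normalize_before` distinction discussed above, a generic PyTorch illustration (not FSMT code) of the two layer orderings a transformer block can use:

```python
import torch.nn as nn


class Block(nn.Module):
    def __init__(self, d_model, normalize_before=True):
        super().__init__()
        self.normalize_before = normalize_before
        self.norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):
        if self.normalize_before:
            # pre-norm: LayerNorm -> sublayer -> residual
            return x + self.ffn(self.norm(x))
        # post-norm: sublayer -> residual -> LayerNorm
        return self.norm(x + self.ffn(x))
```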
transformers
13,159
closed
Fix load_tf_weights alias.
# What does this PR do? 1. Address #13154 2. I'm checking whether other models besides ALBERT have the same problem. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik
08-18-2021 04:47:40
08-18-2021 04:47:40
All models with `load_tf_weights` have been checked; no bugs found!<|||||>This looks correct to me!
transformers
13,158
closed
CvT: Convolution based Image Transformers
# 🌟 New model addition ## Model description A new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformers (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (e.g. shift, scale, and distortion invariance) while maintaining the merits of Transformers (e.g. dynamic attention, global context, and better generalization). ## Open source status * [ https://github.com/microsoft/CvT] the model implementation is available: the Microsoft model is open source and would be a good addition to the huggingface library * [ https://1drv.ms/u/s!AhIXJn_J-blW9RzF3rMW7SsLHa8h?e=blQ0Al] the model weights are available: the pretrained weights are present in the drive * [https://github.com/leoxiaobin] is the author: @leoxiaobin
08-18-2021 04:42:34
08-18-2021 04:42:34
I would like to work on this @LysandreJik if you feel it's a nice addition.<|||||>Great suggestion! How is this model different from Facebook AI's [ConViT](https://github.com/facebookresearch/convit)? Currently, we have [ViT](https://huggingface.co/transformers/model_doc/vit.html), [DeiT](https://huggingface.co/transformers/model_doc/deit.html) and [BEiT](https://huggingface.co/transformers/master/model_doc/beit.html) in the library. It would be cool to have a Vision Transformer with convolutional inductive biases in the library, as it's probably better in terms of sample efficiency/FLOPS. Perhaps you can compare CvT and ConViT, and add the best of the two to the library? I can help you if you want (I've contributed the aforementioned ones 😉 ).<|||||>@NielsRogge yeah sure. Any help is great help. I haven't read ConvViT in depth but on skimming through it they have attempted to do something similar to convolutions. While CvT use pure convolution and here in this architecture they eliminate need for positional embedding, simplifying design for vision tasks with variable input resolution. Position Embedding is often realized by fixed-length learn-able vectors, limiting the trained model adaptation of variable-length input. This seems a good architecture even on metrics. Your thoughts? If you agree then I can move forward with your help since this my first contribution here.<|||||>> Position Embedding is often realized by fixed-length learn-able vectors, limiting the trained model adaptation of variable-length input. Yeah indeed, models like ViT and BEiT require interpolation of the pre-trained position embeddings when fine-tuning, which is a pain. Do you know how to get started to add a model? Most info can be found [here](https://huggingface.co/transformers/contributing.html) and [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model).<|||||>@NielsRogge yeah. I have gone through it. I can try following similarly as given ViT, BEiT. I can start it now. If I get stuck I will get back to you. <|||||>The issue is resolved with PR #17299
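As a rough illustration of the "convolutional token embedding" idea discussed above (not the actual CvT code), a strided `Conv2d` can replace the fixed patch-plus-position-embedding step, which is why no positional embeddings are required:

```python
import torch
import torch.nn as nn


class ConvTokenEmbedding(nn.Module):
    def __init__(self, in_channels=3, embed_dim=64, kernel_size=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size, stride=stride, padding=kernel_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, pixel_values):
        x = self.proj(pixel_values)       # (B, C, H', W')
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)  # (B, H'*W', C) token sequence
        return self.norm(x), (H, W)


tokens, _ = ConvTokenEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 3136, 64])
```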
transformers
13,157
closed
export BART model to ONNX failed with [Segmentation fault (core dumped)]
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:`v4.10.0-dev0` - Platform:Ubuntu 18.04.3 LTS - `Python` version:`v3.8.11` - `PyTorch` version (GPU?):`v1.9.0-cu102`(TRUE) - `Tensorflow` version (GPU?):`None` - `onnx` version:`v1.10.1` - `onnxruntim` version:`v1.8.1` - Using GPU in script?:`False` - Using distributed or parallel set-up in script?:`False` ### Who can help @patrickvonplaten @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using **BART**: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts following the **command line example** given in the official [Export transformers models](https://huggingface.co/transformers/serialization.html#onnx-onnxruntime) document. ## To reproduce Steps to reproduce the behavior: 1.run the following command line in console: ``` python -m transformers.onnx --model="lidiya/bart-large-xsum-samsum" --feature=default "lidiya-bart-large-xsum-samsum" ``` <details> <summary>Full log</summary> <pre> Some weights of the model checkpoint at lidiya/bart-large-xsum-samsum were not used when initializing BartModel: ['lm_head.weight', 'final_logits_bias'] - This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Using framework PyTorch: 1.9.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: Validating ONNX model... Traceback (most recent call last): Segmentation fault (core dumped) </pre> </details> <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Exporting **BART** model to onnx successfully and can be run on onnxruntime to generate correct results. <!-- A clear and concise description of what you would expect to happen. -->
08-18-2021 03:49:39
08-18-2021 03:49:39
Hello @PanQiWei! I have successfully exported the model with your command: ``` Validating ONNX model... -[✓] ONNX model outputs' name match reference model ({'last_hidden_state', 'encoder_last_hidden_state'} - Validating ONNX Model output "last_hidden_state": -[✓] (2, 8, 1024) matches (2, 8, 1024) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output "encoder_last_hidden_state": -[✓] (2, 8, 1024) matches (2, 8, 1024) -[✓] all values close (atol: 0.0001) All good, model saved at: lidiya-bart-large-xsum-samsum/model.onnx ``` Would you mind mentioning the versions of PyTorch and onnxruntime you have installed in your environment? Thank you!<|||||>Hi @LysandreJik ! First of all, thank you for your replay! I'm currently using ```pytorch==1.9.2``` with cuda version 10.2 and ```onnxruntime==1.8.1``` For 🤗transformers I tried both ```v1.9.2``` and ```the unrealesed version by installing from source```. I tried saveral times to export Bart model into ONNX but all got the failed information as given above.<|||||>A segmentation fault isn't easy to debug, but I wonder if this isn't a memory error under the hood. Are you using a google colab? Can you try exporting the following model, which is much smaller, to see if this succeeds or not? `sshleifer/distilbart-cnn-12-6`<|||||>I'm using GPU provided by my company, which is a RTX2080 GPU. I ran the command and replace model name with ```sshleifer/distilbart-cnn-12-6```, this time the error message changed, as shown below: ``` Using framework PyTorch: 1.9.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: Validating ONNX model... Floating point exception (core dumped) ```<|||||>Hmmm I'm failing at reproducing :( I have the following versions, could you try installing them to see if it changes something? 
``` onnx 1.9.0 onnxruntime 1.8.1 torch 1.9.0 ``` I can also upload the converted model to the hub under a repository if that's helpful for you.<|||||>I re-installed the libraries with the same versions as yours, but it still failed. 😢 It would be wonderful if you could upload the converted model! ❤️ Again, thank you for your help! 😄<|||||>Hello again! I've uploaded the converted model here: https://huggingface.co/lysandre/onnx-bart/tree/main (search for model.onnx)<|||||>Thanks soooo much!! 😆 It's my first time trying an ONNX model; I can't wait to see the improvement in my tasks, thank you! ❤️
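As a follow-up illustration (not from the thread), a hedged sketch of exercising an exported `model.onnx` with `onnxruntime`. The exact graph input names depend on the export configuration, so the sketch filters the feed dict against what the session actually declares:

```python
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lidiya/bart-large-xsum-samsum")
session = InferenceSession("lidiya-bart-large-xsum-samsum/model.onnx")

inputs = tokenizer("ONNX runtime check", return_tensors="np")
# Candidate inputs; only those the exported graph declares are actually fed.
feed = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
    "decoder_input_ids": inputs["input_ids"],
    "decoder_attention_mask": inputs["attention_mask"],
}
available = {i.name for i in session.get_inputs()}
outputs = session.run(None, {k: v for k, v in feed.items() if k in available})
print(outputs[0].shape)  # last_hidden_state
```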
transformers
13,156
closed
🐛: skip_special_tokens in tokenization_utils.py
# What does this PR do? 🐛: skip_special_tokens in tokenization_utils.py The skip_special_tokens check in tokenization_utils.py does not work, because the token is never in self.all_special_ids; it should be changed to self.all_special_tokens. Many thanks! @n1t0 @LysandreJik.
08-18-2021 03:49:13
08-18-2021 03:49:13
I think this is redundant, because the special ids have already been skipped in `filtered_tokens`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
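A small sketch showing the behaviour under discussion — with the slow tokenizer, special tokens are already filtered by id before the loop this PR touches, which is why the reviewer considers the change redundant (the model name is used only as an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
ids = tokenizer("hello world")["input_ids"]  # includes [CLS] and [SEP]

print(tokenizer.decode(ids, skip_special_tokens=False))  # [CLS] hello world [SEP]
print(tokenizer.decode(ids, skip_special_tokens=True))   # hello world
```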
transformers
13,155
closed
Add FSNER example in research_projects
# What does this PR do? - This PR adds example code for FSNER (few-shot named entity recognition) using huggingface's `transformers` library. - Only prediction/inference code is provided, training code will be provided very soon. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/pull/13133 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-17-2021 20:01:11
08-17-2021 20:01:11
Looks great already! I left some small comments.<|||||>Hi @NielsRogge ! Would you mind telling me what else should I do? Or it's ready to merge? Thanks!<|||||>Hi, Now others need to review this, once they're back from holiday ;)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sayef, there are a few code quality issues; I tried to push to your branch but I do not have push access on your branch. Could you run the following commands at the root of your clone? It should tell you what needs fixing: ``` pip install -U -e .[quality] make fixup ```<|||||>Hi @LysandreJik, I followed what you suggested. Let me know if I need to do anything else. :) <|||||>Hi @sayef - I believe you also rebased or merged the `master` branch into your PR. Unfortunately, GitHub sometimes has issues understanding what happened, for example here your PR shows 245 commits and 466 files changed. Usually just closing the PR and opening a new one from the same branch, without changing anything is enough. Would you mind doing that and pinging me so that I may merge? Thank you!<|||||>> Usually just closing the PR and opening a new one from the same branch, without changing anything is enough. Would you mind doing that and pinging me so that I may merge? Thank you! Okay. Closing here and will ping you in other PR.
transformers
13,154
closed
AttributeError: 'AlbertModel' object has no attribute 'bias' - Transformers 4.9.2
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Lunix - Python version: 3 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.--> @LysandreJik - albert , bert, xlm: @LysandreJik ## Information I am using (AlBert Pretrained on custom corpus): The problem arises when using: * [ ] this is my own scripts: (give details below) _which uses transformers_ Wrote a simple script to extract CLS embedding for sentence from an albert model pretrained on custom vocab. I define the model using **AlbertModel.from_pretrained** and try to load my pre-trained weights using **load_tf_weights_in_albert** I run the script and get the error **AttributeError: 'AlbertModel' object has no attribute 'bias** The tasks I am working on is: * [ ] my own task or dataset: (give details below) Trying to extract CLS embedding for an input sentence from my albert model I pretrained on custom vocab. I will then feed these embedding into a custom classification layer. ## To reproduce Steps to reproduce the behavior: 1. Define my model: modelprt = AlbertModel.from_pretrained(pretrained_model_name_or_path='AOUTPR21/model.ckpt-10000', config=ptcfg, from_tf=True) --(I also tried converting checkpoint to pytorch but that gave an even worse error) 2. load weights into model via : modelprt = load_tf_weights_in_albert(modelprt, ptcfg, prot_model) ..._Note here prot_model = AOUTPR21/model.ckpt-10000_ 3. Run my script and I get the error <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> **Error seen :** File "./finetune_prot_pep.py", line 54, in <module> contexemb = getProtembedd(try_loader) File "/workspace/finetuneclassifier.py", line 52, in __init__ modelprt = AlbertModel.from_pretrained(pretrained_model_name_or_path='AOUTPR21/model.ckpt-10000', config=self.ptcfg, from_tf=True) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 1331, in from_pretrained model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index' File "/usr/local/lib/python3.6/dist-packages/transformers/models/albert/modeling_albert.py", line 169, in load_tf_weights_in_albert pointer = getattr(pointer, "bias") File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1131, in __getattr__ type(self).__name__, name)) AttributeError: 'AlbertModel' object has no attribute 'bias' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expected that no error will come up
08-17-2021 18:48:42
08-17-2021 18:48:42
Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix.<|||||>> Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix. Many Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct?<|||||>> > Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix. > > Many Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct? That's right. If you switch to `AlbertForPreTraining` now, you may encounter another error as the alias was not set at the right place.<|||||>> > > Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix. > > > > > > Many Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct? > > That's right. If you switch to `AlbertForPreTraining` now, you may encounter another error as the alias was not set at the right place. Sounds good, I will wait. Thanks again.<|||||>Hi, the PR has been merged. You can install transformers from source and test whether it works as expected :-)<|||||>> Hi, the PR has been merged. You can install transformers from source and test whether it works as expected :-) Will do. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
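A hedged sketch of the suggested loading path once the alias fix is in; the checkpoint path below is the user's own TF checkpoint and is only a placeholder here:

```python
from transformers import AlbertConfig, AlbertForPreTraining

# Use the config that matches the custom pretraining run; albert-base-v2 is only a placeholder.
config = AlbertConfig.from_pretrained("albert-base-v2")

# `from_tf=True` routes through load_tf_weights_in_albert under the hood;
# AlbertForPreTraining carries the MLM/SOP heads that the TF checkpoint contains.
model = AlbertForPreTraining.from_pretrained(
    "AOUTPR21/model.ckpt-10000",  # placeholder path to the TF checkpoint
    config=config,
    from_tf=True,
)
```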
transformers
13,153
closed
Add Wav2Vec2 & Hubert ForSequenceClassification
# What does this PR do? This adds a Hubert extension for sequence classification. Ultimately this classification head should be compatible with s3prl `UtteranceLevel` [implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/model.py#L35) to support classification tasks from SUPERB, such as [Keyword Spotting](https://huggingface.co/datasets/superb#ks) and transfer their pretrained models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj
08-17-2021 13:28:06
08-17-2021 13:28:06
Accuracy evaluation on SUPERB tasks:
- **KS** has uniform-length samples, so no padding
- **ER** has non-uniform padded batches
- **SID** is evaluated with batch_size=1 as in `s3prl`

| Task | Model | normalize=True | normalize=False | Paper |
| ---- | ------------- | -------------- | --------------- | ------ |
| **KS** | Wav2Vec2-base | 0.9627 | 0.9643 | 0.9623 |
| | Hubert-base | 0.9669 | 0.9672 | 0.9630 |
| **ER** | Wav2Vec2-base | 0.5281 | 0.6258 | 0.6343 |
| | Hubert-base | 0.5502 | 0.6359 | 0.6492 |
| **SID** | Wav2Vec2-base | 0.7360 | 0.7518 | 0.7518 |
| | Hubert-base | 0.8071 | 0.8071 | 0.8142 |

So far `normalize=False` is always better, as expected (`s3prl` never used normalization during eval). There's also some slight variation with the official results, but it's of the same magnitude as `s3prl` vs `paper` results. <|||||>- [x] Passed integration test for all 4 tasks on both models
- [x] Added `Copied from` where possible (the script just inserts a full copy of `W2V2.forward()` before `End copy`, so I didn't use it there)
- [x] Added dummy examples to `forward()` docs
- [x] Moved the models to `https://huggingface.co/superb`

@patrickvonplaten everything should be ready to merge now :) <|||||>Awesome job @anton-l ! Feel free to merge the PR whenever you want
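A usage sketch of the classification head added in this PR, with a randomly generated waveform standing in for real audio; the `superb/wav2vec2-base-superb-ks` checkpoint name reflects where the comment says the models were moved and is assumed here:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

model_id = "superb/wav2vec2-base-superb-ks"  # assumed checkpoint name under the superb org
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)

waveform = torch.randn(16000)  # 1 second of fake 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted = model.config.id2label[int(logits.argmax(-1))]
print(predicted)
```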
transformers
13,152
closed
Set missing seq_length variable when using inputs_embeds with ALBERT & Remove code duplication
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I think this bug is similar to #13128 , only difference is that this PR is for ALBERT. `AlbertModel` has the same issue that `seq_length` variable is not declared when using `inputs_embeds` I checked that other models that were implemented in the same code format as ALBERT/ELECTRA don't have this issue anymore. ++Additional Remove all of code duplications as @NielsRogge referred on the comments. ```Diff if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() - batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape else: raise ValueError("You have to specify either input_ids or inputs_embeds") + batch_size, seq_length = input_shape device = input_ids.device if input_ids is not None else inputs_embeds.device ``` I think it is trivial, so I don't make additional PR. (If this is the problem, please inform me.) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-17-2021 13:01:30
08-17-2021 13:01:30
Not sure why I can't add the code suggestion, but it makes more sense to do this: ```diff if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() - batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape else: raise ValueError("You have to specify either input_ids or inputs_embeds") + batch_size, seq_length = input_shape device = input_ids.device if input_ids is not None else inputs_embeds.device ```<|||||>> Not sure why I can't add the code suggestion, but it makes more sense to do this: > > ```diff > if input_ids is not None and inputs_embeds is not None: > raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") > elif input_ids is not None: > input_shape = input_ids.size() > - batch_size, seq_length = input_shape > elif inputs_embeds is not None: > input_shape = inputs_embeds.size()[:-1] > - batch_size, seq_length = input_shape > else: > raise ValueError("You have to specify either input_ids or inputs_embeds") > > + batch_size, seq_length = input_shape > device = input_ids.device if input_ids is not None else inputs_embeds.device > ``` @NielsRogge Yes, I fully agree with you. But most of the codes of other models are implemented as I wrote, and I just wanted to unify the format to prevent confusion. For example, the code below is from src/transformers/models/bert/modeling_bert.py: ```python if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] batch_size, seq_length = input_shape else: raise ValueError("You have to specify either input_ids or inputs_embeds") device = input_ids.device if input_ids is not None else inputs_embeds.device ``` But if needed, I could change all of the code looks like above.<|||||>> But if needed, I could change all of the code looks like above. Actually, I'm in favor of this, because it's duplicated code, and I think it's cleaner when just writing it once. <|||||>In that case, you can also update the [CookieCutter template](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py), which is used when adding a new model. <|||||>@NielsRogge Okay, I reflected it! (for at least the files I've found) Please check again.<|||||>LGTM! Thanks for making this cleaner.
transformers
13,151
closed
Unhashable type : dict for visualbert example code.
Hi, I am using the visualbert model as shown in [visualbert visualreasoning](https://huggingface.co/transformers/model_doc/visual_bert.html#visualbertforvisualreasoning) ``` # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch. from transformers import BertTokenizer, VisualBertForVisualReasoning import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = VisualBertForVisualReasoning.from_pretrained('uclanlp/visualbert-nlvr2') text = "Who is eating the apple?" inputs = tokenizer(text, return_tensors='pt') visual_embeds = get_visual_embeddings(image).unsqueeze(0) visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float) inputs.update({{ "visual_embeds": visual_embeds, "visual_token_type_ids": visual_token_type_ids, "visual_attention_mask": visual_attention_mask }}) labels = torch.tensor(1).unsqueeze(0) # Batch size 1, Num choices 2 outputs = model(**inputs, labels=labels) loss = outputs.loss scores = outputs.logits ``` and I encountered the following error: ``` Traceback (most recent call last): File "<ipython-input-1-8716adc0686f>", line 1, in <module> runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments') File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py", line 219, in <module> "visual_attention_mask": visual_attention_mask TypeError: unhashable type: 'dict' ``` Is this operation supported by python or is this a bug in the code? Transformers-cli env output: ``` - `transformers` version: 4.9.2 - Platform: Darwin-16.7.0-x86_64-i386-64bit - Python version: 3.6.13 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` @patil-suraj
08-17-2021 10:47:20
08-17-2021 10:47:20
You have 2 {{ in your code, whereas it should be only one: ``` inputs.update({ "visual_embeds": visual_embeds, "visual_token_type_ids": visual_token_type_ids, "visual_attention_mask": visual_attention_mask }) ```<|||||>yes I guessed that, i.e, it used sets instead of dicts. But then it should be modified in the documentation as well. Also, doing as you said threw another error ``` Traceback (most recent call last): File "<ipython-input-2-8716adc0686f>", line 1, in <module> runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments') File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py", line 224, in <module> outputs = model(**inputs, labels=labels) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 1240, in forward return_dict=return_dict, File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 784, in forward visual_input_shape = visual_embeds.size()[:-1] TypeError: 'int' object is not callable ``` @NielsRogge <|||||>Tagging @gchhablani as he's the expert on VisualBERT<|||||>Thanks for the tag @NielsRogge. @abhijithneilabraham There was an error in the docs earlier. The dictionary update is wrong. It should not have `{{` and `}}`, but `{` and `}` instead. It was fixed recently in a PR. Sorry about that. Please let me know if this solves your issue.<|||||>> Please let me know if this solves your issue. Apparently that doesn't solve his issue, as he shows above.<|||||>@gchhablani I would like to help with the issue if I can. Let me know.<|||||>@abhijithneilabraham Can you share your `get_visual_embeddings` method if possible?<|||||>@gchhablani I used it from the [colab notebook](https://colab.research.google.com/drive/1bLGxKdldwqnMVA5x4neY7-l_8fKGWQYI?usp=sharing) that you shared in the doc. I still was unclear on the proper way of using it.<|||||>Also @gchhablani this was the issue encountered after modifying the example code like the way you mentioned. 
``` Traceback (most recent call last): File "<ipython-input-2-8716adc0686f>", line 1, in <module> runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments') File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py", line 224, in <module> outputs = model(**inputs, labels=labels) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 1240, in forward return_dict=return_dict, File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 784, in forward visual_input_shape = visual_embeds.size()[:-1] TypeError: 'int' object is not callable ``` Could this be because of the improper way of using the visual embeds? If yes I'd like to understand a proper approach to generating the visual embeds with a function<|||||>@gchhablani This is my source code ``` #!/usr/bin/env python3 # -*- coding: utf-8 -*- """ Created on Mon Aug 16 11:22:20 2021 @author: abhijithneilabraham """ import torch,torchvision import matplotlib.pyplot as plt import json import cv2 import numpy as np from detectron2.modeling import build_model from detectron2.checkpoint import DetectionCheckpointer from detectron2.structures.image_list import ImageList from detectron2.data import transforms as T from detectron2.modeling.box_regression import Box2BoxTransform from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputs from detectron2.structures.boxes import Boxes from detectron2.layers import nms from detectron2 import model_zoo from detectron2.config import get_cfg img1 = plt.imread(f'profile_pic.jpeg') # Detectron expects BGR images img_bgr1 = cv2.cvtColor(img1, cv2.COLOR_RGB2BGR) cfg_path = "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml" def load_config_and_model_weights(cfg_path): cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file(cfg_path)) # ROI HEADS SCORE THRESHOLD cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # Comment the next line if you're using 'cuda' cfg['MODEL']['DEVICE']='cpu' cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(cfg_path) return cfg cfg = load_config_and_model_weights(cfg_path) def get_model(cfg): # build model model = build_model(cfg) # load weights checkpointer = DetectionCheckpointer(model) checkpointer.load(cfg.MODEL.WEIGHTS) # eval mode model.eval() return model model = get_model(cfg) def prepare_image_inputs(cfg, img_list): # Resizing the image according to the configuration transform_gen = T.ResizeShortestEdge( [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST ) img_list = [transform_gen.get_transform(img).apply_image(img) for img in img_list] # Convert to 
C,H,W format convert_to_tensor = lambda x: torch.Tensor(x.astype("float32").transpose(2, 0, 1)) batched_inputs = [{"image":convert_to_tensor(img), "height": img.shape[0], "width": img.shape[1]} for img in img_list] # Normalizing the image num_channels = len(cfg.MODEL.PIXEL_MEAN) pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).view(num_channels, 1, 1) pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).view(num_channels, 1, 1) normalizer = lambda x: (x - pixel_mean) / pixel_std images = [normalizer(x["image"]) for x in batched_inputs] # Convert to ImageList images = ImageList.from_tensors(images,model.backbone.size_divisibility) return images, batched_inputs images, batched_inputs = prepare_image_inputs(cfg, [img_bgr1]) def get_features(model, images): features = model.backbone(images.tensor) return features features = get_features(model, images) def get_proposals(model, images, features): proposals, _ = model.proposal_generator(images, features) return proposals proposals = get_proposals(model, images, features) def get_box_features(model, features, proposals): features_list = [features[f] for f in ['p2', 'p3', 'p4', 'p5']] box_features = model.roi_heads.box_pooler(features_list, [x.proposal_boxes for x in proposals]) box_features = model.roi_heads.box_head.flatten(box_features) box_features = model.roi_heads.box_head.fc1(box_features) box_features = model.roi_heads.box_head.fc_relu1(box_features) box_features = model.roi_heads.box_head.fc2(box_features) box_features = box_features.reshape(1, 1000, 1024) # depends on your config and batch size return box_features, features_list box_features, features_list = get_box_features(model, features, proposals) def get_prediction_logits(model, features_list, proposals): cls_features = model.roi_heads.box_pooler(features_list, [x.proposal_boxes for x in proposals]) cls_features = model.roi_heads.box_head(cls_features) pred_class_logits, pred_proposal_deltas = model.roi_heads.box_predictor(cls_features) return pred_class_logits, pred_proposal_deltas pred_class_logits, pred_proposal_deltas = get_prediction_logits(model, features_list, proposals) def get_box_scores(cfg, pred_class_logits, pred_proposal_deltas): box2box_transform = Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS) smooth_l1_beta = cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA outputs = FastRCNNOutputs( box2box_transform, pred_class_logits, pred_proposal_deltas, proposals, smooth_l1_beta, ) boxes = outputs.predict_boxes() scores = outputs.predict_probs() image_shapes = outputs.image_shapes return boxes, scores, image_shapes boxes, scores, image_shapes = get_box_scores(cfg, pred_class_logits, pred_proposal_deltas) def get_output_boxes(boxes, batched_inputs, image_size): proposal_boxes = boxes.reshape(-1, 4) scale_x, scale_y = (batched_inputs["width"] / image_size[1], batched_inputs["height"] / image_size[0]) output_boxes = Boxes(proposal_boxes) # output_boxes.scale(scale_x, scale_y) output_boxes.clip(image_size) return output_boxes output_boxes = [get_output_boxes(boxes[i], batched_inputs[i], proposals[i].image_size) for i in range(len(proposals))] def select_boxes(cfg, output_boxes, scores): test_score_thresh = cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST test_nms_thresh = cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST cls_prob = scores.detach() cls_boxes = output_boxes.tensor.detach().reshape(1000,80,4) max_conf = torch.zeros((cls_boxes.shape[0])) for cls_ind in range(0, cls_prob.shape[1]-1): cls_scores = cls_prob[:, cls_ind+1] det_boxes = cls_boxes[:,cls_ind,:] keep = np.array(nms(det_boxes, 
cls_scores, test_nms_thresh)) max_conf[keep] = torch.where(cls_scores[keep] > max_conf[keep], cls_scores[keep], max_conf[keep]) keep_boxes = torch.where(max_conf >= test_score_thresh)[0] return keep_boxes, max_conf temp = [select_boxes(cfg, output_boxes[i], scores[i]) for i in range(len(scores))] keep_boxes, max_conf = [],[] for keep_box, mx_conf in temp: keep_boxes.append(keep_box) max_conf.append(mx_conf) MIN_BOXES=10 MAX_BOXES=100 def filter_boxes(keep_boxes, max_conf, min_boxes, max_boxes): if len(keep_boxes) < min_boxes: keep_boxes = np.argsort(max_conf).numpy()[::-1][:min_boxes] elif len(keep_boxes) > max_boxes: keep_boxes = np.argsort(max_conf).numpy()[::-1][:max_boxes] return keep_boxes keep_boxes = [filter_boxes(keep_box, mx_conf, MIN_BOXES, MAX_BOXES) for keep_box, mx_conf in zip(keep_boxes, max_conf)] def get_visual_embeds(box_features, keep_boxes): return box_features[keep_boxes.copy()] visual_embeds = np.asarray([get_visual_embeds(box_feature, keep_box) for box_feature, keep_box in zip(box_features, keep_boxes)]) # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch. from transformers import BertTokenizer, VisualBertForQuestionAnswering tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = VisualBertForQuestionAnswering.from_pretrained('uclanlp/visualbert-vqa') text = "what color dress is he wearing?" inputs = tokenizer(text, return_tensors='pt') visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float) inputs.update({ "visual_embeds": visual_embeds, "visual_token_type_ids": visual_token_type_ids, "visual_attention_mask": visual_attention_mask }) labels = torch.tensor([[0.0,1.0]]).unsqueeze(0) # Batch size 1, Num labels 2 outputs = model(**inputs, labels=labels) loss = outputs.loss scores = outputs.logits print(outputs) ```<|||||>@abhijithneilabraham The issue is that you are using a numpy array when `visual_embeds` expects a torch tensor: ```python >>> import numpy as np >>> import torch >>> a = np.ones(10) >>> a.size 10 >>> b = torch.ones(10) >>> b.size <built-in method size of Tensor object at 0x7fb34109ebc0> >>> b.size() torch.Size([10]) ``` I believe you can check the other demo, where the LXMERT authors have provided FasterRCNN classes and the pre-trained model on Visual Genome. It'll be much easier to use that. EDIT ------ The docs issue has been fixed, the docs have not yet updated, I guess. <|||||>Much thanks @gchhablani ! Can you share the link to the other demo? I can then close this issue.<|||||>@abhijithneilabraham No problem :) Here is the demo link : https://github.com/huggingface/transformers/tree/master/examples/research_projects/visual_bert<|||||>Thank you!
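For reference, a minimal sketch of the tensor fix discussed in this thread: VisualBERT expects the region features as a `torch.Tensor` (whose `.size()` is a method, unlike the int `.size` attribute of a NumPy array), and the `update` call needs single braces. The random features below are only a stand-in for a real detection backbone.
```python
import numpy as np
import torch

# Stand-in for real region features from a detection backbone: (batch, boxes, dim).
visual_embeds_np = np.random.rand(1, 36, 2048).astype("float32")

# Convert to a torch.Tensor before passing it to VisualBERT; a NumPy array makes
# `visual_embeds.size()[:-1]` fail because `.size` is an int attribute on arrays.
visual_embeds = torch.from_numpy(visual_embeds_np)

visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

inputs = {}  # normally the dict returned by the tokenizer
inputs.update({  # single braces: `{{...}}` builds a set containing a dict -> unhashable
    "visual_embeds": visual_embeds,
    "visual_token_type_ids": visual_token_type_ids,
    "visual_attention_mask": visual_attention_mask,
})
```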
transformers
13,150
closed
examples: add keep_linebreaks option to CLM examples
Hi, as discussed in #12971 a newline is missing when using the CLM example scripts, when no dataset name is provided (this is the case when you use "normal" text files). This PR adds the `keep_linebreaks=True` option to all CLM example scripts (when using files).
08-17-2021 09:21:11
08-17-2021 09:21:11
Thanks a lot for your PR @stefan-it! Could we maybe make `keep_linebreaks` configurable by the command line and let it default to `True`? So ideally we could add it to the `DataArguments` class<|||||>@patrickvonplaten good idea, I'm working on it now!<|||||>I've implemented it :) The `examples/pytorch/language-modeling/run_clm_no_trainer.py` uses the raw argument parser, so I used the same boolean logic as used for the fast tokenizer (or here: slow tokenizer).<|||||>Thanks a lot for your PR @stefan-it! @sgugger agreed to make this change here: https://github.com/huggingface/transformers/issues/12971 So good to merge for me!
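For illustration, a minimal sketch of the kind of data argument discussed above: a `keep_linebreaks` flag defaulting to `True` that gets forwarded to the `text` dataset loader. The field name and help text are an approximation, not necessarily the exact wording of the final PR.
```python
from dataclasses import dataclass, field

@dataclass
class DataTrainingArguments:
    # Whether to keep line breaks when reading plain text files with the `text` loader.
    keep_linebreaks: bool = field(
        default=True,
        metadata={"help": "Whether to keep line breaks when using text files."},
    )

# The flag would then be passed along when loading raw text files, e.g.:
# load_dataset("text", data_files=data_files, keep_linebreaks=data_args.keep_linebreaks)
```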
transformers
13,149
closed
Autoregressive differentiable decoding? (no teacher forcing nor self-reconstruction)
# 🚀 Feature request Hi, Is there any way to perform autoregressive differentiable decoding? As far as I know, for **encoder-decoder models** (e.g., T5, BART), we have the following: - **.forward()** performs decoding using **teacher forcing**, or in the case of BART, shifts the _input_ids_ to the right if no _decoder_input_ids_ are given in order to do self-reconstruction. In that case the decoding is differentiable but not autoregressive as it uses teacher forcing (or some kind of supervision). - **.generate()** enables decoding using **greedy search**, **beam search**, or **top-k sampling**. In that case, the decoding is autoregressive but not differentiable. I would like to **_decode differentiably_** by using **plain autoregression**: at each decoding step t, we feed to the decoder the token generated at previous step t-1. By _differentiably_, I mean that we don't take the argmax() to select a single generated token. Rather, we use embeddings weighted by the logits, to get a pseudo-token that we can feed to the decoder at the next step. Such a technique of using weighted averages of embeddings as pseudo-tokens has been used in recent research: https://arxiv.org/abs/1905.05621 We can implement this manually using a for loop over decoding steps calling .forward() at each step, but it is quite slow. ## Motivation The motivation behind this feature request is to be able to generate sequences differentiably without supervision, and fast, which can be very useful for several research purposes.
08-17-2021 08:31:49
08-17-2021 08:31:49
As far as I know, during generation the autoregressive decoding procedure is implemented separately inside each search method (such as greedy search), and the output tokens are always selected by `argmax` or another sampling method. Besides, the autoregressive decoding currently runs in a Python loop. I think implementing a method that performs plain autoregression and keeps the output differentiable is feasible and would be great, but if the bottleneck in your code is that loop, this approach may not help performance much.<|||||>cc @patrickvonplaten <|||||>Yeah, currently gradient backprop is not really supported in `transformers`, sadly. I think it would require some major changes to implement this. Feel free to give it a stab! I would also be very interested in knowing how feasible this is in PyTorch!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
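To make the pseudo-token idea concrete, here is a minimal, model-free sketch of one "soft" decoding step: instead of taking the argmax, the next decoder input is a softmax-weighted average over an embedding matrix, which keeps the step differentiable. The toy tensors stand in for a real decoder and are not transformers API.
```python
import torch
import torch.nn.functional as F

def soft_decode_step(logits, embedding_matrix, temperature=1.0):
    """Build a differentiable pseudo-token: a softmax-weighted mix of embeddings."""
    probs = F.softmax(logits / temperature, dim=-1)  # (batch, vocab)
    return probs @ embedding_matrix                  # (batch, hidden)

# Toy illustration with random numbers, no real model involved.
vocab_size, hidden = 100, 16
embedding_matrix = torch.randn(vocab_size, hidden, requires_grad=True)
logits = torch.randn(2, vocab_size, requires_grad=True)

pseudo_token = soft_decode_step(logits, embedding_matrix)
pseudo_token.sum().backward()  # gradients flow back through the soft selection
```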
transformers
13,148
closed
Slow tokenizers return overflowing tokens in reversed order
When implementing the slow tokenizer for LayoutLMv2, I spotted some weird behaviour for slow tokenizers when specifying `return_overflowing_tokens = True`. Namely, in that case, overflowing tokens are returned in reversed order, and no padding is performed, unlike fast tokenizers. Small example: ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) ``` When checking out the encoding, it looks as follows: ``` print(tokenizer.decode(encoding.input_ids)) # prints '[CLS] hello my name is [SEP]' print(tokenizer.decode(encoding.overflowing_tokens)) # prints '##els ni' ``` As you can see, the overflowing tokens are returned in reversed order, and they are not padded up to the max length of 6 tokens. In contrast, `BertTokenizerFast` does everything correctly: ``` from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) ``` returns ``` print(tokenizer.decode(encoding.input_ids[0])) # prints '[CLS] hello my name is [SEP]' print(tokenizer.decode(encoding.input_ids[1])) # prints '[CLS] niels [SEP] [PAD] [PAD]' ``` So I guess we have some work to do for slow tokenizers to work correctly. cc @LysandreJik @SaulLu @n1t0
08-17-2021 08:21:54
08-17-2021 08:21:54
@NielsRogge I would like to contribute to this. Can I work on this issue? <|||||>Sure! The goal would be to make the slow tokenizers equivalent to the fast tokenizers. So that means: - [ ] making sure overflowing tokens are returned in the correct order - [ ] add special tokens to the overflowing tokens - [ ] add a `overflow_to_sample_mapping`, similar to the fast tokenizers. This would probably require to update the `truncate_sequences` method defined [here](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/src/transformers/tokenization_utils_base.py#L2922).<|||||>I see someone also already noticed this: #6697<|||||>@Apoorvgarg-creator It is extremely kind of you to offer your help on this problem! As I had started to look at the problem of the strange order of tokens in `overflowing_tokens` ("making sure overflowing tokens are returned in the correct order"), let me share with you what I had identified if it can be of any help: - There are behaviours that were not tested in the `test_maximum_encoding_length_pair_input` and `test_maximum_encoding_length_single_input` tests in the `test_tokenization_common.py` file. So we should add these tests to make sure that overflowing tokens are tested for all `TruncationStrategy` types and with a single sequence or a pair of sequences; - As said by @NielsRogge, the problem is most likely with the `truncate_sequences` method in `tokenization_utils_base.py`. I would like to take this opportunity to comment on the other 2 points ("add special tokens to the overflowing tokens" and "add a `overflow_to_sample_mapping`, similar to the fast tokenizers") raised by @NielsRogge. Indeed, the slow and fast tokenizer handle overflowing tokens quite differently. I think it would be nice to have the opinion of @LysandreJik , @sgugger and @n1t0 (and if ever someone else wants to give their opinion too, it would be a pleasure!!) on the fact of changing the API of the slow tokenizers so that it corresponds to the one of the fast tokenizers (as there is perhaps a need for backward compatibility).<|||||>@SaulLu @NielsRogge Thank you for the guidance. I will go through the `truncate_sequences` method.<|||||>@NielsRogge @SaulLu The reason we are getting the reverse order in the `longest_first` truncation strategy is that In other truncation strategies we are truncating the sequence in one iteration only whereas In `longest_first` we are running a loop `num_tokens_to_remove` times keeping `window_len` = 1 every time except when `overflowing_token` is empty. Hence we will be taking `1 id` at a time from the last. I have developed the code that I think will resolve the issue > making sure overflowing tokens are returned in the correct order.<|||||>@Apoorvgarg-creator - could be error on my end, but on the current master branch I'm still witnessing reversed order with the toy example provided in the original post.<|||||>> @Apoorvgarg-creator - could be error on my end, but on the current master branch I'm still witnessing reversed order with the toy example provided in the original post. > toy example provided in the original post could you please share the code or link for the same ? Thank you <|||||>> could you please share the code or link for the same ? > Thank you I was just referring to the original post in this thread. 
If i do a fresh install of the latest master and then ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) print(tokenizer.decode(encoding.input_ids)) # prints '[CLS] hello my name is [SEP]' print(tokenizer.decode(encoding.overflowing_tokens)) # prints '##els ni' ``` Is this expected?<|||||>> > could you please share the code or link for the same ? > > Thank you > > I was just referring to the original post in this thread. If i do a fresh install of the latest master and then > > ```python > from transformers import BertTokenizer > tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") > text = "hello my name is niels" > encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) > > print(tokenizer.decode(encoding.input_ids)) > # prints '[CLS] hello my name is [SEP]' > > print(tokenizer.decode(encoding.overflowing_tokens)) > # prints '##els ni' > ``` > > Is this expected? Sorry, By original post I thought you meant somewhere in the documentation. No this is not expected. I will try reproducing the same. Thank you<|||||>@dcyoung I ran the same code against the current master branch, I got the expected output - <img width="273" alt="Screenshot 2021-09-08 at 11 02 44 AM" src="https://user-images.githubusercontent.com/57873504/132451970-385f7171-14f8-4ce0-93a9-461657bdb7d7.png"> @dcyoung Can you provide more details about the environment in which you are running the code.<|||||>@Apoorvgarg-creator -- i can't explain it, but a fresh environment solved the issue with the toy example above. It is now correctly printing off `niels`. However, I'm still seeing unexpected behavior with the following example: Environment: ```bash $ conda create -n test python=3.8 $ source activate test $ pip install git+https://github.com/huggingface/transformers.git ... 
$ pip list Package Version ------------------ ------------------- certifi 2021.5.30 charset-normalizer 2.0.4 click 8.0.1 filelock 3.0.12 huggingface-hub 0.0.16 idna 3.2 joblib 1.0.1 numpy 1.21.2 packaging 21.0 pip 21.0.1 pyparsing 2.4.7 PyYAML 5.4.1 regex 2021.8.28 requests 2.26.0 sacremoses 0.0.45 setuptools 52.0.0.post20210125 six 1.16.0 tokenizers 0.10.3 tqdm 4.62.2 transformers 4.11.0.dev0 typing-extensions 3.10.0.2 urllib3 1.26.6 wheel 0.37.0 ``` Reproducible example: ```python from transformers import BertTokenizer, LayoutLMv2Tokenizer max_length = 8 n_src_tok_per_sample = max_length - 2 # account for pad words = ( n_src_tok_per_sample * ["a"] + n_src_tok_per_sample * ["b"] + n_src_tok_per_sample * ["c"] ) print("Original words: ", words) print(50 * "=" + "\nBERT\n" + 50 * "=") tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") encoded_inputs = tokenizer( text=words, padding="max_length", pad_to_multiple_of=8, truncation=True, max_length=max_length, return_overflowing_tokens=True, return_tensors="pt", is_split_into_words=True, ) input_ids = encoded_inputs["input_ids"] print("Decoded input_ids: ", [tokenizer.decode(x) for x in input_ids]) overflowing_tokens = encoded_inputs["overflowing_tokens"] print("Decoded overflow tokens: ", [tokenizer.decode(x) for x in overflowing_tokens]) print(50 * "=" + "\nLayout\n" + 50 * "=") tokenizer = LayoutLMv2Tokenizer.from_pretrained( "microsoft/layoutlmv2-base-uncased", only_label_first_subword=False, ) encoded_inputs = tokenizer( text=words, boxes=len(words) * [[1, 1, 1, 1]], padding="max_length", pad_to_multiple_of=8, truncation=True, max_length=max_length, return_overflowing_tokens=True, return_tensors="pt", is_split_into_words=True, ) input_ids = encoded_inputs["input_ids"] print("Decoded input_ids: ", [tokenizer.decode(x) for x in input_ids]) overflowing_tokens = encoded_inputs["overflowing_tokens"] print("Decoded overflow tokens: ", [tokenizer.decode(x) for x in overflowing_tokens]) ``` Output: ```bash Original words: ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'c'] ================================================== BERT ================================================== Decoded input_ids: ['[CLS] a a a a a a [SEP]'] Decoded overflow tokens: ['b b b b b b c c c c c c'] ================================================== Layout ================================================== Decoded input_ids: ['[CLS] a a a a a a [SEP]'] Decoded overflow tokens: ['c c c c c c b b b b b b'] ```<|||||>Thank you very much for reporting the issue @dcyoung :blush:. I think it's due to the fact that `layoutLMv2` (which must have been merged around the same time as this fix) redefines the operation and does not use the generic method. Might be of interest to @NielsRogge :slightly_smiling_face: <|||||>@NielsRogge @SaulLu, LayoutLMv2 has its own `truncate_sequence` method. so that's why the problem of reverse order of overflowing tokens occurred in this tokenizer. Shall I make the respective changes in the `truncate_sequence` method of LayoutLMv2 tokenizer? @dcyoung, Thank you very much for reporting the issue.<|||||>Yes, the LayoutLMv2 PR was merged before the PR that fixed the reverse order. So feel free to update the `truncate_sequence` method of `LayoutLMv2Tokenizer`.
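For comparison, a small sketch of the fast-tokenizer behaviour the checklist above targets: overflow comes back as extra, fully formed samples plus an `overflow_to_sample_mapping` tying each row back to its source text (the exact token splits and printed outputs depend on the tokenizer).
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

encoding = tokenizer(
    ["hello my name is niels"],
    padding="max_length",
    max_length=6,
    truncation=True,
    return_overflowing_tokens=True,
)

# Each overflowing chunk is its own row, padded and wrapped in special tokens.
print(encoding["overflow_to_sample_mapping"])  # e.g. [0, 0]: both rows come from text 0
for ids in encoding["input_ids"]:
    print(tokenizer.decode(ids))
```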
transformers
13,147
open
Support OpenNMT models
It would be great if support for OpenNMT (https://opennmt.net/) and CTranslate2 (https://github.com/OpenNMT/CTranslate2) models were provided out of the box.
08-17-2021 07:55:48
08-17-2021 07:55:48
Hi @jordimas ! Would you be interested in adding this model to transformers? I briefly looked at the code and it looks similar to [mbart](https://github.com/huggingface/transformers/blob/master/src/transformers/models/mbart/modeling_mbart.py)/[m2m](https://github.com/huggingface/transformers/blob/master/src/transformers/models/m2m_100/modeling_m2m_100.py)/[Marian](https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/modeling_marian.py) style models in transformers. So it should be fairly straightforward to port this by looking at the design of these models. As you might already know, each model in the library requires 3 files, which will look something like this: - `configuration_open_nmt.py` - `modeling_open_nmt.py` - `tokenization_open_nmt.py` We provide a template using CookieCutter which lets you set up these files for you, even filling in the names, as explained [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model). It also creates documentation pages, test files, and so on. I would be happy to help with this, so feel free to ping me if you have any issues. Thank you :)
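As a rough illustration of the first of those three files, here is a bare-bones configuration skeleton. The class name, `model_type` string, and hyperparameters are placeholders, not an actual OpenNMT integration.
```python
from transformers import PretrainedConfig

class OpenNMTConfig(PretrainedConfig):
    model_type = "open_nmt"  # placeholder identifier

    def __init__(
        self,
        vocab_size=32000,
        d_model=512,
        encoder_layers=6,
        decoder_layers=6,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.d_model = d_model
        self.encoder_layers = encoder_layers
        self.decoder_layers = decoder_layers
```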
transformers
13,146
closed
Runtime error when training DetrForObjectDetection using HFTrainer with GPU.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <no> ## Information Model I am using: DetrForObjectDetection The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I'm training DetrForObjectDetection by using HFTrainer. Save a script file below as `mini_example.py`, and run as `python mini_example.py --output_dir mini_model mini_model` after setting `img_folder` to the path to the coco image dataset folder and `annotations` to the path to the coco annotation JSON file. ```python from typing import Dict, List, Union import torch from torchvision.datasets import CocoDetection from transformers import ( DetrConfig, DetrFeatureExtractor, DetrForObjectDetection, HfArgumentParser, Trainer, TrainingArguments, ) def load_category(category): id2label = {} label2id = {} maxid = 0 for k, v in category.items(): id2label[int(k)] = v["name"] label2id[v["name"]] = int(k) maxid = max(maxid, int(k)) for i in range(maxid): if not (i in id2label): id2label[i] = None return id2label, label2id class DetrData(CocoDetection): def __init__(self, img_folder, annotations, feature_extractor, train=True): super(DetrData, self).__init__(img_folder, annotations) self.feature_extractor = feature_extractor def __getitem__(self, idx): # read in PIL image and target in COCO format img, target = super(DetrData, self).__getitem__(idx) # preprocess image and target (converting target to DETR format, resizing + normalization of both image and target) image_id = self.ids[idx] target = {'image_id': image_id, 'annotations': target} encoding = self.feature_extractor(images=img, annotations=target, return_tensors="pt") encoding["pixel_values"] = encoding["pixel_values"].squeeze() # remove batch dimension encoding["labels"] = encoding["labels"][0] # remove batch dimension return encoding @dataclass class DataCollatorDetr: feature_extractor: DetrFeatureExtractor def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: pixel_values = [item["pixel_values"] for item in features] encoding = self.feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt") encoding["labels"] = [item["labels"] for item in features] return encoding def main(): parser = HfArgumentParser((TrainingArguments)) training_args, = parser.parse_args_into_dataclasses() feature_extractor = DetrFeatureExtractor() train_dataset = DetrData(img_folder="path/to/image_folder", annotations="path/to/annotation_file", feature_extractor=feature_extractor) id2label, label2id = load_category(train_dataset.coco.cats) config = DetrConfig.from_pretrained("facebook/detr-resnet-50") config.id2label = id2label config.label2id = label2id model = DetrForObjectDetection.from_pretrained( "facebook/detr-resnet-50", config=config) # Initialize our 
Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, tokenizer=feature_extractor, data_collator=DataCollatorDetr(feature_extractor=feature_extractor), ) train_result = trainer.train() if __name__ == "__main__": main() ``` When train without GPU, it works fine, but got RuntimeError below with GPU, ``` Traceback (most recent call last): File "mini_example.py", line 97, in <module> main() File "mini_example.py", line 93, in main train_result = trainer.train() File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/trainer.py", line 1286, in train tr_loss += self.training_step(model, inputs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/trainer.py", line 1779, in training_step loss = self.compute_loss(model, inputs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/trainer.py", line 1811, in compute_loss outputs = model(**inputs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 1435, in forward loss_dict = criterion(outputs_loss, labels) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2009, in forward indices = self.matcher(outputs_without_aux, targets) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2127, in forward bbox_cost = torch.cdist(out_bbox, tgt_bbox, p=1) File "/home/christopher/pyenvs/detr/lib/python3.8/site-packages/torch/functional.py", line 1049, in cdist return _VF.cdist(x1, x2, p, None) # type: ignore[attr-defined] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument x2 in method wrapper__cdist_forward) 0%| | 0/1875 [00:03<?, ?it/s] ``` This is maybe because `inputs["labels"]` is not sent to GPU here https://github.com/huggingface/transformers/blob/v4.9.2/src/transformers/trainer.py#L1734 which is called at https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1771 because it is dict. Any suggestion on how to fix it? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Successfully complete training <!-- A clear and concise description of what you would expect to happen. -->
08-17-2021 07:33:52
08-17-2021 07:33:52
Hey @jnishi, Thanks a lot for your issue! Could you please try to make a minimum reproducible code example that doesn't force us to manually create a `img_folder` or `annotations` folder? Ideally, you could link to a colab that runs in less than a minute to reproduce the error. Also cc'ing @NielsRogge here for DETR<|||||>Here is the link to colab. https://colab.research.google.com/drive/1qvasKfJGhxoNn-l_5GZwkvh4FhW59gBS?usp=sharing Please upload sample.jpg and sample.json included below before you run colab. [detr_samples.tar.gz](https://github.com/huggingface/transformers/files/7011436/detr_samples.tar.gz) <|||||>Thanks for the colab! It was indeed easy to reproduce the issue. I've fixed it here: https://colab.research.google.com/drive/1oIHGwr1U0sw-6KW-MG60s-ksXA-kYyUO?usp=sharing As you already spotted, the problem is in the `_prepare_inputs()` method of the Trainer, which does not take into account inputs which are lists. For DETR, the `labels` are a list of dictionaries, each dictionary containing the annotations (class labels and boxes) for an example in the batch. I've fixed it by overwriting that method. cc'ing @sgugger, as this could be incorporated directly in the Trainer, instead of having to overwrite it.<|||||>Thanks for a quick response, and suggestion of the fix. It works fine in my scripts too. I would be more than happy to incorporate it directly. BTW, I have another problem with a multi-GPU environment, so I created another issue. https://github.com/huggingface/transformers/issues/13197<|||||>The PR linked above should solve this problem. It's a bit more general than your solution in the notebook @NielsRogge to handle any nested dict/list of tensors.
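A minimal sketch of the generic device-placement fix described above: recursively walking nested dicts and lists so that DETR's `labels` (a list of dictionaries of tensors) end up on the GPU too. This mirrors the idea, not the exact Trainer code.
```python
import torch

def move_to_device(obj, device):
    """Recursively move any tensors nested inside dicts/lists/tuples to `device`."""
    if isinstance(obj, torch.Tensor):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: move_to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_device(v, device) for v in obj)
    return obj

# e.g. inputs["labels"] == [{"class_labels": tensor, "boxes": tensor}, ...]
# inputs = move_to_device(inputs, torch.device("cuda:0"))
```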
transformers
13,145
closed
remove unwanted control-flow code from DeBERTa-V2
# What does this PR do? Removes never executed branch from `deberta-v2` code discussed in https://github.com/huggingface/transformers/pull/13120#issuecomment-899865394 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik @patrickvonplaten @Rocketknight1
08-17-2021 04:11:55
08-17-2021 04:11:55
transformers
13,144
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
08-17-2021 01:09:03
08-17-2021 01:09:03
Closing since the issue doesn't seem to have much information. @mahmoudcupo did you mean to submit a benchmark?
transformers
13,143
closed
fix wrong 'cls' masking for bigbird qa model output
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Currently, the bigbird QA model masks out (assign very small value < -1e6) all logits before context tokens as follows. ``` tokens : ['[CLS]', '▁How', '▁old', '▁are', '▁you', '?', '[SEP]', '▁I', "'m", '▁twenty', '▁years', '▁old', '.'] input_ids : [65, 1475, 1569, 490, 446, 131, 66, 415, 1202, 8309, 913, 1569, 114] attention_mask : [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] token_type_ids : [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] start_logits: [-1.00000231e+06 -1.00000294e+06 -1.00000794e+06 -1.00000525e+06 -1.00000344e+06 -1.00000288e+06 -9.99994312e+05 -2.53751278e+00 -7.34928894e+00 4.26531649e+00 -6.21708155e+00 -8.17963409e+00 -6.25242186e+00] end_logits: [-1.00000169e+06 -1.00000869e+06 -1.00000731e+06 -1.00001088e+06 -1.00000856e+06 -1.00000781e+06 -9.99996375e+05 -9.58227539e+00 -9.81797123e+00 -2.89585280e+00 1.97710574e+00 -9.89597499e-01 -5.21932888e+00] ``` As you can see, it also masks out the logits from [CLS] token. This is because the following function makes question masks based on the position of the first [SEP] token. https://github.com/huggingface/transformers/blob/14e9d2954c3a7256a49a3e581ae25364c76f521e/src/transformers/models/big_bird/modeling_big_bird.py#L3047 However, this is the wrong mechanism because [CLS] token is used for the prediction of "unanswerable question" in many QA models. So, I simply change the code so that the masking on [CLS] token is disabled right after the creation of token_type_ids. <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-16-2021 19:42:28
08-16-2021 19:42:28
Hey @donggyukimc, Thanks for your PR - this makes sense to me. Do you by any chance have a reference to the original code / paper that shows that the original CLS token should not be masked out? Also cc-ing our expert on BigBird here @vasudevgupta7 <|||||>@donggyukimc, I am little unsure about this. In the original code also, they are masking out everything till first `[SEP]` ([see this](https://github.com/tensorflow/models/blob/6de0c8e97f6f658a6387d8b7fa946b070a50e98f/official/nlp/projects/triviaqa/modeling.py#L56)). If we don't mask the `CLS` token, then there is a possibility that `start_token` will point to `CLS` but `end_token` will point to some token in a sequence and hence final answer will have question also. I think cases corresponding to whether answer is present (or not) should be handled by putting a classifier over the pooler layer instead ([something like this](https://github.com/vasudevgupta7/bigbird/blob/ea2ce568f8f55978b3f0808f811de7d2ac0deb6c/src/train_nq_torch.py#L96)). If we make the model point `start_token` & `end_token` to `CLS` during training, it usually leads to infinite/nan loss during training but classifier approach works well. Correct me if you feel I am wrong somewhere.<|||||>@vasudevgupta7, Thank you for your comment. I bring the QA models from other architectures (BERT, ROBERTA) https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/bert/modeling_bert.py#L1831-L1863 https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/roberta/modeling_roberta.py#L1518-L1550 Even though both of them do not apply any mask on predictions for CLS (and also questions), they can be trained without the problems on loss. (actually, CLS shouldn't be masked out because they predict unanswerable probability from CLS) As you can see in, [squad_metrics.py](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L384), the QA evaluation pipeline in transformers library, https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L437-L456 it directly computes unanswerable probability from same MLP logit outputs with answerable spans. https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L438 One of your our concerns (**there is a possibility that start_token will point to CLS but end_token will point to some token in a sequence and hence final answer will have question also**) will be prevented in this part. https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L453-L456 because the positions of questions tokens not exists in feature.token_to_orig_map. Your [suggestion](https://github.com/vasudevgupta7/bigbird/blob/ea2ce568f8f55978b3f0808f811de7d2ac0deb6c/src/train_nq_torch.py#L96) using a separate MLP to predict unanswerable probability will also do the work, but you have to use different evaluation code except for [squad_metrics.py](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L384). 
Actually, this is how I found the problem: I got wrong prediction results when I used the BigBird QA model together with [squad_metrics.py](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/data/metrics/squad_metrics.py#L384). In my opinion, it is better to use the same prediction mechanism in order to keep compatibility between the other QA model architectures and the QA evaluation pipeline in the transformers library. I'd like to hear your opinion on this. Thank you for your thoughtful comment again, @vasudevgupta7. <|||||>Any thoughts on my [opinion](https://github.com/huggingface/transformers/pull/13143#issuecomment-902547309)? @patrickvonplaten @vasudevgupta7 <|||||>Hey @donggyukimc, so sorry I missed your comment earlier. As you pointed out about BERT-like models, I think it's fine to unmask the `CLS` token to maintain consistency with other models. So, this PR looks alright to me.<|||||>Awesome, merging it then!
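To illustrate why the unmasked [CLS] logits matter, here is a toy, squad_metrics-style computation of the "no answer" score from the index-0 logits. The numbers are made up and the feature is much shorter than a real one.
```python
import numpy as np

# Toy logits; index 0 corresponds to [CLS]. With the old -1e6 masking, the null
# score below would be around -2e6 and could never beat any answer span.
start_logits = np.array([1.2, -8.0, -7.5, -6.0, 4.3, -5.0])
end_logits = np.array([0.9, -9.0, -6.5, -3.0, -1.0, 5.1])

null_score = start_logits[0] + end_logits[0]  # "unanswerable" score from [CLS]

best_span_score = max(
    start_logits[i] + end_logits[j]
    for i in range(1, len(start_logits))
    for j in range(i, len(end_logits))
)

print(null_score, best_span_score, null_score > best_span_score)
```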
transformers
13,142
closed
Pretrain BART MNLI model on Financial Phrasebank
Hi, I am trying to train/finetune the BART large model pretrained on MNLI on the Financial PhraseBank dataset, but I'm completely lost as I'm just a beginner.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli')
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
```
I couldn't find any code examples for tokenizing the input text from Financial PhraseBank; different tutorials show different approaches and I'm completely confused. Can anyone please share links to examples similar to this? I was also trying to find the fine-tuning code for the BART large MNLI model fine-tuned on the Yahoo dataset by Joe Davison @joeddav (https://huggingface.co/joeddav/bart-large-mnli-yahoo-answers) but couldn't find that code. Any suggestions or advice would be much appreciated. Thanks in advance.
08-16-2021 17:32:28
08-16-2021 17:32:28
Hi, We like to keep Github issues for bugs/feature requests. For training-related questions, please use the [forum](https://discuss.huggingface.co/). Many HuggingFace members are happy to help you there! Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
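For what it's worth, a minimal sketch of the tokenization step for an NLI-style setup with this checkpoint: each Financial PhraseBank sentence is paired with a hypothesis and the pair is tokenized together. The sentence and hypothesis text below are purely illustrative.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")

premise = "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."  # example sentence
hypothesis = "This example is positive."  # illustrative NLI-style hypothesis

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
print(inputs["input_ids"].shape)  # (1, sequence_length)
```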
transformers
13,141
closed
Implement a `batch_size` parameter in the `pipeline` object
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
Implement a `batch_size` parameter in the `pipeline` object, so that when we call it, it computes the predictions in batches of sentences and thus does not run into CUDA Out of Memory errors. Ideally, this optional argument would have a good default, computed from the tokenizer's parameters and the hardware the code is running on. References to this need in the forum: https://discuss.huggingface.co/t/how-to-make-pipeline-automatically-scale/7432/3 https://discuss.huggingface.co/t/how-to-change-the-batch-size-in-a-pipeline/8738
## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->
When running inference on a very long list of sentences with the `pipeline` object, I often get CUDA OOM errors.
## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I could try :)
08-16-2021 16:05:56
08-16-2021 16:05:56
@sgugger ?<|||||>Hello @xegulon, this is in line with some work currently underway by @Narsil <|||||>@xegulon, Batching on inference is something to be very cautious about, because alignment might heavily penalize the speed of inference. See https://github.com/huggingface/transformers/pull/11251 and https://gist.github.com/Narsil/ee5c09875e74fa6f018dc6d014f6c06c for more information. Cuda OOM errors are most likely due to the fact that you are padding way too much, and actually showcase the slow down. The big refactor mentionned by @LysandreJik is ready here https://github.com/Narsil/transformers/tree/iterable_pipelines With said PR, you should be able to actually stream all your data to the GPU leading to a massive speedup (like DataLoader), and if you want to do batching because you know it will speedup (please measure real payloads, it's unlikely to be significant, so make sure it is a speedup) you can do it by manually using `Dataloader`, `preprocess`, `forward` and `postprocess`. The proposed PR will use DataLoader (for `pt`) by default if you send lists too. You can also send directly Datasets. <|||||>Great (useful) work @Narsil thanks a lot. Is it planned to be released in `v4.10.0`?<|||||>I don't think it will make it in time, it's a pretty massive change, we're pulling in stuff bit by bit to make sure we're not breaking anything (we're in a phase where we're strengthening the tests first)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@xegulon the modifications have landed in master, can you confirm it speeds up inference without the need for `batch_size` ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
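A small sketch of the streaming usage hinted at above: handing the whole list to the pipeline and iterating over the results rather than calling it sentence by sentence. Whether an explicit batch size actually helps should be measured on real payloads, as noted in the thread.
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

sentences = ["I love this.", "This is terrible.", "Not sure how I feel about it."]

# Passing the full list lets the pipeline drive the iteration (and, on recent
# versions, stream through a DataLoader) instead of a Python loop of single calls.
for result in classifier(sentences):
    print(result)
```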
transformers
13,140
closed
Ci continue through smi failure
Temporary fix in order to get coverage while we replace the machine: apply the `continue-on-error` option to NVIDIA-SMI runs that run on the multi-gpu machine
08-16-2021 15:38:05
08-16-2021 15:38:05
transformers
13,139
closed
[WIP][Wav2Vec2] Fix Wav2Vec2 Pretraining
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## 1. Fix Wav2Vec2 Pretraining with PyTorch & Deepspeed Changes in the initialization and loss calculation seemed to have solved the unstable wav2vec2 pretraining loss problem for now. In a first run, the following loss curves were obtained: ``` {'loss': 4.7028, 'learning_rate': 1.1098409651313401e-05, 'epoch': 0.49} {'loss': 4.6605, 'learning_rate': 1.443936386052536e-05, 'epoch': 0.99} {'loss': 4.7504, 'learning_rate': 1.639369678954086e-05, 'epoch': 1.49} {'loss': 4.5622, 'learning_rate': 1.778031806973732e-05, 'epoch': 1.99} {'loss': 4.5645, 'learning_rate': 1.885586509341484e-05, 'epoch': 2.49} {'loss': 4.513, 'learning_rate': 1.9734650998752817e-05, 'epoch': 2.99} {'loss': 4.5666, 'learning_rate': 2.0477653894913667e-05, 'epoch': 3.49} {'loss': 4.4167, 'learning_rate': 2.112127227894928e-05, 'epoch': 3.99} {'loss': 4.5131, 'learning_rate': 2.1742243895858865e-05, 'epoch': 4.49} {'loss': 4.4049, 'learning_rate': 2.2244779679123014e-05, 'epoch': 4.99} {'loss': 4.5507, 'learning_rate': 2.269983228781891e-05, 'epoch': 5.49} {'loss': 4.4056, 'learning_rate': 2.3115605255534444e-05, 'epoch': 5.99} {'loss': 4.4998, 'learning_rate': 2.349834418215476e-05, 'epoch': 6.49} {'loss': 4.4116, 'learning_rate': 2.385291414268169e-05, 'epoch': 6.99} {'loss': 4.539, 'learning_rate': 2.418317878182792e-05, 'epoch': 7.49} {'loss': 4.3734, 'learning_rate': 2.4492257601320028e-05, 'epoch': 7.99} {'loss': 4.4986, 'learning_rate': 2.4810810490944393e-05, 'epoch': 8.49} {'loss': 4.3982, 'learning_rate': 2.5083198105070827e-05, 'epoch': 8.99} {'loss': 4.4946, 'learning_rate': 2.5341012393499218e-05, 'epoch': 9.49} {'loss': 4.3938, 'learning_rate': 2.5585733888334973e-05, 'epoch': 9.99} {'loss': 4.4944, 'learning_rate': 2.5818628371128123e-05, 'epoch': 10.49} {'loss': 4.4035, 'learning_rate': 2.604078649703087e-05, 'epoch': 10.99} 6%|████████▎ | 227/4000 [16:55<4:45:30, 4.54s/it] ``` The run can be reproduced by doing the following: **1. Create training folder** ```bash mkdir wav2vec2_reproduce cd wav2vec2_reproduce ``` **2. Create data folder** ``` git lfs install git clone https://huggingface.co/patrickvonplaten/LibriSpeechTest ``` **3. Create model & experiment folder** ``` git lfs install git clone https://huggingface.co/patrickvonplaten/wav2vec2_libri ``` **4. Prepare training** We need to create a simlink as follows: ``` ln $(realpath ./LibriSpeechTest) LibriSpeech ``` and the manual data dir as defined in: https://huggingface.co/patrickvonplaten/wav2vec2_libri/blob/main/run_main.sh#L20 should be renamed to the local absolute path that leads to the just created simlink folder `wav2vec2_reproduce/LibriSpeech`. 
We have to make sure that the `transformers` is checkout to the branch of this PR: `https://github.com/patrickvonplaten/transformers/tree/wav2vec2-pretraining` Finally, we can start running the training: ``` cd wav2vec2_libri ./run_main.sh ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-16-2021 13:39:23
08-16-2021 13:39:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
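As background for the reproduction steps above, a minimal wav2vec 2.0 pretraining forward pass with the `transformers` API can look like the sketch below. It is an illustration only, not the change made in this PR; the masking and negative-sampling helpers are private utilities whose names and signatures may differ between library versions, and the masking hyper-parameters here are assumptions.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    _compute_mask_indices,
    _sample_negative_indices,
)

config = Wav2Vec2Config()
model = Wav2Vec2ForPreTraining(config)

# two random "waveforms" of ~1 s at 16 kHz stand in for real LibriSpeech audio
input_values = torch.randn(2, 16000)

# length of the frame sequence after the convolutional feature encoder
seq_len = int(model._get_feat_extract_output_lengths(torch.tensor(input_values.shape[-1])))

# sample which frames are masked (the contrastive targets) and their distractors
mask_time_indices = _compute_mask_indices(shape=(2, seq_len), mask_prob=0.65, mask_length=10)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(2, seq_len),
    num_negatives=config.num_negatives,
    mask_time_indices=mask_time_indices,
)

outputs = model(
    input_values,
    mask_time_indices=torch.tensor(mask_time_indices, dtype=torch.bool),
    sampled_negative_indices=torch.tensor(sampled_negative_indices, dtype=torch.long),
)
print(outputs.loss)  # contrastive objective plus the weighted codebook-diversity term
```

The quantity printed at the end corresponds to the loss tracked in the table above (contrastive loss plus diversity penalty).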
transformers
13,138
closed
Fix classifier dropout in RobertaForMultipleChoice
# What does this PR do? Fix as per [PR#13087](https://github.com/huggingface/transformers/pull/13087) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-16-2021 12:14:38
08-16-2021 12:14:38
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
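For readers who have not opened the referenced PR, the fallback idiom it propagates is sketched below. Attribute names follow the usual BERT/RoBERTa config conventions; this is a sketch of the pattern, not a copy of the diff.

```python
import torch.nn as nn
from transformers import RobertaConfig

def build_classifier_dropout(config):
    # Prefer an explicit classifier_dropout when it is set on the config; otherwise fall
    # back to the generic hidden_dropout_prob instead of hard-coding the latter.
    classifier_dropout = (
        config.classifier_dropout
        if getattr(config, "classifier_dropout", None) is not None
        else config.hidden_dropout_prob
    )
    return nn.Dropout(classifier_dropout)

print(build_classifier_dropout(RobertaConfig()))                        # falls back to 0.1
print(build_classifier_dropout(RobertaConfig(classifier_dropout=0.3)))  # uses 0.3
```

The point of the fallback is that an explicitly configured `classifier_dropout` wins, while older configs without the field keep the previous behaviour.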
transformers
13,137
closed
how to finetune or test XLM-ProphetNet on XGLUE-NTG task
# 📚 Migration ## Information <!-- Important information --> Model I am using (xprophetnet): 'microsoft/xprophetnet-large-wiki100-cased-xglue-ntg' Language I am using the model on (English, Chinese ...): multi-language The problem arises when using: * [ ] the official example scripts: (give details below) * [√] my own modified scripts: (give details below) Just a little change in ./examples/pytorch/summarization/run_summarization_no_trainer.py to suit for NTG task and bleu evaluation metric. The tasks I am working on is: * [√ ] an official GLUE/SQUaD task: (give the name): XGLUE-NTG * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> I have tried to use open xprophetnet checkpoint "https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased-xglue-ntg" to reproduce NTG test results without further training, but I have received very bad blue results. For example, the test.fr can only get 7.7, while the paper claims 11.4. The num_beams and max_source_length parameters in my script (run_summarization_no_trainer.py) are set to 10 and 512, while others are same as original default value. Now I don't know how to reproduce the NTG results of xprophetnet. Can you show me some related inference scripts or how to fine-tune xprophetnet-ntg from the pre-trained xprophetnet-multi ckpt? Here are some jupyter notebook examples. You can see that most generated titles are wrong, even have this ',,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'. ``` for step, batch in enumerate(test_dataloader[lg]): if step > 5: break with torch.no_grad(): generated_tokens = accelerator.unwrap_model(model).generate( batch["input_ids"], attention_mask=batch["attention_mask"], **gen_kwargs, ) #print("generated_tokens", generated_tokens) generated_tokens = accelerator.pad_across_processes( generated_tokens, dim=1, pad_index=tokenizer.pad_token_id ) #print("generated_tokens", generated_tokens) labels = batch["labels"] if not args.pad_to_max_length: # If we did not pad to max length, we need to pad the labels too labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id) generated_tokens = accelerator.gather(generated_tokens).cpu().numpy() labels = accelerator.gather(labels).cpu().numpy() if args.ignore_pad_token_for_loss: # Replace -100 in the labels as we can't decode them. 
labels = np.where(labels != -100, labels, tokenizer.pad_token_id) if isinstance(generated_tokens, tuple): generated_tokens = generated_tokens[0] decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) input_seq = tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True) print("\ninput_seq", input_seq[0][:200]) print("decoded_preds", decoded_preds) print("decoded_labels", decoded_labels) # Some simple post-processing #decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) ``` input_seq Vice-présidente de l'Assemblée nationale, la macroniste Carole Bureau-Bonnard était chargée mardi après-midi d'animer la séance d'examen du projet de loi «confiance dans l'action publique». C'était sa decoded_preds ["Carole Bureau-Bonnard, vice-présidente de l'Assemblée nationale, a connu une séance éprouvante", "Les plus grands fauteuils de l'île d'Antiparos"] decoded_labels ["Les débuts balbutiants d'une députée LREM provoque la pagaille à l'Assemblée nationale", 'Ces maisons du sud qui nous inspirent'] input_seq Le procès d'un Turc de 17 ans qui avait agressé en janvier 2016 à la machette un enseignant d'une école juive de Marseille portant une kippa, s'ouvre mercredi devant le tribunal pour enfants (TPE) de decoded_preds [',,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,', 'The S.O.A.A.D.:,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,'] decoded_labels ['Un jeune djihadiste de 17 ans en procès à Paris', 'Canada : la forme de ce nuage est invraisemblable'] input_seq Face à l'inflation des médicaments, le Comité économique des produits de santé alerte les industriels, qui répondent coûts de recherche. Une fatalité? On la surnomme "la pilule du président", car elle decoded_preds ['Le Keytruda est un espoir pour les malades atteints de la tumeur de Jimmy Carter', 'Les voyageurs qui utilisent Android ou iOS seraient des voyageurs préférés'] decoded_labels ['La vérité sur... la surenchère des anticancéreux', "Dis-moi quel système d'exploitation mobile tu utilises, je te dirai quel voyageur tu es"] input_seq La République serbe de Bosnie (Republika Srpska) s'est déclarée mercredi "militairement neutre" alors que le gouvernement central de Sarajevo, les Bosniaques musulmans et les Croates de Bosnie-Herzégo decoded_preds ['La République serbe de Bosnie déclarée "militairement neutre"', 'Les habitudes alimentaires des Français changent, selon une étude'] decoded_labels ['La République serbe de Bosnie proclame sa neutralité militaire', 'Les Français de plus en plus adeptes du grignotage'] input_seq Eva Longoria se livre dans une interview accordée à Hollywood Access au sujet de son mari, José Baston dont elle semble éperdument amoureuse. Grande supportrice de l'ex-candidate présidentielle Hillar decoded_preds ["Eva Longoria s'est confiée sur le bonheur trouvé dans le bras de José Baston", '3 exercices de respiration simples à mettre en oeuvre pour se détendre'] decoded_labels ['Avec Pepe, Eva Longoria file le parfait amour', '3 exercices de respiration qui vont vous sauver en cas de coup de stress'] input_seq Le kaki fait son grand come-back dans notre dressing. Par petites touches ou en total look, voici 20 tenues repérées sur Pinterest pour être stylée en kaki.. 
Un blouson satiné kaki avec une jupe fleur decoded_preds ['20 tenues pour être stylée en kaki', 'La tuerie de Las Vegas relance le débat sur le contrôle des armes à feu aux Etats-Unis'] decoded_labels ['Pinterest : 20 façons de porter du kaki ce printemps', 'Fusillades: Les Etats-Unis pays développé le plus meurtrier au monde'] ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - PyTorch version (GPU): - Using GPU in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): pytorch-transformers ## Checklist - [√ ] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [√ ] I checked if a related official extension example runs on my machine.
08-16-2021 09:37:02
08-16-2021 09:37:02
I think XLMProphetNet and ProphetNet training is currently broken, see: https://github.com/huggingface/transformers/issues/9804<|||||>There might be a PR to fix it though https://github.com/huggingface/transformers/pull/13132<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
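For anyone attempting the same reproduction, a minimal inference sketch for the released NTG checkpoint is shown below. The generation hyper-parameters are assumptions chosen for illustration; they are not claimed to be the settings behind the paper's reported BLEU scores.

```python
from transformers import XLMProphetNetForConditionalGeneration, XLMProphetNetTokenizer

checkpoint = "microsoft/xprophetnet-large-wiki100-cased-xglue-ntg"
tokenizer = XLMProphetNetTokenizer.from_pretrained(checkpoint)
model = XLMProphetNetForConditionalGeneration.from_pretrained(checkpoint)

article = (
    "Le kaki fait son grand come-back dans notre dressing. Par petites touches ou en "
    "total look, voici 20 tenues repérées sur Pinterest pour être stylée en kaki."
)
inputs = tokenizer([article], max_length=512, truncation=True, return_tensors="pt")

# beam size, max length, length penalty and n-gram blocking are illustrative assumptions
title_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=10,
    max_length=64,
    length_penalty=1.0,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.batch_decode(title_ids, skip_special_tokens=True))
```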
transformers
13,136
closed
Correct & simplify check_dummies regex
# What does this PR do? Adds the `\` escapes that were missing for `()` and removes an unnecessary `\` in the code-matching regex used by check_dummies. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-16-2021 05:57:22
08-16-2021 05:57:22
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
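To illustrate the pitfall the PR fixes, the snippet below shows why unescaped parentheses change what a regex matches. The pattern is a hypothetical stand-in, not the actual expression in `utils/check_dummies.py`.

```python
import re

line = 'requires_backends(self, ["torch"])'

# Unescaped "(" and ")" act as a capturing group, so this pattern matches the text
# 'requires_backendsself, ["torch"]' (without the literal parentheses) and fails here.
loose = re.compile(r'requires_backends(self, \["torch"\])')
print(bool(loose.fullmatch(line)))   # False

# Escaping the parentheses makes them literal characters again.
strict = re.compile(r'requires_backends\(self, \["torch"\]\)')
print(bool(strict.fullmatch(line)))  # True
```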
transformers
13,135
closed
dtype
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-15-2021 22:24:37
08-15-2021 22:24:37
transformers
13,134
closed
✨ Add PyTorch image classification example
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds PyTorch image classification example. For now, it uses `torchvision.datasets.ImageFolder` to load local image folders (just like the flax image classification example). In the future, we will switch to using the `datasets` package's image folder (once it exists). Marking as draft for now as I'm still working through cleaning up changes I made from [this example](https://github.com/nateraw/vision/tree/main/image-classification) I wrote earlier that uses `datasets` instead. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2021 20:22:40
08-15-2021 20:22:40
Nice!! Relevant for #13080 <|||||>I'll review this PR in detail (thanks for working on this!). Regarding the fixtures for the tests, I've recently moved these files to the hf-internal-testing organization on the [hub](https://huggingface.co/hf-internal-testing). This makes it more clear, as otherwise these fixture files are also downloaded when people do a `git clone` of the library.<|||||>Last nit on my side: can we move the vision folder to be `image-classification`? We will have other kinds of vision examples in the future.<|||||>Ok, addressed most of the comments. Merging as-is for now. @NielsRogge I did not address these two items, however I can in future PRs (if need be): - Adding test data to `datasets` library. - Adjusting train/validation/test split logic. <|||||>13134
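A hedged sketch of the approach described above (loading a local folder with `torchvision.datasets.ImageFolder` and feeding it to a ViT classifier) is shown below. The paths are placeholders and the checkpoint and transform choices are assumptions; the merged example script may organise this differently.

```python
import torch
from torchvision import datasets, transforms
from transformers import ViTFeatureExtractor, ViTForImageClassification

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

preprocess = transforms.Compose([
    # .size is an int in the library version assumed here
    transforms.Resize((feature_extractor.size, feature_extractor.size)),
    transforms.ToTensor(),
    transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])

# expects the usual ImageFolder layout: path/to/train/<class_name>/<image>.jpg
train_ds = datasets.ImageFolder("path/to/train", transform=preprocess)

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(train_ds.classes),
)

pixel_values, label = train_ds[0]
outputs = model(pixel_values=pixel_values.unsqueeze(0), labels=torch.tensor([label]))
print(outputs.loss, outputs.logits.shape)
```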
transformers
13,133
closed
[WIP] Add Few Shot Named Entity Recognition (FSNER) model
# What does this PR do? - This PR adds a new model FSNER (few shot named entity recognition) which has been implemented and trained based on the paper: [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) from the researchers at Microsoft Dynamics 365 AI. - It includes only FSNERModel, not any other derivations, i.e., MaskedLM, ForQuestionAnswering etc. - Doc strings are also updated, but I am not sure how they will appear visually. - No additional tests are included. <details><summary>Usage Example</summary> <p> ``` from transformers import FSNERModel, FSNERTokenizerFast, FSNERTokenizer device = 'cpu' fsner_model = FSNERModel.from_pretrained("sayef/fsner-bert-base-uncased").to(device) fsner_tokenizer = FSNERTokenizer.from_pretrained("sayef/fsner-bert-base-uncased") # size of query and supports must be the same. If you want to find all the entities in one particular query, just repeat the same query n times where n is the size of supports (or entities). query = [ 'KWE 4000 can reach with a maximum speed from up to 450 P/min an accuracy from 50 mg', 'I would like to order a computer from eBay.', ] # each list in supports contains the examples of one entity type supports = [ [ 'Horizontal flow wrapper [E] Pack 403 [/E] features the new retrofit-kit „paper-ON-form“', '[E] Paloma Pick-and-Place-Roboter [/E] arranges the bakery products for the downstream tray-forming equipment', 'Finally, the new [E] Kliklok ACE [/E] carton former forms cartons and trays without the use of glue', 'We set up our pilot plant with the right [E] FibreForm® [/E] configuration to make prototypes for your marketing tests and package validation', 'The [E] CAR-T5 [/E] is a reliable, purely mechanically driven cartoning machine for versatile application fields' ], [ "[E] Walmart [/E] is a leading e-commerce company", "I recently ordered a book from [E] Amazon [/E]", "I ordered this from [E] ShopClues [/E]", "Fridge can be ordered in [E] Amazon [/E]", "[E] Flipkart [/E] started its journey from zero" ] ] def tokenize(x): return fsner_tokenizer(x, padding='max_length', max_length=384, truncation=True, return_tensors="pt") W_query = tokenize(query).to(device) W_supports = tokenize([s for support in supports for s in support]).to(device) start_prob, end_prob = fsner_model.get_start_end_token_scores(W_query, W_supports) output = fsner_tokenizer.extract_entity_from_scores(query, W_query, start_prob, end_prob, thresh=0.50) print(output) ``` </p> </details> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @stas00 @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2021 14:32:58
08-15-2021 14:32:58
Hi, Thanks for your contribution! Looking at the code, I'm not sure whether there's a need to add an entire new model for it that includes a modeling file, tokenizer, etc., as the model itself is just a BERT model, and one can just use `BertTokenizer` to prepare data for the model. The only differences are the `get_start_end_token_scores` and `extract_entity_from_scores` methods. So personally, I'd opt to: - upload the model's weights to the hub under the "microsoft" namespace. I see you've already uploaded them under your own name, so we can transfer or copy them for you. - add a Colab notebook or script under the [research_projects](https://github.com/huggingface/transformers/tree/master/examples/research_projects) directory that illustrates how FSNER works. <|||||>Thanks for your kind reviews and replies. I would like to start with @NielsRogge 's comment. - The `get_start_end_token_scores` method is actually the few-shot prediction method. The training method (not yet included) is very similar, returning the entity start/end span probabilities and, of course, the proposed loss function, etc. Because of the extra special tokens, i.e. [E] and [/E], the tokenizer has some modifications inside. That said, end users can also do that themselves. - On the other hand, the `extract_entity_from_scores` method is responsible for choosing the best spans from the start/end probabilities, like the answer-span selection process in a question answering task. - In the end, we actually obtain a fine-tuned BERT model ready for the few-shot named entity recognition task. Now, what I am confused about is how you maintain/support a model that is not a new Transformer variant but rather uses a pretrained BERT model fine-tuned on a new task and data, for example BertForQuestionAnswering. It would be best, in my opinion, if we could work on something like that: BertForFSNER or something similar. It's totally okay for me to support the model in any format, i.e., new model, Colab script, or what I discussed above. To help you understand the model, I am attaching the class I wrote when I started implementing the proposed architecture. 
<details><summary>FSNER Prototype Code</summary> <p> ```python class FSNER(nn.Module): def __init__(self, model_name='bert-base-uncased'): super(FSNER, self).__init__() # declare bert tokenizer self.tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') # add special tokens for enity boundaries self.tokenizer.add_special_tokens({'additional_special_tokens': ['[E]','[/E]']}) # get enitity start and end token ids self.start_token_id, self.end_token_id = tuple(self.tokenizer.convert_tokens_to_ids(['[E]','[/E]'])) # declare bert model self.bert = BertModel.from_pretrained(model_name, return_dict=True) # resize model token embeddings self.bert.resize_token_embeddings(len(self.tokenizer)) # cosine sim self.cos = torch.nn.CosineSimilarity(3, 1e-08) # softmax self.softmax = torch.nn.Softmax(dim=1) def BERT(self, **inputs): return self.bert(**inputs).last_hidden_state def VectorSum(self, token_embeddings): return token_embeddings.sum(2, keepdim=True) def Atten(self, q_rep, S_rep, T=1): return self.softmax(T*self.cos(q_rep, S_rep)) def tokenize(self, x): return self.tokenizer(x, padding='max_length', max_length=384, truncation=True, return_tensors="pt", return_offsets_mapping=True) def save(self): self.bert.save_pretrained('./fsner-bert-base-uncased/') def forward(self, W_query, W_supports): q = self.BERT(**W_query) S = self.BERT(**W_supports) # reshape from (batch_size, 384, 784) to (batch_size, 1, 384, 784) q = q.view(q.shape[0], -1, q.shape[1], q.shape[2]) # reshape from (batch_size*n_exaples_per_entity, 384, 784) to (batch_size, n_exaples_per_entity, 384, 784) S = S.view(q.shape[0], -1, S.shape[1], S.shape[2]) q_rep = self.VectorSum(q) S_rep = self.VectorSum(S) s_start = S[(W_supports['input_ids'] == self.start_token_id).view(S.shape[:3])].view(S.shape[0], -1, 1, S.shape[-1]) s_end = S[(W_supports['input_ids'] == self.end_token_id).view(S.shape[:3])].view(S.shape[0], -1, 1, S.shape[-1]) atten = self.Atten(q_rep, S_rep) P_start = torch.sum(atten * torch.einsum("bitf,bejf->bet", q, s_start), dim=1) P_end = torch.sum(atten * torch.einsum("bitf,bejf->bet", q, s_end), dim=1) return P_start, P_end def decode(self, ids, skip_special_tokens=True): return self.tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) ``` As you can see, the forward method takes in BERT pretrained model and does some calculations to provide span predictions. </p> </details> <|||||>The [research_projects folder](https://github.com/huggingface/transformers/tree/master/examples/research_projects) is typically used for models that introduce a new technique or model based BERT. Examples are Performer, LXMERT, etc. As mentioned there, they are not actively maintained, one just needs to specify a requirements.txt file, together with a script or Colab notebook. So perhaps you can make a Colab notebook in which you define the `nn.Module` as shown in your prototype above, and illustrate how the model works to perform few-shot NER. You can also fill in the README of that folder as you like. Does that work for you? <|||||>> The [research_projects folder](https://github.com/huggingface/transformers/tree/master/examples/research_projects) is typically used for models that introduce a new technique or model based BERT. Examples are Performer, LXMERT, etc. > > As mentioned there, they are not actively maintained, one just needs to specify a requirements.txt file, together with a script or Colab notebook. 
So perhaps you can make a Colab notebook in which you define the `nn.Module` as shown in your prototype above, and illustrate how the model works to perform few-shot NER. You can also fill in the README of that folder as you like. > > Does that work for you? - Yeah, that works for me. I just want to keep the trained model's weights and tokenizer under my namespace, since they are not officially from Microsoft. And I also plan to add other BERT variation based fsner. So, I would prefer to keep those under my namespace, if that's not an issue for you. - So, should I/you close this PR and open a new PR with the suggested procedures you mentioned above?<|||||>> I just want to keep the trained model's weights and tokenizer under my namespace, since they are not officially from Microsoft. And I also plan to add other BERT variation based fsner. So, I would prefer to keep those under my namespace, if that's not an issue for you. Makes sense! > So, should I/you close this PR and open a new PR with the suggested procedures you mentioned above? Yes, indeed. You can perhaps take a look at other research projects to get some inspiration :) <|||||>Thanks for your help. Will talk to you in other PR.
transformers
13,132
closed
Fix the loss calculation of ProphetNet
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #9804 ## Before submitting - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2021 14:12:05
08-15-2021 14:12:05
Thanks for the fix here @StevenTang1998 - did you successfully run a ProphetNet fine-tuning with this fix? :-)<|||||>Yes, the results from my PR closely match those obtained by computing the loss manually.<|||||>That sounds great! I'm running the training command: ``` python examples/pytorch/summarization/run_summarization.py --learning_rate=3e-5 --do_train --do_eval --evaluation_strategy steps --model_name_or_path microsoft/prophetnet-large-uncased --output_dir myoutputdir --per_device_train_batch_size 8 --per_device_eval_batch_size 16 --eval_accumulation_steps 8 --gradient_accumulation_steps 8 --num_train_epochs=20 --load_best_model_at_end --save_steps 25 --logging_steps 25 --fp16 --overwrite_output_dir --dataset_name cnn_dailymail --dataset_config_name 3.0.0 ``` on a single GPU once to verify that training works :-) Will let you know how it goes!<|||||>I've run training for 5h and the loss goes down nicely, which is a very good sign! Maybe this is the long-awaited ProphetNet fix :partying_face: Merging!
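For anyone who wants to cross-check the loss the way it is discussed above, here is a generic sketch of averaging token-level cross-entropy over ProphetNet's n prediction streams. It illustrates the idea only; it is not the code in this PR and it omits details such as label smoothing and per-stream padding handling.

```python
import torch
import torch.nn.functional as F

def ngram_lm_loss(logits, labels, ignore_index=-100):
    """Cross-entropy averaged over ProphetNet's n future-prediction streams.

    logits: (batch, ngram, seq_len, vocab)  one stream per predicted n-gram offset
    labels: (batch, seq_len)                gold tokens used to supervise each stream
    """
    batch, ngram, seq_len, vocab = logits.shape
    expanded_labels = labels.unsqueeze(1).expand(batch, ngram, seq_len)
    return F.cross_entropy(
        logits.reshape(-1, vocab),
        expanded_labels.reshape(-1),
        ignore_index=ignore_index,
    )

# toy shapes: batch of 2, 2 streams, 5 target positions, small vocabulary
logits = torch.randn(2, 2, 5, 100)
labels = torch.randint(0, 100, (2, 5))
print(ngram_lm_loss(logits, labels))
```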
transformers
13,131
closed
[WIP] Add Few Shot Named Entity Recognition (FSNER) model
# What does this PR do? - This PR adds a new model FSNER (few shot named entity recognition) which has been implemented and trained based on the paper: [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) from the researchers at Microsoft Dynamics 365 AI. - It includes only FSNERModel, not any other derivations, i.e., MaskedLM, ForQuestionAnswering etc. - Doc strings are also updated, but I am not sure how they will appear visually. - No additional tests are included. <details><summary>Usage Example</summary> <p> ``` from transformers import FSNERModel, FSNERTokenizerFast, FSNERTokenizer device = 'cpu' fsner_model = FSNERModel.from_pretrained("sayef/fsner-bert-base-uncased").to(device) fsner_tokenizer = FSNERTokenizer.from_pretrained("sayef/fsner-bert-base-uncased") # size of query and supports must be the same. If you want to find all the entities in one particular query, just repeat the same query n times where n is the size of supports (or entities). query = [ 'KWE 4000 can reach with a maximum speed from up to 450 P/min an accuracy from 50 mg', 'I would like to order a computer from eBay.', ] # each list in supports contains the examples of one entity type supports = [ [ 'Horizontal flow wrapper [E] Pack 403 [/E] features the new retrofit-kit „paper-ON-form“', '[E] Paloma Pick-and-Place-Roboter [/E] arranges the bakery products for the downstream tray-forming equipment', 'Finally, the new [E] Kliklok ACE [/E] carton former forms cartons and trays without the use of glue', 'We set up our pilot plant with the right [E] FibreForm® [/E] configuration to make prototypes for your marketing tests and package validation', 'The [E] CAR-T5 [/E] is a reliable, purely mechanically driven cartoning machine for versatile application fields' ], [ "[E] Walmart [/E] is a leading e-commerce company", "I recently ordered a book from [E] Amazon [/E]", "I ordered this from [E] ShopClues [/E]", "Fridge can be ordered in [E] Amazon [/E]", "[E] Flipkart [/E] started its journey from zero" ] ] def tokenize(x): return fsner_tokenizer(x, padding='max_length', max_length=384, truncation=True, return_tensors="pt") W_query = tokenize(query).to(device) W_supports = tokenize([s for support in supports for s in support]).to(device) start_prob, end_prob = fsner_model.get_start_end_token_scores(W_query, W_supports) output = fsner_tokenizer.extract_entity_from_scores(query, W_query, start_prob, end_prob, thresh=0.50) print(output) ``` </p> </details> Would like to have the attention of @LysandreJik, @stas00, @sgugger
08-15-2021 13:18:47
08-15-2021 13:18:47
transformers
13,130
closed
[Flax] Add logging steps, eval steps, and save steps for hybrid CLIP example
# What does this PR do? This PR enables users to set `logging_steps`, `eval_steps`, and `save_steps` when training a model using the Hybrid CLIP example. `logging_steps` helps to keep the train_metrics small so that we can avoid fragmentation errors. `eval_steps` and `save_steps` enables users to save evaluation results and model checkpoints based on steps instead of epochs which may run for days especially when using large datasets. Discussed in #13095 ## Notes I'd like to have input on the following: - I've tested the script using the same dataset as the one described in the readme. The run can be found on [tensorboard.dev](https://tensorboard.dev/experiment/WH8xEX25RVavnizS4VaU8Q/#scalars). I'm not sure if I should update the tensorboard in the readme or not. - I'm also not sure if we should save the final model once the training is done, or only save the model based on the steps only. Right now the script also saves the final model after the whole training is done. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Yes, discussed in #13095 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patil-suraj
08-15-2021 07:45:27
08-15-2021 07:45:27
@galuhsahid could you run `make style`? That will fix the failing test. Thanks :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>gently pinging @galuhsahid :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
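A minimal sketch of what step-based logging, evaluation and checkpointing look like inside a training loop is given below. The flag names mirror the ones added by this PR, while `train_step`, `eval_step`, `write_metric` and `save_checkpoint` are stand-in stubs assumed here so the control flow runs; the example script's real functions differ.

```python
from collections import defaultdict

# Assumed stand-ins for the example script's real components.
def train_step(state, batch): return state, {"loss": 0.0}
def eval_step(state, batch): return {"loss": 0.0}
def write_metric(writer, metrics, step, prefix="train"): writer[prefix].append((step, len(metrics)))
def save_checkpoint(state, step): print(f"checkpoint at step {step}")

state = {}
summary_writer = defaultdict(list)
train_loader = range(2000)  # pretend batches
eval_loader = range(10)

logging_steps, eval_steps, save_steps = 100, 500, 1000
train_metrics = []

for step, batch in enumerate(train_loader, start=1):
    state, metrics = train_step(state, batch)
    train_metrics.append(metrics)

    if step % logging_steps == 0:
        write_metric(summary_writer, train_metrics, step)
        train_metrics = []  # flushing keeps the host-side metric list (and memory) small

    if step % eval_steps == 0:
        eval_metrics = [eval_step(state, b) for b in eval_loader]
        write_metric(summary_writer, eval_metrics, step, prefix="eval")

    if step % save_steps == 0:
        save_checkpoint(state, step)  # persist params so long runs can resume mid-epoch
```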
transformers
13,129
closed
Fix classifier dropout in bertForMultipleChoice
# What does this PR do? Fix as per [PR#13087](https://github.com/huggingface/transformers/pull/13087) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2021 07:00:04
08-15-2021 07:00:04
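Assuming this change mirrors the referenced PR, its practical effect is that an explicitly configured `classifier_dropout` is respected by the multiple-choice head; a hedged usage sketch:

```python
from transformers import BertConfig, BertForMultipleChoice

# classifier_dropout is assumed to be the config field added by the referenced PR;
# 0.2 is an arbitrary example value
config = BertConfig.from_pretrained("bert-base-uncased", classifier_dropout=0.2)
model = BertForMultipleChoice.from_pretrained("bert-base-uncased", config=config)

# with the fix, the head's nn.Dropout uses 0.2; before it, the head silently fell back
# to config.hidden_dropout_prob (0.1 for bert-base-uncased)
print(model.dropout)
```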