Dataset schema:

| Column | Type | Observed values |
| --- | --- | --- |
| url | string | lengths 62–66 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 distinct values |
| active_lock_reason | string | 2 distinct values |
| body | string | lengths 0–234k |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 distinct values |
| draft | bool | 2 classes |
| pull_request | dict | |
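The rows below follow this schema, one field per line in the column order above. As a rough, hypothetical sketch (the dataset identifier used here is a placeholder, not the real repository id of this dump), rows with these columns could be loaded and inspected with the `datasets` library like so:

```python
from datasets import load_dataset

# Placeholder identifier: substitute the actual Hub repo id of this
# GitHub-issues dump. The column names match the schema table above.
ds = load_dataset("username/transformers-github-issues", split="train")

# Each row is a plain dict keyed by the columns above.
row = ds[0]
print(row["html_url"], "|", row["title"], "|", row["state"])

# In the rows shown here, plain issues have a null `pull_request` field
# (loaded as None), while pull requests carry a dict with diff/patch URLs,
# so the field can be used to separate the two.
issues_only = ds.filter(lambda r: r["pull_request"] is None)
print(len(issues_only), "rows are issues without an associated pull request")
```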
https://api.github.com/repos/huggingface/transformers/issues/10735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10735/comments
https://api.github.com/repos/huggingface/transformers/issues/10735/events
https://github.com/huggingface/transformers/pull/10735
832,350,238
MDExOlB1bGxSZXF1ZXN0NTkzNTIyODI2
10,735
Release utils
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds utilities to help with the release process and four make commands to use them easily: 1. `make pre-release` will do all the necessary steps prior to the commit with the release tag (put the right version in all the good places and clean the README from the references to the master documentation). 2. `make post-release` will do all the necessary steps after the release has been made (put the right dev version in all the good places and add the latest version in the deploy doc/doc navbar) 3. `make pre-patch` will do all the necessary steps prior to the commit with the release patch tag (put the right version in all the good places). 4. `make post-patch` will do all the necessary steps after the patch release has been made and we are back on master (put the right dev version in all the good places and add the latest version in the deploy doc/doc navbar)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10735/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10735", "html_url": "https://github.com/huggingface/transformers/pull/10735", "diff_url": "https://github.com/huggingface/transformers/pull/10735.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10735.patch", "merged_at": 1615898507000 }
https://api.github.com/repos/huggingface/transformers/issues/10734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10734/comments
https://api.github.com/repos/huggingface/transformers/issues/10734/events
https://github.com/huggingface/transformers/pull/10734
832,333,260
MDExOlB1bGxSZXF1ZXN0NTkzNTA5MjAw
10,734
[examples/seq2seq/README.md] fix t5 examples
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks good to me! I think it's a good idea to add `--source_prefix` for the 5 t5 checkpoints in the examples. We should specify though that it's only for these 5 checkpoints. Let's discuss summarization in the issue.", "As discussed in https://github.com/huggingface/transformers/issues/10733#issuecomment-800123545 updated the summarization example to use cnn_dailymail - now the prefix works for t5! Thank you, @patrickvonplaten ", "Sorry for disturbing you, but has the support for MBart been removed? This model needs source_lang and target_lang arguments, but the scripts don't accept them now.", "They are sill [here](https://github.com/huggingface/transformers/blob/fd1d9f1ab89805fb2a8e773edbc27531b449ddea/examples/seq2seq/run_translation.py#L96).", "Thank you very much! I can copy that part into summarization file.", "Sorry to disturb you again. Even after the copy of the MBart part code, the error occurred.\r\n```py\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_summarization.py\", line 609, in <module>\r\n main()\r\n File \"examples/seq2seq/run_summarization.py\", line 443, in main\r\n train_dataset = train_dataset.map(\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1407, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1378, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples/seq2seq/run_summarization.py\", line 424, in preprocess_function\r\n with tokenizer.as_target_tokenizer():\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/contextlib.py\", line 113, in __enter__\r\n return next(self.gen)\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart_fast.py\", line 214, in as_target_tokenizer\r\n self.set_tgt_lang_special_tokens(self.tgt_lang)\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart_fast.py\", line 240, in set_tgt_lang_special_tokens\r\n suffix_tokens_str = self.convert_ids_to_tokens(self.suffix_tokens)\r\n File \"/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py\", line 286, in convert_ids_to_tokens\r\n index = int(index)\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\r\n```\r\n\r\nThe command is \r\n```sh\r\nCUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_summarization.py --model_name_or_path facebook/mbart-large-cc25 \\\r\n --do_train --do_predict --train_file ../data/lang8_ja_train.csv --test_file ../data/lang8_ja_test.csv \\\r\n--output_dir ../pre_models/mbart_lang8_ja \\\r\n--per_device_train_batch_size=4 --per_device_eval_batch_size=4 \\\r\n--predict_with_generate --text_column errorful_sent --summary_column correct_sent \\\r\n--save_steps=2000 --save_total_limit=3 --overwrite_output_dir \\\r\n--source_lang ja_XX --target_lang ja_XX\r\n```\r\n\r\nAnd the possible reason is that even I input the source and target language, these arguments haven't been sent into prepare_seq2seq_batch function in line 195 tokenization_mbart_fast.py. The `self.src_lang` is `en_XX` and `self.tgt_lang` is `None`.", "Sorry, I found that I missed this part. After copying this part, it works well! 
Thank you very much!\r\n```py\r\n # For translation we set the codes of our source and target languages (only useful for mBART, the others will\r\n# ignore those attributes).\r\nif isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):\r\n if data_args.source_lang is not None:\r\n tokenizer.src_lang = data_args.source_lang\r\n if data_args.target_lang is not None:\r\n tokenizer.tgt_lang = data_args.target_lang\r\n```" ]
1,615
1,616
1,616
CONTRIBUTOR
null
This PR: * switches the summarization example to use CNN/DailyMail as with t5-small it provides high scores out of the box. * fixes T5 examples to include `--source_prefix` - it's **not** optional. If you give it a try you will see that you get 10x worse bleu scores w/o it. w/ `27.6849`, w/o `2.374` * adds a normal translation example w/o the peculiarities of MBart and T5 * reduces the default max samples to 50 so it's much faster to test quickly * fixes the reference to the last custom dataset that I incorrectly added in the first place (was missing a username, but worked locally when I created it w/o it) * removes the 3 `--max*samples` from this README and puts a section on how to use these in the top-level `examples/README.md` @sgugger, @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10734", "html_url": "https://github.com/huggingface/transformers/pull/10734", "diff_url": "https://github.com/huggingface/transformers/pull/10734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10734.patch", "merged_at": 1616086539000 }
https://api.github.com/repos/huggingface/transformers/issues/10733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10733/comments
https://api.github.com/repos/huggingface/transformers/issues/10733/events
https://github.com/huggingface/transformers/issues/10733
832,329,894
MDU6SXNzdWU4MzIzMjk4OTQ=
10,733
[examples run_summarization.py] t5 worse score w/ --source_prefix "summarize: " than w/o
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`--max_train_samples 50` is a tiny sample, I am not surprised that the model doesn't learn anything here. Note that `t5-small` was pretrained on CNN/Dailymail and WMT, but **not** on XSum. So, it makes sense that one gets reasonable results when fine-tuning the model on just 50 translation samples of WMT16 because the model has already seen the whole training data in pretraining. However, the model has never seen XSum in pretraining, so fine-tuning on 50 samples will get us nowhere here I think. We could try to switch to `CNN/Dailymail`. I have fine-tuned the model on the whole corpus for CNN/Dailymail and have gotten good results. In the paper, it was reported that with `t5-small` a ROUGE-2 score of 19.56 can be achieved on CNN/Dailymail. So we should get something like 17 or 18 ROUGE-2 for full fine-tuning. \r\n\r\nAlso, IMO for such low ROUGE number we cannot really say that \"no prefix\" works better than \"with prefix\" because both cases don't work well at all. \r\n\r\nLet's just try it with CNN/Dailymail instead and see what we get. Maybe first with just very few samples & if this doesn't work then let's run one full fine-tuning.", "Definitely a jackpot on the example using a new dataset and too short of training: might be a good idea to add some of your notes to the README as well.\r\n\r\nNew stats, this time on `--dataset_name cnn_dailymail --dataset_config \"3.0.0\"`\r\n\r\n```\r\n# w/o --source_prefix \"summarize: \"\r\n\r\npython examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 --max_val_samples 50\r\n\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_gen_len = 65.24\r\n eval_loss = 2.3276\r\n eval_rouge1 = 25.4707\r\n eval_rouge2 = 7.2334\r\n eval_rougeL = 18.3807\r\n eval_rougeLsum = 23.2505\r\n eval_runtime = 6.4841\r\n eval_samples = 50\r\n eval_samples_per_second = 7.711\r\n\r\n```\r\n\r\n```\r\n# w/ --source_prefix \"summarize: \"\r\n\r\npython examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 --max_val_samples 50 --source_prefix \"summarize: \"\r\n\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_gen_len = 62.38\r\n eval_loss = 2.3243\r\n eval_rouge1 = 30.0675\r\n eval_rouge2 = 10.2052\r\n eval_rougeL = 22.154\r\n eval_rougeLsum = 27.2161\r\n eval_runtime = 6.0876\r\n eval_samples = 50\r\n eval_samples_per_second = 8.213\r\n```\r\n\r\nThis is much better. I will try a longer train sequence next.", "Pretrained with 5000 samples the score goes up nicely, this is 1/100th of the full dataset.\r\n\r\n```\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_gen_len = 61.66\r\n eval_loss = 2.0773\r\n eval_rouge1 = 30.174\r\n eval_rouge2 = 12.0182\r\n eval_rougeL = 23.5012\r\n eval_rougeLsum = 27.3718\r\n eval_runtime = 5.9836\r\n eval_samples = 50\r\n eval_samples_per_second = 8.356\r\n```\r\n\r\nSo I updated https://github.com/huggingface/transformers/pull/10734 with the recommendation you made @patrickvonplaten.\r\n\r\nClosing this." ]
1,615
1,615
1,615
CONTRIBUTOR
null
I don't think the latest incarnation of summarization examples works for t5. I'm lost with all the proposed let's-not-do-anything special for t5, except as you will see from numbers something isn't right: With the latest master: ``` python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval \ --dataset_name xsum --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 \ --max_val_samples 50 ***** eval metrics ***** epoch = 3.0 eval_gen_len = 60.14 eval_loss = 3.3003 eval_rouge1 = 19.3055 eval_rouge2 = 2.4192 eval_rougeL = 13.931 eval_rougeLsum = 16.3446 eval_runtime = 6.2317 eval_samples = 50 eval_samples_per_second = 8.023 ``` Then let's add the required `--source_prefix "summarize: "` ``` python examples/seq2seq/run_summarization.py --model_name_or_path t5-small --do_train --do_eval \ --dataset_name xsum --output_dir /tmp/tst-summarization --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 50 \ --max_val_samples 50 --source_prefix "summarize: " ***** eval metrics ***** epoch = 3.0 eval_gen_len = 52.94 eval_loss = 3.3734 eval_rouge1 = 18.7997 eval_rouge2 = 2.2857 eval_rougeL = 13.4997 eval_rougeLsum = 14.7778 eval_runtime = 5.2697 eval_samples = 50 eval_samples_per_second = 9.488 ``` As you can see the scores are worse than w/o `--source_prefix "summarize: "` and it should be in reverse. Where are we adding `task_specific_params`: ``` "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, ``` so that the model knows to do the right thing? **edit**: found where this was last discussed: https://github.com/huggingface/transformers/pull/10133#issuecomment-778071812 So should `README.md` just say that currently `run_summarization.py` cannot be used for T5 models and then find another summarization model instead. Of course, a lot of these repetitive breakages would have been avoided if we had quality-measuring tests for examples - perhaps when the dust settles around the examples we could have some of those added. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10733/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10732/comments
https://api.github.com/repos/huggingface/transformers/issues/10732/events
https://github.com/huggingface/transformers/issues/10732
832,315,923
MDU6SXNzdWU4MzIzMTU5MjM=
10,732
run_clm.py does not work with any other block_size other than 1024
{ "login": "sytelus", "id": 2096835, "node_id": "MDQ6VXNlcjIwOTY4MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2096835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sytelus", "html_url": "https://github.com/sytelus", "followers_url": "https://api.github.com/users/sytelus/followers", "following_url": "https://api.github.com/users/sytelus/following{/other_user}", "gists_url": "https://api.github.com/users/sytelus/gists{/gist_id}", "starred_url": "https://api.github.com/users/sytelus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sytelus/subscriptions", "organizations_url": "https://api.github.com/users/sytelus/orgs", "repos_url": "https://api.github.com/users/sytelus/repos", "events_url": "https://api.github.com/users/sytelus/events{/privacy}", "received_events_url": "https://api.github.com/users/sytelus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed! Would you mind making a PR with that change since you found the correct fix?", "Sure, I'll get that prepared!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
**Note:** This issue can be fixed with a one-character change as described in the last section. ## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: Causal language modelling * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use a GPT-based model which has a block_size different from 1024 2. Try to train or fine-tune it with run_clm.py, setting block_size in data_args. In GPU mode, you will get the following error: ``` RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` In CPU mode, you will get the following error: ``` index out of range in self ``` ## Expected behavior The above error should not occur. ## Cause and Proposed Fix The issue is that block_size on [line 337](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py#L337) always gets set to 1024 because of wrong indentation: ``` if data_args.block_size is None: block_size = tokenizer.model_max_length if block_size > 1024: logger.warn( f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). " "Picking 1024 instead. You can change that default value by passing --block_size xxx." ) block_size = 1024 # <<< THIS LINE NEEDS TO BE INDENTED!!! ``` So just indenting that line should fix the issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10732/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10731/comments
https://api.github.com/repos/huggingface/transformers/issues/10731/events
https://github.com/huggingface/transformers/issues/10731
832,302,379
MDU6SXNzdWU4MzIzMDIzNzk=
10,731
Fix log message for training from checkpoint with global step
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Yes, that sounds clearer. Would you mind making a PR with this?", "Will make time this week. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
# 🚀 Feature request I think the log message here is wrong: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L978 I think it should read something along these lines: ``` logger.info( f" Will skip the first {epochs_trained} epochs then the first {steps_trained_in_current_epoch} " "batches in the current epoch {epochs_trained + 1}." ) ``` The batches that are skipped are not skipped in the first epoch since the training was already done for `epochs_trained`. As the variable name indicates, the training is going to skip the steps already trained in the current epoch. The current epoch is `epochs_trained + 1`. ## Motivation The log message was confusing. I went and traced the code to ensure that it does the right skipping (which I think it does).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10731/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10730/comments
https://api.github.com/repos/huggingface/transformers/issues/10730/events
https://github.com/huggingface/transformers/issues/10730
832,245,861
MDU6SXNzdWU4MzIyNDU4NjE=
10,730
Stacked Roberta run_mlm.py
{ "login": "matteomedioli", "id": 31959430, "node_id": "MDQ6VXNlcjMxOTU5NDMw", "avatar_url": "https://avatars.githubusercontent.com/u/31959430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matteomedioli", "html_url": "https://github.com/matteomedioli", "followers_url": "https://api.github.com/users/matteomedioli/followers", "following_url": "https://api.github.com/users/matteomedioli/following{/other_user}", "gists_url": "https://api.github.com/users/matteomedioli/gists{/gist_id}", "starred_url": "https://api.github.com/users/matteomedioli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matteomedioli/subscriptions", "organizations_url": "https://api.github.com/users/matteomedioli/orgs", "repos_url": "https://api.github.com/users/matteomedioli/repos", "events_url": "https://api.github.com/users/matteomedioli/events{/privacy}", "received_events_url": "https://api.github.com/users/matteomedioli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,616
1,616
NONE
null
# 🖥 Benchmarking `transformers` ## Benchmark I tried to run [transformers/experiments/language_modeling/run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) in order to train Roberta from scratch on the Wikipedia dataset. ## Set-up Transformers version: 4.4.0.dev0 4 GPUs: NVIDIA Tesla V100 16 GB; 4096 Bit, PCI Express 3.0 x16. Used bash run script: ```bash #!/bin/bash export CUDA_LAUNCH_BLOCKING=1 source /data/env/bin/activate nohup python3 transformers/examples/language-modeling/run_mlm.py \ --dataset_name wikipedia \ --tokenizer_name roberta-base \ --model_type roberta \ --dataset_config_name 20200501.en \ --do_train \ --do_eval \ --learning_rate 1e-5 \ --num_train_epochs 5 \ --save_steps 5000 \ --warmup_steps=10000 \ --output_dir /data/models/wikipedia_roberta \ & ``` Tested with the script and also directly with the python command (without nohup). ## Results The code seems stuck in `trainer.py`, at the first compute_loss step, when performing inference: ```python def compute_loss(self, model, inputs, return_outputs=False): """ How the loss is computed by Trainer. By default, all models return the loss in the first element. Subclass and override for custom behavior. """ if self.label_smoother is not None and "labels" in inputs: labels = inputs.pop("labels") else: labels = None print("STACKED HERE") outputs = model(**inputs) ... ``` I can't understand which running parameters are wrong. Could inference for a single batch take more than 30 mins? Thanks in advance! UPDATE: Without --warmup_steps it is working.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10730/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10729/comments
https://api.github.com/repos/huggingface/transformers/issues/10729/events
https://github.com/huggingface/transformers/issues/10729
832,211,831
MDU6SXNzdWU4MzIyMTE4MzE=
10,729
Multi-node training with the latest transformers/examples code
{ "login": "phqtuyen", "id": 13807015, "node_id": "MDQ6VXNlcjEzODA3MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/13807015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phqtuyen", "html_url": "https://github.com/phqtuyen", "followers_url": "https://api.github.com/users/phqtuyen/followers", "following_url": "https://api.github.com/users/phqtuyen/following{/other_user}", "gists_url": "https://api.github.com/users/phqtuyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/phqtuyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phqtuyen/subscriptions", "organizations_url": "https://api.github.com/users/phqtuyen/orgs", "repos_url": "https://api.github.com/users/phqtuyen/repos", "events_url": "https://api.github.com/users/phqtuyen/events{/privacy}", "received_events_url": "https://api.github.com/users/phqtuyen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "I'm unsure what you think is not supported. Launching any of the example scripts with \r\n```\r\npython -m torch.distributed.launch --nproc_per_node=xxx \\\r\n --node_rank=$THIS_MACHINE_INDEX \\\r\n --master_addr=\"192.168.1.1\" \\\r\n --master_port=1234 \\\r\n run_xxx.py\r\n```\r\nis going to work.", "Hi @sgugger , thank you for your answer. My understanding is that \"--nproc_per_node\" is the number of gpus will be used for the launched process? Also, if I want to launch another training node, I assume I will just run the same command with \"node_rank=1\"?", "Yes, that is the number of GPUs.\r\n\r\nYou can refer to the [PyTorch documentation](https://pytorch.org/docs/stable/distributed.html#launch-utility) for all the arguments of the PyTorch launcher as all the example scripts are fully compatible with it. You will also need to pass `--nnodes=$NUMBER_OF_NODES` for completeness.", "Thanks, I am able to make it work now." ]
1,615
1,621
1,615
NONE
null
Hi, I am trying to follow the instructions on how to use the examples from [https://huggingface.co/transformers/examples.html] and I noticed there is a difference between version 4.3 and 1.2 in the distributed training section. In the older version it seems that it supports multi-node training with " --node_rank=$THIS_MACHINE_INDEX \ --master_addr="192.168.1.1" \ --master_port=1234 run_bert_classifier.py \" But these options no longer exist in the latest tutorial. Does the latest version still support multi-node training? Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10729/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10728/comments
https://api.github.com/repos/huggingface/transformers/issues/10728/events
https://github.com/huggingface/transformers/pull/10728
832,195,843
MDExOlB1bGxSZXF1ZXN0NTkzMzk1NDgy
10,728
[Issue template] need to update/extend who to tag
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For broken models you would just tag the model author(s) – we'll add a feature to the model hub to tag someone in a conversation thread, but in the meantime you can use the Forum to ping them", "That's a great idea to tag the model author - how would a user know the model author's corresponding forum username? I guess this is temporary so probably can be somehow figured out...", "We use SSO so one's username on Forum is guaranteed to be one's username on hf.co", "Ah, that's perfect then! Thank you!" ]
1,615
1,616
1,616
CONTRIBUTOR
null
This PR * [x] adds an entry for what to do when someone has model hub issues - thank you, @julien-c! TODO/Input needed: * [ ] need to update who to tag for `tensorflow` Issues @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10728", "html_url": "https://github.com/huggingface/transformers/pull/10728", "diff_url": "https://github.com/huggingface/transformers/pull/10728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10728.patch", "merged_at": 1616005994000 }
https://api.github.com/repos/huggingface/transformers/issues/10727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10727/comments
https://api.github.com/repos/huggingface/transformers/issues/10727/events
https://github.com/huggingface/transformers/pull/10727
832,172,939
MDExOlB1bGxSZXF1ZXN0NTkzMzc2NTM0
10,727
Rename zero-shot pipeline multi_class argument
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
Renames the `multi_class` argument to `multi_label` in the `ZeroShotClassificationPipeline` and adds a deprecation warning to the former. Typically, "multi-label classification" is used to refer to this type of classification (where each class is evaluated independently). The name is changed in the zero-shot distillation script as well. Resolves #6668.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10727/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10727", "html_url": "https://github.com/huggingface/transformers/pull/10727", "diff_url": "https://github.com/huggingface/transformers/pull/10727.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10727.patch", "merged_at": 1615845767000 }
https://api.github.com/repos/huggingface/transformers/issues/10726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10726/comments
https://api.github.com/repos/huggingface/transformers/issues/10726/events
https://github.com/huggingface/transformers/issues/10726
832,137,409
MDU6SXNzdWU4MzIxMzc0MDk=
10,726
broken models on the hub
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I fixed this model (it had `bart` as a `model_type` instead of `mbart` in `config.json`). \r\n\r\nBut the point of this issue is that perhaps we could add to a todo list to run a cronjob that will validate the models and tokenizers? \r\n\r\n@theo-m, is this something that would fit with the project you're currently working on, since you will have to run these things anyway for each model? Just asking - not piling it up on you.\r\n\r\nIn fact if I understand the idea correctly this will be a requirement for your things to work, right?", "We would not be aiming for 100% coverage, but yes, getting a sense of what's runnable on the hub would be awesome. \r\n\r\nMaybe a once-a-week CI job, as I expect the run to be terribly long/expensive? cc infra people @julien-c @n1t0 ", "I think we'll hook something into the git-push-event driven ML analytics system we've been talking about internally\r\n\r\nThat's a medium-term goal though", "It's a different thing though: on-push would ensure integrity on upload, which is indeed needed, but a recurrent job would enable us to detect regression in what the lib can support and give estimates on what is actually runnable.", "The lib can change after the upload was made and the model/tokenizer stop working. We have seen this before with older models.\r\n\r\nI think @theo-m you're saying the same thing. ", "@theo-m, btw the low hanging fruit would be to just validate that the listed in \"use in transformers\" instructions indeed work. i.e. we just load the model and tokenizer and do nothing with it if it works and do something with it if it doesn't. ", "Note that this is not necessarily a low hanging fruit (depending on your definition of a low hanging fruit 😂) given that:\r\n- we have 7,000+ models whose total weights represent multiple TBs of data\r\n- they change over time", "the lowest hanging fruit is loading all that can be associated to a pipeline and run a single example in the associated pipeline, the results of this are stronger than just loading\r\n\r\nand yes it sure is a big big job, but it's the best we can do in order to build a good understanding of what is runnable on the hub - _in fine_ for non hf we won't be able to do much, but we can't give guarantees to code we don't manage.", "I meant that just loading a model / tokenizer is cheaper/faster/requires almost 0 extra code to write - hence low-hanging fruit.\r\n\r\nI hear you that the hub is huge, a little bit at a time. It would have been the same code to validate 10 models or 7K models if there is no urgency to complete it fast, it just would take much much longer to complete.\r\n\r\n> * they change over time\r\n\r\nThat was exactly my point, they and the codebase too, so it's not enough to check it once, even if we track when it was changed and when it was validated last.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
Go to https://huggingface.co/sshleifer/distill-mbart-en-ro-12-6 click on "use in transformers", copy-n-paste and nope can't use this in `transformers`: ``` python -c 'from transformers import AutoTokenizer; AutoTokenizer.from_pretrained("sshleifer/distill-mbart-en-ro-12-6")' Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 410, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1704, in from_pretrained return cls._from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1717, in _from_pretrained slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1776, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/roberta/tokenization_roberta.py", line 159, in __init__ super().__init__( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__ with open(vocab_file, encoding="utf-8") as vocab_handle: TypeError: expected str, bytes or os.PathLike object, not NoneType ``` this is with the latest master. These for example I tested to work fine: - `sshleifer/distill-mbart-en-ro-12-4` - `sshleifer/distill-mbart-en-ro-12-9` Perhaps we need a sort of CI that goes over the public models, validates that `run in transformers` code succeeds and sends an alert if it doesn't? We have no idea how many other models are broken on the hub right now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10726/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10726/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10725/comments
https://api.github.com/repos/huggingface/transformers/issues/10725/events
https://github.com/huggingface/transformers/pull/10725
832,115,929
MDExOlB1bGxSZXF1ZXN0NTkzMzI5MzQ2
10,725
Flax testing should not run the full torch test suite
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,615
1,619
1,615
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a new `run_tests_torch_and_flax` circle ci job so that the flax test don't have to run the full pytorch test suite anymore. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10725", "html_url": "https://github.com/huggingface/transformers/pull/10725", "diff_url": "https://github.com/huggingface/transformers/pull/10725.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10725.patch", "merged_at": 1615871137000 }
https://api.github.com/repos/huggingface/transformers/issues/10724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10724/comments
https://api.github.com/repos/huggingface/transformers/issues/10724/events
https://github.com/huggingface/transformers/pull/10724
832,073,364
MDExOlB1bGxSZXF1ZXN0NTkzMjk0MDc5
10,724
Add minimum version check in examples
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds a minimum version check to all examples, in order to avoid the waves of issues created each time they use a functionality that was just released into `Trainer`. The script will immediately error if the version of Transformers does not match the required minimum version. At each release, a script will set that to the version released automatically (work in progress for a second PR with other release utils) so that the examples associated with one tag will require the minimum version of that tag. The user can still remove that line to avoid the error (at their own risks). The error points out to: - the instruction for a source install - the examples README that now lists all examples folders with the various version tags.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10724", "html_url": "https://github.com/huggingface/transformers/pull/10724", "diff_url": "https://github.com/huggingface/transformers/pull/10724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10724.patch", "merged_at": 1615850995000 }
https://api.github.com/repos/huggingface/transformers/issues/10723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10723/comments
https://api.github.com/repos/huggingface/transformers/issues/10723/events
https://github.com/huggingface/transformers/issues/10723
832,007,146
MDU6SXNzdWU4MzIwMDcxNDY=
10,723
Train tokenizer for Deberta
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "HuggingFace has another library called [tokenizers](https://github.com/huggingface/tokenizers) especially for this.", "Currently, the training of Deberta Tokenizer is not supported directly by huggingface. Of course, you can create the required files by yourself from BPETokenizer training output, but you could also simply wait until #10703 is merged into the master branch and released. :-)", "How would be the process of creating the required files from the BPETokenizer training output? @cronoik I'd really appreciate a little bit of explanation, as I tried to do so and I failed.", "You can save me a lot of time by simply using the mentioned patch above. Just copy the DebertaTokenizer class to your runtime.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Hi, I would like to know how I can train a DeBERTa tokenizer. From the paper I saw that it uses a BPE tokenizer, but the BPETokenizer from huggingface/tokenizers doesn't work for this. Could you recommend another implementation or library, or a correct configuration of the huggingface/tokenizers implementation, so that I can train a DeBERTa model from scratch?
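For reference, a minimal sketch of training a byte-level BPE tokenizer with huggingface/tokenizers, along the lines suggested in the comments (the corpus file, vocabulary size, and special tokens below are placeholders, not DeBERTa's exact settings):

```python
# Minimal sketch: train a byte-level BPE tokenizer with huggingface/tokenizers.
# "corpus.txt", the vocab size, and the special tokens are assumptions.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],
    vocab_size=50_265,
    min_frequency=2,
    special_tokens=["[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]"],
)
tokenizer.save_model("my_tokenizer")  # writes vocab.json and merges.txt
```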
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10723/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10722/comments
https://api.github.com/repos/huggingface/transformers/issues/10722/events
https://github.com/huggingface/transformers/issues/10722
831,955,586
MDU6SXNzdWU4MzE5NTU1ODY=
10,722
iterative evaluation in Trainer to save memory
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This would result in an inaccurate metric in most cases, as metric functions are seldom linear. I'm afraid you will have to run evaluation on smaller chunks or use your own evaluation loop.", "Yes of course I thought maybe to gather a reduced version of the predictions before actually computing the metric\r\n\r\ne.g. for sentence level accuracy, in my example, a bool array of shape `(200206, )` where the boolean value represents the accuracy of the output (i.e. `predictions == labels`). \r\n\r\nThe actual `compute_metrics` would only have to reduce this array to a single value (using `np.mean` in my example).", "You can do that using `Trainer` if your model returns that. `Trainer` is too generic to be able to guess that in this case it should gather a reduced version of the predictions (and how would it do it?). Otherwise writing the evaluation loop yourself is super easy (there is one example in [run_glue_no_trainer](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue_no_trainer.py) for instance).", "Ok, thanks for the advice :)" ]
1,615
1,615
1,615
CONTRIBUTOR
null
# 🚀 Feature request In Trainer.prediction_loop, compute metrics batch by batch (or allow tweaking this number) instead of gathering all the predictions in a single array (nb: I'm aware of `eval_accumulation_steps` but it only helps save GPU memory) ## Motivation Running a seq2seq evaluation on a dataset of 200K examples with a vocabulary of 50K and a context of 77 words, gathering all of the output amounts to an array of 2.77 TiB, which I'm not sure everyone can afford: ``` Traceback (most recent call last): File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/bin/clip-train", line 33, in <module> sys.exit(load_entry_point('clip', 'console_scripts', 'clip-train')()) File "/mnt/beegfs/home/lerner/CLIP/clip/train.py", line 196, in main trainer.train(**config.get("checkpoint", {})) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 983, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1058, in _maybe_log_save_evaluate metrics = self.evaluate() File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_seq2seq.py", line 74, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1513, in evaluate metric_key_prefix=metric_key_prefix, File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1644, in prediction_loop preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds")) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 330, in add_arrays self._storage = nested_new_like(arrays, self.total_samples, padding_index=self.padding_index) File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 238, in nested_new_like return np.full_like(arrays, padding_index, shape=(num_samples, *arrays.shape[1:])) File "<__array_function__ internals>", line 6, in full_like File "/mnt/beegfs/home/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/numpy/core/numeric.py", line 382, in full_like res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape) File "<__array_function__ internals>", line 6, in empty_like MemoryError: Unable to allocate 2.77 TiB for an array with shape (200206, 77, 49408) and data type float32 60%|██████ | 3000/5000 [17:53<11:55, 2.80it/s] ``` ### Who can help Library: * trainer: @sgugger
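A rough sketch of the manual evaluation loop suggested in the comments, reducing each batch to a small per-example array instead of keeping the full logits (`model`, `eval_dataloader`, and `device` are assumed to exist already):

```python
import numpy as np
import torch

# Sketch: accumulate only a boolean correctness array per batch rather than
# the full [batch, seq, vocab] logits, then reduce it to a single metric.
model.eval()
per_token_correct = []
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        logits = model(**batch).logits
    preds = logits.argmax(dim=-1)
    per_token_correct.append((preds == batch["labels"]).cpu().numpy().reshape(-1))

accuracy = float(np.mean(np.concatenate(per_token_correct)))
print(f"token-level accuracy: {accuracy:.4f}")
```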
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10722/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10721/comments
https://api.github.com/repos/huggingface/transformers/issues/10721/events
https://github.com/huggingface/transformers/issues/10721
831,933,725
MDU6SXNzdWU4MzE5MzM3MjU=
10,721
Run Time Error: RuntimeError: Expected hidden[0] size (2, 1, 512), got [2, 128, 512] - Seq2Seq Model with PreTrained BERT Model
{ "login": "Ninja16180", "id": 61466835, "node_id": "MDQ6VXNlcjYxNDY2ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/61466835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ninja16180", "html_url": "https://github.com/Ninja16180", "followers_url": "https://api.github.com/users/Ninja16180/followers", "following_url": "https://api.github.com/users/Ninja16180/following{/other_user}", "gists_url": "https://api.github.com/users/Ninja16180/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ninja16180/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ninja16180/subscriptions", "organizations_url": "https://api.github.com/users/Ninja16180/orgs", "repos_url": "https://api.github.com/users/Ninja16180/repos", "events_url": "https://api.github.com/users/Ninja16180/events{/privacy}", "received_events_url": "https://api.github.com/users/Ninja16180/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Kindly help in resolving the issue.\r\nIt will help to build a seq2seq conversation model using pretrained bert model.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Maybe of interest to @patrickvonplaten and @patil-suraj ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,623
1,623
NONE
null
Hi, I am facing this run time error while training a seq2seq model with pretrained bert model. ``` RuntimeError Traceback (most recent call last) <ipython-input-63-472071541d41> in <module>() 8 start_time = time.time() 9 ---> 10 train_loss = train(model, train_iterator, optimizer, criterion, CLIP) 11 valid_loss = evaluate(model, valid_iterator, criterion) 12 8 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_hidden_size(self, hx, expected_hidden_size, msg) 221 msg: str = 'Expected hidden size {}, got {}') -> None: 222 if hx.size() != expected_hidden_size: --> 223 raise RuntimeError(msg.format(expected_hidden_size, list(hx.size()))) 224 225 def check_forward_args(self, input: Tensor, hidden: Tensor, batch_sizes: Optional[Tensor]): RuntimeError: Expected hidden[0] size (2, 1, 512), got [2, 128, 512] ``` Related code snippets: ``` from torchtext.legacy.data import BucketIterator,TabularDataset BATCH_SIZE = 128 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') train_iterator, valid_iterator = data.BucketIterator.splits( (train_data, valid_data), batch_size = BATCH_SIZE, device = device) ``` # Encoder class Encoder(nn.Module): def __init__(self, bert, hid_dim, n_layers, dropout): super().__init__() self.hid_dim = hid_dim self.n_layers = n_layers self.bert = bert emb_dim = bert.config.to_dict()['hidden_size'] self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, batch_first = True,dropout = dropout) self.dropout = nn.Dropout(dropout) def forward(self, sent1): #sent1 = [sent1 len, batch size] with torch.no_grad(): embedded = self.bert(sent1)[0] #embedded = [sent1 len, batch size, emb dim] outputs, (hidden, cell) = self.rnn(embedded) #outputs = [sent1 len, batch size, hid dim * n directions] #hidden = [n layers * n directions, batch size, hid dim] #cell = [n layers * n directions, batch size, hid dim] #outputs are always from the top hidden layer return hidden, cell The detailed code with error description is available here for your reference: https://github.com/Ninja16180/BERT/blob/main/Training_Seq2Seq_Model_using_Pre-Trained_BERT_Model.ipynb Kindly help me in resolving the issue Thanks in advance! ``` ```
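For what it may be worth, one frequent cause of exactly this mismatch is dimension ordering: with `batch_first=True` the LSTM expects `[batch, seq, emb]` inputs, so a single decoding step has to be `[batch, 1, emb]` rather than `[1, batch, emb]`. A small self-contained illustration with shapes chosen to match the error message (this is a guess, not a confirmed diagnosis of the linked notebook):

```python
import torch

batch_size, emb_dim, hid_dim, n_layers = 128, 768, 512, 2
rnn = torch.nn.LSTM(emb_dim, hid_dim, n_layers, batch_first=True)

# hidden/cell are always [n_layers, batch, hid_dim], regardless of batch_first.
hidden = torch.zeros(n_layers, batch_size, hid_dim)
cell = torch.zeros(n_layers, batch_size, hid_dim)

# Correct single-step input with batch_first=True: [batch, seq=1, emb].
# Passing [1, batch, emb] instead makes the LSTM expect a hidden state of size
# (2, 1, 512) and raise "Expected hidden[0] size (2, 1, 512), got [2, 128, 512]".
step_input = torch.randn(batch_size, 1, emb_dim)
output, (hidden, cell) = rnn(step_input, (hidden, cell))
print(output.shape)  # torch.Size([128, 1, 512])
```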
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10721/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10720/comments
https://api.github.com/repos/huggingface/transformers/issues/10720/events
https://github.com/huggingface/transformers/issues/10720
831,926,683
MDU6SXNzdWU4MzE5MjY2ODM=
10,720
Cannot use custom roberta tokenizer with run_mlm_wwm.py
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That example only runs with `BERT`, which is why it has been moved to a separate research project.", "I tried this script with albert and it worked, which script should I use to train a Roberta model from scratch with Whole word Masking??", "Is that intended:\r\n`--model_type deberta`\r\n?\r\n@alexvaca0 ", "Sorry, that was from the previous launch script, now it is roberta @cronoik ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.dev0 - Platform: Ubuntu 18 - Python version: 3.7 - PyTorch version (GPU?): 1.7.1 (YES) - Tensorflow version (GPU?): - Using GPU in script?: YES - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten @LysandreJik @ <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information When I try to use the BPE Tokenizer trained with huggingface/tokenizers with Roberta directly, it works: ```{python} tok = RobertaTokenizer.from_pretrained("bpe_tokenizer_0903", use_fast=True) ``` However, when I try to use this same tokenizer for training a language model, it fails: ```{bash} python -u transformers/examples/language-modeling/run_mlm_wwm.py \ --model_type deberta \ --config_name ./bpe_tokenizer_0903/config.json \ --tokenizer_name ./bpe_tokenizer_0903 \ --train_file ./prueba_tr.txt \ --validation_file ./final_valid.txt \ --output_dir ./roberta_1102 \ --overwrite_output_dir \ --do_train \ --do_eval \ --evaluation_strategy steps \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 2 \ --gradient_accumulation_steps 2 \ --learning_rate 6e-4 \ --save_steps 10 \ --logging_steps 10 \ --overwrite_cache \ --max_seq_length 128 \ --eval_accumulation_steps 10 \ --load_best_model_at_end \ --run_name deberta_0902 \ --save_total_limit 10 --warmup_steps 1750 \ --adam_beta2 0.98 --adam_epsilon 1e-6 --weight_decay 0.01 --num_train_epochs 1 ``` The error message is the following: ``` Traceback (most recent call last): File "transformers/examples/language-modeling/run_mlm_wwm.py", line 399, in <module> main() File "transformers/examples/language-modeling/run_mlm_wwm.py", line 286, in main use_fast=model_args.use_fast_tokenizer, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/auto/tokenization_auto.py", line 401, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py", line 1719, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py", line 1790, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/roberta/tokenization_roberta_fast.py", line 173, in __init__ **kwargs, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/gpt2/tokenization_gpt2_fast.py", line 145, in __init__ **kwargs, File "/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_fast.py", line 87, in __init__ fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) Exception: data did not match any variant of untagged enum ModelWrapper at line 1 column 1138661 ``` Why doesn't it fail when I try to load the tokenizer with RobertaTokenizer.from_pretrained() but it does fail when I try to run run_mlm_wwm.py ? @sgugger @patrickvonplaten @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10720/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10719/comments
https://api.github.com/repos/huggingface/transformers/issues/10719/events
https://github.com/huggingface/transformers/pull/10719
831,858,368
MDExOlB1bGxSZXF1ZXN0NTkzMTE0MTI3
10,719
[WIP] Extend LayoutLMTokenizer to handle bounding boxes
{ "login": "valentinkoe", "id": 8581199, "node_id": "MDQ6VXNlcjg1ODExOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/8581199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/valentinkoe", "html_url": "https://github.com/valentinkoe", "followers_url": "https://api.github.com/users/valentinkoe/followers", "following_url": "https://api.github.com/users/valentinkoe/following{/other_user}", "gists_url": "https://api.github.com/users/valentinkoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/valentinkoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/valentinkoe/subscriptions", "organizations_url": "https://api.github.com/users/valentinkoe/orgs", "repos_url": "https://api.github.com/users/valentinkoe/repos", "events_url": "https://api.github.com/users/valentinkoe/events{/privacy}", "received_events_url": "https://api.github.com/users/valentinkoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The fixup target fails because `get_modified_files.py` reports `tokenization_layoutlm_fast.py` as differing from the master branch.\r\nThen calling e.g. `black` to format that file fails because it doesn't exist anymore. How would I fix that?", "Hello! Thank you for your PR, and sorry for getting back to you so late. First of all, incredible work on doing the implementation and on overriding the tests that should be.\r\n\r\nI wonder if the approach here shouldn't make use of the recently introduced feature processors. @NielsRogge you have had extensive experience with feature processors and you were part of the initial conversation, what do you think would be the best approach here? \r\n\r\nIt's a bit different to handling images as it's simply handling the bbox, so I might be wrong here.\r\n\r\nAs a high level overview, I'm not keen on removing the fast tokenizer, and I'm wondering if we really need to accept two sequences or if LayoutLM is only made for single sequences - I haven't played with the model, so please tell me if I'm mistaken.\r\n\r\nAlso cc @sgugger and @patil-suraj who might have some insights.", "Yes I think you should look at the design used for `SpeechToText` or `Wav2Vec2`: there is a processor that combines a tokenizer and a feature extractor in those models. We should do the same here: leave the tokenizer unchanged and add a feature extractor to treat the bounding boxes separately, then merge the two in a `LayoutLMProcessor`.", "Thank you for the input, I wasn't aware of feature processors. It sounds like this could be a way nicer solution here, I agree.\r\n\r\n> I'm wondering if we really need to accept two sequences or if LayoutLM is only made for single sequences\r\n\r\nThe basic difference between LayoutLM and BERT is that the additional `bbox` input is [added to the embeddings](https://github.com/huggingface/transformers/blob/master/src/transformers/models/layoutlm/modeling_layoutlm.py#L104-L126).\r\n\r\nSo the two sequences are processed slightly different inside the model. However, there's a one to one relationship between their items. That's also why the processing of the `bbox` sequence depends on how the tokenizer splits the input. In case of a split into N sub-tokens the corresponding bounding box is repeated N times to retain the one to one relationship.\r\nWill this be possible with a feature processor? From a first glance I'm not too sure but I might be wrong. Can someone clarify?\r\n\r\n> As a high level overview, I'm not keen on removing the fast tokenizer\r\n\r\nI removed the fast tokenizer to first discuss if this is a suitable approach before investing more time. Eventually, I planned to add support for it as well.", "> However, there's a one to one relationship between their items. That's also why the processing of the bbox sequence depends on how the tokenizer splits the input. In case of a split into N sub-tokens the corresponding bounding box is repeated N times to retain the one to one relationship.\r\n\r\nIn the fast tokenizer, you can rely on the `word_ids` method of the `BatchEncoding` (the type of the return of the tokenizer) to get back the word associated to each token. For the slow tokenizer you may have to compute it. 
\r\n\r\nThe workflow I see is: the tokenizer returns a `BatchEnconding` with `input_ids`, `attention_mask` etc (like a usual tokenzier) and a field containing the mapping token to word, then the processor will extract that field form the batch encoding and pass it to the feature extractor responsible for the bounding boxes, so the proper repetition can happen. This way we still get a nice separation for the two modalities in two different objects.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "This is not a priority for me right now and correctly marked as stale. I didn't forget about it, though and hope to be able to come back to it in a near future." ]
1,615
1,622
1,621
CONTRIBUTOR
null
# What does this PR do? LayoutLMTokenizer does not take care properly of additional model input `bbox` (Bounding Boxes for words/tokens on a document), see https://github.com/huggingface/transformers/issues/10349. With this PR, LayoutLMTokenizer will take care of bounding boxes when doing tokenization, that is repeating a bounding box for a split text as is done [in the official LayoutLM code](https://github.com/microsoft/unilm/blob/23a7ea35b55279a171a118ac767e863aa92e692c/layoutlm/layoutlm/data/funsd.py#L252). Additionally, bounding box coordinates may be normalized to a target width and height. `LayoutLMTokenizerFast` is removed as it is currently only a copy of `BertTokenizerFast` and does not have the added functionality yet. Marked as WIP as I'm not sure this is the best way to tackle this problem, please see the discussion in the linked issue. Fixes #10349
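A rough sketch of the word-to-token box alignment discussed in the review comments, using the fast tokenizer's `word_ids()` (the checkpoint name, words, and boxes below are illustrative assumptions, not part of this PR):

```python
from transformers import LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "Number:", "12345"]                            # hypothetical OCR output
boxes = [[10, 10, 80, 30], [90, 10, 170, 30], [180, 10, 240, 30]]  # one box per word

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
word_ids = encoding.word_ids(batch_index=0)
# Repeat each word's box for every sub-token; special tokens get a dummy box.
token_boxes = [boxes[i] if i is not None else [0, 0, 0, 0] for i in word_ids]
```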
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10719/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10719", "html_url": "https://github.com/huggingface/transformers/pull/10719", "diff_url": "https://github.com/huggingface/transformers/pull/10719.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10719.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10718/comments
https://api.github.com/repos/huggingface/transformers/issues/10718/events
https://github.com/huggingface/transformers/pull/10718
831,838,689
MDExOlB1bGxSZXF1ZXN0NTkzMDk3NTMy
10,718
Fix backward compatibility with EvaluationStrategy
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? As mentioned in #10666, the switch from `EvaluationStrategy` to `IntervalStrategy` is not fully backward compatible. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10718", "html_url": "https://github.com/huggingface/transformers/pull/10718", "diff_url": "https://github.com/huggingface/transformers/pull/10718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10718.patch", "merged_at": 1615818038000 }
https://api.github.com/repos/huggingface/transformers/issues/10717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10717/comments
https://api.github.com/repos/huggingface/transformers/issues/10717/events
https://github.com/huggingface/transformers/issues/10717
831,780,165
MDU6SXNzdWU4MzE3ODAxNjU=
10,717
How can I get the exact position von answers?
{ "login": "ahnz7", "id": 65608766, "node_id": "MDQ6VXNlcjY1NjA4NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/65608766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahnz7", "html_url": "https://github.com/ahnz7", "followers_url": "https://api.github.com/users/ahnz7/followers", "following_url": "https://api.github.com/users/ahnz7/following{/other_user}", "gists_url": "https://api.github.com/users/ahnz7/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahnz7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahnz7/subscriptions", "organizations_url": "https://api.github.com/users/ahnz7/orgs", "repos_url": "https://api.github.com/users/ahnz7/repos", "events_url": "https://api.github.com/users/ahnz7/events{/privacy}", "received_events_url": "https://api.github.com/users/ahnz7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!", "> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\n> Could you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n> \r\n> Thanks!\r\n\r\nokay, my bad\r\n" ]
1,615
1,615
1,615
NONE
null
I want to load a local model for a question answering task. I have copied most of the code from the Pipeline implementation, as follows: model = MyBert() checkpoint = torch.load('./training_mode_aim/checkpoint.pth.tar') model.load_state_dict(checkpoint['model']) model.eval() #read the context in local file texts = utils.readTxt() question = Question tokenizer = BertTokenizerFast.from_pretrained('') all_answers = [] for text in texts[:1]: encoding = tokenizer(question,text,truncation = True, max_length = 512,padding = True,stride = 256, return_tensors = 'np', return_overflowing_tokens=True, return_token_type_ids=True, return_offsets_mapping=True, return_special_tokens_mask=True,) num_span = len(encoding['input_ids']) answers = [] for span_idx in range(num_span): _,start_logits,end_logits = model(torch.tensor([encoding['input_ids'][span_idx]]), torch.tensor([encoding['attention_mask'][span_idx]]), torch.tensor([encoding['token_type_ids'][span_idx]])) with torch.no_grad(): start_logits,end_logits = start_logits.cpu().numpy(),end_logits.cpu().numpy() undesired_tokens = np.abs(np.array(encoding['token_type_ids'][span_idx]) - 1) & np.array(encoding['attention_mask'][span_idx]) undesired_tokens_mask = undesired_tokens == 0.0 start = np.where(undesired_tokens_mask,start_logits, -10000.0) end = np.where(undesired_tokens_mask, end_logits, -10000.0) start = np.exp(start - np.log(np.sum(np.exp(start), axis=-1, keepdims=True))) end = np.exp(end- np.log(np.sum(np.exp(end), axis=-1, keepdims=True))) starts,ends,scores = decode(start,end,5,128) print('starts: {}, end: {}'.format(starts,ends)) answers += [{ 'score':score.item(), 'start':encoding.token_to_word(s), 'end':encoding.token_to_word(e)} for s,e,score in zip(starts,ends,scores)] answers = sorted(answers, key=lambda x: x["score"], reverse=True)[: 5] print(answers) With this I get the relative positions within each sub-encoding, but I ultimately want the absolute positions of the answers in the original text. Does anyone know how to solve this?
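One way to recover absolute positions is via the offset mapping that is already being requested in the snippet above; a rough sketch continuing from those variables (`encoding`, `span_idx`, `text`, and a predicted token span `(s, e)` are assumed to come from that code):

```python
# Sketch: map a predicted token span inside one overflow window back to
# character positions in the original `text` via the offset mapping.
offsets = encoding["offset_mapping"][span_idx]
char_start = int(offsets[s][0])   # character index where the answer starts in `text`
char_end = int(offsets[e][1])     # character index just past the answer's end
answer_text = text[char_start:char_end]
```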
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10717/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10716/comments
https://api.github.com/repos/huggingface/transformers/issues/10716/events
https://github.com/huggingface/transformers/issues/10716
831,779,915
MDU6SXNzdWU4MzE3Nzk5MTU=
10,716
Language model for wav2vec2.0 decoding
{ "login": "EmreOzkose", "id": 17765576, "node_id": "MDQ6VXNlcjE3NzY1NTc2", "avatar_url": "https://avatars.githubusercontent.com/u/17765576?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EmreOzkose", "html_url": "https://github.com/EmreOzkose", "followers_url": "https://api.github.com/users/EmreOzkose/followers", "following_url": "https://api.github.com/users/EmreOzkose/following{/other_user}", "gists_url": "https://api.github.com/users/EmreOzkose/gists{/gist_id}", "starred_url": "https://api.github.com/users/EmreOzkose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EmreOzkose/subscriptions", "organizations_url": "https://api.github.com/users/EmreOzkose/orgs", "repos_url": "https://api.github.com/users/EmreOzkose/repos", "events_url": "https://api.github.com/users/EmreOzkose/events{/privacy}", "received_events_url": "https://api.github.com/users/EmreOzkose/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nYou can ping @patrickvonplaten on the forum as he's the best suited to help you out.\r\n\r\nThanks!", "I moved the question to the forum and @patrickvonplaten said that the feature is not supported for now, but will be soon. \r\nhttps://discuss.huggingface.co/t/language-model-for-wav2vec2-0-decoding/4434", "I'm now working on this topic full time. \r\n\r\nWe will most likely foster a closer collaboration between [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and Transformers. [Here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode) is a github repo that shows how to use `pyctcdecode` with Wav2Vec2 for LM supported decoding. It works quite well with KenLM." ]
1,615
1,636
1,615
NONE
null
Hello, I implemented the [wav2vec2.0 code](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2), but no language model is used for decoding. How can I add a language model (say, one trained with KenLM) to the decoding step? Thanks in advance.
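Following the pyctcdecode suggestion in the comments, a minimal sketch of LM-supported decoding might look like the following (the checkpoint name, KenLM path, and `speech_array` input are assumptions; details such as mapping Wav2Vec2's `|` word delimiter and the padding/blank token into the decoder's alphabet are glossed over):

```python
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Vocabulary in id order, as pyctcdecode expects; token post-processing
# (e.g. replacing "|" with a space) may be needed for a word-level KenLM.
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="lm.arpa")  # hypothetical KenLM file

inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt")  # speech_array assumed
with torch.no_grad():
    logits = model(inputs.input_values).logits[0].cpu().numpy()
print(decoder.decode(logits))
```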
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10716/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10716/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10715/comments
https://api.github.com/repos/huggingface/transformers/issues/10715/events
https://github.com/huggingface/transformers/issues/10715
831,631,676
MDU6SXNzdWU4MzE2MzE2NzY=
10,715
Pegasus-Large Question
{ "login": "karrtikiyer", "id": 4375472, "node_id": "MDQ6VXNlcjQzNzU0NzI=", "avatar_url": "https://avatars.githubusercontent.com/u/4375472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karrtikiyer", "html_url": "https://github.com/karrtikiyer", "followers_url": "https://api.github.com/users/karrtikiyer/followers", "following_url": "https://api.github.com/users/karrtikiyer/following{/other_user}", "gists_url": "https://api.github.com/users/karrtikiyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/karrtikiyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karrtikiyer/subscriptions", "organizations_url": "https://api.github.com/users/karrtikiyer/orgs", "repos_url": "https://api.github.com/users/karrtikiyer/repos", "events_url": "https://api.github.com/users/karrtikiyer/events{/privacy}", "received_events_url": "https://api.github.com/users/karrtikiyer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`pegasus-large` is a pre-trained checkpoint. It's not fine-tuned on downstream task. The finetuned checkpoint name will have the name of the datsets in them. \r\n\r\nAlso please the [forum](https://discuss.huggingface.co/) to ask such questions.\r\n\r\nThanks.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
Is the [Pegasus Large model checkpoint](https://huggingface.co/google/pegasus-large) trained on any downstream task? Or is it only trained on the gap-sentence generation pre-training task?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10715/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10714/comments
https://api.github.com/repos/huggingface/transformers/issues/10714/events
https://github.com/huggingface/transformers/pull/10714
831,537,704
MDExOlB1bGxSZXF1ZXN0NTkyODQzOTc2
10,714
[Wav2Vec2] Make wav2vec2 test deterministic
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? The Wav2Vec2 tests are not deterministic on some machines. This PR should force all tests to use the expected samples.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10714/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10714", "html_url": "https://github.com/huggingface/transformers/pull/10714", "diff_url": "https://github.com/huggingface/transformers/pull/10714.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10714.patch", "merged_at": 1615816205000 }
https://api.github.com/repos/huggingface/transformers/issues/10713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10713/comments
https://api.github.com/repos/huggingface/transformers/issues/10713/events
https://github.com/huggingface/transformers/issues/10713
831,506,758
MDU6SXNzdWU4MzE1MDY3NTg=
10,713
Is any possible with pipeline for using local model?
{ "login": "ahnz7", "id": 65608766, "node_id": "MDQ6VXNlcjY1NjA4NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/65608766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahnz7", "html_url": "https://github.com/ahnz7", "followers_url": "https://api.github.com/users/ahnz7/followers", "following_url": "https://api.github.com/users/ahnz7/following{/other_user}", "gists_url": "https://api.github.com/users/ahnz7/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahnz7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahnz7/subscriptions", "organizations_url": "https://api.github.com/users/ahnz7/orgs", "repos_url": "https://api.github.com/users/ahnz7/repos", "events_url": "https://api.github.com/users/ahnz7/events{/privacy}", "received_events_url": "https://api.github.com/users/ahnz7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As mentioned in the [docs](https://huggingface.co/transformers/main_classes/pipelines.html), you can either provide a model identifier from the hub or a model that inherits from `PreTrainedModel` or `TFPreTrainedModel`. \r\n\r\nSo suppose you have fine-tuned a model and stored it into a variable called `model`, then you can initialize a corresponding pipeline by providing this model variable at initialization. Make sure that the model you're providing is suitable for the pipeline. \r\n\r\nSo suppose that you want to use the question answering pipeline, and you have a local `xxxForQuestionAnswering` model, then you can provide it as follows:\r\n\r\n```\r\nfrom transformers import pipeline\r\n\r\nmodel = ...\r\nnlp = pipeline (task='question-answering', model=model)\r\n```\r\n\r\n", "> docs\r\n\r\nThanks a lot." ]
1,615
1,615
1,615
NONE
null
Hello, I have fine-tuned a model on my local PC. I have read the Pipeline docs looking for a way to use a local model in a pipeline, but couldn't find one. Does anyone know how I should do this?
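For reference, a minimal sketch of loading a locally saved, fine-tuned model into a pipeline (the directory name and the task are placeholders; the folder is assumed to contain the config, weights, and tokenizer files written by `save_pretrained`):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_dir = "./my-finetuned-model"  # hypothetical local directory
model = AutoModelForQuestionAnswering.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(nlp(question="Who maintains the library?", context="The library is maintained by Hugging Face."))
```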
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10713/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10712/comments
https://api.github.com/repos/huggingface/transformers/issues/10712/events
https://github.com/huggingface/transformers/pull/10712
831,498,736
MDExOlB1bGxSZXF1ZXN0NTkyODEwMzgw
10,712
Update modeling_tf_pytorch_utils.py
{ "login": "LooperXX", "id": 28567594, "node_id": "MDQ6VXNlcjI4NTY3NTk0", "avatar_url": "https://avatars.githubusercontent.com/u/28567594?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LooperXX", "html_url": "https://github.com/LooperXX", "followers_url": "https://api.github.com/users/LooperXX/followers", "following_url": "https://api.github.com/users/LooperXX/following{/other_user}", "gists_url": "https://api.github.com/users/LooperXX/gists{/gist_id}", "starred_url": "https://api.github.com/users/LooperXX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LooperXX/subscriptions", "organizations_url": "https://api.github.com/users/LooperXX/orgs", "repos_url": "https://api.github.com/users/LooperXX/repos", "events_url": "https://api.github.com/users/LooperXX/events{/privacy}", "received_events_url": "https://api.github.com/users/LooperXX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nThis issue causes incorrect parameter transfer between PyTorch and TensorFlow when the LSTM neural networks are used in code.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,620
1,620
NONE
null
# What does this PR do? Fix a bug in convert_tf_weight_name_to_pt_weight_name(). Similar to the kernel parameters, the recurrent_kernel parameters in the LSTM networks need to be transposed, too. Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. ## Memo If the recurrent_kernel parameters are not transposed, it will cause the model parameters not to be loaded correctly, resulting in model migration failure.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10712/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10712", "html_url": "https://github.com/huggingface/transformers/pull/10712", "diff_url": "https://github.com/huggingface/transformers/pull/10712.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10712.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10711/comments
https://api.github.com/repos/huggingface/transformers/issues/10711/events
https://github.com/huggingface/transformers/issues/10711
831,367,559
MDU6SXNzdWU4MzEzNjc1NTk=
10,711
'Trainer' object has no attribute 'log_metrics'
{ "login": "Gpwner", "id": 19349207, "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gpwner", "html_url": "https://github.com/Gpwner", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "repos_url": "https://api.github.com/users/Gpwner/repos", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Btw ,these 3 APIs show similar error:\r\n**API:**\r\n```\r\n trainer.log_metrics(\"train\", metrics)\r\n trainer.save_metrics(\"train\", metrics)\r\n trainer.save_state()\r\n```\r\n\r\n**error**\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm.py\", line 476, in <module>\r\n main()\r\n File \"run_mlm.py\", line 452, in main\r\n trainer.save_state()\r\nAttributeError: 'Trainer' object has no attribute 'save_state'\r\n```", "See #10446 ", "Indeed. Please search the issues before opening one that is an exact duplicate of an existing one (see the link above for a resolution of your problem)." ]
1,615
1,615
1,615
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:4.3.3 - Platform:Pytorch - Python version:3.7.0 - PyTorch version (GPU?):GPU - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:NO ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1.python run_mlm.py --model_name_or_path release_model/ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir test-mlm --max_seq_length 128 2.Got an error: ``` [INFO|trainer.py:1408] 2021-03-15 11:10:32,884 >> Saving model checkpoint to test-mlm [INFO|configuration_utils.py:304] 2021-03-15 11:10:32,886 >> Configuration saved in test-mlm/config.json [INFO|modeling_utils.py:817] 2021-03-15 11:10:33,863 >> Model weights saved in test-mlm/pytorch_model.bin Traceback (most recent call last): File "run_mlm.py", line 475, in <module> main() File "run_mlm.py", line 450, in main trainer.log_metrics("train", metrics) AttributeError: 'Trainer' object has no attribute 'log_metrics' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The script finish without error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10711/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10710/comments
https://api.github.com/repos/huggingface/transformers/issues/10710/events
https://github.com/huggingface/transformers/pull/10710
831,286,063
MDExOlB1bGxSZXF1ZXN0NTkyNjM3OTUx
10,710
independent training / eval with local files
{ "login": "riklopfer", "id": 413300, "node_id": "MDQ6VXNlcjQxMzMwMA==", "avatar_url": "https://avatars.githubusercontent.com/u/413300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riklopfer", "html_url": "https://github.com/riklopfer", "followers_url": "https://api.github.com/users/riklopfer/followers", "following_url": "https://api.github.com/users/riklopfer/following{/other_user}", "gists_url": "https://api.github.com/users/riklopfer/gists{/gist_id}", "starred_url": "https://api.github.com/users/riklopfer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riklopfer/subscriptions", "organizations_url": "https://api.github.com/users/riklopfer/orgs", "repos_url": "https://api.github.com/users/riklopfer/repos", "events_url": "https://api.github.com/users/riklopfer/events{/privacy}", "received_events_url": "https://api.github.com/users/riklopfer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Allows running evaluation on local files without specifying a train file. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> - maintained examples (not research project or legacy): @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10710", "html_url": "https://github.com/huggingface/transformers/pull/10710", "diff_url": "https://github.com/huggingface/transformers/pull/10710.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10710.patch", "merged_at": 1615851327000 }
https://api.github.com/repos/huggingface/transformers/issues/10709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10709/comments
https://api.github.com/repos/huggingface/transformers/issues/10709/events
https://github.com/huggingface/transformers/pull/10709
831,187,907
MDExOlB1bGxSZXF1ZXN0NTkyNTY2MTEx
10,709
Wrong link to super class
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? Documentation was referring to slow tokenizer class while it should be the fast tokenizer. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10709", "html_url": "https://github.com/huggingface/transformers/pull/10709", "diff_url": "https://github.com/huggingface/transformers/pull/10709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10709.patch", "merged_at": 1615808350000 }
https://api.github.com/repos/huggingface/transformers/issues/10708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10708/comments
https://api.github.com/repos/huggingface/transformers/issues/10708/events
https://github.com/huggingface/transformers/issues/10708
831,178,412
MDU6SXNzdWU4MzExNzg0MTI=
10,708
ValueError: Unsupported value type BatchEncoding returned by IteratorSpec._serialize
{ "login": "user06039", "id": 58213113, "node_id": "MDQ6VXNlcjU4MjEzMTEz", "avatar_url": "https://avatars.githubusercontent.com/u/58213113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/user06039", "html_url": "https://github.com/user06039", "followers_url": "https://api.github.com/users/user06039/followers", "following_url": "https://api.github.com/users/user06039/following{/other_user}", "gists_url": "https://api.github.com/users/user06039/gists{/gist_id}", "starred_url": "https://api.github.com/users/user06039/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/user06039/subscriptions", "organizations_url": "https://api.github.com/users/user06039/orgs", "repos_url": "https://api.github.com/users/user06039/repos", "events_url": "https://api.github.com/users/user06039/events{/privacy}", "received_events_url": "https://api.github.com/users/user06039/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
I am trying to prediction on a data-point, but keep getting an error, ``` from transformers import TFDistilBertForSequenceClassification, DistilBertTokenizerFast # initialize longformer tokenizer and model tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased", do_lower_case=True) model = TFDistilBertForSequenceClassification.from_pretrained("MODEL") data = tokenizer.encode_plus( sentence, padding="max_length", add_special_tokens=True, max_length=512, truncation=True, ) data['input_ids'] = tf.convert_to_tensor(np.reshape(data['input_ids'], (1, -1))) data['attention_mask'] = tf.convert_to_tensor(np.reshape(data['attention_mask'], (1, -1))) model.predict(data) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-95-c8644be457c1> in <module> ----> 1 model.predict(data) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing) 1692 for step in data_handler.steps(): 1693 callbacks.on_predict_batch_begin(step) -> 1694 tmp_batch_outputs = self.predict_function(iterator) 1695 if data_handler.should_sync: 1696 context.async_wait() ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 865 tracing_count = self.experimental_get_tracing_count() 866 with trace.Trace(self._name) as tm: --> 867 result = self._call(*args, **kwds) 868 compiler = "xla" if self._jit_compile else "nonXla" 869 new_tracing_count = self.experimental_get_tracing_count() ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 900 # In this case we have not created variables on the first call. So we can 901 # run the first trace but we should fail if variables are created. --> 902 results = self._stateful_fn(*args, **kwds) 903 if self._created_variables: 904 raise ValueError("Creating variables on a non-first call to a function" ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 3015 with self._lock: 3016 (graph_function, -> 3017 filtered_flat_args) = self._maybe_define_function(args, kwargs) 3018 return graph_function._call_flat( 3019 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3395 3396 cache_key_context = self._cache_key_context() -> 3397 cache_key = self._cache_key(args, kwargs, cache_key_context) 3398 3399 try: ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _cache_key(self, args, kwargs, cache_key_context, include_tensor_ranks_only) 3176 input_signature = pywrap_tfe.TFE_Py_EncodeArg(inputs, 3177 include_tensor_ranks_only) -> 3178 hashable_input_signature = _make_input_signature_hashable(input_signature) 3179 else: 3180 del args, kwargs ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _make_input_signature_hashable(elem) 112 """ 113 try: --> 114 hash(elem) 115 except TypeError: 116 # TODO(slebedev): consider using nest. 
~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in __hash__(self) 311 312 def __hash__(self): --> 313 return hash(self.__get_cmp_key()) 314 315 def __reduce__(self): ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in __get_cmp_key(self) 349 """Returns a hashable eq-comparable key for `self`.""" 350 # TODO(b/133606651): Decide whether to cache this value. --> 351 return (type(self), self.__make_cmp_key(self._serialize())) 352 353 def __make_cmp_key(self, value): ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in __make_cmp_key(self, value) 367 ]) 368 if isinstance(value, tuple): --> 369 return tuple([self.__make_cmp_key(v) for v in value]) 370 if isinstance(value, list): 371 return (list, tuple([self.__make_cmp_key(v) for v in value])) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in <listcomp>(.0) 367 ]) 368 if isinstance(value, tuple): --> 369 return tuple([self.__make_cmp_key(v) for v in value]) 370 if isinstance(value, list): 371 return (list, tuple([self.__make_cmp_key(v) for v in value])) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in __make_cmp_key(self, value) 367 ]) 368 if isinstance(value, tuple): --> 369 return tuple([self.__make_cmp_key(v) for v in value]) 370 if isinstance(value, list): 371 return (list, tuple([self.__make_cmp_key(v) for v in value])) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in <listcomp>(.0) 367 ]) 368 if isinstance(value, tuple): --> 369 return tuple([self.__make_cmp_key(v) for v in value]) 370 if isinstance(value, list): 371 return (list, tuple([self.__make_cmp_key(v) for v in value])) ~/Desktop/Work/apps/env/lib/python3.8/site-packages/tensorflow/python/framework/type_spec.py in __make_cmp_key(self, value) 379 return (np.ndarray, value.shape, 380 TypeSpec.__nested_list_to_tuple(value.tolist())) --> 381 raise ValueError("Unsupported value type %s returned by " 382 "%s._serialize" % 383 (type(value).__name__, type(self).__name__)) ValueError: Unsupported value type BatchEncoding returned by IteratorSpec._serialize ``` This is the input, I don't see any mistake in the input as well, `Tensorflow :- 2.5.0-dev20210311` ``` {'input_ids': <tf.Tensor: shape=(1, 512), dtype=int64, numpy= array([[ 101, 10373, 25933, 20974, 13006, 2980, 21397, 4012, 4813, 2086, 4341, 6823, 2086, 20519, 4852, 10198, 3325, 6605, 3375, 3934, 2312, 2235, 4094, 2565, 2622, 2968, 2203, 2203, 3001, 2458, 2166, 23490, 2968, 2974, 7300, 6959, 2974, 3136, 6032, 3325, 2551, 10317, 5338, 4044, 2832, 8048, 3003, 2844, 4813, 6581, 4105, 11647, 2583, 6464, 2958, 6578, 22859, 8754, 21466, 3893, 10266, 3798, 2136, 2372, 5198, 3237, 3247, 11153, 3237, 2968, 2195, 3454, 2764, 3266, 4800, 10521, 6895, 28296, 5649, 8185, 2098, 5500, 2780, 9605, 2136, 2372, 2164, 6327, 17088, 6605, 3674, 18402, 3934, 2780, 7453, 3144, 2650, 2501, 12771, 3934, 9531, 2051, 5166, 3737, 4118, 20792, 6605, 6502, 9871, 3325, 2086, 2551, 2976, 2231, 6401, 2780, 12746, 16134, 6310, 2544, 16134, 6605, 3934, 5994, 28585, 2974, 4526, 9084, 2449, 6194, 7142, 7620, 2306, 6605, 7781, 2279, 4245, 4684, 5097, 11924, 16380, 5281, 19905, 25870, 2500, 6605, 4736, 4114, 6581, 17826, 6970, 28823, 4807, 4813, 3001, 2933, 3266, 11103, 2892, 8360, 3934, 2434, 4145, 2345, 7375, 4208, 2968, 4073, 12725, 2536, 3450, 14206, 4187, 4130, 3450, 4719, 3934, 5147, 2949, 2051, 
2306, 5166, 25276, 2152, 3737, 4781, 3113, 8013, 14206, 3450, 8518, 2426, 2622, 2136, 2372, 4722, 7640, 6327, 4411, 2734, 3113, 2622, 3289, 5676, 2622, 6503, 6134, 6413, 4807, 4722, 6327, 22859, 4953, 14670, 9531, 3570, 5166, 2622, 8503, 3450, 2164, 5337, 3550, 24162, 2622, 9920, 5935, 8220, 4342, 2949, 12653, 2192, 2622, 8116, 3085, 2015, 23946, 20271, 3530, 2588, 3314, 13248, 2967, 2949, 2695, 7375, 3319, 6709, 2752, 7620, 10779, 2136, 2372, 8013, 8048, 3921, 10908, 2836, 4806, 7411, 4781, 2836, 9312, 2873, 5704, 2443, 7748, 2780, 12706, 5300, 6481, 5326, 7142, 7620, 6078, 10471, 5906, 2109, 2622, 2968, 3120, 3642, 2544, 2491, 16473, 10813, 3451, 2689, 8678, 3488, 5461, 29494, 10787, 3029, 8651, 2366, 2195, 3934, 5704, 2443, 25505, 2724, 2164, 3772, 3040, 10912, 6605, 2051, 27554, 11376, 19795, 3145, 4391, 6567, 12725, 3463, 4041, 2680, 7396, 10908, 2635, 2599, 3788, 4722, 22859, 7846, 2622, 20283, 4187, 5246, 9990, 4415, 2396, 2594, 10924, 4751, 11157, 10035, 8635, 2147, 7528, 29445, 6194, 3266, 4684, 16380, 7375, 2081, 3154, 3266, 2048, 2590, 2951, 25095, 3934, 2951, 2697, 4600, 2109, 3934, 11506, 2367, 4127, 2951, 2066, 2865, 4684, 2951, 2449, 3563, 2592, 2136, 2565, 2622, 10489, 3375, 3795, 3454, 5884, 10843, 16316, 3454, 12139, 11100, 5225, 3144, 23259, 6503, 3454, 6162, 2449, 3289, 2164, 8720, 5813, 10831, 3314, 4254, 2565, 6959, 2195, 3454, 3934, 6401, 2976, 4034, 2306, 1044, 7898, 2565, 3208, 2877, 3947, 7396, 3674, 4411, 3266, 2832, 7620, 2967, 3271, 10938, 3151, 29003, 16134, 3024, 11433, 6143, 3257, 8182, 2449, 5918, 2864, 6578, 4106, 2203, 2203, 6653, 2346, 2863, 2361, 24977, 4208, 2968, 4073, 12725, 2536, 3450, 14206, 4187, 4130, 3450, 4719, 3934, 5147, 2949, 2051, 2306, 5166, 25276, 2152, 3737, 4781, 3113, 8013, 14206, 3450, 8518, 2426, 2622, 2136, 2372, 4722, 7640, 6327, 4411, 2734, 3113, 2622, 3289, 5676, 2622, 6503, 6134, 2780, 3454, 2458, 2190, 6078, 2109, 2622, 2968, 3120, 3642, 2544, 2491, 16473, 10813, 3266, 9123, 3433, 3934, 4083, 19875, 6364, 17953, 2361, 2109, 22969, 2376, 8013, 102]])>, 'attention_mask': <tf.Tensor: shape=(1, 512), dtype=int64, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])>} ```
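The thread above has no posted resolution, but a common workaround for this error is to avoid handing Keras a `BatchEncoding` object: ask the tokenizer for TF tensors directly and unwrap the result into a plain `dict` before calling `model.predict`. The sketch below assumes the base `distilbert-base-uncased` checkpoint stands in for the fine-tuned `MODEL` path used in the report.

```python
from transformers import DistilBertTokenizerFast, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased", do_lower_case=True)
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")  # placeholder for "MODEL"

sentence = "an example sentence"  # placeholder input

# return_tensors="tf" already yields batched TF tensors of shape (1, 512),
# so the manual reshape/convert_to_tensor steps are unnecessary.
encoding = tokenizer(sentence, padding="max_length", truncation=True, max_length=512, return_tensors="tf")

# Unwrap the BatchEncoding into a plain dict so Keras can serialize the input spec.
outputs = model.predict(dict(encoding))
print(outputs)
```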
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10708/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10707/comments
https://api.github.com/repos/huggingface/transformers/issues/10707/events
https://github.com/huggingface/transformers/issues/10707
831,158,242
MDU6SXNzdWU4MzExNTgyNDI=
10,707
Inheriting from BartForConditionalGeneration in a new class - weights not initializing
{ "login": "katzurik", "id": 37979288, "node_id": "MDQ6VXNlcjM3OTc5Mjg4", "avatar_url": "https://avatars.githubusercontent.com/u/37979288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/katzurik", "html_url": "https://github.com/katzurik", "followers_url": "https://api.github.com/users/katzurik/followers", "following_url": "https://api.github.com/users/katzurik/following{/other_user}", "gists_url": "https://api.github.com/users/katzurik/gists{/gist_id}", "starred_url": "https://api.github.com/users/katzurik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katzurik/subscriptions", "organizations_url": "https://api.github.com/users/katzurik/orgs", "repos_url": "https://api.github.com/users/katzurik/repos", "events_url": "https://api.github.com/users/katzurik/events{/privacy}", "received_events_url": "https://api.github.com/users/katzurik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10706/comments
https://api.github.com/repos/huggingface/transformers/issues/10706/events
https://github.com/huggingface/transformers/issues/10706
831,134,128
MDU6SXNzdWU4MzExMzQxMjg=
10,706
Trainer crashes when saving checkpoint
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As indicated in the [documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) `metric_for_best_model` _Must be the name of a metric returned by the evaluation with or without the prefix \"eval\\_\"_. You passed a Metric object to it instead of the name returned by your `compute_metric` function, which is what caused your error.", "Thank you so much, I'm sorry for that" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.4.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, distributed - Using distributed or parallel set-up in script?: Yes ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla P100-PCIE... Off | 00000000:02:00.0 Off | 0 | | N/A 47C P0 33W / 250W | 16181MiB / 16280MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla P100-PCIE... Off | 00000000:81:00.0 Off | 0 | | N/A 42C P0 34W / 250W | 16173MiB / 16280MiB | 0% Default | +-------------------------------+----------------------+----------------------+ ``` ### Who can help - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): "bert-base-multilingual-cased" The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False) def preprocess_function(examples): tokenized = tokenizer(examples["text"], truncation=True) tokenized["labels"] = [1 if elem == "sexist" else 0 for elem in examples["task1"]] return tokenized encoded_dataset = dataset.map(preprocess_function, batched=True) from transformers import AutoConfig, AutoModelForSequenceClassification, TrainingArguments, Trainer import datasets model_name = "bert-base-multilingual-cased" config = AutoConfig.from_pretrained( model_name, num_labels=len(np.unique(encoded_dataset["train"]["labels"])), ) model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config) training_arguments = TrainingArguments( output_dir=f"{model_name}", do_train=True, do_eval=True, evaluation_strategy="steps", per_device_train_batch_size=16, per_device_eval_batch_size=16, learning_rate=2e-5, num_train_epochs=5, label_names=["labels"], load_best_model_at_end=True, metric_for_best_model=datasets.load_metric("accuracy"), eval_steps=50, ) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model, training_arguments, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.train() ``` The error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-44-974384763956> in <module> 10 #callbacks=[EarlyStoppingCallback(early_stopping_patience=4)], # Por alguna razón, casca 11 ) ---> 12 trainer.train() ~/.local/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 981 self.control = 
self.callback_handler.on_step_end(self.args, self.state, self.control) 982 --> 983 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) 984 985 if self.control.should_epoch_stop or self.control.should_training_stop: ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch) 1060 1061 if self.control.should_save: -> 1062 self._save_checkpoint(model, trial, metrics=metrics) 1063 self.control = self.callback_handler.on_save(self.args, self.state, self.control) 1064 ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics) 1085 self.store_flos() 1086 -> 1087 self.save_model(output_dir) 1088 if self.deepspeed: 1089 self.deepspeed.save_checkpoint(output_dir) ~/.local/lib/python3.8/site-packages/transformers/trainer.py in save_model(self, output_dir) 1376 self._save_tpu(output_dir) 1377 elif self.is_world_process_zero(): -> 1378 self._save(output_dir) 1379 1380 # If on sagemaker and we are saving the main model (not a checkpoint so output_dir=None), save a copy to ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _save(self, output_dir) 1419 1420 # Good practice: save your training arguments together with the trained model -> 1421 torch.save(self.args, os.path.join(output_dir, "training_args.bin")) 1422 1423 def store_flos(self): ~/miniconda3/envs/transformers4/lib/python3.8/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 370 if _use_new_zipfile_serialization: 371 with _open_zipfile_writer(opened_file) as opened_zipfile: --> 372 _save(obj, opened_zipfile, pickle_module, pickle_protocol) 373 return 374 _legacy_save(obj, opened_file, pickle_module, pickle_protocol) ~/miniconda3/envs/transformers4/lib/python3.8/site-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol) 474 pickler = pickle_module.Pickler(data_buf, protocol=pickle_protocol) 475 pickler.persistent_id = persistent_id --> 476 pickler.dump(obj) 477 data_value = data_buf.getvalue() 478 zip_file.write_record('data.pkl', data_value, len(data_value)) TypeError: cannot pickle '_thread.lock' object ``` It doesn't matter the save steps, etc. When it tries to save the model, I get that error. I don't think that I can post the dataset here but it doesn't look like a dataset problem. I'm following the sequence classification notebook but I changed some things to use binary classification and load my own dataset. 
I also get a different error when using the early stopping callback: ```python from transformers import EarlyStoppingCallback trainer = Trainer( model, training_arguments, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], tokenizer=tokenizer, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(early_stopping_patience=4)], ) trainer.train() ``` ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-46-17322e623d42> in <module> 10 callbacks=[EarlyStoppingCallback(early_stopping_patience=4)], # Por alguna razón, casca 11 ) ---> 12 trainer.train() ~/.local/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 981 self.control = self.callback_handler.on_step_end(self.args, self.state, self.control) 982 --> 983 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) 984 985 if self.control.should_epoch_stop or self.control.should_training_stop: ~/.local/lib/python3.8/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch) 1056 metrics = None 1057 if self.control.should_evaluate: -> 1058 metrics = self.evaluate() 1059 self._report_to_hp_search(trial, epoch, metrics) 1060 ~/.local/lib/python3.8/site-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix) 1522 xm.master_print(met.metrics_report()) 1523 -> 1524 self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics) 1525 return output.metrics 1526 ~/.local/lib/python3.8/site-packages/transformers/trainer_callback.py in on_evaluate(self, args, state, control, metrics) 360 def on_evaluate(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, metrics): 361 control.should_evaluate = False --> 362 return self.call_event("on_evaluate", args, state, control, metrics=metrics) 363 364 def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl): ~/.local/lib/python3.8/site-packages/transformers/trainer_callback.py in call_event(self, event, args, state, control, **kwargs) 375 def call_event(self, event, args, state, control, **kwargs): 376 for callback in self.callbacks: --> 377 result = getattr(callback, event)( 378 args, 379 state, ~/.local/lib/python3.8/site-packages/transformers/trainer_callback.py in on_evaluate(self, args, state, control, metrics, **kwargs) 527 def on_evaluate(self, args, state, control, metrics, **kwargs): 528 metric_to_check = args.metric_for_best_model --> 529 if not metric_to_check.startswith("eval_"): 530 metric_to_check = f"eval_{metric_to_check}" 531 metric_value = metrics.get(metric_to_check) AttributeError: 'Accuracy' object has no attribute 'startswith' ```
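Based on the maintainer's reply above, both tracebacks come from passing a `datasets.Metric` object to `metric_for_best_model`, which expects the *name* of a metric returned by `compute_metrics`. A minimal sketch of the corrected arguments (only the relevant fields shown):

```python
import numpy as np
import datasets
from transformers import TrainingArguments

metric = datasets.load_metric("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    # returns {"accuracy": ...}, so the metric name to track is "accuracy"
    return metric.compute(predictions=predictions, references=labels)

training_arguments = TrainingArguments(
    output_dir="bert-base-multilingual-cased",
    evaluation_strategy="steps",
    eval_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # a string key, not a Metric object
)
```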
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10706/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10705/comments
https://api.github.com/repos/huggingface/transformers/issues/10705/events
https://github.com/huggingface/transformers/issues/10705
831,108,595
MDU6SXNzdWU4MzExMDg1OTU=
10,705
Please provide the dataset format for fine-tuning wav2vec using the run_asr.py script
{ "login": "vigneshgig", "id": 34392627, "node_id": "MDQ6VXNlcjM0MzkyNjI3", "avatar_url": "https://avatars.githubusercontent.com/u/34392627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vigneshgig", "html_url": "https://github.com/vigneshgig", "followers_url": "https://api.github.com/users/vigneshgig/followers", "following_url": "https://api.github.com/users/vigneshgig/following{/other_user}", "gists_url": "https://api.github.com/users/vigneshgig/gists{/gist_id}", "starred_url": "https://api.github.com/users/vigneshgig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vigneshgig/subscriptions", "organizations_url": "https://api.github.com/users/vigneshgig/orgs", "repos_url": "https://api.github.com/users/vigneshgig/repos", "events_url": "https://api.github.com/users/vigneshgig/events{/privacy}", "received_events_url": "https://api.github.com/users/vigneshgig/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We are organizing a \"fine-tuning XLSR-53\" event. Check this announcement: https://discuss.huggingface.co/t/open-to-the-community-xlsr-wav2vec2-fine-tuning-week-for-low-resource-languages/4467. Would be awesome if you want to participate :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,618
1,618
NONE
null
Hi @patrickvonplaten, thanks for the great work. Could you please provide some examples of the dataset format required to train the wav2vec model using the run_asr.py script?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10705/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/10705/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10704/comments
https://api.github.com/repos/huggingface/transformers/issues/10704/events
https://github.com/huggingface/transformers/issues/10704
831,093,479
MDU6SXNzdWU4MzEwOTM0Nzk=
10,704
How to generate texts with Hugging Face models in batches?
{ "login": "yananchen1116", "id": 80617901, "node_id": "MDQ6VXNlcjgwNjE3OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/80617901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yananchen1116", "html_url": "https://github.com/yananchen1116", "followers_url": "https://api.github.com/users/yananchen1116/followers", "following_url": "https://api.github.com/users/yananchen1116/following{/other_user}", "gists_url": "https://api.github.com/users/yananchen1116/gists{/gist_id}", "starred_url": "https://api.github.com/users/yananchen1116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yananchen1116/subscriptions", "organizations_url": "https://api.github.com/users/yananchen1116/orgs", "repos_url": "https://api.github.com/users/yananchen1116/repos", "events_url": "https://api.github.com/users/yananchen1116/events{/privacy}", "received_events_url": "https://api.github.com/users/yananchen1116/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looking at the [source code](https://github.com/huggingface/transformers/blob/4c32f9f26e6a84f0d9843fec8757e6ce640bb44e/src/transformers/pipelines/text_generation.py#L108) of the text-generation pipeline, it seems that the texts are indeed generated one by one, so it's not ideal for batch generation. \r\n\r\nIn order to genere contents in a batch, you'll have to use GPT-2 (or another generation model from the hub) directly, like so (this is based on PR #7552):\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\ntokenizer.padding_side = \"left\" \r\ntokenizer.pad_token = tokenizer.eos_token # to avoid an error\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\ntexts = [\"this is a first prompt\", \"this is a second prompt\"]\r\nencoding = tokenizer(texts, padding=True, return_tensors='pt').to(device)\r\nwith torch.no_grad():\r\n generated_ids = model.generate(**encoding)\r\ngenerated_texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n```\r\n\r\nIn my case, this prints out: \r\n`['this is a first prompt for the user to enter the password.\\n\\nThe password is a string', \"this is a second prompt, but it's not a full-screen one.\\n\\nThe first\"]`", "> Looking at the [source code](https://github.com/huggingface/transformers/blob/4c32f9f26e6a84f0d9843fec8757e6ce640bb44e/src/transformers/pipelines/text_generation.py#L108) of the text-generation pipeline, it seems that the texts are indeed generated one by one, so it's not ideal for batch generation.\r\n> \r\n> In order to genere contents in a batch, you'll have to use GPT-2 (or another generation model from the hub) directly, like so (this is based on PR #7552):\r\n> \r\n> ```\r\n> from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n> import torch\r\n> \r\n> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n> tokenizer.padding_side = \"left\" \r\n> tokenizer.pad_token = tokenizer.eos_token # to avoid an error\r\n> model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n> \r\n> device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n> \r\n> texts = [\"this is a first prompt\", \"this is a second prompt\"]\r\n> encoding = tokenizer(texts, return_tensors='pt').to(device)\r\n> with torch.no_grad():\r\n> generated_ids = model.generate(**encoding)\r\n> generated_texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n> ```\r\n> \r\n> In my case, this prints out:\r\n> `['this is a first prompt for the user to enter the password.\\n\\nThe password is a string', \"this is a second prompt, but it's not a full-screen one.\\n\\nThe first\"]`\r\n\r\nthanks. It seems that the standard workflow is to organize the components of `tokenizer`, `generate` and `batch_decode`\r\nin a cascade way. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge Is padding on the left still the way to go for batched generation? It seems odd to require a workaround for such a common feature" ]
1,615
1,669
1,619
NONE
null
I am new to Hugging Face. My task is quite simple: I want to generate content based on given titles. The code below is inefficient; GPU utilization is only about 15%, and it seems that generation happens one text at a time. How can I improve the code to process and generate the content in batches? ``` df_test = pd.read_csv("./ag_news/test.csv").sample(frac=1) from transformers import pipeline text_generator = pipeline("text-generation") rows = df_test.sample(1000) titles = rows['title'].tolist() contents = rows['content'].tolist() generate_texts = text_generator(titles, max_length=40, do_sample=False) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10704/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10703/comments
https://api.github.com/repos/huggingface/transformers/issues/10703/events
https://github.com/huggingface/transformers/pull/10703
831,075,817
MDExOlB1bGxSZXF1ZXN0NTkyNDgxOTI2
10,703
DebertaTokenizer Rework closes #10258
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is great @cronoik! I just tested it and reproduced identical results between the previous version and this one. Fantastic!\r\n\r\nCould you add an integration tests for the DeBERTa tokenizer to ensure the implementations don't diverge? You can just copy paste the [test for ALBERT tokenizers](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_albert.py#L106) and change the values to DeBERTa.\r\n\r\nIf you don't have time, let us know and we'll take care of it.", "@LysandreJik \r\nI will give it a try.", "Hi,\r\na new test is now added for the debertatokenizer. I stuck more to the robertatokenizer test since the debertatokenizer is basically a robertatokenizer.\r\n\r\nI can not interpret the failed check. I assume it is because the model hub is missing the new files? @LysandreJik could you please have a look?", "I am trying out this PR with the following:\r\n\r\n```\r\ntokenizer = DebertaTokenizer.from_pretrained(\"microsoft/deberta-base\")\r\ntarget_tokenized = tokenizer.tokenize(\"Some test text\")\r\n```\r\nbut I see the following error (`TypeError: expected str, bytes or os.PathLike object, not NoneType`):\r\n```\r\n def __init__(\r\n self,\r\n vocab_file,\r\n merges_file,\r\n errors=\"replace\",\r\n unk_token=\"<|endoftext|>\",\r\n bos_token=\"<|endoftext|>\",\r\n eos_token=\"<|endoftext|>\",\r\n add_prefix_space=False,\r\n **kwargs\r\n ):\r\n bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token\r\n eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token\r\n unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token\r\n super().__init__(\r\n errors=errors,\r\n unk_token=unk_token,\r\n bos_token=bos_token,\r\n eos_token=eos_token,\r\n add_prefix_space=add_prefix_space,\r\n **kwargs,\r\n )\r\n\r\n> with open(vocab_file, encoding=\"utf-8\") as vocab_handle:\r\nE TypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\n../../../../jeswan_transformers/src/transformers/models/gpt2/tokenization_gpt2.py:179: TypeError\r\n```\r\n@LysandreJik potentially related to @cronoik's error above ", "@jeswan that will not work since the required files are not uploaded to the model hub yet. You can download them from the [link](https://drive.google.com/drive/folders/1gH5EMABR94iHO7SCb_AdNGCOOIdSloxh?usp=sharing) and load the tokenizer from local.", "@cronoik thanks for your effort. I just uploaded the files to the model repository and left one comment to the changes. ", "@BigBird01 Thank you for the review. I had only tested single sentences and completely ignored the sentence pairs before pushing.\r\n@LysandreJik Can you please help me with the test error? ", "@sgugger Thanks for the review. I have pushed/accepted your change requests.", "Thanks for your efforts @cronoik!" ]
1,615
1,617
1,617
CONTRIBUTOR
null
# What does this PR do? Fixes #10258 @BigBird01 Please upload these [files](https://drive.google.com/drive/folders/1gH5EMABR94iHO7SCb_AdNGCOOIdSloxh?usp=sharing) to your deberta repositories. @huggingface: Please don't merge before @BigBird01 has uploaded the files to his repository. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - tokenizers: @LysandreJik, @BigBird01
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10703/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10703/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10703", "html_url": "https://github.com/huggingface/transformers/pull/10703", "diff_url": "https://github.com/huggingface/transformers/pull/10703.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10703.patch", "merged_at": 1617299634000 }
https://api.github.com/repos/huggingface/transformers/issues/10702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10702/comments
https://api.github.com/repos/huggingface/transformers/issues/10702/events
https://github.com/huggingface/transformers/issues/10702
831,071,620
MDU6SXNzdWU4MzEwNzE2MjA=
10,702
Performance issue when running inference with Hugging Face models
{ "login": "kingafy", "id": 15839412, "node_id": "MDQ6VXNlcjE1ODM5NDEy", "avatar_url": "https://avatars.githubusercontent.com/u/15839412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kingafy", "html_url": "https://github.com/kingafy", "followers_url": "https://api.github.com/users/kingafy/followers", "following_url": "https://api.github.com/users/kingafy/following{/other_user}", "gists_url": "https://api.github.com/users/kingafy/gists{/gist_id}", "starred_url": "https://api.github.com/users/kingafy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingafy/subscriptions", "organizations_url": "https://api.github.com/users/kingafy/orgs", "repos_url": "https://api.github.com/users/kingafy/repos", "events_url": "https://api.github.com/users/kingafy/events{/privacy}", "received_events_url": "https://api.github.com/users/kingafy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,615
1,615
1,615
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.2.0 - Platform: Windows Azure with GPU - Python version: 3.6 - PyTorch version (GPU?): CUDA 10.1 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: We have created a Flask endpoint that serves multiple ML models, such as Longformer question answering, zero-shot learning, and paraphrasing. But when these functions are called from the Flask endpoint, CPU utilization rises above 97% even though a GPU is available. Any idea why the CPU takes such a hit when the models are only used for inference? Also, is there a way to check the memory size allocated to a variable when we load a pretrained torch model into it and pass it as an argument to a function? If there are concurrent calls, does this memory requirement increase, or is the same memory being reused?
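The second question (how much memory a loaded model occupies, and whether passing it around duplicates it) has no answer in the thread. As a rough sketch, parameter memory can be estimated directly from the model's tensors; passing the model object as a function argument only passes a reference, so it does not copy the weights. The checkpoint name below is just an example.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("allenai/longformer-base-4096")  # example checkpoint

# Sum the byte size of every parameter tensor to approximate the model's weight memory.
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"parameter memory: {param_bytes / 1024 ** 2:.1f} MiB")
```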
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10702/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10701/comments
https://api.github.com/repos/huggingface/transformers/issues/10701/events
https://github.com/huggingface/transformers/issues/10701
831,063,893
MDU6SXNzdWU4MzEwNjM4OTM=
10,701
Seq2Seq Model with PreTrained BERT Model is Throwing Error During Training: ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
{ "login": "Ninja16180", "id": 61466835, "node_id": "MDQ6VXNlcjYxNDY2ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/61466835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ninja16180", "html_url": "https://github.com/Ninja16180", "followers_url": "https://api.github.com/users/Ninja16180/followers", "following_url": "https://api.github.com/users/Ninja16180/following{/other_user}", "gists_url": "https://api.github.com/users/Ninja16180/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ninja16180/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ninja16180/subscriptions", "organizations_url": "https://api.github.com/users/Ninja16180/orgs", "repos_url": "https://api.github.com/users/Ninja16180/repos", "events_url": "https://api.github.com/users/Ninja16180/events{/privacy}", "received_events_url": "https://api.github.com/users/Ninja16180/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi @Ninja16180 \r\n\r\nCould you please post a short code snippet to reproduce the issue? Thanks.", "Hi Suraj,\r\n\r\nI was able to find out the issue; there was a variable name I wrongly passed into the decoder class and hence the error.\r\n\r\nCorrection made:\r\n`embedded = self.bert(sent2)[0] should be embedded = self.bert(input)[0] `\r\n\r\nThus closing this issue." ]
1,615
1,615
1,615
NONE
null
Hi, I tried creating a seq2seq model using pretrained BERT model following your tutorials: https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/6%20-%20Transformers%20for%20Sentiment%20Analysis.ipynb https://github.com/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb However during training, I am getting the following error: ``` AttributeError Traceback (most recent call last) <ipython-input-63-472071541d41> in <module>() 8 start_time = time.time() 9 ---> 10 train_loss = train(model, train_iterator, optimizer, criterion, CLIP) 11 valid_loss = evaluate(model, valid_iterator, criterion) 12 6 frames /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 917 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") 918 elif input_ids is not None: --> 919 input_shape = input_ids.size() 920 batch_size, seq_length = input_shape 921 elif inputs_embeds is not None: AttributeError: 'Field' object has no attribute 'size' ``` I am sharing my code for your review in the following github repo: https://github.com/Ninja16180/BERT/blob/main/Training_Seq2Seq_Model_using_Pre-Trained_BERT_Model.ipynb Also, request you to kindly review the Encoder and Decoder classes which have been modified to incorporate pretrained bert embedding. Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10701/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10700/comments
https://api.github.com/repos/huggingface/transformers/issues/10700/events
https://github.com/huggingface/transformers/issues/10700
830,930,024
MDU6SXNzdWU4MzA5MzAwMjQ=
10,700
Trying to implement "nielsr/luke-large" gives "KeyError: 'luke'"
{ "login": "UrosOgrizovic", "id": 25843402, "node_id": "MDQ6VXNlcjI1ODQzNDAy", "avatar_url": "https://avatars.githubusercontent.com/u/25843402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UrosOgrizovic", "html_url": "https://github.com/UrosOgrizovic", "followers_url": "https://api.github.com/users/UrosOgrizovic/followers", "following_url": "https://api.github.com/users/UrosOgrizovic/following{/other_user}", "gists_url": "https://api.github.com/users/UrosOgrizovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/UrosOgrizovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UrosOgrizovic/subscriptions", "organizations_url": "https://api.github.com/users/UrosOgrizovic/orgs", "repos_url": "https://api.github.com/users/UrosOgrizovic/repos", "events_url": "https://api.github.com/users/UrosOgrizovic/events{/privacy}", "received_events_url": "https://api.github.com/users/UrosOgrizovic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your interest! LUKE is not part of the master branch yet.\r\n\r\nActually, the current implementation of LUKE is here (at my `adding_luke_v2` branch): https://github.com/NielsRogge/transformers/tree/adding_luke_v2/src/transformers/models/luke\r\n\r\nNote that it is work-in-progress, but you can already use the base `EntityAwareAttentionModel` and the head models. It's mostly the tokenizer that needs some work.\r\n\r\ncc'ing the original author for visibility: @ikuyamada ", "Thanks, Niels!\r\n\r\nAs far as I'm concerned, this can be closed." ]
1,615
1,615
1,615
NONE
null
## Environment info - `transformers` version: 4.1.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik I guess, because it's an `AutoTokenizer`-related issue. ## Information I'm trying to use an implementation of LUKE ([paper](https://arxiv.org/abs/2010.01057)) ([implementation](https://huggingface.co/nielsr/luke-large/tree/main)). The problem arises when using: * my own modified scripts The task I am working on is: I don't think this is relevant. ## To reproduce Steps to reproduce the behavior: 1. `from transformers import AutoTokenizer, AutoModel` 2. `tokenizer = AutoTokenizer.from_pretrained("nielsr/luke-large")` Running gives the following error: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-12-c1614eef2346> in <module> 4 ----> 5 luke_tokenizer = AutoTokenizer.from_pretrained("nielsr/luke-large") 6 c:\...\venv\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 343 config = kwargs.pop("config", None) 344 if not isinstance(config, PretrainedConfig): --> 345 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 346 347 use_fast = kwargs.pop("use_fast", True) c:\...\venv\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 350 351 if "model_type" in config_dict: --> 352 config_class = CONFIG_MAPPING[config_dict["model_type"]] 353 return config_class.from_dict(config_dict, **kwargs) 354 else: KeyError: 'luke' ``` ## Expected behavior I'm expecting no error to be thrown.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10700/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10699/comments
https://api.github.com/repos/huggingface/transformers/issues/10699/events
https://github.com/huggingface/transformers/pull/10699
830,881,920
MDExOlB1bGxSZXF1ZXN0NTkyMzUwNDk5
10,699
TF BART models - Add `cross_attentions` to model output and fix cross-attention head masking
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Currently, there are some Torch/TF equivalent tests failing but it should be settled down once #10605 be merged.", "@jplu - I'm gonna run all slow tests today and will let you know if everything works or not.", "@jplu - I ran all (slow) non-GPU tests and it seems to me everything is passing :) ", "Ok, if all the tests for the involved models, including the slow ones, are passing, it is fine to merge for me.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "still open ", "@jplu @patrickvonplaten After merging #10605 and rebasing the branch, all tests have passed now :)", "Great, thanks @stancld! \r\n\r\nThis looks good to merge for me.", "Pinging @Rocketknight1 for a final review here. Seq2Seq models like BART should also return cross attentions in TF (not just in PT)", "This is a big PR but LGTM! I haven't exhaustively checked everything but the changes seem correct and innocuous, so if it passes tests I'm happy to merge it." ]
1,615
1,619
1,619
CONTRIBUTOR
null
This PR fixes some missing and invalid things around `cross_attentions` for the TensorFlow implementation of BART models: - `Bart`, - `Blenderbot` / `Blenderbot_small`, - `Marian`, - `MBart`, - `Pegasus`. More specifically, this PR includes: - Enable returning `cross_attentions` - Add class `TFBaseModelOutputWithCrossAttentions` (according to the PyTorch counterpart) to support output containing `cross_attentions` - Fix attention head masking for the cross-attention module (by the introduction of `cross_attn_head_mask` and `cross_attn_layer_head_mask`) - Implement `test_head_masking` for `cross_attn_head_mask` - Fix some little typos in docs - Update model templates - implement `head_mask`, `decoder_head_mask`, `cross_attn_head_mask` and code around `cross_attentions` to the TF encoder-decoder models <hr> Partially fixes: #10698 <hr> **Reviewers:** @jplu @patrickvonplaten @LysandreJik @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10699/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10699", "html_url": "https://github.com/huggingface/transformers/pull/10699", "diff_url": "https://github.com/huggingface/transformers/pull/10699.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10699.patch", "merged_at": 1619439381000 }
https://api.github.com/repos/huggingface/transformers/issues/10698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10698/comments
https://api.github.com/repos/huggingface/transformers/issues/10698/events
https://github.com/huggingface/transformers/issues/10698
830,850,131
MDU6SXNzdWU4MzA4NTAxMzE=
10,698
Add `cross_attentions` to the output of TensorFlow encoder-decoder models
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request TensorFlow encoder-decoder models cannot return `cross_attentions` as do their PyTorch counterparts. ## Motivation It would be nice to narrow the gap between PyTorch and Tensorflow implementations. ## Your contribution I've been working on PR fixing this issue. ## Reviewers @jplu and whoever else within the community
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10698/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10697/comments
https://api.github.com/repos/huggingface/transformers/issues/10697/events
https://github.com/huggingface/transformers/pull/10697
830,830,301
MDExOlB1bGxSZXF1ZXN0NTkyMzExMDY2
10,697
Fix Wav2Vec2 classes imports
{ "login": "jjdelvalle", "id": 1283149, "node_id": "MDQ6VXNlcjEyODMxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1283149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jjdelvalle", "html_url": "https://github.com/jjdelvalle", "followers_url": "https://api.github.com/users/jjdelvalle/followers", "following_url": "https://api.github.com/users/jjdelvalle/following{/other_user}", "gists_url": "https://api.github.com/users/jjdelvalle/gists{/gist_id}", "starred_url": "https://api.github.com/users/jjdelvalle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jjdelvalle/subscriptions", "organizations_url": "https://api.github.com/users/jjdelvalle/orgs", "repos_url": "https://api.github.com/users/jjdelvalle/repos", "events_url": "https://api.github.com/users/jjdelvalle/events{/privacy}", "received_events_url": "https://api.github.com/users/jjdelvalle/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
# What does this PR do? Fixes imports for the following classes: * Wav2Vec2CTCTokenizer * Wav2Vec2FeatureExtractor * Wav2Vec2Processor In order to fine tune FB's Wav2Vec2 XLSR model, these classes need to be accessible. Importing using the instructions in the current [blog post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) won't work, i.e. using `from transformers import Wav2Vec2CTCTokenizer` will fail. This PR fixes that. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten, I'd appreciate it if you could give this a look. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10697/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10697", "html_url": "https://github.com/huggingface/transformers/pull/10697", "diff_url": "https://github.com/huggingface/transformers/pull/10697.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10697.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10696/comments
https://api.github.com/repos/huggingface/transformers/issues/10696/events
https://github.com/huggingface/transformers/issues/10696
830,802,513
MDU6SXNzdWU4MzA4MDI1MTM=
10,696
OSerror, when loading 'wav2vec2-large-xlsr-53' Model of Wav2vec2
{ "login": "LifaSun", "id": 6188893, "node_id": "MDQ6VXNlcjYxODg4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LifaSun", "html_url": "https://github.com/LifaSun", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "repos_url": "https://api.github.com/users/LifaSun/repos", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The model doesn't contains the tokenizer and preprocessing files.\r\nCheckout this notebook:\r\nhttps://huggingface.co/blog/fine-tune-xlsr-wav2vec2\r\n\r\nTo build your own vocab etc", "@flozi00 Thank you! \r\n\r\n> The model doesn't contains the tokenizer and preprocessing files.\r\n> Checkout this notebook:\r\n> https://huggingface.co/blog/fine-tune-xlsr-wav2vec2\r\n> \r\n> To build your own vocab etc\r\n\r\n", "Forget about `transformers.AutoProcessor`. This class is used to load a general `processor` model. By call the constructor of this class you have to submit `feature_extractor` and `tokenizer`, however `Wav2Vec2` just extract the features from raw speech data. Then, there is no `tokenizer` has been defined for it. To load the `processor` you can use `transformers.Wav2Vec2FeatureExtractor` as follow:\r\n\r\n```\r\nfrom transformers import Wav2Vec2FeatureExtractor\r\n\r\nprocessor = Wav2Vec2FeatureExtractor.from_pretrained('facebook/wav2vec2-large-xlsr-53')\r\n```", "> Forget about `transformers.AutoProcessor`. This class is used to load a general `processor` model. By call the constructor of this class you have to submit `feature_extractor` and `tokenizer`, however `Wav2Vec2` just extract the features from raw speech data. Then, there is no `tokenizer` has been defined for it. To load the `processor` you can use `transformers.Wav2Vec2FeatureExtractor` as follow:\r\n> \r\n> ```\r\n> from transformers import Wav2Vec2FeatureExtractor\r\n> \r\n> processor = Wav2Vec2FeatureExtractor.from_pretrained('facebook/wav2vec2-large-xlsr-53')\r\n> ```\r\n\r\nUsing this approach, I got segmentation fault on the same wav2vec2-large-xlsr-53 model.\r\nOutput:\r\n```bash\r\nSome weights of the model checkpoint at facebook/wav2vec2-large-xlsr-53 were not used when initializing Wav2Vec2Model: ['quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'quantizer.codevectors', 'quantizer.weight_proj.weight', 'project_hid.bias', 'project_hid.weight']\r\n- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nProcessing 24393.wav\r\nSegmentation fault (core dumped)\r\n```" ]
1,615
1,661
1,615
NONE
null
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-29-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.0 (False) - Tensorflow version (GPU?): not installed (NA) @patrickvonplaten ## Information Model I am using Wav2vec2.0: The problem arises when using: Scipts: import soundfile as sf import torch from transformers import AutoTokenizer, AutoModel,Wav2Vec2ForCTC, Wav2Vec2Tokenizer tokenizer4 = AutoTokenizer.from_pretrained("facebook/wav2vec2-large-xlsr-53") model4 = AutoModel.from_pretrained("facebook/wav2vec2-large-xlsr-53") OSError: OSError: Can't load tokenizer for 'facebook/wav2vec2-large-xlsr-53'. Make sure that: - 'facebook/wav2vec2-large-xlsr-53' is a correct model identifier listed on 'https://huggingface.co/models' - or 'facebook/wav2vec2-large-xlsr-53' is the correct path to a directory containing relevant tokenizer files The tasks I am working on is: * an official wav2vec task: facebook/wav2vec2-large-xlsr-53 ## To reproduce Steps to reproduce the behavior: Follow the instructions https://huggingface.co/facebook/wav2vec2-large-xlsr-53 ## Expected behavior I try to use xlsr model as the pre-trained model to finetune my own ASR model, but the xlsr model, especially tokenizer, can't be loaded smoothly. Could you tell me how to modify it? Thank you very much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10696/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10695/comments
https://api.github.com/repos/huggingface/transformers/issues/10695/events
https://github.com/huggingface/transformers/pull/10695
830,768,393
MDExOlB1bGxSZXF1ZXN0NTkyMjYwODk1
10,695
Merge from huggingface/transformer master
{ "login": "SherlockNoMad", "id": 9906745, "node_id": "MDQ6VXNlcjk5MDY3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/9906745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SherlockNoMad", "html_url": "https://github.com/SherlockNoMad", "followers_url": "https://api.github.com/users/SherlockNoMad/followers", "following_url": "https://api.github.com/users/SherlockNoMad/following{/other_user}", "gists_url": "https://api.github.com/users/SherlockNoMad/gists{/gist_id}", "starred_url": "https://api.github.com/users/SherlockNoMad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SherlockNoMad/subscriptions", "organizations_url": "https://api.github.com/users/SherlockNoMad/orgs", "repos_url": "https://api.github.com/users/SherlockNoMad/repos", "events_url": "https://api.github.com/users/SherlockNoMad/events{/privacy}", "received_events_url": "https://api.github.com/users/SherlockNoMad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10695/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10695", "html_url": "https://github.com/huggingface/transformers/pull/10695", "diff_url": "https://github.com/huggingface/transformers/pull/10695.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10695.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10694/comments
https://api.github.com/repos/huggingface/transformers/issues/10694/events
https://github.com/huggingface/transformers/pull/10694
830,761,739
MDExOlB1bGxSZXF1ZXN0NTkyMjU1NDE5
10,694
[Wav2Vec2] Fix documentation inaccuracy
{ "login": "MikeG112", "id": 58539344, "node_id": "MDQ6VXNlcjU4NTM5MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/58539344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeG112", "html_url": "https://github.com/MikeG112", "followers_url": "https://api.github.com/users/MikeG112/followers", "following_url": "https://api.github.com/users/MikeG112/following{/other_user}", "gists_url": "https://api.github.com/users/MikeG112/gists{/gist_id}", "starred_url": "https://api.github.com/users/MikeG112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MikeG112/subscriptions", "organizations_url": "https://api.github.com/users/MikeG112/orgs", "repos_url": "https://api.github.com/users/MikeG112/repos", "events_url": "https://api.github.com/users/MikeG112/events{/privacy}", "received_events_url": "https://api.github.com/users/MikeG112/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great! Thanks a lot for correcting those :-)\r\n", "Hey @MikeG112, sorry could you run `make style` once to fix the code quality issue? Then we can merge :-)", "Hey @patrickvonplaten, absolutely, I added the changes made by `make style`. Thanks for the review and the wav2vec2 implementation :)" ]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? This PR resolves two Wav2Vec2 documentation statements that I believe are typos. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Based on revision history, I assume @patrickvonplaten is an appropriate reviewer. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10694/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10694/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10694", "html_url": "https://github.com/huggingface/transformers/pull/10694", "diff_url": "https://github.com/huggingface/transformers/pull/10694.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10694.patch", "merged_at": 1615828277000 }
https://api.github.com/repos/huggingface/transformers/issues/10693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10693/comments
https://api.github.com/repos/huggingface/transformers/issues/10693/events
https://github.com/huggingface/transformers/issues/10693
830,758,367
MDU6SXNzdWU4MzA3NTgzNjc=
10,693
mBART Large-50 MMT provides incorrect translation when the source and target language are the same
{ "login": "xhluca", "id": 21180505, "node_id": "MDQ6VXNlcjIxMTgwNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xhluca", "html_url": "https://github.com/xhluca", "followers_url": "https://api.github.com/users/xhluca/followers", "following_url": "https://api.github.com/users/xhluca/following{/other_user}", "gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}", "starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xhluca/subscriptions", "organizations_url": "https://api.github.com/users/xhluca/orgs", "repos_url": "https://api.github.com/users/xhluca/repos", "events_url": "https://api.github.com/users/xhluca/events{/privacy}", "received_events_url": "https://api.github.com/users/xhluca/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@patil-suraj Was this resolved? If so I'll close this issue", "HI @xhlulu \r\n\r\nI don't think if this is an issue with the model implementation, also the model is not really expected to do well on paraphrasing (which English-English), I've seen few other issues where models output text in the wrong language but it's the same with original model in `fairseq` as well. From my experience, multilingual models tend to do this in few cases.", "Thanks, that makes sense! Glad you clarified it 😊" ]
1,615
1,618
1,618
CONTRIBUTOR
null
mBART Large-50 MMT provides incorrect translation when the source and target language are the same, e.g. when translating from "en_XX" to "en_XX" ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.0+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): `facebook/mbart-large-50-one-to-many-mmt` The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_en = "The head of the United Nations says there is no military solution in Syria" model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX") model_inputs = tokenizer(article_en, return_tensors="pt") generated_tokens = model.generate( **model_inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"] ) decoded = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(decoded) ``` Returns: ``` ['Şeful Naţiunilor Unite declară că nu există o soluţie militară în Siria'] ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Should return something in english, preferably the same content as the original input
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10693/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10692/comments
https://api.github.com/repos/huggingface/transformers/issues/10692/events
https://github.com/huggingface/transformers/pull/10692
830,706,839
MDExOlB1bGxSZXF1ZXN0NTkyMjAzNzY5
10,692
Add RemBERT model code to huggingface
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "@LysandreJik I've been mostly following: https://huggingface.co/transformers/add_new_model.html so far.\r\nTrying to add the tokenizer. I think I am good for the slow one but not sure what to do for the fast one, in particular how to generate the tokenizer.json style files (e.g: https://huggingface.co/t5-small/resolve/main/tokenizer.json ). Do you have any pointers to that?\r\n\r\nI also see that the doc mentions that there is no fast version for sentencepiece, which this model uses. Is that the case given T5 seems to have one?\r\n\r\nEdit: Seem to have found a way to add a FastTokenizer version, doc still seems out of sync.", "For the TF code, I'm struggling a bit to initialize an output embedding layer in `TFRemBertLMPredictionHead()` that interacts well with `_resize_token_embeddings`. Welcoming any suggestions as what the right approach there is.", "@LysandreJik : They are still some things to iron out but I think this is ready for a first look.\r\n\r\nWhat is missing:\r\n- Model hub upload \r\n- Minor discrepancy between model and original tf implementation\r\n\r\n\r\nWhat I'd like some input on:\r\n- I'm having some issue on the `TFRemBertLMPredictionHead` implementation. I'd like to initialize a new projection from hidden_size to vocab_size (embeddings are decoupled) but I'm struggling to find how to make my implementation compatible with all the `get_bias`, `set_bias` details so that it's `resize_embeddings` friendly. Any chance you could help here? This is the culprit for tests failing AFAICT.\r\n- Model hub upload: should this be done on top level (how) or on the Google org model hub?\r\n- I'm finding a discrepancy between this implementation and the original tf one. Results are exactly equal up to the first hidden layer (so embeddings and upprojection). On the first layer it differs but by small amounts (~0.002), difference eventually increases up to 0.007. Any idea what are common culprits here? This is just the standard BERT model and differences are small so maybe numerical stability?\r\n", "> Model hub upload: should this be done on top level (how) or on the Google org model hub?\r\n\r\nToplevel models were for legacy (historical) integrations and we now namespace all models. If this work was conducted at Google yes google is the right namespace! Do you want us to add you to the `google` org?", "> A difference of *e-3 doesn't look too bad, but looking at the integration test you have provided, it seems that the difference is noticeable. Is it possible that a bias is missing, or something to do with attention masks?\r\n\r\nNot impossible but given the transformer section is simply Bert I doubt it. Also does seem like the results would change more.\r\n \r\n> If it proves impossible to get the two implementations closer to each other, then we'll rely on a fine-tuning tests: if we can obtain similar results on a same dataset with the two implementations, then we'll be good to go.\r\n\r\nI've tried to do that for a bit, unfortunately hard to fine-tune this model on a colab on XNLI (training gets interrupted too early on). Will try to see if I can get a better finetuning setup.\r\n\r\n\r\n", "> > Model hub upload: should this be done on top level (how) or on the Google org model hub?\r\n> \r\n> Toplevel models were for legacy (historical) integrations and we now namespace all models. If this work was conducted at Google yes google is the right namespace! 
Do you want us to add you to the `google` org?\r\n\r\nThat would be helpful, though I'm no longer affiliated with Google so not sure what the usual policy is there. If it is ok that will be easier than having to send the checkpoints to @hwchung so he uploads them.", "> That would be helpful, though I'm no longer affiliated with Google so not sure what the usual policy is there.\r\n\r\nUltimately the org admins should decide, but for now I think it's perfectly acceptable if you're a part of the org. I added you manually.", "@Iwontbecreative I opened a PR on your branch that should fix all the failing tests here: https://github.com/Iwontbecreative/transformers/pull/1\r\n\r\nI've separated each test suite (TF, style, docs) in three commits if you want to have a look at smaller portions at a time.", "Thanks Lysandre. Have not forgotten about this issue, just need one more approval from Google to open source the checkpoint so waiting for this.", "Sure @Iwontbecreative, let us know when all is good!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Reopen: this is still in progress, just got the open sourcing done from google last Friday. Travelling for a few days and then can tackle this. See: https://github.com/google-research/google-research/tree/master/rembert", "That's great news @Iwontbecreative!", "(Finally) some updates on this now that the Google open sourcing is done.\r\n\r\n* Updated the PR for the lates transformers\r\n* Included @LysandreJik's change (manually instead of merging your PR since I messed up the order, sorry about that)\r\n* Uploaded the model to the model hub under `iwontbecreative/rembert`\r\n\r\nMain issue that still exists: the discrepancy between my implementation and the tf one. Welcoming ideas on this one. The code should now be easier for everyone to test now that the tf version is available and mine is uploaded to the model hub.\r\n\r\n@LysandreJik think this is ready for another look. Thanks for the patience here!\r\n", "Welcome back! We'll take a look shortly. Do you have an example of code to run to quickly test this version with the TF one?", "Sadly the tensorflow code to run the model is not available externally. 
\r\nI tried to replicate for some time this morning but it is hard because it relies on https://github.com/google-research/bert which has dependencies that are not available on pip anymore...\r\n\r\nThe main change to the modelling code is here:\r\nhttps://github.com/google-research/bert/blob/master/modeling.py#L813-L823\r\nneeding to be replaced with:\r\nhttps://github.com/google-research/albert/blob/master/modeling.py#L1083-L1088\r\non the tokenization front, it is mainly replacing `BertTokenizer` with `AlbertTokenizer`\r\n\r\nI do however have example inputs and outputs run by my coauthor:\r\n\r\n### Model outputs\r\nExample modelling outputs at several layers for different input_ids:\r\nhttps://pastebin.com/t9bPFmeM \r\nThis is the `[batch, length, :3]` section of the `[batch, length, hidden]` outputs.\r\n\r\n### Tokenization outputs\r\nhttps://pastebin.com/j6D6YE1e", "Fine-tuning was, here is what I was able to run, comnparing performance on XNLI:\r\n\r\nhttps://docs.google.com/spreadsheets/d/1gWWSLo7XxEZkXpX272tQoZBXTgs96IFvh-fwqVqihM0/edit#gid=0\r\n\r\nPerformance matches in English but does seem to be lower on other languages. We used more hyperparam tuning at Google but I do not think that explains the whole difference for those languages. I think there might be a subtle difference that is both causing the results to differ slightly and the worse fine-tuning outcomes. The model is still much better than random so most of it should be there.", "Performance does look pretty similar, and good enough for me to merge it.\r\n\r\nThere are a few `# Copied from` statements missing though as said in my previous message, in both the PyTorch and TensorFlow implementations. Do you mind adding them? If you're lacking time let me know and I'll do it myself.", "Hi @LysandreJik \r\n\r\n- Added copy statements\r\n- Merged with last master\r\n- Uploaded model to google org\r\n\r\nSeems like it is mostly ready, though tests fail at the `utils/check_copies.py` stage of `make quality`.\r\nI am actually not sure what the issue is in this instance, any chance you could help investigate/fix/merge after?", "Actually, managed to find the issue\r\n\r\n`utils/check_copies.py` crashes without a helpful error message if the \"Copied from\" statement is before the decorator. I was just overeager with my copied from statements.\r\n\r\nAlso renamed rembert-large to rembert since this is the only version we are open-sourcing at this time.\r\n\r\nEdit: Not sure why the model templates check is failing, but think this should be ready for merge with one last review. ", "Fantastic, thanks a lot @Iwontbecreative! I'll take a final look, fix the model templates issue and ping another reviewer.", "@patrickvonplaten Thanks for the helpful feedback. Incorporated most of it. See the comment on possible needed changes to the cookiecutter templates to address on of your comments in the future.\r\nFor the discrepancy in results, see my answer above.\r\n\r\n@sgugger Regarding older template: Yes, this PR ended up being delayed due to slow open-sourcing process at Google, so the templates were a bit out of date. Thanks for catching most of the mistakes.\r\n\r\n", "Hi @LysandreJik, any last remaining steps before this can be merged? Would like to get this in to avoid further rebases if possible. 
", "I think this can be merged - thanks for your effort @Iwontbecreative, fantastic addition!", "I'm not entirely sure why there was 88 authors involved or 250 commits squashed into a single one - but I did verify only your changes were merged.\r\n\r\nCould you let me know how you handled the merge/rebasing of this branch so that I may understand what happened w.r.t the number of commits included?", "I think I just merged the master's changes into my branch to ensure it was up to date with upstream. Maybe I needed to rebase?", "Hi @Iwontbecreative thanks for adding the RemBERT model! Do you have a list of the 110 languages used in the pretraining of the model?", "Sure, here's the list:\r\n\r\n['af', 'am', 'ar', 'az', 'be', 'bg', 'bg-Latn', 'bn', 'bs', 'ca', 'ceb', 'co', 'cs', 'cy', 'da', 'de', 'el', 'el-Latn', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'haw', 'hi', 'hi-Latn', 'hmn', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'iw', 'ja', 'ja-Latn', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'lv', 'mg', 'mi', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'ne', 'nl', 'no', 'ny', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'ru-Latn', 'sd', 'si', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'st', 'su', 'sv', 'sw', 'ta', 'te', 'tg', 'th', 'tr', 'uk', 'ur', 'uz', 'vi', 'xh', 'yi', 'yo', 'zh', 'zh-Hans', 'zh-Hant', 'zh-Latn', 'zu']\r\n\r\ncf https://github.com/google-research/google-research/tree/master/rembert\r\n" ]
1,615
1,628
1,627
CONTRIBUTOR
null
Add RemBERT model to Huggingface ( https://arxiv.org/abs/2010.12821 ). This adds code to support the RemBERT model in Huggingface. In terms of implementation, this is roughly a scaled up version of mBERT with ALBERT-like factorized embeddings and tokenizer. Still needs to be done: - [x] Check results validity - [x] Upload model to model hub - [x] FastTokenizer version - [x] More testing - [x] TF code Fixes #9711 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik seems appropriate here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10692", "html_url": "https://github.com/huggingface/transformers/pull/10692", "diff_url": "https://github.com/huggingface/transformers/pull/10692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10692.patch", "merged_at": 1627140703000 }
https://api.github.com/repos/huggingface/transformers/issues/10691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10691/comments
https://api.github.com/repos/huggingface/transformers/issues/10691/events
https://github.com/huggingface/transformers/issues/10691
830,377,713
MDU6SXNzdWU4MzAzNzc3MTM=
10,691
Naming convention for (pytorch) checkpoints broken?
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "I think I might have come up with a bandaid solution for now. If the output dir exists and overwrite output dir flag is not set, load the configuration from the output dir and resume training from `pytorch_model.bin`. I'm going to give this a try, I think it's going to work. ", "Yes, as I expected, it worked. I modified the sample scripts as follows:\r\n\r\n```\r\n if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\r\n # last_checkpoint = get_last_checkpoint(training_args.output_dir)\r\n # check there is a WEIGHTS_NAME model in the output directory and use that as the last_checkpoint\r\n if os.path.isfile(os.path.join(training_args.output_dir, WEIGHTS_NAME)):\r\n # use it as checkpoint\r\n last_checkpoint = training_args.output_dir\r\n```\r\n\r\nThe ramification of this change is that the checkpoint is going to be replaced upon training completion (which so happens is what I needed). \r\n\r\nI still think the checkpoint naming conventions should be reconciled (re: WEIGHTS_NAME and PREFIX_CHECKPOINT_DIR) so I'll leave this feature request open. ", "Nothing has changed in the way checkpoints are named since version 2 at least, e.g. the checkpoints are saved in `args.output_dir/checkpoin-xxx` where xxx is the number of training steps elapsed since the beginning of training.\r\n\r\nThe change you are suggesting would remove the ability for people to resume training from the last checkpoint saved which is not something we want. If you want to start your training from a specific saved model, you can pass along \r\n`--model_name_or_path path_to_folder_with_saved_model`.", "@sgugger Thanks for the reply. I think I wanted the behavior that you're talking about (not what I ended up doing, i.e., looking for a saved model directly in the output dir). \r\n\r\nBased on what you just said, I looked again at the code and there are two ways to save a checkpoint: `save_model` and `_save_checkpoint`. It so happens that the sample text classification scripts use `save_model` directly which does not create the `checkpoint-xxx` directory. Underneath, `_save_checkpoint` calls `save_model` but with the dir `checkpoint-xxx` which was the behavior that I wanted. \r\n\r\nBTW, what's the \"right\" way of saving models/checkpoints? the `_` in `_save_checkpoint` makes me believe it's supposed to be a utility function and there is another API function. \r\n\r\nSo I guess what's happening is the sample script calls `save_model` when the training ends (saving the `pytorch_model.bin` directly in the output dir) and that confused me. Bottom line: the current sample script for text classification can't resume training from the last checkpoint saved (because it doesn't \"save a checkpoint\" it \"saves a model\" https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py#L466).\r\n\r\nI completely agree with you that that is a behavior everybody wants, i.e., resuming training from last checkpoint. If you agree, I'll change this issue to read save checkpoint in sample text classification script.", "I think you are confusing:\r\n- saving checkpoints during training, done automatically by the `Trainer` every `save_steps` (unless you are using a different strategy)\r\nand\r\n- saving the final model, which is done at the end of training.", "Got it, thanks for clarifying the terminology! \r\n\r\nPS: It so happens that I needed a checkpoint to be saved at the end of training, now I understand how that's done. " ]
1,615
1,615
1,615
NONE
null
# 🚀 Feature request In previous versions of the library and sample files, checkpoints were saved with some naming convention that had `checkpoint` in the name file. Subsequent jobs could look in the output directory and check if any checkpoint is available first; if found, it would load the checkpoint and the corresponding config and continue training from where it left off; if not found, it would check for the model_path_or_name. I'm under the impression that this convention broke, from what I can tell. When using the utilities from the library, for pytorch models, the model is saved with the name `pytorch_model.bin` (WEIGHTS_NAME in file_utils.py) and when looking to load a checkpoint PREFIX_CHECKPOINT_DIR = "checkpoint" from trainer_utils.py is used. So it doesn't match and it starts training from scratch. One (local) way to fix this is to rewrite searching for a checkpoint instead of using the one in the library. Is there any other option that allows a pipeline of jobs without using different scripts (e.g., one script that loads the original pretrained bert model, for example, and all subsequent runs use a different script that point the model_path to the local path where the pytorch_model.bin is saved). I guess the feature request is to bring this feature back. One way to do it is to use command line args for checkpoint names instead of using hardcoded naming in the files. ## Motivation Cascading/pipelined training jobs: one job starts, takes a checkpoint, the next one picks up from the last checkpoint. The same script is used for either first or intermediate job in the pipeline.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10690/comments
https://api.github.com/repos/huggingface/transformers/issues/10690/events
https://github.com/huggingface/transformers/pull/10690
830,324,665
MDExOlB1bGxSZXF1ZXN0NTkxODU0MDU2
10,690
enable loading Mbart50Tokenizer with AutoTokenizer
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Very nice ! " ]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? Currently `MBart50Tokenizer`, `MBart50TokenizerFast` can't be loaded using `AutoTokenizer` because they use the `MBartConfig` which is associated with `MBartTokenizer`. This PR enables loading `MBart50Tokenizer(Fast)` by adding them to the `NO_CONFIG_TOKENIZER` list. I've also added the `tokenizer_type` argument in the respective models' config file on the hub. cc @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10690/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10690", "html_url": "https://github.com/huggingface/transformers/pull/10690", "diff_url": "https://github.com/huggingface/transformers/pull/10690.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10690.patch", "merged_at": 1615805437000 }
https://api.github.com/repos/huggingface/transformers/issues/10689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10689/comments
https://api.github.com/repos/huggingface/transformers/issues/10689/events
https://github.com/huggingface/transformers/pull/10689
830,193,387
MDExOlB1bGxSZXF1ZXN0NTkxNzQzMzk5
10,689
Fix mixed precision for TFGPT2LMHeadModel
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> This is not the proper way to proceed. Here you force the dtype to float32 which is not correct, because you might encounter conflicts with other values that are in float16.\r\n\r\nYes, you're right, I should not force the dtype to float32, maybe I should force it to float32 when using `mixed_precision`.\r\n\r\n> To get similar results, you have to increase the number of epochs. Usually, I get a similar model after multiplying the number of epochs by 2 or 2.5.\r\n\r\nBut in my sample: \r\nI always get 0.98 accuracy after 18 epochs without using the `mixed_precision`, \r\nbut sometimes the accuracy will stop at 0.7333 when using the `mixed_precision` policy, even I train it after 100 epochs.\r\n\r\nI think there must something wrong. I have updated my sample, have a look: https://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/mixed_precision_test.ipynb", "> Yes, you're right, I should not force the dtype to float32, maybe I should force it to float32 when using mixed_precision.\r\n\r\nNo! This would be even worse. Set the layer norm and the embeddings directly to `float32` is a better temporary fix.\r\n\r\n> I always get 0.98 accuracy after 18 epochs without using the mixed_precision, but sometimes the accuracy will stop at 0.7333 when using the mixed_precision policy, even I train it after 100 epochs.\r\n\r\nHaving a lower accuracy in mixed precision is normal, but not that much. How much do you get by setting the layer norm and the embeddings directly to `float32` in mixed precision?\r\n\r\nI cannot run any test for now as I don't have access to a computer so I cannot really use your Colab.", "> No! This would be even worse. Set the layer norm and the embeddings directly to `float32` is a better temporary fix.\r\n\r\nMy apologies, I didn't get it, do you mean the first commit is better? :joy:\r\n\r\n> Having a lower accuracy in mixed precision is normal, but not that much. How much do you get by setting the layer norm and the embeddings directly to `float32` in mixed precision?\r\n> \r\n> I cannot run any test for now as I don't have access to a computer so I cannot really use your Colab.\r\n\r\nSorry to disturb your rest time. I can get 0.99 accuracy after setting the layer norm and the embeddings directly to `float32` in mixed precision.\r\nHere's a partial screenshot:\r\n![image](https://user-images.githubusercontent.com/6883957/110977074-a9b0b880-839c-11eb-98d0-d006c926ce0c.png)\r\n", "> My apologies, I didn't get it, do you mean the first commit is better?\r\n\r\nYes.\r\n\r\n> Sorry to disturb your rest time. I can get 0.99 accuracy after setting the layer norm and the embeddings directly to float32 in mixed precision.\r\n\r\nThanks for the screenshot. Can you please revert your last commit then.", "> Thanks for the screenshot. Can you please revert your last commit then.\r\n\r\nSure!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@Rocketknight1\r\nHello, what should I do to help with this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,625
1,625
CONTRIBUTOR
null
Fixed the loss of precision when using mixed_precision. Not sure it's the right way to do this, correct me if it's wrong. related issue: https://github.com/huggingface/transformers/issues/8559#issuecomment-797528526 - gpt2: @patrickvonplaten, @LysandreJik - tensorflow: @jplu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10689/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10689", "html_url": "https://github.com/huggingface/transformers/pull/10689", "diff_url": "https://github.com/huggingface/transformers/pull/10689.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10689.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10688/comments
https://api.github.com/repos/huggingface/transformers/issues/10688/events
https://github.com/huggingface/transformers/pull/10688
830,184,068
MDExOlB1bGxSZXF1ZXN0NTkxNzM1NDc3
10,688
Adding required flags to non-default arguments in hf_argparser
{ "login": "Craigacp", "id": 729696, "node_id": "MDQ6VXNlcjcyOTY5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Craigacp", "html_url": "https://github.com/Craigacp", "followers_url": "https://api.github.com/users/Craigacp/followers", "following_url": "https://api.github.com/users/Craigacp/following{/other_user}", "gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions", "organizations_url": "https://api.github.com/users/Craigacp/orgs", "repos_url": "https://api.github.com/users/Craigacp/repos", "events_url": "https://api.github.com/users/Craigacp/events{/privacy}", "received_events_url": "https://api.github.com/users/Craigacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No problem. I can't see what test failed in CircleCI as it wants to bind to my Github account and access private things in my org. Is there any way to see the failures without letting it into my account?" ]
1,615
1,615
1,615
CONTRIBUTOR
null
Signed-off-by: Adam Pocock <[email protected]> # What does this PR do? Fixes #10677. I didn't update the docs as I think this is the intended behaviour, but I can do if you think this change would be unexpected. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Issue #10677. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10688/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10688", "html_url": "https://github.com/huggingface/transformers/pull/10688", "diff_url": "https://github.com/huggingface/transformers/pull/10688.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10688.patch", "merged_at": 1615814876000 }
https://api.github.com/repos/huggingface/transformers/issues/10687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10687/comments
https://api.github.com/repos/huggingface/transformers/issues/10687/events
https://github.com/huggingface/transformers/pull/10687
830,150,548
MDExOlB1bGxSZXF1ZXN0NTkxNzA2OTA4
10,687
Multiple fixes in SageMakerTrainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds quite a few fixes to the `SageMakerTrainer` to make sure the example scripts run fully. In particular it fixes: - saving made training hang forever - predict didn't work - evaluation required using `drop_last=True`, which is not something anyone wants. The goal is now to test that functionality a bit more before merging the `SageMakerTrainer` into the main `Trainer` (otherwise one can't use model parallelism in the seq2seq or QA examples). The plan is to have them merged in v4.5.0.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10687", "html_url": "https://github.com/huggingface/transformers/pull/10687", "diff_url": "https://github.com/huggingface/transformers/pull/10687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10687.patch", "merged_at": 1615814895000 }
https://api.github.com/repos/huggingface/transformers/issues/10686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10686/comments
https://api.github.com/repos/huggingface/transformers/issues/10686/events
https://github.com/huggingface/transformers/pull/10686
830,130,646
MDExOlB1bGxSZXF1ZXN0NTkxNjg5ODI3
10,686
fix backend tokenizer args override: key mismatch
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
[ "I'm talking with @n1t0 soon to get a better sense of where it might make sense as well yes! Will update then", "went looking for similar case with this regex `(\\[|\\()\"do_lower_case`, found only one more: 7e42461\r\n\r\nI think this is it, will move onto #10121 once this is merged", "There does seem to be an issue remaining though, as the tests in the suite currently fail with:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_tokenization_auto.py::AutoTokenizerTest::test_do_lower_case\r\n=== 1 failed, 5143 passed, 2103 skipped, 2207 warnings in 266.40s (0:04:26) ====\r\n```" ]
1,615
1,616
1,616
CONTRIBUTOR
null
# What does this PR do? Related to #10390 Turns out it was a simple key mismatch - leaving as draft for now just to see the results of the full test suites, but hopeful this will fix the main problem for the related issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10686", "html_url": "https://github.com/huggingface/transformers/pull/10686", "diff_url": "https://github.com/huggingface/transformers/pull/10686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10686.patch", "merged_at": 1616120025000 }
https://api.github.com/repos/huggingface/transformers/issues/10685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10685/comments
https://api.github.com/repos/huggingface/transformers/issues/10685/events
https://github.com/huggingface/transformers/pull/10685
830,125,709
MDExOlB1bGxSZXF1ZXN0NTkxNjg1NjI1
10,685
Distributed barrier before loading model
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds a distributed barrier before the final load when the option `load_best_model_at_end` is selected. This is because a process might get to that point before process 0 has finished saving the checkpoint that is re-loaded, which would result in an error. Fixes #10666
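A minimal sketch of the idea (a simplification, not the exact `Trainer` code): every process waits at a barrier until process 0 has finished writing the best checkpoint, and only then reloads it.
```python
import torch.distributed as dist

# Hypothetical simplification of the fix: synchronize all processes before the
# final load triggered by load_best_model_at_end.
if dist.is_available() and dist.is_initialized():
    dist.barrier()  # wait until process 0 has finished saving the checkpoint
# ... all processes can now safely reload the best checkpoint from disk
```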
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10685/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10685", "html_url": "https://github.com/huggingface/transformers/pull/10685", "diff_url": "https://github.com/huggingface/transformers/pull/10685.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10685.patch", "merged_at": 1615811295000 }
https://api.github.com/repos/huggingface/transformers/issues/10684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10684/comments
https://api.github.com/repos/huggingface/transformers/issues/10684/events
https://github.com/huggingface/transformers/issues/10684
830,082,535
MDU6SXNzdWU4MzAwODI1MzU=
10,684
Question answering: a couple of things after fine-tuning a model
{ "login": "LivingDeadCloud", "id": 22834605, "node_id": "MDQ6VXNlcjIyODM0NjA1", "avatar_url": "https://avatars.githubusercontent.com/u/22834605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LivingDeadCloud", "html_url": "https://github.com/LivingDeadCloud", "followers_url": "https://api.github.com/users/LivingDeadCloud/followers", "following_url": "https://api.github.com/users/LivingDeadCloud/following{/other_user}", "gists_url": "https://api.github.com/users/LivingDeadCloud/gists{/gist_id}", "starred_url": "https://api.github.com/users/LivingDeadCloud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LivingDeadCloud/subscriptions", "organizations_url": "https://api.github.com/users/LivingDeadCloud/orgs", "repos_url": "https://api.github.com/users/LivingDeadCloud/repos", "events_url": "https://api.github.com/users/LivingDeadCloud/events{/privacy}", "received_events_url": "https://api.github.com/users/LivingDeadCloud/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here's an answer to your questions:\r\n\r\n> 1. Now that I have this model, how can I use it to answer to a \"custom\" question on a specific context? I am assuming that I will need to preprocess the question+context in a way similar to the preprocessing of training dataset, but how can I exactly do that?\r\n\r\nYes, I've created a small Colab notebook to illustrate inference for question answering models: https://colab.research.google.com/drive/1F-4rWIDythF4B8hS6SdNx9x3h3ffg2zw?usp=sharing\r\n\r\nBtw, this is also explained in the [docs](https://huggingface.co/transformers/task_summary.html#extractive-question-answering).\r\n\r\n> 2\\. As final goal I will have to fine-tune an italian language model. I'm assuming that this depends on the value of `model_checkpoint`, so here I would have to select an italian pre-trained model (for example [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased)), is that correct? And if I want to use a multilanguage model, how do I specify the language I want to use?\r\n\r\nIf you want to fine-tune on an Italian dataset, then it's indeed advised to start from a pre-trained Italian model. If you want to use a multilanguage model, you don't need to specify any language you want to use, because it's a general-purpose model. Just make sure that Italian is one of the languages on which the multi-lingual model was pre-trained.\r\n\r\n> 3\\. (this may be a very dumb question) Can I fine-tune an already fine-tuned model (for example [this model](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa))? Would it make sense?\r\n\r\nYes you can, and it's maybe the best thing to do, because as this model is already fine-tuned on Italian questions, then it will already have reasonable performance out-of-the-box. You can just improve it a little more by fine-tuning on your specific dataset.\r\n\r\nBtw, it's advised to ask such questions on the [forum](https://discuss.huggingface.co/) rather than here, as the authors of HuggingFace like to keep Github issues for bugs and feature requests.\r\n\r\nCheers!", "Many thanks @NielsRogge , I discovered the existence of the forum 20 minutes after posting this, my bad :(\r\n\r\nI will use the forum now to ask some more things.\r\n\r\nThansk a lot!" ]
1,615
1,615
1,615
NONE
null
Hello everybody. First of all, I'm kinda new to HF and the transformers library, so I apologize if my questions may be trivial. I followed the very well explained guide provided [here](https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb) to fine-tune a pre-trained model (for the moment, I used the standard SQuAD dataset). The model has been fine-tuned correctly, saved and evaluated, giving, as expected, the following results: `{'exact_match': 76.80227057710502, 'f1': 84.96565168555021}` That said, here are my questions: 1. Now that I have this model, how can I use it to answer a "custom" question on a specific context? I am assuming that I will need to preprocess the question+context in a way similar to the preprocessing of the training dataset, but how exactly can I do that? I will have to use this model in another script, so having a function that gets as input (model, context, question) and gives me as output the predicted answer (possibly with its probability) would be great; is there some piece of code that does this? 2. As a final goal I will have to fine-tune an Italian language model. I'm assuming that this depends on the value of `model_checkpoint`, so here I would have to select an Italian pre-trained model (for example [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased)), is that correct? And if I want to use a multilanguage model, how do I specify the language I want to use? 3. (this may be a very dumb question) Can I fine-tune an already fine-tuned model (for example [this model](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa))? Would it make sense? I'm asking this because in the future I will likely be able to expand my training set, so I want to know if I have to restart from a pre-trained model or if I can fine-tune an already tuned model several times. Thanks a lot for your patience. Claudio
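For question 1, a rough sketch of inference with a fine-tuned extractive QA checkpoint using the `question-answering` pipeline (the model path and the question/context strings below are placeholders, not part of the original report):
```python
from transformers import pipeline

# "path/to/finetuned-model" stands for the directory the fine-tuned model and
# tokenizer were saved to.
qa = pipeline("question-answering", model="path/to/finetuned-model", tokenizer="path/to/finetuned-model")
result = qa(question="Who fine-tuned the model?", context="The model was fine-tuned by Claudio on SQuAD.")
print(result["answer"], result["score"])  # predicted answer span and its probability
```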
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10684/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10683/comments
https://api.github.com/repos/huggingface/transformers/issues/10683/events
https://github.com/huggingface/transformers/pull/10683
830,077,315
MDExOlB1bGxSZXF1ZXN0NTkxNjQzNzMw
10,683
Add util for deleting cached models programmatically
{ "login": "cdpierse", "id": 8831892, "node_id": "MDQ6VXNlcjg4MzE4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8831892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdpierse", "html_url": "https://github.com/cdpierse", "followers_url": "https://api.github.com/users/cdpierse/followers", "following_url": "https://api.github.com/users/cdpierse/following{/other_user}", "gists_url": "https://api.github.com/users/cdpierse/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdpierse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdpierse/subscriptions", "organizations_url": "https://api.github.com/users/cdpierse/orgs", "repos_url": "https://api.github.com/users/cdpierse/repos", "events_url": "https://api.github.com/users/cdpierse/events{/privacy}", "received_events_url": "https://api.github.com/users/cdpierse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This was util was discussed in #8803 and had two parts to it. The first was addressed by #8836 which allows for cached models' names and information to be retrieved programmatically. This PR builds on that and allows a cached model and its associated .lock and .json files to be deleted by passing in the unique model url returned by `file_utils.get_cached_models()`. This PR will only delete a model file and its associated metadata files, its worth noting that tokenizers and config metadata will be left behind as they have separate unique file identifiers to the model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @LysandreJik this is continuation of #8836 and #8803 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10683", "html_url": "https://github.com/huggingface/transformers/pull/10683", "diff_url": "https://github.com/huggingface/transformers/pull/10683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10683.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10682/comments
https://api.github.com/repos/huggingface/transformers/issues/10682/events
https://github.com/huggingface/transformers/issues/10682
830,057,757
MDU6SXNzdWU4MzAwNTc3NTc=
10,682
Token Classification: How to tokenize and align labels with overflow and stride?
{ "login": "oliverguhr", "id": 3495355, "node_id": "MDQ6VXNlcjM0OTUzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3495355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oliverguhr", "html_url": "https://github.com/oliverguhr", "followers_url": "https://api.github.com/users/oliverguhr/followers", "following_url": "https://api.github.com/users/oliverguhr/following{/other_user}", "gists_url": "https://api.github.com/users/oliverguhr/gists{/gist_id}", "starred_url": "https://api.github.com/users/oliverguhr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oliverguhr/subscriptions", "organizations_url": "https://api.github.com/users/oliverguhr/orgs", "repos_url": "https://api.github.com/users/oliverguhr/repos", "events_url": "https://api.github.com/users/oliverguhr/events{/privacy}", "received_events_url": "https://api.github.com/users/oliverguhr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nwould it be possible to ask this question on HuggingFace's [forum](https://discuss.huggingface.co/)? Some people like Sylvain (who created the tutorial you mention) are very active there, and are happy to help you. The authors like to keep Github issues for bugs caused by the Transformers library or feature requests.\r\n\r\nThanks!", "I moved this question to: https://discuss.huggingface.co/t/token-classification-how-to-tokenize-and-align-labels-with-overflow-and-stride/4353" ]
1,615
1,615
1,615
CONTRIBUTOR
null
Hello Huggingface, I try to solve a token classification task where the documents are longer than the model's max length. I modified the `tokenize_and_align_labels` function from [example token classification notebook](https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb). I set the tokenizer option `return_overflowing_tokens=True` and rewrote the function to map labels for the overflowing tokens: ```python tokenizer_settings = {'is_split_into_words':True,'return_offsets_mapping':True, 'padding':True, 'truncation':True, 'stride':0, 'max_length':tokenizer.model_max_length, 'return_overflowing_tokens':True} def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer(examples["tokens"], **tokenizer_settings) labels = [] for i,document in enumerate(tokenized_inputs.encodings): doc_encoded_labels = [] last_word_id = None for word_id in document.word_ids: if word_id == None: #or last_word_id == word_id: doc_encoded_labels.append(-100) else: document_id = tokenized_inputs.overflow_to_sample_mapping[i] label = examples[task][document_id][word_id] doc_encoded_labels.append(int(label)) last_word_id = word_id labels.append(doc_encoded_labels) tokenized_inputs["labels"] = labels return tokenized_inputs ``` Executing this code will result in an error: ``` exception has occurred: ArrowInvalid Column 5 named task1 expected length 820 but got length 30 ``` It looks like the input 30 examples can't be mapped to the 820 examples after the slicing. How can I solve this issue? ## Environment info Google Colab runing Code: https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb ### Who can help Library: - tokenizers: @LysandreJik ## Information Model I am using (Bert ): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official conll2003 task: * [x] my own task or dataset: ## To reproduce Steps to reproduce the behaviour: 1. Replace the tokenize_and_align_labels function with the function given above. 2. Add examples longer than max_length 3. run `tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)` cell.
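A common workaround for this kind of length mismatch (offered here as an assumption, not a confirmed fix) is to drop the original columns during the batched map, so the datasets library does not try to align the 30 input rows with the 820 output rows:
```python
# Sketch: removing the original columns allows the mapped dataset to have more
# rows (one per overflowing window) than the input dataset.
tokenized_datasets = datasets.map(
    tokenize_and_align_labels,
    batched=True,
    remove_columns=datasets["train"].column_names,
)
```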
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10682/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10681/comments
https://api.github.com/repos/huggingface/transformers/issues/10681/events
https://github.com/huggingface/transformers/pull/10681
830,044,261
MDExOlB1bGxSZXF1ZXN0NTkxNjE0OTE4
10,681
Tests run on Docker
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stas00 I hear you regarding the installation of dependencies. We are installing every dependency except the main one (PT/TF) on the GPU machine, so I would say most errors are caught.\r\nFurthermore, we are installing these exact dependencies on the CircleCI runs, so I believe we are testing that already on every commit.\r\n\r\nIf there is a scenario I am missing, please let me know and I will do my best to adjust.\r\n\r\nAll other comments have been adressed in the last three commits.", "Ah, ok, for some reason I thought that if we are using a docker image then we can skip wasting time and resources on installing the same things million times a day and just run the tests right away, and in that case only an occasional test that installs from scratch will be needed. But perhaps this further speed up can be done in some future iteration.\r\n\r\nNothing more from my side.\r\n\r\nThank you, @LysandreJik " ]
1,615
1,615
1,615
MEMBER
null
This PR updates the GPU-based tests to run on Docker images. This: - Simplifies maintenance - Makes horizontal scaling much simpler - Lets the environment be managed with a single line change of the Docker image (hello PyTorch 1.3-1.7 tests!) - Sets up a notification service to get alerted when scheduled tests fail. Co-authored-by: Morgan <[email protected]>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10681", "html_url": "https://github.com/huggingface/transformers/pull/10681", "diff_url": "https://github.com/huggingface/transformers/pull/10681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10681.patch", "merged_at": 1615843681000 }
https://api.github.com/repos/huggingface/transformers/issues/10680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10680/comments
https://api.github.com/repos/huggingface/transformers/issues/10680/events
https://github.com/huggingface/transformers/issues/10680
830,029,508
MDU6SXNzdWU4MzAwMjk1MDg=
10,680
[TFMarian] Slow integration tests are failing
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a very weird bug actually and can be reproduced the easiest ass follows:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import TFMarianMTModel, MarianMTModel, MarianTokenizer\r\n\r\ntokenizer = MarianTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\")\r\n\r\nmodel_pt = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\")\r\nmodel_tf = TFMarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\", from_pt=True)\r\n\r\nmodel_tf.save_pretrained(\"./\")\r\nmodel_tf = TFMarianMTModel.from_pretrained(\"./\")\r\n\r\ninput_str = \"My name is Wolfgang and I live in Berlin\"\r\ninput_str = \"Billi messu b'mod ġentili, Ġesù fejjaq raġel li kien milqut bil - marda kerha tal - ġdiem.\"\r\n\r\n\r\noutput_tokens_pt = model_pt.generate(tokenizer(input_str, return_tensors=\"pt\").input_ids)\r\noutput_tokens_tf = model_tf.generate(tokenizer(input_str, return_tensors=\"tf\").input_ids)\r\n\r\nprint(\"Pt:\", tokenizer.batch_decode(output_tokens_pt))\r\nprint(\"Tf:\", tokenizer.batch_decode(output_tokens_tf))\r\n```\r\n\r\nfails while commenting out the save and load lines works:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import TFMarianMTModel, MarianMTModel, MarianTokenizer\r\n\r\ntokenizer = MarianTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\")\r\n\r\nmodel_pt = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\")\r\nmodel_tf = TFMarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-mt-en\", from_pt=True)\r\n\r\ninput_str = \"My name is Wolfgang and I live in Berlin\"\r\ninput_str = \"Billi messu b'mod ġentili, Ġesù fejjaq raġel li kien milqut bil - marda kerha tal - ġdiem.\"\r\n\r\n\r\noutput_tokens_pt = model_pt.generate(tokenizer(input_str, return_tensors=\"pt\").input_ids)\r\noutput_tokens_tf = model_tf.generate(tokenizer(input_str, return_tensors=\"tf\").input_ids)\r\n\r\nprint(\"Pt:\", tokenizer.batch_decode(output_tokens_pt))\r\nprint(\"Tf:\", tokenizer.batch_decode(output_tokens_tf))\r\n```\r\n\r\nMost other TFMarian models work correctly. This is pretty weird though and will need more time for investigation (cc @patil-suraj for info)", "Thanks for your investigation!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,623
1,623
MEMBER
null
After having uploaded the TF weights of: - https://huggingface.co/Helsinki-NLP/opus-mt-mt-en/commit/552db365bf294f7a2604fadcedfca0ed5b29bd66 - https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/commit/137ef1a50f7a0eaf22a7d5685d07b66bb670ddd1 - https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE/commit/1854185e5a3183d8c73360b1cd53f63c2fb0ed46 and merged this PR: https://github.com/huggingface/transformers/pull/10664 the TFMarian slow integration tests are failing. This doesn't seem to be an easy issue and needs further investigation. cc @patrickvonplaten @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10680/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10680/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10679/comments
https://api.github.com/repos/huggingface/transformers/issues/10679/events
https://github.com/huggingface/transformers/pull/10679
830,022,443
MDExOlB1bGxSZXF1ZXN0NTkxNTk2MTQ2
10,679
[Tests] RAG
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
This PR shortens the RAG tests by simply reducing the batch size to 8. It's not ideal because RAG is a fairly complex model and IMO, it's good that we have such "big" integration tests. Maybe we should move those tests to a different `@require_large_gpu` decorator?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10679", "html_url": "https://github.com/huggingface/transformers/pull/10679", "diff_url": "https://github.com/huggingface/transformers/pull/10679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10679.patch", "merged_at": 1615792032000 }
https://api.github.com/repos/huggingface/transformers/issues/10678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10678/comments
https://api.github.com/repos/huggingface/transformers/issues/10678/events
https://github.com/huggingface/transformers/issues/10678
829,810,571
MDU6SXNzdWU4Mjk4MTA1NzE=
10,678
T5-base out of memory on one 2080 GPU with batchsize 4, sequence length 100
{ "login": "Arvid-pku", "id": 53811705, "node_id": "MDQ6VXNlcjUzODExNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/53811705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arvid-pku", "html_url": "https://github.com/Arvid-pku", "followers_url": "https://api.github.com/users/Arvid-pku/followers", "following_url": "https://api.github.com/users/Arvid-pku/following{/other_user}", "gists_url": "https://api.github.com/users/Arvid-pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arvid-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arvid-pku/subscriptions", "organizations_url": "https://api.github.com/users/Arvid-pku/orgs", "repos_url": "https://api.github.com/users/Arvid-pku/repos", "events_url": "https://api.github.com/users/Arvid-pku/events{/privacy}", "received_events_url": "https://api.github.com/users/Arvid-pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't think you should use `torch.cuda.empty_cache()`, as explained on [PyTorch's forum](https://discuss.pytorch.org/t/about-torch-cuda-empty-cache/34232/2), \"This function should not be used by the end-user except in very edge cases.\". \r\n\r\nAlso, you can set `truncation=True`, because currently you're only padding examples, but not truncating examples that are too long. \r\n\r\nBtw, it's better to ask training related questions which are not bugs caused by the Transformers library on the [forum](https://discuss.huggingface.co/) rather than here.", "@NielsRogge \r\nThanks for your help! It works!\r\nI use `torch.cuda.empty_cache()` because I have no idea but try it.\r\nAnd I will go to the website you said then.\r\nThank you again!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
I want to finetune T5 on totto dataset but failed. This is werid that it is OOM suddenly when it is training about 10% of one epoch. And before that it is normal. about 3500M. Very grateful for help! thx! here is my simple code: ``` import torch from transformers import T5Tokenizer, T5ForConditionalGeneration,Adafactor import pandas as pd tokenizer = T5Tokenizer.from_pretrained('../t5-base') model = T5ForConditionalGeneration.from_pretrained('../t5-base', return_dict=True) if torch.cuda.is_available(): dev = torch.device("cuda:0") print("Running on the GPU") else: dev = torch.device("cpu") print("Running on the CPU") model.to(dev) train_df=pd.read_csv('../totto_data/tt.csv', index_col=[0]) train_df=train_df.iloc[:50000,:] train_df=train_df.sample(frac = 1) optimizer = Adafactor(model.parameters(),lr=1e-3, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=False, scale_parameter=False, warmup_init=False) num_of_epochs = 6 batch_size=4 num_of_batches=len(train_df)//batch_size model.train() for epoch in range(1,num_of_epochs+1): print('Running epoch: {}'.format(epoch)) running_loss=0 print(epoch) for i in range(num_of_batches): inputbatch=[] labelbatch=[] if i % 1000==0: print(i/num_of_batches) new_df=train_df[i*batch_size:i*batch_size+batch_size] for indx,row in new_df.iterrows(): input = row['input_text']+'</s>' labels = row['target_text']+'</s>' inputbatch.append(input) labelbatch.append(labels) inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True,max_length=100,return_tensors='pt')["input_ids"] labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True,max_length=100,return_tensors="pt") ["input_ids"] inputbatch=inputbatch.to(dev) labelbatch=labelbatch.to(dev) # clear out the gradients of all Variables optimizer.zero_grad() # Forward propogation outputs = model(input_ids=inputbatch, labels=labelbatch) loss = outputs.loss loss_num=loss.item() logits = outputs.logits running_loss+=loss_num # calculating the gradients loss.backward() #updating the params optimizer.step() torch.cuda.empty_cache() running_loss=running_loss/int(num_of_batches) print('Epoch: {} , Running loss: {}'.format(epoch,running_loss)) torch.save(model.state_dict(),'./finetune/pytoch_model.bin'+str(epoch+1)) ``` and this is my python libraries: backcall 0.2.0 backports.functools-lru-cache 1.6.1 certifi 2020.12.5 chardet 4.0.0 click 7.1.2 decorator 4.4.2 filelock 3.0.12 idna 2.10 importlib-metadata 3.7.2 ipykernel 5.5.0 ipython 7.21.0 ipython-genutils 0.2.0 jedi 0.18.0 joblib 1.0.1 jsonlines 2.0.0 jupyter-client 6.1.11 jupyter-core 4.7.1 mkl-fft 1.3.0 mkl-random 1.2.0 mkl-service 2.3.0 numpy 1.19.2 olefile 0.46 packaging 20.9 pandas 1.2.3 parso 0.8.1 pexpect 4.8.0 pickleshare 0.7.5 Pillow 8.1.2 pip 21.0.1 prompt-toolkit 3.0.16 ptyprocess 0.7.0 Pygments 2.8.1 pyparsing 2.4.7 python-dateutil 2.8.1 pytz 2021.1 pyzmq 22.0.3 regex 2020.11.13 requests 2.25.1 sacremoses 0.0.43 sentencepiece 0.1.95 setuptools 49.6.0.post20210108 six 1.15.0 tokenizers 0.10.1 torch 1.8.0 torchaudio 0.8.0a0+a751e1d torchvision 0.9.0 tornado 6.1 tqdm 4.59.0 traitlets 5.0.5 transformers 4.3.3 typing-extensions 3.7.4.3 urllib3 1.26.3 wcwidth 0.2.5 wheel 0.36.2 zipp 3.4.1
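Following the advice in the replies, the fix boils down to enabling truncation in the tokenizer call (and dropping `torch.cuda.empty_cache()`); a sketch of the adjusted call:
```python
# With padding=True but truncation left off, max_length=100 is not enforced, so
# one unusually long example mid-epoch can produce a huge batch and OOM.
inputbatch = tokenizer.batch_encode_plus(
    inputbatch, padding=True, truncation=True, max_length=100, return_tensors="pt"
)["input_ids"]
labelbatch = tokenizer.batch_encode_plus(
    labelbatch, padding=True, truncation=True, max_length=100, return_tensors="pt"
)["input_ids"]
```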
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10677/comments
https://api.github.com/repos/huggingface/transformers/issues/10677/events
https://github.com/huggingface/transformers/issues/10677
829,705,584
MDU6SXNzdWU4Mjk3MDU1ODQ=
10,677
hf_argparser doesn't set the required flag on non-defaulted enums
{ "login": "Craigacp", "id": 729696, "node_id": "MDQ6VXNlcjcyOTY5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Craigacp", "html_url": "https://github.com/Craigacp", "followers_url": "https://api.github.com/users/Craigacp/followers", "following_url": "https://api.github.com/users/Craigacp/following{/other_user}", "gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions", "organizations_url": "https://api.github.com/users/Craigacp/orgs", "repos_url": "https://api.github.com/users/Craigacp/repos", "events_url": "https://api.github.com/users/Craigacp/events{/privacy}", "received_events_url": "https://api.github.com/users/Craigacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "I agree with your assessment and your proposed fix, so by all means, please suggest a PR! Thanks!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.0-4.3.3 - Platform: macOS - Python version: 3.9 - PyTorch version (GPU?): n/a - Tensorflow version (GPU?): n/a - Using GPU in script?: n/a - Using distributed or parallel set-up in script?: n/a ### Who can help I'm not sure who the owner is of hf_argparser. ## Information We're using hf_argparser to parse our experiment config into dataclasses before training. ## To reproduce Steps to reproduce the behavior: 1. Add an enum argument without a default to a dataclass 2. Parse the command line arguments without supplying the enum argument 3. Should have raised an exception and printed the usage, instead defaults the value to `None`. ## Expected behavior It should raise an exception. The issue is on https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L100, the if statement should have an else which sets `kwargs["required"]=True`, the same way line [134](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L134) does. I can work up a patch if you agree this is an issue. I think it will also occur with anything that falls into [this branch](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L118) of the if too.
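A minimal sketch of the proposed change (the variable names are illustrative of the argument-adding loop in `HfArgumentParser`, not a verbatim patch):
```python
import dataclasses

# Inside the branch that handles Enum-typed (and similar) fields:
if field.default is not dataclasses.MISSING:
    kwargs["default"] = field.default
else:
    # No default was provided, so the argument must be supplied on the
    # command line instead of silently falling back to None.
    kwargs["required"] = True
```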
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10677/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10676/comments
https://api.github.com/repos/huggingface/transformers/issues/10676/events
https://github.com/huggingface/transformers/issues/10676
829,583,269
MDU6SXNzdWU4Mjk1ODMyNjk=
10,676
Improve the speed of adding tokens from added_tokens.json
{ "login": "cchen-dialpad", "id": 47165889, "node_id": "MDQ6VXNlcjQ3MTY1ODg5", "avatar_url": "https://avatars.githubusercontent.com/u/47165889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cchen-dialpad", "html_url": "https://github.com/cchen-dialpad", "followers_url": "https://api.github.com/users/cchen-dialpad/followers", "following_url": "https://api.github.com/users/cchen-dialpad/following{/other_user}", "gists_url": "https://api.github.com/users/cchen-dialpad/gists{/gist_id}", "starred_url": "https://api.github.com/users/cchen-dialpad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cchen-dialpad/subscriptions", "organizations_url": "https://api.github.com/users/cchen-dialpad/orgs", "repos_url": "https://api.github.com/users/cchen-dialpad/repos", "events_url": "https://api.github.com/users/cchen-dialpad/events{/privacy}", "received_events_url": "https://api.github.com/users/cchen-dialpad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq If this looks good to you, I can help create a PR. Thanks!", "Hi !\r\nThis looks like a great solution, feel free to open a PR ! :)", "Great, thanks! Just made a PR here: https://github.com/huggingface/transformers/pull/10780" ]
1,615
1,617
1,617
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ~~Make `PreTrainedTokenizer.unique_no_split_tokens` a type `Set[str]`, or use a temporary `Set[str]` variable for adding tokens from `added_tokens.json`.~~ (**Update**) Found one old PR related to this: https://github.com/huggingface/transformers/pull/6461 So instead of changing its type to `Set[str]`, it would be great to slightly modify the way how tokens are added to `PreTrainedTokenizer.unique_no_split_tokens`. Assume `unique_no_split_tokens` is always ordered and deduped during the token adding process, we could do something like below: ```python import bisect # add this function to transformers/src/transformers/tokenization_utils.py def _insert_one_token(token_list: List[str], new_token: str): # search if new_token is already in the ordered token_list insertion_idx = bisect.bisect_left(token_list, new_token) if insertion_idx < len(token_list) and token_list[ insertion_idx] == new_token: # new_token is in token_list, don't add return else: token_list.insert(insertion_idx, new_token) ``` Then at https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/tokenization_utils.py#L200-L205 Do something like this: ```python if special_tokens: if len(new_tokens) == 1: _insert_one_token(self.unique_no_split_tokens, new_tokens[0]) else: self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens))) else: # Or on the newly added tokens if len(tokens_to_add) == 1: _insert_one_token(self.unique_no_split_tokens, tokens_to_add[0]) else: self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add))) ``` My local tests show that this can reduce the token adding time from 9 mins (see details below) down to about 1 seconds. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Currently the `unique_no_split_tokens` is of type `List[str]`: https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/tokenization_utils.py#L123 This causes a performance issue if the number of added tokens from `added_tokens.json` is large, when running `tokenizer.from_pretrained()`. For example, it takes 9 minutes to add about 50000 new tokens on a MacBook Pro (2.6 GHz Intel Core i7). Specifically, the issue is mainly caused by: https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/tokenization_utils.py#L200-L205 Tokens in `added_tokens.json` are added **one by one,** https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/tokenization_utils_base.py#L1827-L1832 so the `set` operation will be repeated 50000 times, with more and more number of elements. ~~By switching to `Set[str]` or using a temporary `Set[str]` variable for token adding purpose, it would significantly lower the overhead when adding tokens from `added_tokens.json`, and also helps a few `in` presence checks in a few places.~~ (**Update**: Like pointed out earlier, we don't need to change to `Set[str]`, just using a more efficient way to insert one token into the ordered list should be good enough.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10676/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10675/comments
https://api.github.com/repos/huggingface/transformers/issues/10675/events
https://github.com/huggingface/transformers/issues/10675
829,574,581
MDU6SXNzdWU4Mjk1NzQ1ODE=
10,675
Reformer _pad_to_mult_of_chunk_length seems incorrect
{ "login": "fostiropoulos", "id": 4337024, "node_id": "MDQ6VXNlcjQzMzcwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fostiropoulos", "html_url": "https://github.com/fostiropoulos", "followers_url": "https://api.github.com/users/fostiropoulos/followers", "following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}", "gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}", "starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions", "organizations_url": "https://api.github.com/users/fostiropoulos/orgs", "repos_url": "https://api.github.com/users/fostiropoulos/repos", "events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}", "received_events_url": "https://api.github.com/users/fostiropoulos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @fostiropoulos,\r\n\r\nSorry to answer this late - could you provide a code example to reproduce the error? ", "@patrickvonplaten It would take me time to come up with a full working minimal example in colab. \r\n\r\nHowever you can try a model that you supply only the input_embeds and leave input_ids,position_ids to `None` during test time and when the sequence requires padding. \r\n\r\nThe positional encoding will use a ids (`torch.arange`) from index 0 (`start_idx_pos_encodings=0`) to `padded_sequence_length` for the padded ids. It should have been the start position of the end of the input embedding e.g. (`start_idx_pos_encodings=seq_len`)\r\n\r\nIt shouldn't affect the final results because the padded tokens are discarded at the end, but it is not the expected behavior. It would cause errors if padding happens outside of test time or the function is used elsewhere. ", "@patrickvonplaten does padding affect anything during sampling? e.g. the pad token being 0 vs 100 or any other random int. The casual attention / mask should only make it so that each token depends only on previously seen tokens (beyond the pad). Am I correct? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,621
1,621
NONE
null
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-136-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Original Author of the method: @patrickvonplaten ## Information Model I am using: Reformer The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Mnist ## Bug Description For input_ids,position_ids=None and inputs_embeds!=None _pad_to_mult_of_chunk_length produces unexpected results. When position_ids are None, the input embeds are padded with overlapping position_ids. Line [Link to method](https://github.com/huggingface/transformers/blame/90ecc29656ce37fdbe7279cf586511ed678c0cb7/src/transformers/models/reformer/modeling_reformer.py#L2176) Would produce ```padded_inputs_embeds``` with positional encoding for `[0,padding_length] ` ## Expected behavior `padded_input_embeds` should have a positional encoding in the range of [max_seq_length-padding_length, max_seq_length] [Link to method](https://github.com/huggingface/transformers/blame/90ecc29656ce37fdbe7279cf586511ed678c0cb7/src/transformers/models/reformer/modeling_reformer.py#L2176) Should be changed to: ``` padded_inputs_embeds = self.embeddings(padded_input_ids, position_ids,start_idx_pos_encodings=inputs_embeds.shape[-1]) ``` I am not sure to what extent this affects the model or attention mechanism or whether the effect is cancelled out by the masking mechanism. I can create a pull request if @patrickvonplaten approves?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10675/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10674/comments
https://api.github.com/repos/huggingface/transformers/issues/10674/events
https://github.com/huggingface/transformers/issues/10674
829,508,078
MDU6SXNzdWU4Mjk1MDgwNzg=
10,674
[trainer] loss = NaN with label_smoothing and full-fp16 eval
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "I am unsure what you want me to fix. Yes evaluation in FP16 is not as precise as evaluation in FP32 and results in NaNs more easily, in particular in the loss, that's precisely the reason I was reluctant to add the --fp16_full_eval option.", "well, first of all I'm reporting this behavior.\r\n\r\nSecondly, it looks like DeepSpeed Zero3 for now requires that we continue using the fp16 model for inference. We are discussing other possibilities but they aren't there yet. This is because of all the hooks that get installed into the model during training. But if we were to remove all those hooks suddenly the model won't fit into the gpu memory - if it was spread out over multiple gpus in the first place. \r\n\r\nWhich is why I added ` --fp16_full_eval` in first place. To enable fitting the model onto the gpu memory during inference if it fit during training (with fp32 master weights being offloaded to cpu). e.g. fitting t5-11b (45GB at fp32) into 40GB gpu. Can train with DeepSpeed zero2, so should be able to do inference too.\r\n\r\nFinally, according to DeepSpeed devs a model trained in mixed precision shouldn't need to be put back into fp32 and should provide similar results only slightly imprecise. \r\n\r\nwe will need a way to get the fp32 model out of DeepSpeed training https://github.com/microsoft/DeepSpeed/issues/800", "I understand all of that, and maybe a fine-tuned model in mixed precision will have better results that don't get any Nan losses on the evaluation dataset, but it only takes one sample of the whole evaluation dataset with a slightly bigger loss than usual to get one Nan that will then drive the whole evaluation loss to NaN. And with label smoothing enabled it only takes one log_probs over all the possibilities to get to that.\r\n\r\nNot sure it really matters since you still have proper metrics (the predictions are not driven to Nan, just the loss). Maybe we could try a flag to deactivate label smoothing when running evaluation, but then the loss wouldn't be comparable to the training loss, so not sure if it would really be useful either.\r\n", "Thank you for elucidating the situation and suggesting workarounds, @sgugger \r\n\r\nWould it be of a useful value if we approximated the nans with something that would lead to a non-nan loss? since NaN can come from a variety of combinations of inf/0 operations, do we know which one is it? and then perhaps pick a corresponding substitution that might lead to a sufficiently good estimate?\r\n\r\n", "Yes, we could maybe track when it arises and where it comes from exactly as a first step. Won't have time to dive into this in the near future, but if someone wants to tackle that issue to that effect and report here, that would be awesome!", "I'm interested in taking a stab on this!", "Awesome! Thank you, @vladdy\r\n\r\nJust be aware that the following PR should be merged shortly: https://github.com/huggingface/transformers/pull/10611\r\nand so the script in the reproduction line will most likely be renamed to `run_translation.py` and will slightly change the cl args.\r\n", "@vladdy, while you research this - it'd be great to understand the cause of NaNs - so if you discover which operation leads to it please do share. 
Thank you!", "Before falling asleep this idea came to me, tried it this morning and it worked:\r\n\r\n```\r\n--- a/src/transformers/trainer_pt_utils.py\r\n+++ b/src/transformers/trainer_pt_utils.py\r\n@@ -390,7 +390,9 @@ class LabelSmoother:\r\n\r\n def __call__(self, model_output, labels):\r\n logits = model_output[\"logits\"] if isinstance(model_output, dict) else model_output[0]\r\n+ #logits = logits.to(dtype=torch.float32)\r\n log_probs = -torch.nn.functional.log_softmax(logits, dim=-1)\r\n+ log_probs = log_probs.to(dtype=torch.float32)\r\n if labels.dim() == log_probs.dim() - 1:\r\n labels = labels.unsqueeze(-1)\r\n```\r\n\r\nBasically, flip `logits` or `log_probs` back to fp32 and the problem goes away.\r\n\r\nSo the problem here has nothing to do with full fp16 inference, but with how label smoothing is calculated.\r\n\r\nThe issue comes from:\r\n```\r\n smoothed_loss = log_probs.sum(dim=-1, keepdim=True)\r\n```\r\nin fp32, the return values are huge:\r\n```\r\n [ 863637.3750],\r\n [ 864242.0000],\r\n [ 865449.0000],\r\n [ 866092.9375],\r\n [ 867702.4375],\r\n```\r\nand in fp16 these turn `inf`.\r\n\r\nSo either:\r\n1. we do what I proposed on top of this comment, which will double the size of the `log_probs` tensor (4 times if we apply it to `logits`, rather than `log_probs`) . This will of course depend on the size of the dictionary and `target_max_len` - so say:\r\n```\r\nbs * max_len * dict_size * 2 more bytes\r\n32 * 128 * 32000 * 2 = 262MB - huge!\r\n```\r\n2. or we change the calculation to scale down huge numbers back to a numerical range where `sum` over fp16 numbers doesn't overflow. Note that `smoothed_loss` does another `sum` towards the end which would definitely blow things up.\r\n3. same as (1) but switch the calculations to `.cpu()` - a bit slower but no extra gpu memory will be required.\r\n\r\nOf course number 2 is a better solution since it doesn't require much more memory to solve this problem and will require a change in algorithm to avoid going into huge numbers.\r\n\r\nThe other question is: do we deal with the label smoother separately or do we have other parts which may be affected in which case we should change the logits back to fp32 when prediction has completed. But as explained above this will come at a large gpu memory cost.", "Yeah, I came to a similar conclusion regarding the cause and when I wanted to post an update on it, I saw @stas00's response above.\r\n\r\nIf I'm not wrong, Apex tried to solve [this compatibility issue between fp16 and losses](https://github.com/NVIDIA/apex/tree/a109f856840ebb3ff5578e0bddfc4cffd4b96ed0/apex/fp16_utils), but I'm not sure how much of that could be reused in addition to already stated options. @stas00, please let me know if you want to continue driving this and I'll try to find some other issue for my contribution.", "@vladdy, by all means please continue, I was just sharing what I have discovered and calculated that this won't be an efficient solution memory requirement-wise. And I now have a better understanding of where NaN came from.\r\n\r\nAs you're suggesting the most efficient generic solution would be around loss scaling. We are doing it already during the training, so this is just some of the same for label smoothing.\r\n\r\nBut we definitely don't want to depend on apex for this. 
\r\n\r\nI haven't looked closely but I think the idea is to scale the `log_probs` into a much smaller size, while ensuring that the scaled numbers and the sum of 30-60k elements remain within the dynamic range of fp16 (plus there is one more sum of sums at the end!). If we were to implement it in a non-vectorized way it'd be the simplest to create an fp32 variable and add the fp16 bits to it, so it won't overflow. It won't take any extra memory, but that won't be efficient speed wise.\r\n\r\nAnd of course perhaps you can think of other solutions. Anything that doesn't require an extra GPU memory to perform label smoothing is goodness.\r\n\r\nPerhaps pytorch has some ready-made solutions too...", "Any progress on this, @vladdy? I have one idea that may work, but would love to hear what you have discovered in your research.", "@stas00, I have not found anything better than switching to fp32 for that operation. The rest of the approaches appear to be more complicated or not as generic as I think we want them to be. What idea did you have in mind?", "Thank you for sharing the results of your research, @vladdy. So the need is the same, it's all about how to do it efficiently and not defeat the purpose of keeping things at fp16.\r\n\r\nMy discovery was switching to fp32 only for the aggregate and have it done by pytorch on the hardware level:\r\n```\r\ndiff --git a/src/transformers/trainer_pt_utils.py b/src/transformers/trainer_pt_utils.py\r\nindex ae8e24949..c2071733c 100644\r\n--- a/src/transformers/trainer_pt_utils.py\r\n+++ b/src/transformers/trainer_pt_utils.py\r\n@@ -399,7 +399,8 @@ class LabelSmoother:\r\n # will ignore them in any case.\r\n labels.clamp_min_(0)\r\n nll_loss = log_probs.gather(dim=-1, index=labels)\r\n- smoothed_loss = log_probs.sum(dim=-1, keepdim=True)\r\n+ # works for fp16 input tensor too, by internally upcasting it to fp32\r\n+ smoothed_loss = log_probs.sum(dim=-1, keepdim=True, dtype=torch.float32)\r\n\r\n nll_loss.masked_fill_(padding_mask, 0.0)\r\n smoothed_loss.masked_fill_(padding_mask, 0.0)\r\n```\r\nI was shocked that it took ~0 extra memory over pure fp16 solution (that is not even extra peak memory!) - that is it was doing the conversion to fp32 on the hardware level. At least that was so on my hardware - it might be not so on an older gpu that doesn't support fp16 natively. \r\n\r\nThis is pretty amazing that it can do that!\r\n\r\nI came to this idea while researching bfloat16 where its aggregate operations too require fp32 aggregates - so I thought why not try the same for our case and it seems to work. Similarly sometimes they use fp64 for fp32 inputs if they are too big, but I don't think we run into those things here.\r\n\r\nWhat do you think? \r\n", "I think, this simple solution makes sense to be applied as it is also generic enough to cover all the cases. I doubt it is possible to find a better approach within short time and it does not appear it is necessary to spend more time on this (at least, for now). Feel free to do the PR as you offered it!", "Dear @stas00 \r\nThank you very much for filing this issue. I am training mt5-small model and with deepspeed without label smoothing, I am getting NaNs, so far could not managed to fix it. I greatly appreciate your suggestions on this. if you think this can be appropriate I will open up a separate issue for mt5 model getting NaNs with deepspeed, and if not I follow this issue. 
Thank you very much", "@vladdy, thank you for doing your research and validating my suggestion.\r\n\r\n@dorost1234, the PR is here: https://github.com/huggingface/transformers/pull/10815 or you can just change it manually for your test - it's just one line.\r\nPlease let me know if it fixes your problem, if it's about eval_loss. If it doesn't, or it's about something else - then yes please open a separate issue. Based on your comments elsewhere the issue about mt5 and NaNs already exists, but not with deepspeed so definitely open one. Perhaps the Deepspeed team has some insights about this situation.\r\n\r\n", "Thank you so much @stas00 for the great response, I applied your PR and with deepspeed now mt5-small for me is not getting nan anymore, this is an incredible job you are doing, thanks a lot, I still getting nans with mt5-small with fp16, even after your PR, for this I made a separate issue here https://github.com/huggingface/transformers/issues/10819 I did not tag you since with deepspeed with your applied magic PR it is not getting nans so far, while still If you have time to give me an advice I would be really grateful. ", "@stas00 I tested mt5-small with run_translation.py model without this PR this also works fine without nans, if one does not use smoothing, with this PR this becomes much slower for me with deepspeed. is there a way to keep the speed as great as before?", "Please post the exact command lines that you're referring to.\r\n\r\nAs I wrote in https://github.com/huggingface/transformers/pull/10815 I definitely can see a 25% slowdown when enabling --fp16_full_eval and opened an issue about it https://github.com/huggingface/transformers/issues/10816 \r\n\r\nI don't see any speed difference with https://github.com/huggingface/transformers/pull/10815 w/o deepspeed, so once you show me what command line use then I can test.\r\n\r\nedit: Oh, I see you posted them in https://github.com/huggingface/transformers/issues/10819 - all is good then - I can test now." ]
1,615
1,616
1,616
CONTRIBUTOR
null
It looks like our `--label_smoothing_factor` Trainer's feature doesn't handle fp16 well. It's a problem with the deepspeed zero3 I'm integrating right now, since it evals in fp16, but also can be reproduced with the recently added `--fp16_full_eval` trainer option. To reproduce: ``` export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --fp16_full_eval ``` ``` ***** eval metrics ***** eval_bleu = 24.1257 eval_gen_len = 39.554 eval_loss = nan eval_mem_cpu_alloc_delta = 56MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = 116MB eval_mem_gpu_peaked_delta = 374MB eval_runtime = 25.3246 eval_samples = 500 eval_samples_per_second = 19.744 init_mem_cpu_alloc_delta = 2MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 0MB init_mem_gpu_peaked_delta = 0MB ``` If someone in the community would like to have a look at solving this puzzle, please refer to the discussion of this Issue. Basically, we would like to try to find a way to perform label smoothing under full fp16 while finding a way to handle NaNs so that the final loss is not a NaN. And for the reference value running the same script w/o `--fp16_full_eval` should give you the "golden" `eval_loss` - i.e. ideally it should be about the same with `--fp16_full_eval` (if possible that is). Thank you! @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10674/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10673/comments
https://api.github.com/repos/huggingface/transformers/issues/10673/events
https://github.com/huggingface/transformers/pull/10673
829,488,996
MDExOlB1bGxSZXF1ZXN0NTkxMTQ2MTQy
10,673
Add auto_wrap option in fairscale integration
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR adds support for the `auto_wrap` function added by fairscale to automatically wrap the model's modules in the `FSDP` container (necessary for ZeRO-DP3). cc @stas00 So you are informed for when you want to experiment more with fairscale.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10673/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10673/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10673", "html_url": "https://github.com/huggingface/transformers/pull/10673", "diff_url": "https://github.com/huggingface/transformers/pull/10673.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10673.patch", "merged_at": 1615553420000 }
https://api.github.com/repos/huggingface/transformers/issues/10672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10672/comments
https://api.github.com/repos/huggingface/transformers/issues/10672/events
https://github.com/huggingface/transformers/pull/10672
829,429,241
MDExOlB1bGxSZXF1ZXN0NTkxMDk2NTUw
10,672
fix typing error for HfArgumentParser for Optional[bool]
{ "login": "bfineran", "id": 11316925, "node_id": "MDQ6VXNlcjExMzE2OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/11316925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bfineran", "html_url": "https://github.com/bfineran", "followers_url": "https://api.github.com/users/bfineran/followers", "following_url": "https://api.github.com/users/bfineran/following{/other_user}", "gists_url": "https://api.github.com/users/bfineran/gists{/gist_id}", "starred_url": "https://api.github.com/users/bfineran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bfineran/subscriptions", "organizations_url": "https://api.github.com/users/bfineran/orgs", "repos_url": "https://api.github.com/users/bfineran/repos", "events_url": "https://api.github.com/users/bfineran/events{/privacy}", "received_events_url": "https://api.github.com/users/bfineran/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No, `Optional[bool]` are dealt with later on at [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L101) and need to stay wrapped until then.\r\n\r\nWithout more information on your error it's hard to help find the right fix.", "Hi @sugger, thank you for the quick response and for pointing that out. Sorry I did not catch it.\r\n\r\nI also see that you put a [tentative fix](https://github.com/huggingface/transformers/commit/fa1a8d102f273ee8118546f7b84133ab58032ac5) for this issue, but also just wanted to check to make sure that `Optional[bool]` values are caught by the expected if statement.\r\n\r\nLooking more into the issue, I'm actually seeing that `disable_tqdm` has its typing changed from `Optional[bool]` to `Union[bool, None]` during the `dataclasses.fields` parsing. So it is not getting caught by the line you reference and then the error occurs on the next if statement.\r\n\r\nI'm running Python 3.8 on ubuntu 18.04, maybe dataclasses parses optionals differently in other versions.\r\n\r\nPrintout of `disable_tqdm` fields value:\r\n```\r\n>>> [field for field in dataclasses.fields(TrainingArguments) if \"tqdm\" in field.name][0]\r\nField(name='disable_tqdm',type=typing.Union[bool, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object at 0x7f2e6e7ff1f0>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': 'Whether or not to disable the tqdm progress bars.'}),_field_type=_FIELD)\r\n```\r\n\r\ndataclasses parsing of generic optional bool type:\r\n```\r\n>>> @dataclass\r\n... class OptionalBool:\r\n... value: Optional[bool]\r\n...\r\n>>> dataclasses.fields(OptionalBool)\r\n(Field(name='value',type=typing.Union[bool, NoneType],default=<dataclasses._MISSING_TYPE object at 0x7f2e6e7ff1f0>,default_factory=<dataclasses._MISSING_TYPE object at 0x7f2e6e7ff1f0>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),_field_type=_FIELD),)\r\n```\r\n\r\nTo address this, would it be fine to change the `is Optional[bool]` check [here](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L101) to `==`?\r\n\r\n```\r\n>>> Optional[bool] is Union[bool, None]\r\nFalse\r\n>>> Optional[bool] == Union[bool, None]\r\nTrue\r\n```\r\n\r\nThanks again for looking into this so quickly.", "I think this change is acceptable, thanks! Trying to check what the failure in the test is and if it's spurious." ]
1,615
1,615
1,615
CONTRIBUTOR
null
`TrainingArguments` uses the `Optional[bool]` type for [a couple of arguments](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py#L443). I ran into the following error when using transformers v4.3.3 with python 3.8: `"TrainingArguments" TypeError: issubclass() arg 1 must be a class`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10672/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10672", "html_url": "https://github.com/huggingface/transformers/pull/10672", "diff_url": "https://github.com/huggingface/transformers/pull/10672.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10672.patch", "merged_at": 1615502574000 }
https://api.github.com/repos/huggingface/transformers/issues/10671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10671/comments
https://api.github.com/repos/huggingface/transformers/issues/10671/events
https://github.com/huggingface/transformers/pull/10671
829,420,155
MDExOlB1bGxSZXF1ZXN0NTkxMDg5MTA4
10,671
Fixes Pegasus tokenization tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
Non rectangular if padding is not set.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10671", "html_url": "https://github.com/huggingface/transformers/pull/10671", "diff_url": "https://github.com/huggingface/transformers/pull/10671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10671.patch", "merged_at": 1615487750000 }
https://api.github.com/repos/huggingface/transformers/issues/10670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10670/comments
https://api.github.com/repos/huggingface/transformers/issues/10670/events
https://github.com/huggingface/transformers/pull/10670
829,418,897
MDExOlB1bGxSZXF1ZXN0NTkxMDg4MTA0
10,670
Fix integration slow tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR fixes the following slow tests which are failing because of the change of behavior in the `Embeddings` layer in PyTorch 1.8. This is done by adding an attention mask to ignore the padding token and checking a slice that does not contain the padding hidden states. ``` tests/test_modeling_albert.py::AlbertModelIntegrationTest::test_inference_no_head_absolute_embedding tests/test_modeling_bert.py::BertModelIntegrationTest::test_inference_no_head_absolute_embedding tests/test_modeling_bert.py::BertModelIntegrationTest::test_inference_no_head_relative_embedding_key tests/test_modeling_bert.py::BertModelIntegrationTest::test_inference_no_head_relative_embedding_key_query tests/test_modeling_convbert.py::ConvBertModelIntegrationTest::test_inference_masked_lm tests/test_modeling_deberta.py::DebertaModelIntegrationTest::test_inference_no_head tests/test_modeling_deberta_v2.py::DebertaV2ModelIntegrationTest::test_inference_no_head tests/test_modeling_distilbert.py::DistilBertModelIntergrationTest::test_inference_no_head_absolute_embedding tests/test_modeling_electra.py::ElectraModelIntegrationTest::test_inference_no_head_absolute_embedding tests/test_modeling_squeezebert.py::SqueezeBertModelIntegrationTest::test_inference_classification_head ``` It also fixes ``` tests/test_modeling_mbart.py::MBartEnroIntegrationTest::test_enro_generate_batch ``` that was failing since the change in `prepare_seq2seq_batch`. For some reason a word is different but it was consistent in PyTorch 1.7/PyTorch 1.8 so I changed the desired target. @patil-suraj if you want to take a closer look, I'll leave it to you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10670/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10670", "html_url": "https://github.com/huggingface/transformers/pull/10670", "diff_url": "https://github.com/huggingface/transformers/pull/10670.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10670.patch", "merged_at": 1615488233000 }
https://api.github.com/repos/huggingface/transformers/issues/10669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10669/comments
https://api.github.com/repos/huggingface/transformers/issues/10669/events
https://github.com/huggingface/transformers/pull/10669
829,406,809
MDExOlB1bGxSZXF1ZXN0NTkxMDc3OTQx
10,669
MT5 integration test: adjust loss difference
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
@patrickvonplaten, this test didn't pass. If you can double-check that it has the expected outputs, that would be great. The difference I'm seeing on my machine is 1.1e-4, which is slightly higher than the value proposed here of 1e-4.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10669", "html_url": "https://github.com/huggingface/transformers/pull/10669", "diff_url": "https://github.com/huggingface/transformers/pull/10669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10669.patch", "merged_at": 1615529386000 }
https://api.github.com/repos/huggingface/transformers/issues/10668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10668/comments
https://api.github.com/repos/huggingface/transformers/issues/10668/events
https://github.com/huggingface/transformers/pull/10668
829,367,201
MDExOlB1bGxSZXF1ZXN0NTkxMDQzMzQ0
10,668
Add DeBERTa to MODEL_FOR_PRETRAINING_MAPPING
{ "login": "jeswan", "id": 57466294, "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeswan", "html_url": "https://github.com/jeswan", "followers_url": "https://api.github.com/users/jeswan/followers", "following_url": "https://api.github.com/users/jeswan/following{/other_user}", "gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeswan/subscriptions", "organizations_url": "https://api.github.com/users/jeswan/orgs", "repos_url": "https://api.github.com/users/jeswan/repos", "events_url": "https://api.github.com/users/jeswan/events{/privacy}", "received_events_url": "https://api.github.com/users/jeswan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Added!", "Very cool, thanks! Will merge once all the tests are green." ]
1,615
1,615
1,615
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR adds DebertaForMaskedLM to MODEL_FOR_PRETRAINING_MAPPING since DeBERTa is currently missing from this dict. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @patrickvonplaten @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10668/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10668", "html_url": "https://github.com/huggingface/transformers/pull/10668", "diff_url": "https://github.com/huggingface/transformers/pull/10668.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10668.patch", "merged_at": 1615489007000 }
https://api.github.com/repos/huggingface/transformers/issues/10667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10667/comments
https://api.github.com/repos/huggingface/transformers/issues/10667/events
https://github.com/huggingface/transformers/pull/10667
829,332,111
MDExOlB1bGxSZXF1ZXN0NTkxMDEzODA1
10,667
[S2T] fix example in docs
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
# What does this PR do? `attention_mask` should always be passed for the `S2T` model. This PR fixes the examples in the doc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10667", "html_url": "https://github.com/huggingface/transformers/pull/10667", "diff_url": "https://github.com/huggingface/transformers/pull/10667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10667.patch", "merged_at": 1615482818000 }
https://api.github.com/repos/huggingface/transformers/issues/10666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10666/comments
https://api.github.com/repos/huggingface/transformers/issues/10666/events
https://github.com/huggingface/transformers/issues/10666
829,316,805
MDU6SXNzdWU4MjkzMTY4MDU=
10,666
training LayoutLM 1 epoch in distributed mode results in error
{ "login": "alvercau", "id": 24573258, "node_id": "MDQ6VXNlcjI0NTczMjU4", "avatar_url": "https://avatars.githubusercontent.com/u/24573258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvercau", "html_url": "https://github.com/alvercau", "followers_url": "https://api.github.com/users/alvercau/followers", "following_url": "https://api.github.com/users/alvercau/following{/other_user}", "gists_url": "https://api.github.com/users/alvercau/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvercau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvercau/subscriptions", "organizations_url": "https://api.github.com/users/alvercau/orgs", "repos_url": "https://api.github.com/users/alvercau/repos", "events_url": "https://api.github.com/users/alvercau/events{/privacy}", "received_events_url": "https://api.github.com/users/alvercau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Do you mind posting the error message with the stacktrace? Thank you!\r\n\r\nPinging @sgugger ", "Here you go:\r\n```\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\nLoading best model from /mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8 (score: 0.00027643400138217).\r\nSaving model checkpoint to /mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8\r\nConfiguration saved in /mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8/config.json\r\n404 Client Error: Not Found for url: \r\nhttps://huggingface.co//mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8/resolve/main/config.json\r\n2021-03-11 12:17:57 ERROR layoutlm_model_training_script Can't load config for '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8'. Make sure that:\r\n- '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is a correct model identifier listed on '\r\nhttps://huggingface.co/models\r\n'\r\n- or '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is the correct path to a directory containing a config.json file\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 399, in get_config_dict\r\n resolved_config_file = cached_path(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py\", line 1077, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py\", line 1215, in get_from_cache\r\n r.raise_for_status()\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 943, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: \r\nhttps://huggingface.co//mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8/resolve/main/config.json\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"nlp_ner_layoutlm/train_pipeline/training_step/training_script.py\", line 50, in <module>\r\n train_model(\r\n File \"/app/nlp_ner_layoutlm/layoutlm/utils_train.py\", line 260, in train_model\r\n trainer.train()\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 868, in train\r\n self.model = self.model.from_pretrained(self.state.best_model_checkpoint)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 948, in from_pretrained\r\n config, model_kwargs = cls.config_class.from_pretrained(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 360, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 418, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8'. 
Make sure that:\r\n- '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is a correct model identifier listed on '\r\nhttps://huggingface.co/models\r\n'\r\n- or '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is the correct path to a directory containing a config.json file\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 399, in get_config_dict\r\n resolved_config_file = cached_path(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py\", line 1077, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py\", line 1215, in get_from_cache\r\n r.raise_for_status()\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 943, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: \r\nhttps://huggingface.co//mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8/resolve/main/config.json\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"nlp_ner_layoutlm/train_pipeline/training_step/training_script.py\", line 50, in <module>\r\n train_model(\r\n File \"/app/nlp_ner_layoutlm/layoutlm/utils_train.py\", line 260, in train_model\r\n trainer.train()\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 868, in train\r\n self.model = self.model.from_pretrained(self.state.best_model_checkpoint)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py\", line 948, in from_pretrained\r\n config, model_kwargs = cls.config_class.from_pretrained(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 360, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py\", line 418, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8'. Make sure that:\r\n- '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is a correct model identifier listed on '\r\nhttps://huggingface.co/models\r\n'\r\n- or '/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8' is the correct path to a directory containing a config.json file\r\n```\r\n\r\n", "And what is inside the folder `/mnt/pipeline/636d29be-97f9-458a-8174-70ae425a2a6a/pytorch_model/checkpoint-8`?", "config.json\r\noptimizer.pt\r\npytorch_model.bin\r\nsheduler.pt\r\ntrainer_state.json\r\ntraining_args.bin\r\n\r\nEverything that is needed to load the model.\r\nI checked the config file, it looks entirely normal.", "Oh, I think I know why: it's possible the process 1 arrived at that line before the process 0 finished its save and since there is no barrier, it failed loading the model since it wasn't there yet. Will make a fix for that.", "That makes sense, judging by our logs process 1 wasn't finished yet, as we had a log of saving a checkpoint after the error message from process 0. 
I cannot share the logs since the pod they were one is already gone...", "If you can checkout the PR mentioned above and see if it solves your issue, that would be great!", "I have to make some extra changes in my code to be able to use that commit ( was using 4.1.0 previously, and there are some breaking changes apparently).", "It works now. I did get a weird error though (not related):\r\n`ValueError: <EvaluationStrategy.EPOCH: 'epoch'> is not a valid IntervalStrategy, please select one of ['no', 'steps', 'epoch']`\r\nLooks like it's not possible anymore to pass EvaluationStrategy.EPOCH as an evaluation_strategy to the Trainer anymore... With version 4.1.0 this was possible. ", "Oh it's a bug in the backward compatibility (will fix today). It should work if you pass \"epoch\" instead of `EvaluationStrategy.EPOCH`.", "yes, that's what I did. \r\nAny idea when these fixes will be released?", "We'll be releasing v4.4.0 in the coming days, which will have the fix. The fix is available on `master` as of now!", "ok, thanks for the fast response!" ]
1,615
1,615
1,615
NONE
null
Not sure whether this issue should be posted here or rather in the pytorch repo, please let me know if it is not a transformer issue. When training LayoutLM with the Trainer in distributed mode for only one epoch, with setting `load_best_model_at_end` to `True`, I get an error when the model is loaded at the end. According to the error message, the config.json file for the model cannot be found although it is there. This issue does **not** arise when not training in distributed mode or when training in distributed mode for more than one epoch. ``` import os import torch from transformers import EvaluationStrategy, Trainer from transformers.training_args import TrainingArguments from transformers import ( LayoutLMConfig, LayoutLMForTokenClassification, ) training_args = TrainingArguments( output_dir="output_dir", # output directory do_train=True, do_eval=False, do_predict=False, evaluation_strategy=EvaluationStrategy.EPOCH, num_train_epochs=1, # total # of training epochs per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=8, # batch size for evaluation weight_decay=0.0005, # strength of weight decay learning_rate=0.00000001, logging_steps=0, # it logs when running evaluation so no need to log on step interval save_steps=0, seed=42, overwrite_output_dir=True, save_total_limit=10, load_best_model_at_end=True, metric_for_best_model="f1", greater_is_better=True, # higher f1 score is better fp16=True, local_rank=-1, gradient_accumulation_steps=2, warmup_steps=300, ) model_dir = "layoutlm_pretrained_model" train_dataset = [] validation_dataset = [] config = LayoutLMConfig.from_pretrained( os.path.join(model_dir, "config.json"), num_labels=64, cache_dir=None ) model = LayoutLMForTokenClassification.from_pretrained( model_dir, from_tf=bool(".ckpt" in model_dir), config=config, cache_dir=None, ) device = torch.device("cuda") model.train().to(device) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=validation_dataset, # validation dataset ) trainer.train() ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10665/comments
https://api.github.com/repos/huggingface/transformers/issues/10665/events
https://github.com/huggingface/transformers/pull/10665
829,315,758
MDExOlB1bGxSZXF1ZXN0NTkxMDAwMDI4
10,665
W2v2 test require torch
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
The object `WAV_2_VEC_2_PRETRAINED_MODEL_ARCHIVE_LIST` requires torch to be installed to not be `None`. This adds the required `@require_torch`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10665", "html_url": "https://github.com/huggingface/transformers/pull/10665", "diff_url": "https://github.com/huggingface/transformers/pull/10665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10665.patch", "merged_at": 1615485373000 }
https://api.github.com/repos/huggingface/transformers/issues/10664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10664/comments
https://api.github.com/repos/huggingface/transformers/issues/10664/events
https://github.com/huggingface/transformers/pull/10664
829,308,700
MDExOlB1bGxSZXF1ZXN0NTkwOTk0MDc1
10,664
TensorFlow tests: having from_pt set to True requires torch to be installed.
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@LysandreJik - I'll upload the respective weights today and then check that all these slow tests here work without `from_pt`", "Uploaded all the TF weights and checked that:\r\n\r\n`RUN_SLOW=1 pytest tests/test_modeling_tf_rag.py`\r\n`RUN_SLOW=1 pytest tests/test_modeling_tf_blenderbot.py` \r\n\r\npass.\r\n\r\nFor some reason `RUN_SLOW=1 pytest tests/test_modeling_tf_marian.py` now throws an error. I've opened a new issue for this here: https://github.com/huggingface/transformers/issues/10680" ]
1,615
1,615
1,615
MEMBER
null
Some tests were executed without having torch installed, while they require torch. Namely, all the tests that have a `from_pt=True` requirement require torch to be installed. This is a draft PR as several of the requirements to merge this PR are not met: - The Marian models do not have their tensorflow variant available on the hub - Neither do the RAG models The easy option is to only set `@requires_torch`, but since we have no slow test suite that runs both PT + TF that's not a good workaround. How do you want to proceed @patrickvonplaten ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10664", "html_url": "https://github.com/huggingface/transformers/pull/10664", "diff_url": "https://github.com/huggingface/transformers/pull/10664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10664.patch", "merged_at": 1615547801000 }
https://api.github.com/repos/huggingface/transformers/issues/10663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10663/comments
https://api.github.com/repos/huggingface/transformers/issues/10663/events
https://github.com/huggingface/transformers/pull/10663
829,307,485
MDExOlB1bGxSZXF1ZXN0NTkwOTkzMDYz
10,663
Onnx fix test
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merging now to rebase the slow tests and re-run them." ]
1,615
1,615
1,615
MEMBER
null
GPT2 `past_keys_values` format seems to have changed since last time I checked, now exporting for each layer tuple with 2 elements. PyTorch's ONNX exporter doesn't seem to handle this format, so it was crashing with an error. The PR assumes we don't currently support exporting `past_keys_values` for GPT2 and then disable the return of such values when constructing the model. In order to support this behavior, `pipeline()` now ha a `model_kwargs: Dict[str, Any]` parameter which forwards the dict of parameters to model's `from_pretrained(..., **model_kwargs)`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10663", "html_url": "https://github.com/huggingface/transformers/pull/10663", "diff_url": "https://github.com/huggingface/transformers/pull/10663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10663.patch", "merged_at": 1615487910000 }
https://api.github.com/repos/huggingface/transformers/issues/10662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10662/comments
https://api.github.com/repos/huggingface/transformers/issues/10662/events
https://github.com/huggingface/transformers/pull/10662
829,302,257
MDExOlB1bGxSZXF1ZXN0NTkwOTg4Njg4
10,662
Specify minimum version for sacrebleu
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patil-suraj, you must have meant `1.4.12` \r\n" ]
1,615
1,615
1,615
MEMBER
null
The `_tests_requirements.txt` require an install of sacrebleu without any version specified. However, some `sacrebleu` versions don't have the same API. I've had problems with version `1.2.10`, and @lhoestq confirmed the issue is not present in `1.4.12`. The error was the following: ``` def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=scb.DEFAULT_TOKENIZER, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > output = scb.corpus_bleu( sys_stream=predictions, ref_streams=transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, tokenize=tokenize, use_effective_order=use_effective_order, ) E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method' /mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError ``` Full stack trace: ``` E File "/__w/transformers/transformers/src/transformers/trainer_seq2seq.py", line 74, in evaluate E return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) E File "/__w/transformers/transformers/src/transformers/trainer.py", line 1650, in evaluate E output = self.prediction_loop( E File "/__w/transformers/transformers/src/transformers/trainer.py", line 1823, in prediction_loop E metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids)) E File "/__w/transformers/transformers/examples/seq2seq/run_seq2seq.py", line 563, in compute_metrics E result = metric.compute(predictions=decoded_preds, references=decoded_labels) E File "/opt/conda/lib/python3.8/site-packages/datasets/metric.py", line 403, in compute E output = self._compute(predictions=predictions, references=references, **kwargs) E File "/mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py", line 114, in _compute E output = scb.corpus_bleu( ``` I'm unsure about the minimum version required here, I just know that 1.2.10 doesn't work. Please advise if you think a better minimum version would be better.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10662", "html_url": "https://github.com/huggingface/transformers/pull/10662", "diff_url": "https://github.com/huggingface/transformers/pull/10662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10662.patch", "merged_at": 1615488307000 }
https://api.github.com/repos/huggingface/transformers/issues/10661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10661/comments
https://api.github.com/repos/huggingface/transformers/issues/10661/events
https://github.com/huggingface/transformers/pull/10661
829,260,656
MDExOlB1bGxSZXF1ZXN0NTkwOTUzOTg2
10,661
Fix Marian/TFMarian tokenization tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
Fixing a few tests that are failing in the slow tests suite. cc @patrickvonplaten (Marian) and @sgugger (author of the recent changes)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10661", "html_url": "https://github.com/huggingface/transformers/pull/10661", "diff_url": "https://github.com/huggingface/transformers/pull/10661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10661.patch", "merged_at": 1615485495000 }
https://api.github.com/repos/huggingface/transformers/issues/10660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10660/comments
https://api.github.com/repos/huggingface/transformers/issues/10660/events
https://github.com/huggingface/transformers/pull/10660
829,253,377
MDExOlB1bGxSZXF1ZXN0NTkwOTQ3ODYy
10,660
fix: #10628 expanduser path in TrainingArguments
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for fixing that issue!" ]
1,615
1,615
1,615
CONTRIBUTOR
null
## Who can review? - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10660/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10660", "html_url": "https://github.com/huggingface/transformers/pull/10660", "diff_url": "https://github.com/huggingface/transformers/pull/10660.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10660.patch", "merged_at": 1615558699000 }
https://api.github.com/repos/huggingface/transformers/issues/10659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10659/comments
https://api.github.com/repos/huggingface/transformers/issues/10659/events
https://github.com/huggingface/transformers/issues/10659
829,246,421
MDU6SXNzdWU4MjkyNDY0MjE=
10,659
How to use deepspeed finetune RAG?
{ "login": "qixintechnology", "id": 59593350, "node_id": "MDQ6VXNlcjU5NTkzMzUw", "avatar_url": "https://avatars.githubusercontent.com/u/59593350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qixintechnology", "html_url": "https://github.com/qixintechnology", "followers_url": "https://api.github.com/users/qixintechnology/followers", "following_url": "https://api.github.com/users/qixintechnology/following{/other_user}", "gists_url": "https://api.github.com/users/qixintechnology/gists{/gist_id}", "starred_url": "https://api.github.com/users/qixintechnology/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qixintechnology/subscriptions", "organizations_url": "https://api.github.com/users/qixintechnology/orgs", "repos_url": "https://api.github.com/users/qixintechnology/repos", "events_url": "https://api.github.com/users/qixintechnology/events{/privacy}", "received_events_url": "https://api.github.com/users/qixintechnology/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for your kind words, @qixintechnology \r\n\r\nI haven't tried it with RAG, but I don't see any reason why it shouldn't work - if you encounter any problems please open a specific issue with details so that we could reproduce it. " ]
1,615
1,615
1,615
NONE
null
Hi Stas!@stas00 Thanks for your great work! I have a question that is it possible to use deepspeed finetune RAG (finetune_rag.py)? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10659/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10658/comments
https://api.github.com/repos/huggingface/transformers/issues/10658/events
https://github.com/huggingface/transformers/pull/10658
829,216,546
MDExOlB1bGxSZXF1ZXN0NTkwOTE2OTQ3
10,658
GPT2DoubleHeadsModel made parallelizable
{ "login": "ishalyminov", "id": 1062768, "node_id": "MDQ6VXNlcjEwNjI3Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1062768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ishalyminov", "html_url": "https://github.com/ishalyminov", "followers_url": "https://api.github.com/users/ishalyminov/followers", "following_url": "https://api.github.com/users/ishalyminov/following{/other_user}", "gists_url": "https://api.github.com/users/ishalyminov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ishalyminov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ishalyminov/subscriptions", "organizations_url": "https://api.github.com/users/ishalyminov/orgs", "repos_url": "https://api.github.com/users/ishalyminov/repos", "events_url": "https://api.github.com/users/ishalyminov/events{/privacy}", "received_events_url": "https://api.github.com/users/ishalyminov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten , @LysandreJik", "Also pinging @stas00 here - does it make sense to add parallelize ability to GPT2DoubleHeadsModel?", "If it's being used then yes since `GPT2LMHeadModel` has it.", "@alexorona, do you want to take a look at this?", "@stas00 yeah, so I've been using the `GPT2DoubleHeadsModel` for my tasks since reading [this medium](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) (I guess a lot of people in the dialogue community also followed that tutorial).\r\nAnd seeing `parallelize()` implemented with the `GPT2LMHeadModel` got me curious to have the double-headed one working like that as well :)", "@LysandreJik, we parked any further activity on extending naive MP to other `transformers` models and their sub-classes because while it solved the immediate need it's not a good long term solution due to very inefficient gpu utilization.\r\n\r\nWe are working on integrating ZeRO-3 from DeepSpeed and fairscale which will automatically solve this scalability issue and will make the naive MP approach redundant and we can then decide whether it makes sense to keep it.\r\n\r\nUntil we sort it out and we can reliable know that ZeRO solves this problem and it's accessible to all users you can definitely merge this since @ishalyminov clearly has a good use for it.", "@ishalyminov Great work here! You're bringing up an important point. I've had this some question myself. @thomwolf It might be useful to add a few sentences to [this Medium article](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) clarifying whether distractors are likely to improve final performance in tasks that are ultimately concerned with generative text. It's not entirely clear if your approach using distractors and a double-headed model is an artifact of the competition setup or whether it's an approach you would recommend for anyone trying to fine-tune a transformer for chatbot-style tasks. If someone only cares about language modeling, do you think a double-headed approach with distractors and a classification task would usually produce a better chatbot than simply focusing on LM? Does it matter if the chatbot is attempting to model several discrete personalities present in the dataset?", "@alexorona thanks! Yeah it would be very interesting to hear more about @thomwolf's ConvAI experience:) As for me, I didn't conduct any evaluation of how the NSP task affects the resulting LM (and indeed, there are some works out there that don't use this secondary task at all).\r\n\r\nBut for what it's worth, we found the NSP head to be beneficial for hybrid [generative/retrieval](https://github.com/microsoft/GRTr) GPT-2 based dialogue architectures.\r\nAlso I guess the multi-task setup makes the model intuitively more versatile for a range of downstream tasks, as was originally proposed in the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) - would be really useful if there was an experimental evaluation proving or disproving this for the case of ConvAI GPT-2." ]
1,615
1,616
1,615
CONTRIBUTOR
null
# What does this PR do? GPT2DoubleHeadsModel made parallelizable; it is also reflected in the test_modeling_gpt2.py suite <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10658", "html_url": "https://github.com/huggingface/transformers/pull/10658", "diff_url": "https://github.com/huggingface/transformers/pull/10658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10658.patch", "merged_at": 1615813844000 }
https://api.github.com/repos/huggingface/transformers/issues/10657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10657/comments
https://api.github.com/repos/huggingface/transformers/issues/10657/events
https://github.com/huggingface/transformers/pull/10657
829,210,748
MDExOlB1bGxSZXF1ZXN0NTkwOTEyMTYz
10,657
S2S + M2M100 should be available in tokenization_auto
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
cc @patil-suraj Was it a choice not to add these to the tokenizer auto, or is it because that's not covered in the template?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10657", "html_url": "https://github.com/huggingface/transformers/pull/10657", "diff_url": "https://github.com/huggingface/transformers/pull/10657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10657.patch", "merged_at": 1615474416000 }
https://api.github.com/repos/huggingface/transformers/issues/10656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10656/comments
https://api.github.com/repos/huggingface/transformers/issues/10656/events
https://github.com/huggingface/transformers/pull/10656
829,195,864
MDExOlB1bGxSZXF1ZXN0NTkwODk5NzQx
10,656
Fix broken link
{ "login": "WybeKoper", "id": 40920213, "node_id": "MDQ6VXNlcjQwOTIwMjEz", "avatar_url": "https://avatars.githubusercontent.com/u/40920213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WybeKoper", "html_url": "https://github.com/WybeKoper", "followers_url": "https://api.github.com/users/WybeKoper/followers", "following_url": "https://api.github.com/users/WybeKoper/following{/other_user}", "gists_url": "https://api.github.com/users/WybeKoper/gists{/gist_id}", "starred_url": "https://api.github.com/users/WybeKoper/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WybeKoper/subscriptions", "organizations_url": "https://api.github.com/users/WybeKoper/orgs", "repos_url": "https://api.github.com/users/WybeKoper/repos", "events_url": "https://api.github.com/users/WybeKoper/events{/privacy}", "received_events_url": "https://api.github.com/users/WybeKoper/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,621
1,615
CONTRIBUTOR
null
# What does this PR do? Fixes a broken link in model_doc/pegasus Link was pointing to: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh File had been moved to: https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/finetune_pegasus_xsum.sh Fixes # (issue) [#9257](https://github.com/huggingface/transformers/issues/9257) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10656", "html_url": "https://github.com/huggingface/transformers/pull/10656", "diff_url": "https://github.com/huggingface/transformers/pull/10656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10656.patch", "merged_at": 1615490942000 }
https://api.github.com/repos/huggingface/transformers/issues/10655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10655/comments
https://api.github.com/repos/huggingface/transformers/issues/10655/events
https://github.com/huggingface/transformers/issues/10655
829,180,029
MDU6SXNzdWU4MjkxODAwMjk=
10,655
MarianMT - tokenizer.supported_language_codes -> 'NoneType' object has no attribute 'supported_language_codes'
{ "login": "gagy3798", "id": 53332731, "node_id": "MDQ6VXNlcjUzMzMyNzMx", "avatar_url": "https://avatars.githubusercontent.com/u/53332731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gagy3798", "html_url": "https://github.com/gagy3798", "followers_url": "https://api.github.com/users/gagy3798/followers", "following_url": "https://api.github.com/users/gagy3798/following{/other_user}", "gists_url": "https://api.github.com/users/gagy3798/gists{/gist_id}", "starred_url": "https://api.github.com/users/gagy3798/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagy3798/subscriptions", "organizations_url": "https://api.github.com/users/gagy3798/orgs", "repos_url": "https://api.github.com/users/gagy3798/repos", "events_url": "https://api.github.com/users/gagy3798/events{/privacy}", "received_events_url": "https://api.github.com/users/gagy3798/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi @gagy3798 \r\n\r\nI couldn't reproduce the issue on master and with the latest pypi version as well. What is your transformers version? \r\n(please always make sure to post the env info when opening an issue)", "Hi @patil-suraj \r\n\r\nplease try it on this colab https://colab.research.google.com/drive/1z9UtSETxVrDhYnH1eN9lyMv2g-YTFNrz?usp=sharing\r\n\r\ntransformers 4.3.3\r\n\r\n", "probably `sentencepiece` is not installed. Please install `sentencepiece` and restart the colab. That should resolve the issue.", "Ok, thank you." ]
1,615
1,615
1,615
NONE
null
## Environment info - colab.research.google.com ### Who can help @patrickvonplaten Models: - MarianMT Examples: https://huggingface.co/transformers/model_doc/marian.html ## Information I'm trying to run example code in colab but it fails `from transformers import MarianMTModel, MarianTokenizer src_text = [ '>>fra<< this is a sentence in english that we want to translate to french', '>>por<< This should go to portuguese', '>>esp<< And this to Spanish' ] model_name = 'Helsinki-NLP/opus-mt-en-roa' tokenizer = MarianTokenizer.from_pretrained(model_name) print(tokenizer.supported_language_codes) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text, return_tensors="pt")) tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]` ---> 10 print(tokenizer.supported_language_codes) AttributeError: 'NoneType' object has no attribute 'supported_language_codes' Could you please provide working translation sample.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10655/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10654/comments
https://api.github.com/repos/huggingface/transformers/issues/10654/events
https://github.com/huggingface/transformers/issues/10654
829,158,624
MDU6SXNzdWU4MjkxNTg2MjQ=
10,654
Allow private model hosting and resolution
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think it's a brilliant idea. \r\n\r\nSo just to validate I understood your proposal correclty, in addition to checking the usual places, it'll first check the env var `HUGGINGFACE_CO_PREFIX` and join it with `model_path` and check if it's available - and if not proceed with the normal algorithm.\r\n\r\nSo in your example using bash, it'd check `$HUGGINGFACE_CO_PREFIX/my_org/my_model`, which might be `https://some.place.on.earth/data/my_org/my_model`, right?\r\n\r\n@LysandreJik, @julien-c - what do you think?", "Yes, pretty much that's it. I think the top flat namespace where `bert-base-uncased`, `t5-base` and all other LMs \"live\" should never be allowed to resolve to anything else except HF hub (not just for security reasons). However, for the other 2+ level namespaces, i.e. `my_org/my_model` if users can register resolver - that would be great. In the Java world (where I come from), there are these notions of resources and resource bundles that could be dropped in predefined file locations and picked up by the library/framework. Not sure how this is done in Python, but perhaps [pkg_resources](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#resourcemanager-api) could be used. I believe this would be a better approach than registering these resolvers via some HF API. Although perhaps that should be left as an option. \r\n\r\nI would love to hear the opinions of others!", "Hi,\r\n\r\nI think the simplest way (and I've seen users and organizations do that) would be to extend `AutoModel` to sync your model files from your remote storage before loading it locally\r\n\r\nSomething like:\r\n\r\n```python\r\nclass MyAutoModel:\r\n @classmethod\r\n def from_pretrained(cls, id):\r\n subprocess.run(f\"aws s3 sync s3://mymodels/{id}/ {local_path}\")\r\n return AutoModel.from_pretrained(local_path)\r\n```\r\n\r\nie. all methods are guaranteed to work to load from local paths so pretty trivial to use them to load from anywhere", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request It would be great to provide model hosting and automatic naming resolution on remote storage outside of HF hub/infra. Currently, it is possible to store a model on, say, S3 and resolve it via AutoModel and AutoConfig. However, in this case, a user has to explicitly specify the full path to the configuration file or the model's pytorch_model.bin file. It would be great if private remote storage could be registered with the same resolution mechanism reserved for HF so that: `model = AutoModel.from_pretrained('my_org/my_model')` `config = AutoConfig.from_pretrained('my_org/my_model')` could be resolved to an actual remote storage path just like HF default resolution mechanism resolves config, models and tokenizers on HF hub. ## Motivation During the model development lifecycle, organizations often produce many models for internal testing and benchmarking before producing the final model for publishing. Storing all the models during the development phase on the HF hub is sometimes impractical, and some organizations might need stricter control of model storage. ## Your contribution I've investigated a bit feature request's implementation scope and it doesn't seem to require a big rewrite. The name --> URL naming resolution is done in `file_utils.py`. One could follow how models are resolved for `HUGGINGFACE_CO_PREFIX`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10654/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10654/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10653/comments
https://api.github.com/repos/huggingface/transformers/issues/10653/events
https://github.com/huggingface/transformers/pull/10653
829,156,823
MDExOlB1bGxSZXF1ZXN0NTkwODY3MzYz
10,653
Fix Longformer tokenizer filename
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
Fixes https://github.com/huggingface/transformers/issues/10642
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10653/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10653", "html_url": "https://github.com/huggingface/transformers/pull/10653", "diff_url": "https://github.com/huggingface/transformers/pull/10653.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10653.patch", "merged_at": 1615469649000 }
https://api.github.com/repos/huggingface/transformers/issues/10652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10652/comments
https://api.github.com/repos/huggingface/transformers/issues/10652/events
https://github.com/huggingface/transformers/issues/10652
829,133,805
MDU6SXNzdWU4MjkxMzM4MDU=
10,652
Infernal tokenizer loading trained
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The tokenizers library powers the fast tokenizers, not the Python \"slow\" tokenizers. As there is no fast tokenizer for deberta, you can't use the tokenizers library for that model.\r\n\r\nYou can check which tokenizers have a version backed by the Tokenizers library in [this table](https://huggingface.co/transformers/index.html#bigtable).", "Then, how could we convert the \"fast\" BPETokenizer to the \"slow\" BPETokenizer used by Deberta? @sgugger ", "Another important thing. I have checked the table and it says that Roberta is able to use fast tokenizer. Deberta, as stated in the paper, uses exactly the same tokenizer as Roberta, so the obvious question is: if Deberta uses Roberta tokenizer, and Roberta tokenizer can be used in \"fast\" mode, why cannot Deberta be used in \"fast\" mode??", "Another issue is that when I try to use the BPE Tokenizer trained with huggingface/tokenizers with Roberta directly, it works:\r\n\r\n```{python}\r\n\r\ntok = RobertaTokenizer.from_pretrained(\"bpe_tokenizer_0903\", use_fast=True)\r\n\r\n```\r\n\r\nHowever, when I try to use this same tokenizer for training a language model, it fails:\r\n\r\n```{bash}\r\npython -u transformers/examples/language-modeling/run_mlm_wwm.py \\\r\n --model_type deberta \\\r\n --config_name ./bpe_tokenizer_0903/config.json \\\r\n --tokenizer_name ./bpe_tokenizer_0903 \\\r\n --train_file ./prueba_tr.txt \\\r\n --validation_file ./final_valid.txt \\\r\n --output_dir ./roberta_1102 \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluation_strategy steps \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 2 \\\r\n --gradient_accumulation_steps 2 \\\r\n --learning_rate 6e-4 \\\r\n --save_steps 10 \\\r\n --logging_steps 10 \\\r\n --overwrite_cache \\\r\n --max_seq_length 128 \\\r\n --eval_accumulation_steps 10 \\\r\n --load_best_model_at_end \\\r\n --run_name deberta_0902 \\\r\n --save_total_limit 10 --warmup_steps 1750 \\\r\n --adam_beta2 0.98 --adam_epsilon 1e-6 --weight_decay 0.01 --num_train_epochs 1\r\n\r\n```\r\n\r\nThe error message is the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"transformers/examples/language-modeling/run_mlm_wwm.py\", line 399, in <module>\r\n main()\r\n File \"transformers/examples/language-modeling/run_mlm_wwm.py\", line 286, in main\r\n use_fast=model_args.use_fast_tokenizer,\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/auto/tokenization_auto.py\", line 401, in from_pretrained\r\n return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py\", line 1719, in from_pretrained\r\n resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_base.py\", line 1790, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/roberta/tokenization_roberta_fast.py\", line 173, in __init__\r\n **kwargs,\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/models/gpt2/tokenization_gpt2_fast.py\", line 145, in __init__\r\n **kwargs,\r\n File \"/home/alejandro.vaca/data_rigoberta/transformers/src/transformers/tokenization_utils_fast.py\", line 87, in __init__\r\n fast_tokenizer = 
TokenizerFast.from_file(fast_tokenizer_file)\r\nException: data did not match any variant of untagged enum ModelWrapper at line 1 column 1138661\r\n```\r\n\r\nWhy doesn't it fail when I try to load the tokenizer with RobertaTokenizer.from_pretrained() but it does fail when I try to run run_mlm_wwm.py ? @sgugger @patrickvonplaten @LysandreJik ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.dev0 - Platform: Ubuntu 18 - Python version: 3.7 - PyTorch version (GPU?): 1.7.1 (YES) - Tensorflow version (GPU?): - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ### Who can help @LysandreJik @patrickvonplaten @patil-suraj @sgugger @n1t0 ## Information Model I am using (Bert, XLNet ...): DeBerta The problem arises when using: * [ x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use, for example, the OSCAR corpus in spanish, then use Tokenizers library to train your BPETokenizer (the one Deberta needs). 2. Try to load DebertaTokenizer from the .json generated by Tokenizers. The code used for training the tokenizer was the following: ```{python} import glob import os import random from tokenizers import Tokenizer from tokenizers.models import BPE from tokenizers.trainers import BpeTrainer if __name__ == "__main__": tokenizer = Tokenizer(BPE()) # tokenizer = ByteLevelBPETokenizer(add_prefix_space=False) trainer = BpeTrainer( special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"], vocab_size=50265, continuing_subword_prefix="\u0120", min_frequency=2, ) # t = AutoTokenizer.from_pretrained("microsoft/deberta-base") files = glob.glob("cleaned_train_data/*.csv") files_sample = random.choices(files, k=250) tokenizer.train( files=files_sample, trainer=trainer, ) os.makedirs("bpe_tokenizer_0903", exist_ok=True) tokenizer.save("bpe_tokenizer_0903") ``` The problem is that the DebertaTokenizer from transformers needs a different set of files to the ones Tokenizers generate. It's ironic that it's also a Huggingface library, because there doesn't seem to be much integration between the 2. Well, as this was the case, I tried many things. First, I tried adding added_tokens.json, special_tokens_map.json, vocab.json, vocab.txt, merges.txt... All these files are included in tokenizer.json (the file generated by huggingface/Tokenizers). However, none of those worked. Then, I tried looking at the files that are saved when you load a DebertaTokenizer from microsoft checkpoints, so that I could copy the structure of the saved folder. I tried to do so, but for the bpe_encoder.bin, there were some difficulties. I used my merges for bpe_encoder["vocab"], as the vocab in the Microsoft bpe_encoder.bin seemed to be merges, and in bpe_encoder["encoder"] I put the vocab dict. For the field bpe_encoder["dict_map"], I couldn't replicate it as token frequencies are not saved by Tokenizers, so I invented them with a random number. However, when I try to train with this tokenizer, it throws a KeyError on step 5, which is strange because when I try to tokenize that concrete token: 'Ŀ', it does indeed tokenize it (by doing DebertaTokenizer.from_pretrained(my_path)("Ŀ"))... I think all those problems are caused mainly because there is a complete disconnection between Transformers and Tokenizers library, as the tokenizers trained with Tokenizers are not integrable with Transformers, which doesn't make much sense to me, because Tokenizers is supposed to be used to train Tokenizers that are later used in Transformers... Could please anyone tell me how can I train a Deberta Tokenizer that is, from the beginning, saved with the files needed by Transformers DebertaTokenizer?? Is there any version of Tokenizers in which, when you train a BPETokenizer, it saves the files required by Transformers? Thank you very much. ## Expected behavior It is expected that if 2 libraries are from the same company and the mission of one of the two is to build tools that are later used by the other, the 2 libraries expect and produce the same objects for the same tasks, as it doesn't make sense that you can train a BPETokenizer that you cannot later use as a tokenizer in Transformers. So, what is expected is that if DebertaTokenizer uses BPETokenizer, and this tokenizer expects to receive bpe_encoder.bin, special_tokens_map.json and tokenizer_config.json, then when you train a BPETokenizer with Tokenizers library it should save those objects, not a tokenizer.json file that is useless for later use in Transformers library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10652/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10651/comments
https://api.github.com/repos/huggingface/transformers/issues/10651/events
https://github.com/huggingface/transformers/pull/10651
829,124,492
MDExOlB1bGxSZXF1ZXN0NTkwODQwMjM5
10,651
added support for exporting of T5 models to onnx with past_key_values.
{ "login": "Ki6an", "id": 63173962, "node_id": "MDQ6VXNlcjYzMTczOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/63173962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ki6an", "html_url": "https://github.com/Ki6an", "followers_url": "https://api.github.com/users/Ki6an/followers", "following_url": "https://api.github.com/users/Ki6an/following{/other_user}", "gists_url": "https://api.github.com/users/Ki6an/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ki6an/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ki6an/subscriptions", "organizations_url": "https://api.github.com/users/Ki6an/orgs", "repos_url": "https://api.github.com/users/Ki6an/repos", "events_url": "https://api.github.com/users/Ki6an/events{/privacy}", "received_events_url": "https://api.github.com/users/Ki6an/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten, @patil-suraj any updates on this PR or [10645](https://github.com/huggingface/transformers/issues/10645) issue?", "@mfuntowicz or @Narsil do you have 2min to give your feedback on this maybe? :-)", "+1 for this", "+1, this is needed for fastT5", "Sorry, I failed to see the first mention.\r\n\r\nYes this is needed for T5. It's a relatively small change, so probably worth it.\r\n\r\n\r\n@Ki6an thanks for the notebook !\r\n\r\nJust a note for everyone reading this, dynamic sizes that are **too** general might affect performance (for instance at runtime, batch_size=1 can be enforced for `encoder_input_ids`. This can lead to some performance gains using `onnxruntime`.\r\nEnforcing batch_size = num_beams *can* also lead to improvements. ", "hey, @patrickvonplaten, @patil-suraj, @mfuntowicz could you please have another look at this PR.", "If this enables ONNX, I'm totally fine with the PR, but I'm no expert in ONNX at all...\r\n\r\nI leave it no @Narsil to merge the PR if it looks good to him" ]
1,615
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? >by applying this fix I was able to create **[fastT5](https://pypi.org/project/fastt5/)** library. which increases the T5 model inference speed up to 5x. for more details check out my [GitHub](https://github.com/Ki6an/fastT5) repo. addressing [this ](https://github.com/huggingface/transformers/issues/10645)issue and [this ](https://github.com/huggingface/transformers/pull/9733)PR while exporting T5 decoder model to onnx with `past_key_values` was getting this error. ```python /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions) 497 position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) 498 --> 499 scores += position_bias 500 attn_weights = F.softmax(scores.float(), dim=-1).type_as( 501 scores RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2] ``` the reason is while `torch-jit-tracing` the `seq_lenth` is converted to type `<class 'torch.Tensor'>` in this line [424](https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/models/t5/modeling_t5.py#L424) ` batch_size, seq_length = hidden_states.shape[:2]` next, tracing throws the following warning at line [494](https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/models/t5/modeling_t5.py#L494) ```python /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py:494: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! position_bias = position_bias[:, :, -seq_length:, :] ``` so it keeps `position_bias` as constant and we get the error at line [499](https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/models/t5/modeling_t5.py#L499). because of the shape mismatch of `positon_bais` and `scores`. to solve this issue, we can create a variable `int_seq_length` that will stay as `<class 'int'>` throughout the whole process. & we will use this variable in line `position_bias = position_bias[:, :, -int_seq_length:, :]`. now, tracing no longer throws the warning of `position_bias` being constant and we won't get the error: shape mismatch of `positon_bais` and `scores`. by following this simple fix I was able to export t5 to onnx as shown in this [notebook ](https://colab.research.google.com/drive/1Q5GSqOOrhO-7NQLpZPZ1C7YrkAot3TJg?usp=sharing). & also was able to create ['fastT5'](https://github.com/Ki6an/fastT5) repo :) t5: @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10651/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10651/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10651", "html_url": "https://github.com/huggingface/transformers/pull/10651", "diff_url": "https://github.com/huggingface/transformers/pull/10651.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10651.patch", "merged_at": 1619194460000 }
https://api.github.com/repos/huggingface/transformers/issues/10650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10650/comments
https://api.github.com/repos/huggingface/transformers/issues/10650/events
https://github.com/huggingface/transformers/issues/10650
829,114,161
MDU6SXNzdWU4MjkxMTQxNjE=
10,650
DistilBertTokenizerFast ignores "do_lower_case=False" parameter
{ "login": "PierceEigirthon", "id": 33113126, "node_id": "MDQ6VXNlcjMzMTEzMTI2", "avatar_url": "https://avatars.githubusercontent.com/u/33113126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PierceEigirthon", "html_url": "https://github.com/PierceEigirthon", "followers_url": "https://api.github.com/users/PierceEigirthon/followers", "following_url": "https://api.github.com/users/PierceEigirthon/following{/other_user}", "gists_url": "https://api.github.com/users/PierceEigirthon/gists{/gist_id}", "starred_url": "https://api.github.com/users/PierceEigirthon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierceEigirthon/subscriptions", "organizations_url": "https://api.github.com/users/PierceEigirthon/orgs", "repos_url": "https://api.github.com/users/PierceEigirthon/repos", "events_url": "https://api.github.com/users/PierceEigirthon/events{/privacy}", "received_events_url": "https://api.github.com/users/PierceEigirthon/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
[ "Hi @PierceEigirthon! Thanks for submitting, we're aware of this bug, it's related to #10390 and on my backlog", "I'll close this one as duplicate, you'll be able to follow progress on #10390 ;) " ]
1,615
1,615
1,615
NONE
null
Hi, hope all is well :) It looks like DistilBertTokenizerFast doesn't take do_lower_case into account. ``` from transformers import DistilBertTokenizerFast, DistilBertTokenizer PRE_TRAINED_MODEL_NAME = "distilbert-base-uncased" tokenizer_f = DistilBertTokenizerFast.from_pretrained(PRE_TRAINED_MODEL_NAME, do_lower_case=False) tokenizer_s = DistilBertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME, do_lower_case=False) sample = "Hello, world. How are you?" tokens_f = tokenizer_f.tokenize(sample) tokens_s = tokenizer_s.tokenize(sample) print(tokens_f) print(tokens_s) ``` output: ``` ['hello', ',', 'world', '.', 'how', 'are', 'you', '?'] ['[UNK]', ',', 'world', '.', '[UNK]', 'are', 'you', '?'] ``` expected: ``` ['[UNK]', ',', 'world', '.', '[UNK]', 'are', 'you', '?'] ['[UNK]', ',', 'world', '.', '[UNK]', 'are', 'you', '?'] ``` packages: ``` argon2-cffi==20.1.0 async-generator==1.10 attrs==20.3.0 backcall==0.2.0 bleach==3.3.0 certifi==2020.12.5 cffi==1.14.5 chardet==4.0.0 click==7.1.2 decorator==4.4.2 defusedxml==0.7.1 entrypoints==0.3 filelock==3.0.12 idna==2.10 ipykernel==5.5.0 ipython==7.21.0 ipython-genutils==0.2.0 ipywidgets==7.6.3 jedi==0.18.0 Jinja2==2.11.3 joblib==1.0.1 jsonschema==3.2.0 jupyter-client==6.1.11 jupyter-core==4.7.1 jupyterlab-pygments==0.1.2 jupyterlab-widgets==1.0.0 MarkupSafe==1.1.1 mistune==0.8.4 nbclient==0.5.3 nbconvert==6.0.7 nbformat==5.1.2 nest-asyncio==1.5.1 notebook==6.2.0 numpy==1.20.1 packaging==20.9 pandocfilters==1.4.3 parso==0.8.1 pexpect==4.8.0 pickleshare==0.7.5 prometheus-client==0.9.0 prompt-toolkit==3.0.16 ptyprocess==0.7.0 pycparser==2.20 Pygments==2.8.1 pyparsing==2.4.7 pyrsistent==0.17.3 python-dateutil==2.8.1 pyzmq==22.0.3 regex==2020.11.13 requests==2.25.1 sacremoses==0.0.43 Send2Trash==1.5.0 six==1.15.0 terminado==0.9.2 testpath==0.4.4 tokenizers==0.10.1 torch==1.8.0+cu111 tornado==6.1 tqdm==4.59.0 traitlets==5.0.5 transformers==4.3.3 typing-extensions==3.7.4.3 urllib3==1.26.3 wcwidth==0.2.5 webencodings==0.5.1 widgetsnbextension==3.5.1 ``` Python version: `Python 3.8.6` System: PopOS 20, happy to provide more info on system specs such as hardware if needed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10650/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10649/comments
https://api.github.com/repos/huggingface/transformers/issues/10649/events
https://github.com/huggingface/transformers/issues/10649
829,109,356
MDU6SXNzdWU4MjkxMDkzNTY=
10,649
[Question] How do I prevent a lack of VRAM halfway through training a (Pegasus) model?
{ "login": "lars-at-styx", "id": 75663112, "node_id": "MDQ6VXNlcjc1NjYzMTEy", "avatar_url": "https://avatars.githubusercontent.com/u/75663112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lars-at-styx", "html_url": "https://github.com/lars-at-styx", "followers_url": "https://api.github.com/users/lars-at-styx/followers", "following_url": "https://api.github.com/users/lars-at-styx/following{/other_user}", "gists_url": "https://api.github.com/users/lars-at-styx/gists{/gist_id}", "starred_url": "https://api.github.com/users/lars-at-styx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lars-at-styx/subscriptions", "organizations_url": "https://api.github.com/users/lars-at-styx/orgs", "repos_url": "https://api.github.com/users/lars-at-styx/repos", "events_url": "https://api.github.com/users/lars-at-styx/events{/privacy}", "received_events_url": "https://api.github.com/users/lars-at-styx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think I may have been an idiot; shortly after posting this I found that instead of `padding=True` I should set `padding='max_length'`. Woops." ]
1,615
1,615
1,615
NONE
null
I'm taking a pre-trained pegasus model (specifically, google/pegasus-cnn_dailymail, and I'm using Huggingface transformers through Pytorch) and I want to finetune it on my own data. This is however quite a large dataset and I've run into the problem of running out of VRAM halfway through training, which because of the size of the dataset can be a few days after training even started, which makes a trial-and-error approach very inefficient. I'm wondering how I can make sure ahead of time that it doesn't run out of memory. I would think that the memory usage of the model is in some way proportional to the size of the input, so I've passed truncation=True, padding=True, max_length=1024 to my tokenizer, which if my understanding is correct should make all the outputs of the tokenizer of the same size per line. Considering that the batch size is also a constant, I would think that the amount of VRAM in use should be stable. So I should just be able to cut up the dataset into managable parts, just looking at the ram/vram use of the first run, and infer that it will run smoothly from start to finish. However, the opposite seems to be true. I've been observing the amount of VRAM used at any time and it can vary wildly, from ~12GB at one time to suddenly requiring more than 24GB and crashing (because I don't have more than 24GB). So, how do I make sure that the amount of vram in use will stay within reasonable bounds for the full duration of the training process, and avoid it crashing due to a lack of vram when I'm already days into the training process?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10649/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10648/comments
https://api.github.com/repos/huggingface/transformers/issues/10648/events
https://github.com/huggingface/transformers/pull/10648
829,098,311
MDExOlB1bGxSZXF1ZXN0NTkwODE4MjYz
10,648
[XLSR-Wav2Vec2] Add multi-lingual Wav2Vec2 models
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
MEMBER
null
The model is identical to Wav2Vec2, but comes with a new paper and new checkpoints. So this PR only adds a new doc page and a tiny change in the conversion script. Check out the new models here: https://huggingface.co/models?search=wav2vec2-large-xlsr
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10648/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10648/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10648", "html_url": "https://github.com/huggingface/transformers/pull/10648", "diff_url": "https://github.com/huggingface/transformers/pull/10648.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10648.patch", "merged_at": 1615473858000 }
https://api.github.com/repos/huggingface/transformers/issues/10647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10647/comments
https://api.github.com/repos/huggingface/transformers/issues/10647/events
https://github.com/huggingface/transformers/pull/10647
829,073,985
MDExOlB1bGxSZXF1ZXN0NTkwNzk4MjU5
10,647
Update README.md
{ "login": "Arvid-pku", "id": 53811705, "node_id": "MDQ6VXNlcjUzODExNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/53811705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arvid-pku", "html_url": "https://github.com/Arvid-pku", "followers_url": "https://api.github.com/users/Arvid-pku/followers", "following_url": "https://api.github.com/users/Arvid-pku/following{/other_user}", "gists_url": "https://api.github.com/users/Arvid-pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arvid-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arvid-pku/subscriptions", "organizations_url": "https://api.github.com/users/Arvid-pku/orgs", "repos_url": "https://api.github.com/users/Arvid-pku/repos", "events_url": "https://api.github.com/users/Arvid-pku/events{/privacy}", "received_events_url": "https://api.github.com/users/Arvid-pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
CONTRIBUTOR
null
correct spell error: 'nether'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10647/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10647", "html_url": "https://github.com/huggingface/transformers/pull/10647", "diff_url": "https://github.com/huggingface/transformers/pull/10647.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10647.patch", "merged_at": 1615471085000 }
https://api.github.com/repos/huggingface/transformers/issues/10646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10646/comments
https://api.github.com/repos/huggingface/transformers/issues/10646/events
https://github.com/huggingface/transformers/issues/10646
829,065,330
MDU6SXNzdWU4MjkwNjUzMzA=
10,646
seq2seq BertGeneration model failed "ValueError: You have to specify either input_ids or inputs_embeds"
{ "login": "gyin94", "id": 67664443, "node_id": "MDQ6VXNlcjY3NjY0NDQz", "avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gyin94", "html_url": "https://github.com/gyin94", "followers_url": "https://api.github.com/users/gyin94/followers", "following_url": "https://api.github.com/users/gyin94/following{/other_user}", "gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}", "starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gyin94/subscriptions", "organizations_url": "https://api.github.com/users/gyin94/orgs", "repos_url": "https://api.github.com/users/gyin94/repos", "events_url": "https://api.github.com/users/gyin94/events{/privacy}", "received_events_url": "https://api.github.com/users/gyin94/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "hi @gyin-ai \r\n\r\nThank you for reporting the issue. The `run_seq2seq.py` currently does not work for encoder-decoder models. This is because the encoder-decoder models expect both `decoder_input_ids` and `labels` whereas the script only passes the `labels`. Which is causing the above error.\r\n\r\nYou could refer to this [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) to see how to use `Trainer` for encoder-decoder models. Also, you easily adapt the `run_seq2seq.py` script for this, I think you'll only need to change the data collator [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py#L521) to return both the `labels` and `decoder_input_ids`", "@patil-suraj can I ask whether `batch[\"decoder_input_ids\"]` should be `inputs.input_ids` instead of `outputs.input_ids`?\r\n\r\n```\r\ndef process_data_to_model_inputs(batch): \r\n # Tokenizer will automatically set [BOS] <text> [EOS] \r\n inputs = tokenizer(batch[\"document\"], padding=\"max_length\", truncation=True, max_length=encoder_max_length)\r\n outputs = tokenizer(batch[\"summary\"], padding=\"max_length\", truncation=True, max_length=decoder_max_length)\r\n \r\n batch[\"input_ids\"] = inputs.input_ids \r\n batch[\"attention_mask\"] = inputs.attention_mask \r\n batch[\"decoder_input_ids\"] = outputs.input_ids \r\n batch[\"labels\"] = outputs.input_ids.copy() \r\n # mask loss for padding \r\n batch[\"labels\"] = [ \r\n [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n ] \r\n batch[\"decoder_attention_mask\"] = outputs.attention_mask \r\n \r\n return batch \r\n```\r\n\r\nhere is the example from EncoderDecoderModel\r\n```\r\n>>> from transformers import EncoderDecoderModel, BertTokenizer\r\n>>> import torch\r\n\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints\r\n\r\n>>> # forward\r\n>>> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\n>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)\r\n\r\n>>> # training\r\n>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)\r\n```", "The `labels` and `decoder_input_ids` always correspond to output. so it should be `outputs.input_ids`" ]
1,615
1,616
1,616
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj ``` python examples/seq2seq/run_seq2seq.py \ --model_name_or_path google/roberta2roberta_L-24_discofuse \ --do_train \ --do_eval \ --task summarization \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 500 \ --max_val_samples 500 ``` path_to_csv_or_jsonlines_file: ``` text,summary google map, gg map google translate, gg translate ``` t5-small works perfectly. But BertGeneration model has the following error error: ``` File "/Users/gyin/Documents/working/transformers/src/transformers/models/bert_generation/modeling_bert_generation.py", line 361, in forward raise ValueError("You have to specify either input_ids or inputs_embeds") ValueError: You have to specify either input_ids or inputs_embeds ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10646/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10645/comments
https://api.github.com/repos/huggingface/transformers/issues/10645/events
https://github.com/huggingface/transformers/issues/10645
829,011,989
MDU6SXNzdWU4MjkwMTE5ODk=
10,645
export T5 model to onnx with past_key_values
{ "login": "Ki6an", "id": 63173962, "node_id": "MDQ6VXNlcjYzMTczOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/63173962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ki6an", "html_url": "https://github.com/Ki6an", "followers_url": "https://api.github.com/users/Ki6an/followers", "following_url": "https://api.github.com/users/Ki6an/following{/other_user}", "gists_url": "https://api.github.com/users/Ki6an/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ki6an/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ki6an/subscriptions", "organizations_url": "https://api.github.com/users/Ki6an/orgs", "repos_url": "https://api.github.com/users/Ki6an/repos", "events_url": "https://api.github.com/users/Ki6an/events{/privacy}", "received_events_url": "https://api.github.com/users/Ki6an/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik @mfuntowicz - how do we deal with ONNX issues currently? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.3.3 - torch version 1.7.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Models: - t5: @patrickvonplaten, @patil-suraj while exporting `T5 decoder` with `past_key_values`, I'm getting the following error, ````python /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py:1109: UserWarning: Provided key encoder_hidden_states for dynamic axes is not a valid input/output name warnings.warn("Provided key {} for dynamic axes is not a valid input/output name".format(key)) /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py:646: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if torch.isinf(hidden_states).any(): /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py:684: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if torch.isinf(hidden_states).any(): /usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:244: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]: /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py:494: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! position_bias = position_bias[:, :, -seq_length:, :] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-7-baab8a36e37d> in <module>() ----> 1 generate_onnx_representation(model_to_be_converted, 't5') 25 frames /usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions) 497 position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) 498 --> 499 scores += position_bias 500 attn_weights = F.softmax(scores.float(), dim=-1).type_as( 501 scores RuntimeError: output with shape [5, 8, 1, 2] doesn't match the broadcast shape [5, 8, 2, 2] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10645/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10644/comments
https://api.github.com/repos/huggingface/transformers/issues/10644/events
https://github.com/huggingface/transformers/pull/10644
829,006,202
MDExOlB1bGxSZXF1ZXN0NTkwNzQxNDUx
10,644
Numeracy
{ "login": "Zugunruhekami", "id": 6384803, "node_id": "MDQ6VXNlcjYzODQ4MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/6384803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zugunruhekami", "html_url": "https://github.com/Zugunruhekami", "followers_url": "https://api.github.com/users/Zugunruhekami/followers", "following_url": "https://api.github.com/users/Zugunruhekami/following{/other_user}", "gists_url": "https://api.github.com/users/Zugunruhekami/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zugunruhekami/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zugunruhekami/subscriptions", "organizations_url": "https://api.github.com/users/Zugunruhekami/orgs", "repos_url": "https://api.github.com/users/Zugunruhekami/repos", "events_url": "https://api.github.com/users/Zugunruhekami/events{/privacy}", "received_events_url": "https://api.github.com/users/Zugunruhekami/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10644", "html_url": "https://github.com/huggingface/transformers/pull/10644", "diff_url": "https://github.com/huggingface/transformers/pull/10644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10644.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10643/comments
https://api.github.com/repos/huggingface/transformers/issues/10643/events
https://github.com/huggingface/transformers/issues/10643
828,954,067
MDU6SXNzdWU4Mjg5NTQwNjc=
10,643
Space token cannot be add when is_split_into_words = True
{ "login": "Boltzmachine", "id": 56542320, "node_id": "MDQ6VXNlcjU2NTQyMzIw", "avatar_url": "https://avatars.githubusercontent.com/u/56542320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Boltzmachine", "html_url": "https://github.com/Boltzmachine", "followers_url": "https://api.github.com/users/Boltzmachine/followers", "following_url": "https://api.github.com/users/Boltzmachine/following{/other_user}", "gists_url": "https://api.github.com/users/Boltzmachine/gists{/gist_id}", "starred_url": "https://api.github.com/users/Boltzmachine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Boltzmachine/subscriptions", "organizations_url": "https://api.github.com/users/Boltzmachine/orgs", "repos_url": "https://api.github.com/users/Boltzmachine/repos", "events_url": "https://api.github.com/users/Boltzmachine/events{/privacy}", "received_events_url": "https://api.github.com/users/Boltzmachine/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
for example, ```python >>> tokenizer = BertTokenizer.from_pretrained('bert-base-chinese') >>> tokenizer.add_tokens(' ') 1 ``` ```python >>> tokenizer.encode('你好 世界', add_special_tokens=False) [872, 1962, 21128, 686, 4518] >>> tokenizer.encode(['你','好',' ', '世', '界'], is_split_into_words=True, add_special_tokens=False) [872, 1962, 686, 4518] ``` Obviously, the blank token is ignored. But if you change it to another token like ‘[balabala]’, it works. So what is the proper way to do this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10643/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10642/comments
https://api.github.com/repos/huggingface/transformers/issues/10642/events
https://github.com/huggingface/transformers/issues/10642
828,880,311
MDU6SXNzdWU4Mjg4ODAzMTE=
10,642
Unable To Load Pretrained Longformer Models' Tokenizers
{ "login": "UmerTariq1", "id": 32323864, "node_id": "MDQ6VXNlcjMyMzIzODY0", "avatar_url": "https://avatars.githubusercontent.com/u/32323864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UmerTariq1", "html_url": "https://github.com/UmerTariq1", "followers_url": "https://api.github.com/users/UmerTariq1/followers", "following_url": "https://api.github.com/users/UmerTariq1/following{/other_user}", "gists_url": "https://api.github.com/users/UmerTariq1/gists{/gist_id}", "starred_url": "https://api.github.com/users/UmerTariq1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UmerTariq1/subscriptions", "organizations_url": "https://api.github.com/users/UmerTariq1/orgs", "repos_url": "https://api.github.com/users/UmerTariq1/repos", "events_url": "https://api.github.com/users/UmerTariq1/events{/privacy}", "received_events_url": "https://api.github.com/users/UmerTariq1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, I can reproduce and traced it back to https://github.com/huggingface/transformers/issues/10624. Investigating!", "Found the issue, opening a PR shortly.", "It should now be fixed on `master`. Thanks a lot for using the `master` branch and letting us know of the issue!" ]
1,615
1,615
1,615
NONE
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Windows - Python version : 3.7.10 - Using GPU in script?: Issue is with both - Using distributed or parallel set-up in script?: Single device @patrickvonplaten (because the issue is with longformers) @LysandreJik (because the issue is with tokenizers) ## Information Model I am using : Longformer. The problem arises when loading tokenizer using from_pretrained() function. The tasks I am working on is Question Answering but it does not matter since I am facing this issue while loading any kind of Longformer: ## To reproduce Steps to reproduce the behavior: 1. Install Transformers 2. import Transformers 3. run tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME) ## Reference Code: ``` !pip3 install git+https://github.com/huggingface/transformers import transformers DEEP_LEARNING_MODEL_NAME = "mrm8488/longformer-base-4096-finetuned-squadv2" # Not working for 4.4.0.dev0 # DEEP_LEARNING_MODEL_NAME = "a-ware/longformer-QA" # Not working for 4.4.0.dev0 # DEEP_LEARNING_MODEL_NAME = "valhalla/longformer-base-4096-finetuned-squadv1" # Not working for 4.4.0.dev0 # DEEP_LEARNING_MODEL_NAME = "allenai/longformer-base-4096" # Not working for 4.4.0.dev0 # DEEP_LEARNING_MODEL_NAME = "deepset/roberta-base-squad2" # Working for 4.4.0.dev0 # DEEP_LEARNING_MODEL_NAME = "mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt" # Working for 4.4.0.dev0 tokenizer = transformers.AutoTokenizer.from_pretrained(DEEP_LEARNING_MODEL_NAME) ``` ## Reference Colab notebook: https://colab.research.google.com/drive/1v10E77og3-7B2_aFfYhrHvzBZzRlo7wo#scrollTo=2zHj2lMsFuv3 ## Further Information: - This issue started appearing today. It **was working fine till yesterday.** - This **issue is only with 4.4.0** dev version. This issue **does not** occur for pip install transformers (which is currently on version **4.3.3**) - The issue is only while loading tokenizers, not models - The issue is only while loading longformers (any longformer model). Other models' tokenizers are loaded correctly (for example 'deepset/roberta-base-squad2' tokenizer can be loaded correctly)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10642/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10641/comments
https://api.github.com/repos/huggingface/transformers/issues/10641/events
https://github.com/huggingface/transformers/issues/10641
828,871,639
MDU6SXNzdWU4Mjg4NzE2Mzk=
10,641
Unable to reduce time in summarization!
{ "login": "divyanshugit", "id": 53843818, "node_id": "MDQ6VXNlcjUzODQzODE4", "avatar_url": "https://avatars.githubusercontent.com/u/53843818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/divyanshugit", "html_url": "https://github.com/divyanshugit", "followers_url": "https://api.github.com/users/divyanshugit/followers", "following_url": "https://api.github.com/users/divyanshugit/following{/other_user}", "gists_url": "https://api.github.com/users/divyanshugit/gists{/gist_id}", "starred_url": "https://api.github.com/users/divyanshugit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/divyanshugit/subscriptions", "organizations_url": "https://api.github.com/users/divyanshugit/orgs", "repos_url": "https://api.github.com/users/divyanshugit/repos", "events_url": "https://api.github.com/users/divyanshugit/events{/privacy}", "received_events_url": "https://api.github.com/users/divyanshugit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nThe generation part of TensorFlow can only be run on eagermode, hence doesn't matter how you execute it, you will not be able to run it \"fast\". It is planned to bring a graph execution for the generation, but no ETA yet. Sorry for the inconvenience." ]
1,615
1,616
1,616
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: `t5-large` - Platform: Amazon Sagemaker - Python version: 3.7 - Tensorflow version (GPU?):2.3 - Using distributed or parallel set-up in script?: Unable to implement it properly. ### Who can help @LysandreJik, @patil-suraj, @jplu ### Problem: Unable to reduce the summarization time. The tasks I am working on is: I am using pretrained transformer of T5(`TFT5ForConditionalGeneration`) for text summarization. Brief script: ``` inputs = tokenizer("summarize: " + text, return_tensors="tf").input_ids outputs = model.generate( inputs, max_length=200, min_length=5, num_beams=5,) ``` I tried to use distributed strategy of tensorflow. But it doesn't made any improvement. ``` strategy = tf.distribute.MirroredStrategy() strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"]) ``` ## Expected behavior I am hoping that if we increase the number of **GPU**, time must be reduced. But It is not happening in this case.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10641/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10640/comments
https://api.github.com/repos/huggingface/transformers/issues/10640/events
https://github.com/huggingface/transformers/issues/10640
828,830,142
MDU6SXNzdWU4Mjg4MzAxNDI=
10,640
Nonetype when using deepspeed
{ "login": "dorooddorood606", "id": 79288051, "node_id": "MDQ6VXNlcjc5Mjg4MDUx", "avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorooddorood606", "html_url": "https://github.com/dorooddorood606", "followers_url": "https://api.github.com/users/dorooddorood606/followers", "following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}", "gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions", "organizations_url": "https://api.github.com/users/dorooddorood606/orgs", "repos_url": "https://api.github.com/users/dorooddorood606/repos", "events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}", "received_events_url": "https://api.github.com/users/dorooddorood606/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Hi! I think the error you're looking for is actually a bit above to the error you're mentioning, namely:\r\n```\r\nRuntimeError: [enforce fail at CPUAllocator.cpp:67] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 2329531392 bytes. Error code 12 (Cannot allocate memory)\r\n```\r\nwhich would indicate a memory issue!", "What @LysandreJik said - check if you're close to using up all your gpu memory?\r\n\r\nYou can try to reduce `allgather_bucket_size` and `reduce_bucket_size` sizes,\r\nhttps://huggingface.co/transformers/main_classes/trainer.html#zero\r\nand see if it solves the problem.\r\n\r\nIn general if you see the traceback happening inside deepspeed most likely you will want to file an Issue at https://github.com/microsoft/DeepSpeed/ - Deepspeed is pretty much an independent engine.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,615
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> deepspeed: @stas00 ## Information Hi there, I am training run_mlm.py on wikipedia datasets with deepspeed, and for some datasets I am getting the error below. Do you have an idea why this might happen with deepspeed? It looks like it is not a memory issue but rather a None bug occurring: ``` File "run_mlm.py", line 525, in <module> main() File "run_mlm.py", line 491, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/transformers/trainer.py", line 968, in train self.deepspeed.step() File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 959, in step self._take_model_step(lr_kwargs) File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 914, in _take_model_step self.optimizer.step() File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1425, in step self.optimizer.step() File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/optim/optimizer.py", line 89, in wrapper return func(*args, **kwargs) File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/optim/adamw.py", line 121, in step group['eps']) File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/optim/_functional.py", line 136, in adamw denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps) RuntimeError: [enforce fail at CPUAllocator.cpp:67] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 2329531392 bytes. Error code 12 (Cannot allocate memory) Exception ignored in: <function tqdm.__del__ at 0x7f9b52ef4440> Traceback (most recent call last): File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/tqdm/std.py", line 1090, in __del__ File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/tqdm/std.py", line 1280, in close File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/tqdm/std.py", line 574, in _decr_instances File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/site-packages/tqdm/_monitor.py", line 51, in exit File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/threading.py", line 522, in set File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/threading.py", line 365, in notify_all File "/user/diba/libs/anaconda3/envs/transformer/lib/python3.7/threading.py", line 348, in notify TypeError: 'NoneType' object is not callable ``` ## To reproduce Sorry, it is a bit hard to reproduce: I have modified the language modeling data collator a bit to let it do T5 pretraining, which is not currently included in the huggingface repo. If you could give me some advice based on the error, it would be greatly appreciated
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10640/timeline
completed
null
null
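Illustrative sketch for the record above (issue 10640): the maintainers suggest shrinking the ZeRO stage-2 bucket sizes to lower peak memory. The snippet below writes a minimal DeepSpeed config with smaller `allgather_bucket_size` and `reduce_bucket_size`; the concrete values, batch size, and file name are assumptions made for this example, not values taken from the thread.

```python
import json

# Minimal ZeRO stage-2 config with reduced communication bucket sizes.
# Smaller buckets lower peak memory at some throughput cost.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "allgather_bucket_size": 50_000_000,  # assumed value; documented defaults are larger
        "overlap_comm": True,
        "reduce_scatter": True,
        "reduce_bucket_size": 50_000_000,     # assumed value
        "contiguous_gradients": True,
    },
    "train_micro_batch_size_per_gpu": 4,  # assumed; should match the Trainer arguments
    "gradient_accumulation_steps": 1,     # assumed; should match the Trainer arguments
}

with open("ds_config_small_buckets.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

The config file would then be passed to the example script through the Trainer's `--deepspeed` flag, e.g. `deepspeed run_mlm.py --deepspeed ds_config_small_buckets.json ...`, keeping the rest of the original command unchanged.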
https://api.github.com/repos/huggingface/transformers/issues/10639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10639/comments
https://api.github.com/repos/huggingface/transformers/issues/10639/events
https://github.com/huggingface/transformers/issues/10639
828,788,589
MDU6SXNzdWU4Mjg3ODg1ODk=
10,639
Support Quantization Aware Fine-tuning in all models (pytorch)
{ "login": "sai-prasanna", "id": 3595526, "node_id": "MDQ6VXNlcjM1OTU1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sai-prasanna", "html_url": "https://github.com/sai-prasanna", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Would [I-BERT](https://huggingface.co/transformers/master/model_doc/ibert.html), available on `master` and contributed by @kssteven418 be of interest?", "@LysandreJik, Thanks for the useful reference. I guess the i-BERT model has manually implemented the architectural components (kernels, int8 layer norm etc) to make quantization work for BERT. If I am not wrong, their objective is to train BERT as much as possible in int8. The qat in torch takes the approach of training model in floating point fully but incorporating noise in gradients that mimic noise due to quantization. So it's basically throwing the \"optimizing for quantization error\" part to gradient descent, foregoing any need for altering architectures or fp32/16 training regime.\r\n\r\n This approach would be broader and apply for all the architectures without re-implementation. Maybe we can have a \"qat\" flag in config, that can be used to perform fake quantization and dequantization (which introduces quantization noise to parts of the gradients).", "Do you have an idea of the changes required for that? Could you do PoC and show us so that we can discuss over it?", "@LysandreJik Can you take a look at this [implementation](https://github.com/IntelLabs/nlp-architect/blob/0f00215dcaf81f8a9b296035834310d77015f085/nlp_architect/models/transformers/quantized_bert.py). It's a functioning qat aware BERT fine-tuning implementation. The process is described in this paper, [Q8BERT: Quantized 8Bit BERT](https://arxiv.org/abs/1910.06188).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "This is a feature I'd like to see as well, as dynamic quantization leads to a huge accuracy drop in my use case. My understanding is that a possible implementation of QAT could also easily be expanded to support static quantization.", "@sai-prasanna is it possible to load Bert-base (FP32 model) weights into Q8Bert ?" ]
1,615
1,624
1,619
NONE
null
# 🚀 Feature request PyTorch supports mimicking quantization errors while training the models. Here is the [tutorial](https://pytorch.org/tutorials/recipes/quantization.html#quantization-aware-training) on this. For our NLP transformers, it requires a "fake quantization" operation to be done on the embeddings. I found this [repository](https://github.com/IntelLabs/nlp-architect/blob/0f00215dcaf81f8a9b296035834310d77015f085/nlp_architect/models/transformers/quantized_bert.py) converting BERT to support this. ## Motivation I think quantization-aware fine-tuning (if it works) will help a lot of use cases where dynamic quantization alone doesn't suffice in maintaining the performance of the quantized model. Supporting it out of the box will remove the duplication of model code in end use cases. ## Your contribution I can work on this ASAP. I would appreciate initial thoughts on what the MVP for it would be, any thoughts on the API (should we take in a "qat" boolean in config?), any pitfalls that I should be aware of, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10639/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10639/timeline
completed
null
null
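Illustrative sketch for the feature request above (issue 10639): eager-mode quantization-aware training in PyTorch, applied to a toy classification head rather than a full transformer. The `QATHead` module, dimensions, and hyperparameters are made up for this example; a real Q8BERT-style integration would need quantization-aware replacements inside the encoder layers and embeddings, as discussed in the thread.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class QATHead(nn.Module):
    """Toy classification head standing in for the layers a QAT-enabled model would wrap."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where float -> int8 happens after convert()
        self.fc1 = nn.Linear(768, 768)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(768, 2)
        self.dequant = tq.DeQuantStub()  # marks where int8 -> float happens after convert()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = QATHead().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)  # inserts fake-quant observers into eligible modules

# Ordinary fine-tuning step (dummy data): gradients see the quantization noise.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
inputs, labels = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
optimizer.step()

# After training, convert to a real int8 model for inference.
model.eval()
model_int8 = tq.convert(model)
print(model_int8)
```

The appeal of this approach, as the issue argues, is that the architecture stays in floating point during training and only the rounding noise is simulated, so no int8 kernels have to be reimplemented per model.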
https://api.github.com/repos/huggingface/transformers/issues/10638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10638/comments
https://api.github.com/repos/huggingface/transformers/issues/10638/events
https://github.com/huggingface/transformers/pull/10638
828,651,712
MDExOlB1bGxSZXF1ZXN0NTkwNDM0MTc5
10,638
Fix decoding score comparison when using logits processors or warpers
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "The failing test is `test_90_generation_from_short_input`, which generates \"have you ever heard of sam harris? he is an american singer and songwriter. have you heard of him?\" instead of \"have you ever been to a sam club? it's a great club in the south.\" or \"have you ever heard of sam harris? he's an american singer, songwriter, and actor.\".\r\n\r\nI honestly don't know what's the expected behavior there, so not sure if it's flaky or not. The weird thing is that this test seems to be greedy search, not beam search.", "Actually, I just looked more closely and the failing test does use beam search (the beam size is specified in the config). This is an example of something that changes since it uses a `NoRepeatNGramLogitsProcessor`, a `MinLengthLogitsProcessor`, and a `ForcedEOSTokenLogitsProcessor`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I'm gonna address it, it's been in my mind. Please don't mark it as stale!", "I've added the `WIP` label so that the stale bot doesn't close it!", "@patrickvonplaten sorry for the big delay.\r\n\r\nI changed the normalization to be a logit warper now. What do you think of it, and its documentation?\r\n\r\nAlso, what if we set a deprecation for it? And take advantage of some breaking change in the future and make it the default?", "The failing tests are flaky, right?", "Could we add one tests for the new logits processor as well? :-)", "@patrickvonplaten can you remove the WIP label? This should be done.\r\n\r\nAlso, the latest time a test failed, it seemed to be flaky. It should be good to go :rocket: ", "_The documentation is not available anymore as the PR was closed or merged._", "@patrickvonplaten friendly reminder on this!", "Also, should we add a flag in `generate` so this logit processor gets added to the list? Such as `renormalize_logits`.", "PR looks good to go for me - thanks @bryant1410. Yes indeed could you maybe add a flag `renormalize_logits` to `generate()`?", "> PR looks good to go for me - thanks @bryant1410. Yes indeed could you maybe add a flag `renormalize_logits` to `generate()`?\r\n\r\nOkay, @patrickvonplaten I did this change.\r\n\r\nWhat do you think about also making `renormalize_logits=True` in the future? So then adding some deprecation or warning that this value is gonna change? Or that it should be set to `False` to keep BC?", "Oh, and btw, note I also applied it to the warpers (so it's applied to both the processors and warpers).", "Should the attribute be added to the configs such that the following can be applied?\r\n\r\n```python\r\nrenormalize_logits if renormalize_logits is not None else self.config.renormalize_logits\r\n```", "> Should the attribute be added to the configs such that the following can be applied?\r\n> \r\n> ```python\r\n> renormalize_logits if renormalize_logits is not None else self.config.renormalize_logits\r\n> ```\r\n\r\nNo need for this I think since it's quite a specific logit processor", "@bryant1410, could you also update RAG's generate method to incorporate you changes? 
The test currently fails with \r\n```TypeError: _get_logits_processor() missing 1 required positional argument: 'renormalize_logits'```\r\n\r\nIt should be easy to adapt here: https://github.com/huggingface/transformers/blob/febe42b5daf4b416f4613e9d7f68617ee983bb40/src/transformers/models/rag/modeling_rag.py#L1608", "> @bryant1410, could you also update RAG's generate method to incorporate you changes? The test currently fails with `TypeError: _get_logits_processor() missing 1 required positional argument: 'renormalize_logits'`\r\n> \r\n> It should be easy to adapt here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/febe42b5daf4b416f4613e9d7f68617ee983bb40/src/transformers/models/rag/modeling_rag.py#L1608\r\n\r\nDone. What about this?\r\n\r\n> What do you think about also making `renormalize_logits=True` in the future? So then adding some deprecation or warning that this value is gonna change? Or that it should be set to `False` to keep BC?", "Good for merge for me! Let's see what @gante says ", "> Good for merge for me! Let's see what @gante says\r\n\r\nOkay! What about the comment/idea on making it `renormalize_logits=True` in the future? So then adding some deprecation or warning that this value is gonna change?", "> > Good for merge for me! Let's see what @gante says\r\n> \r\n> Okay! What about the comment/idea on making it `renormalize_logits=True` in the future? So then adding some deprecation or warning that this value is gonna change?\r\n\r\nDon't really think that's possible due to backwards breaking changes tbh", "> Don't really think that's possible due to backwards breaking changes tbh\r\n\r\nI understand. However, eventually, the breaking change is gonna happen because of some accumulated \"debt\" that gets big enough, after many different fixes or wanted features. Like it happens in other libraries. It could happen after some major version change (e.g., v5), which it's a great opportunity to change a lot of desired changes that are breaking.\r\n\r\nOne approach to track this is to deprecate the value and say when it's gonna be changed (e.g., v5). It could be with a warning, some comment in the docstring, or maybe just a doc that tracks down which is gonna be changed. I guess what I'm saying is to add this change to that list (is it worth it, in your opinion?). BTW, do you have in this repo such a list of things that are eventually gonna be changed (maybe implicitly tracked in various comments)?\r\n\r\nWhat are your thoughts? Maybe you think differently?", "> To ensure this change stays future-proof, I'd like to discuss an additional change. The new logit processor, when it exists in the list of logit processors to be applied, must be the last one. Should we raise an exception when it isn't? (e.g. it has to be the last one in [this list](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_logits_process.py#L82), when it exists) cc @patrickvonplaten\r\n\r\nMakes sense to me. However, what if the user wants to do something custom, by manually adding this processor logit somewhere? If we add a check and an exception, then the user would face it in this custom scenario. Or maybe it's a bit far-fetched?", "> Makes sense to me. However, what if the user wants to do something custom, by manually adding this processor logit somewhere? If we add a check and an exception, then the user would face it in this custom scenario. Or maybe it's a bit far-fetched?\r\n\r\nUhmm I see. 
We can go with the low effort, low cost, and low consequence alternative (see the following suggestion)", "@bryant1410 regarding the `renormalize_logits` default value, I've added it to a v5 wishlist, to discuss internally when we decide to do the next major change :)\r\n\r\nSince there are no other outstanding requests and CI is green, I'm merging the PR 💪 " ]
1,615
1,666
1,649
CONTRIBUTOR
null
When doing beam search or other decoding search strategies, the logit scores are normalized (with `log_softmax`) so that comparisons between the beams (hypotheses) are meaningful. However, the logits processors or warpers may change the scores, which may then no longer be normalized. For example, say you have a beam size of 2. At some point during beam search, beam A is better than B (higher score). You use `prefix_allowed_tokens_fn`, which through a logits processor narrows the options for the next token down to a single one by masking out every other token with `-inf`. The score vector may then look like `[-inf, ..., -2.13, ..., -inf]`. This is returned as-is, so the scores are no longer normalized, and this filter is not applied to B. Beam search now selects B, even though extending hypothesis A with the only allowed token actually keeps the same probability (the normalized vector should have been `[-inf, ..., 0, ..., -inf]`), so hypothesis A should have been kept, and that is what actually should happen. This erroneous behavior can happen with any logits processor that doesn't normalize its output, which, as far as I can see, is often the case. That's why I moved the `log_softmax` to after the logits processor/warper application. I also checked whether any logits processor needs normalized input. That doesn't seem to be the case (though I'm not 100% sure), and they can still individually apply a normalization if they need to. Maybe the documentation could be changed, by the way: https://github.com/huggingface/transformers/blob/26a33cfd8c2d6923f41ab98683f33172e8948ff3/src/transformers/generation_logits_process.py#L37-L39 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? I feel I should tag @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10638/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10638", "html_url": "https://github.com/huggingface/transformers/pull/10638", "diff_url": "https://github.com/huggingface/transformers/pull/10638.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10638.patch", "merged_at": 1649839053000 }
https://api.github.com/repos/huggingface/transformers/issues/10637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10637/comments
https://api.github.com/repos/huggingface/transformers/issues/10637/events
https://github.com/huggingface/transformers/pull/10637
828,317,070
MDExOlB1bGxSZXF1ZXN0NTkwMTM4ODQz
10,637
Remove special treatment for custom vocab files
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,615
1,615
1,615
COLLABORATOR
null
# What does this PR do? This PR follows up from #10624 and removes the ability to specify a custom vocab file that doesn't lie in the model repo for a tokenizer (which prevents using the versioning system for those tokenizers). It also cleans up a tiny bit the `from_pretrained` method, mainly: - use f-strings instead of `.format()` - remove double try since you can have several except in cascade - use a FutureWarning for a deprecation warning that was just sent to the logs and set an end date
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10637", "html_url": "https://github.com/huggingface/transformers/pull/10637", "diff_url": "https://github.com/huggingface/transformers/pull/10637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10637.patch", "merged_at": 1615479117000 }
https://api.github.com/repos/huggingface/transformers/issues/10636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10636/comments
https://api.github.com/repos/huggingface/transformers/issues/10636/events
https://github.com/huggingface/transformers/pull/10636
828,255,160
MDExOlB1bGxSZXF1ZXN0NTkwMDgzNTg3
10,636
Layout lm tf 2
{ "login": "atahmasb", "id": 25216362, "node_id": "MDQ6VXNlcjI1MjE2MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/25216362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atahmasb", "html_url": "https://github.com/atahmasb", "followers_url": "https://api.github.com/users/atahmasb/followers", "following_url": "https://api.github.com/users/atahmasb/following{/other_user}", "gists_url": "https://api.github.com/users/atahmasb/gists{/gist_id}", "starred_url": "https://api.github.com/users/atahmasb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atahmasb/subscriptions", "organizations_url": "https://api.github.com/users/atahmasb/orgs", "repos_url": "https://api.github.com/users/atahmasb/repos", "events_url": "https://api.github.com/users/atahmasb/events{/privacy}", "received_events_url": "https://api.github.com/users/atahmasb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Thanks to you all tests passed! It's ready for an in depth review. \r\nI've already uploaded TF2 model files under:\r\n- https://huggingface.co/atahmasb/tf-layoutlm-base-uncased\r\n- https://huggingface.co/atahmasb/tf-layoutlm-large-uncased\r\n\r\nwould you please copy them to the main repos when the code is merged?\r\n\r\nAlso, it seems like I can't add you or others from the team as reviewers", "> I think this is starting to look great! Fantastic that you've added all models.\r\n> \r\n> Could you add an additional integration test, to ensure that the current implementation doesn't diverge?\r\n> \r\n> Something like what is done in the PT version of LayoutLM:\r\n> \r\n> https://github.com/huggingface/transformers/blob/8715d20c97b3975c1d89cf0c0cca45af91badd1d/tests/test_modeling_layoutlm.py#L262-L283\r\n> \r\n> I'm asking others to review.\r\n\r\nsure, will do", "> Thanks a lot for adding this model! There is one last problem with the examples in the docstrings (we can't use the base ones since we need to provide bounding boxes), otherwise it's good to be merged!\r\n\r\nThanks, will fix it.", "> I think this is starting to look great! Fantastic that you've added all models.\r\n> \r\n> Could you add an additional integration test, to ensure that the current implementation doesn't diverge?\r\n> \r\n> Something like what is done in the PT version of LayoutLM:\r\n> \r\n> https://github.com/huggingface/transformers/blob/8715d20c97b3975c1d89cf0c0cca45af91badd1d/tests/test_modeling_layoutlm.py#L262-L283\r\n> \r\n> I'm asking others to review.\r\n\r\nfor the tf layoutlm integration tests to pass on CI, the tf model files should live under `microsoft/tayoutlm-base-uncased` or I have to use their location under my account in the model registry which is `atahmasb/tf-layoutlm-base-uncased`. Do you want me to use the temp location under my account for now?", "Yes sure let's use a temporary reference for now and update it right before we merge.", "@LysandreJik I cleaned up the initialisation file, the conflicts are resolved now! Anything else before it can be merged?", "Cool! I just moved the weights to the microsoft organization. Could you update the links/checkpoint identifiers in your PR and test that it has the expected behavior? Thanks!", "> Cool! I just moved the weights to the microsoft organization. Could you update the links/checkpoint identifiers in your PR and test that it has the expected behavior? Thanks!\r\n\r\nit's done! for some reasons one of the tests that has nothing to do with my code failed on CI this time. do you have any idea why? Also it seems like i can't re run the tests on CI without making a change in the code\r\n```\r\nFAILED tests/test_modeling_prophetnet.py::ProphetNetModelTest::test_attn_mask_model\r\n=== 1 failed, 5133 passed, 2140 skipped, 2209 warnings in 282.78s (0:04:42) ====\r\n```", "> Yes, this is a flaky test that's under the process of being fixed so you needn't worry about it.\r\n> \r\n> This looks good to me, thanks for your work @atahmasb!!\r\n\r\nThanks for your help along the way!", "> There are a few things left to fix, then this should be good to merge!\r\n\r\nThanks for catching those! They are fixed [here](https://github.com/huggingface/transformers/pull/10636/commits/3bee70daf2a71066335d360761ce5c7bb432500a)" ]
1,615
1,616
1,616
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds TF version of LayoutLM for issue [(10312)](https://github.com/huggingface/transformers/issues/10312) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10636/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10636", "html_url": "https://github.com/huggingface/transformers/pull/10636", "diff_url": "https://github.com/huggingface/transformers/pull/10636.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10636.patch", "merged_at": 1616689959000 }