| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
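A minimal sketch of loading a dump with this schema via the `datasets` library; the `issues.jsonl` path is a hypothetical placeholder, since the dump itself does not name a dataset id or file:

```python
# Hedged sketch: load a JSON Lines dump with the schema above.
# "issues.jsonl" is a hypothetical placeholder path, not a name from this dump.
from datasets import load_dataset

issues = load_dataset("json", data_files="issues.jsonl", split="train")

# Columns match the schema: `user`/`reactions`/`pull_request` are dicts,
# `labels`/`assignees` are lists, `comments` is a sequence of strings.
print(issues.column_names)
pulls = issues.filter(lambda row: row["pull_request"] is not None)
print(f"{len(pulls)} of {len(issues)} records are pull requests")
```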
https://api.github.com/repos/huggingface/transformers/issues/7021
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7021/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7021/comments
https://api.github.com/repos/huggingface/transformers/issues/7021/events
https://github.com/huggingface/transformers/issues/7021
696,438,605
MDU6SXNzdWU2OTY0Mzg2MDU=
7,021
Where can I download the pre-trained pytorch_model.bin files?
{ "login": "Deep1994", "id": 24366782, "node_id": "MDQ6VXNlcjI0MzY2Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24366782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Deep1994", "html_url": "https://github.com/Deep1994", "followers_url": "https://api.github.com/users/Deep1994/followers", "following_url": "https://api.github.com/users/Deep1994/following{/other_user}", "gists_url": "https://api.github.com/users/Deep1994/gists{/gist_id}", "starred_url": "https://api.github.com/users/Deep1994/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Deep1994/subscriptions", "organizations_url": "https://api.github.com/users/Deep1994/orgs", "repos_url": "https://api.github.com/users/Deep1994/repos", "events_url": "https://api.github.com/users/Deep1994/events{/privacy}", "received_events_url": "https://api.github.com/users/Deep1994/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can go to the model hub, click on a model and select \"List all files in model\". These are links to the files.\r\n![download_model](https://user-images.githubusercontent.com/30755778/92567770-2cb95080-f24c-11ea-88c1-008e98556a0e.gif)\r\n\r\n" ]
1,599
1,599
1,599
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7021/timeline
completed
null
null
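As the answer in this thread notes, each model's files are listed on the model hub. For completeness, a minimal sketch of the usual alternative to manual downloads: `from_pretrained` fetches and caches `pytorch_model.bin` (plus config and tokenizer files) automatically. The model id below is an arbitrary example, not one named in the issue:

```python
# Sketch: from_pretrained downloads pytorch_model.bin and the accompanying
# config/tokenizer files, then caches them locally for reuse.
# "bert-base-uncased" is an arbitrary example model id.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```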
https://api.github.com/repos/huggingface/transformers/issues/7020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7020/comments
https://api.github.com/repos/huggingface/transformers/issues/7020/events
https://github.com/huggingface/transformers/issues/7020
696,349,669
MDU6SXNzdWU2OTYzNDk2Njk=
7,020
Use `run_language_modeling.py` to fine-tune GPT-2, but it crashes unexpectedly.
{ "login": "Abbyyan", "id": 12140508, "node_id": "MDQ6VXNlcjEyMTQwNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/12140508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abbyyan", "html_url": "https://github.com/Abbyyan", "followers_url": "https://api.github.com/users/Abbyyan/followers", "following_url": "https://api.github.com/users/Abbyyan/following{/other_user}", "gists_url": "https://api.github.com/users/Abbyyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abbyyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abbyyan/subscriptions", "organizations_url": "https://api.github.com/users/Abbyyan/orgs", "repos_url": "https://api.github.com/users/Abbyyan/repos", "events_url": "https://api.github.com/users/Abbyyan/events{/privacy}", "received_events_url": "https://api.github.com/users/Abbyyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, do you mind copy-pasting the text in your terminal instead of putting images? It'll be easier for us to understand, to debug, and for other users to search for similar issues. Thanks!", "> Hi, do you mind copy-pasting the text in your terminal instead of putting images? It'll be easier for us to understand, to debug, and for other users to search for similar issues. Thanks!\r\n\r\nI've figure out the problem of gpu core. According to https://github.com/pytorch/pytorch/issues/31285, my gpu card is not supported by pytorch. I'm trying to build pytorch from source.\r\nBut it's weird when i using `run_language_modeling.py ` with `--no_cuda `. There is no error message.\r\nThe command i used is \r\n```shell\r\npython3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=gpt2 --per_device_train_batch_size=1 --do_train --train_data_file=/home/xxx/gpt_model/data_info/transformer.data --block_size=512 --save_steps=500 --overwrite_output_dir --no_cuda\r\n\r\n```\r\nThe output message is \r\n```shell\r\n09/09/2020 16:23:23 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False\r\n09/09/2020 16:23:23 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir', overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep09_16-23-23_TENCENT64.site', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=True, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True)\r\n/home/xxx/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/modeling_auto.py:821: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.\r\n FutureWarning,\r\n/home/xxx/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/tokenization_utils_base.py:1321: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.\r\n FutureWarning,\r\n09/09/2020 16:23:31 - INFO - filelock - Lock 140250526322472 acquired on /home/xxx/gpt_model/data_info/cached_lm_GPT2Tokenizer_512_transformer.data.lock\r\n09/09/2020 16:23:32 - INFO - filelock - Lock 140250526322472 released on /home/xxx/gpt_model/data_info/cached_lm_GPT2Tokenizer_512_transformer.data.lock\r\n\r\n\r\n/home/xxx/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/trainer.py:247: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. 
Use `args.prediction_loss_only` instead.\r\n FutureWarning,\r\n<library>In get_train_dataloader ,tarin_batch_size = 1\r\nEpoch: 0%| | 0/3 \r\n\r\nKilledion: 4%|████▎ | 194/5390 [04:30<2:58:02, 2.06s/it] \r\n```\r\nAnd `echo $?`, the return code is `137`. Thanks a lot.", "The return code 137 means that you have an out of memory error. Do you get the same error if you use `distilgpt2` with a `--block_size=64`? (Just for testing purposes).\r\n\r\nWe've also recently patched a memory error on the `Trainer`, could you install from source to benefit from the fix? You can do so as such:\r\n\r\n`pip install git+https://github.com/huggingface/transformers`", "> pip install git+https://github.com/huggingface/transformers\r\n\r\nYes, I found `Out of memory` log in `/var/log/messages` and it turns out the process use a lot of memory.\r\n```shell\r\nSep 9 17:10:44 centos kernel: Out of memory: Kill process 126138 (python3) score 939 or sacrifice child\r\nSep 9 17:10:44 centos kernel: Killed process 126170 (python3) total-vm:325690732kB, anon-rss:125105968kB, file-rss:0kB\r\n```\r\nAnd this is my machine info (caused in shell with `top` command)\r\n```shell\r\ntop - 17:47:55 up 303 days, 2:35, 4 users, load average: 1.14, 2.98, 3.44\r\nTasks: 468 total, 1 running, 467 sleeping, 0 stopped, 0 zombie\r\n%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\r\nKiB Mem : 13132300+total, 12383201+free, 3294300 used, 4196688 buff/cache\r\nKiB Swap: 2088956 total, 12 free, 2088944 used. 12565272+avail Mem \r\n\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \r\n 2868 root 20 0 3006004 19072 0 S 0.3 0.0 225:15.55 dockerd \r\n```\r\nThen i use `pip uninstall transformers; pip install git+https://github.com/huggingface/transformers` to reinstall the `transformers` library, and run the `run_language_modeling.py` again.\r\n```shell\r\npython3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=distilgpt2 --per_device_train_batch_size=1 --do_train --train_data_file=/home/xxx/gpt_model/data_info/data.txt --block_size=64 --save_steps=500 --overwrite_output_dir --no_cuda\r\n```\r\nThis is the memory info showed by `top` after i run `run_language_modeling.py` with cpu.\r\n```shell\r\ntop - 17:51:40 up 303 days, 2:38, 4 users, load average: 23.42, 9.30, 5.50\r\nTasks: 463 total, 2 running, 461 sleeping, 0 stopped, 0 zombie\r\n%Cpu(s): 34.4 us, 49.5 sy, 0.0 ni, 16.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\r\nKiB Mem : 13132300+total, 12142824+free, 5283664 used, 4611096 buff/cache\r\nKiB Swap: 2088956 total, 12 free, 2088944 used. 12365893+avail Mem \r\n\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \r\n 25414 xxx 20 0 0.188t 1.945g 83876 R 3356 1.6 47:31.03 python3 \r\n```\r\nIt already runs about two hours and works well. Thanks a lot.", "By the way, how can i use the fine-tuned model please? There are 6 files generated under `output_dir/checkpoint`, which are \r\n`config.json log_history.json optimizer.pt pytorch_model.bin scheduler.pt training_args.bin`. Then how can i use them? Should i just use them as follows? Does the fine-tuned model need to be renamed? Which level of the checkpoint directory should I specify? Thanks a lot.\r\n```shell\r\ntokenizer = GPT2Tokenizer.from_pretrained('/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/checkpoint-15000')\r\n``` \r\n", "Cool! Yes, the way you load the model is correct. 
I'm guessing you want to use the resulting model, so given that you passed the following as `--output_dir`: `--output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir`, you should be able to load it like\r\n\r\n```py\r\ntokenizer = GPT2Tokenizer.from_pretrained('/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir')\r\nmodel = GPT2LMHeadModel.from_pretrained('/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir')\r\n\r\n```", "> Cool! Yes, the way you load the model is correct. I'm guessing you want to use the resulting model, so given that you passed the following as `--output_dir`: `--output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir`, you should be able to load it like\r\n> \r\n> ```python\r\n> tokenizer = GPT2Tokenizer.from_pretrained('/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir')\r\n> model = GPT2LMHeadModel.from_pretrained('/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir')\r\n> ```\r\n\r\nGot it! \r\nI use `--output_dir`: `--output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir` and there are many checkpoint generate under it. Just choose one of them, for example, `checkpoint-14000` and copy `'vocab.json', 'merges.txt'` into `/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/checkpoint-14000`.\r\n```shell\r\n/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/checkpoint-14000 >>> wget https://cdn.huggingface.co/distilgpt2-vocab.json -O vocab.json\r\n/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/checkpoint-14000 >>> wget https://cdn.huggingface.co/distilgpt2-merges.txt -O merges.txt\r\n```\r\nThen i use the [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py) like follows and it generate text as expect.\r\n```shell\r\npython run_generation.py --model_type=gpt2 --model_name_or_path=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/checkpoint-14000 --no_cuda\r\n```\r\nThank you for your help! ", "Very cool, glad you got it to work! Let us know if you face any other issues.", "It seems like I still have the same issue. I try to use run_language_modeling.py to train a small-bert model (6 layer) from scratch. The process is killed after about 3 hours of training. The error message is simply \"Killed\" and I observes there's a constant increasing usage of memory. So I think the issue is also OOM\r\n\r\n```\r\ntop - 16:36:33 up 5:29, 1 user, load average: 1.07, 1.19, 1.16\r\nTasks: 353 total, 3 running, 349 sleeping, 0 stopped, 1 zombie\r\n%Cpu(s): 10.4 us, 1.2 sy, 0.0 ni, 88.1 id, 0.1 wa, 0.0 hi, 0.2 si, 0.0 st\r\nMiB Mem : 16010.9 total, 151.5 free, 3386.3 used, 12473.2 buff/cache\r\nMiB Swap: 2048.0 total, 1550.2 free, 497.8 used. 12181.5 avail Mem \r\n\r\n```\r\nIs this normal? I can't think of a reason why it would use this much memory." ]
1,599
1,599
1,599
NONE
null
# ❓ Questions & Help Use `run_language_modeling.py` to finetune gpt2, but it core unexpectedly. How can i find the cause of the core please? ## Details I'm using transformers's `run_language_modeling.py` to finetune gpt2 as follows. ### (1)with gpu ```shell python3 run_language_modeling.py --output_dir=/home/xxx/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=gpt2 --per_gpu_train_batch_size=1 --do_train --train_data_file=/home/xxx/data_info/transformer.data --block_size=512 --save_steps=500 --overwrite_output_dir ``` But it core unexpectedly. ```shell terminate called after throwing an instance of 'std::runtime_error' | 0/1348 [00:00<?, ?it/s] what(): NCCL Error 1: unhandled cuda error Aborted ``` And the returncode is `134`。 ![image](https://user-images.githubusercontent.com/12140508/92548004-863f7080-f288-11ea-8e26-50532f59c516.png) ### (2) with cpu ```shell python3 run_language_modeling.py --output_dir=/home/xxx/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=/home/xxx/data_info/transformer.data --block_size=512 --save_steps=500 --overwrite_output_dir --no_cuda ``` At first the training is normal , but will quit after a while with return code `137`. ![image](https://user-images.githubusercontent.com/12140508/92548295-39a86500-f289-11ea-86c1-d91d13121162.png) ### (3) question The dataset i use is just a `data.txt` which is a file combined with multiple articles , and a `<|endoftext|>` is added at the end of each article. How can i find the cause of the core please? Hope for your help. Thanks a lot.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7020/timeline
completed
null
null
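For reference, the exit code discussed in this thread decodes as 128 + 9 (SIGKILL), i.e. the process was killed, here by the kernel OOM killer (confirmed by the `/var/log/messages` lines quoted above). A sketch of the lower-memory rerun the thread converges on, with placeholder paths:

```shell
# Exit code 137 = 128 + SIGKILL(9): the kernel OOM killer ended the process.
# Lower-memory variant from the thread: distilgpt2 plus --block_size=64.
# Paths below are placeholders.
python3 run_language_modeling.py \
  --output_dir=./output_dir \
  --model_type=gpt2 \
  --model_name_or_path=distilgpt2 \
  --per_device_train_batch_size=1 \
  --do_train \
  --train_data_file=./data.txt \
  --block_size=64 \
  --save_steps=500 \
  --overwrite_output_dir \
  --no_cuda
```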
https://api.github.com/repos/huggingface/transformers/issues/7019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7019/comments
https://api.github.com/repos/huggingface/transformers/issues/7019/events
https://github.com/huggingface/transformers/issues/7019
696,129,892
MDU6SXNzdWU2OTYxMjk4OTI=
7,019
Proposal: Offset-based Token Classification utilities
{ "login": "talolard", "id": 5352830, "node_id": "MDQ6VXNlcjUzNTI4MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/5352830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/talolard", "html_url": "https://github.com/talolard", "followers_url": "https://api.github.com/users/talolard/followers", "following_url": "https://api.github.com/users/talolard/following{/other_user}", "gists_url": "https://api.github.com/users/talolard/gists{/gist_id}", "starred_url": "https://api.github.com/users/talolard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talolard/subscriptions", "organizations_url": "https://api.github.com/users/talolard/orgs", "repos_url": "https://api.github.com/users/talolard/repos", "events_url": "https://api.github.com/users/talolard/events{/privacy}", "received_events_url": "https://api.github.com/users/talolard/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, this is a very nice issue and I plan to work soon (in the coming 2 weeks) on related things (improving the examples to make full use of the Rust tokenization features). I'll re-read this issue (and all the links) to extract all the details and likely come back to you at that time.\r\n\r\nIn the meantime, here are two elements for your project:\r\n- the first is that for the fast tokenizers, the output of the tokenizer (a `BatchEncoding` instance) is actually a special kind of python dict with some super fast alignement methods powered by Rust, including a `char_to_token` alignement method that could maybe make your `align_tokens_to_annos` method a lot simpler and faster. You can read about it here: https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.BatchEncoding.char_to_token\r\n- the second element is that support for sentence piece is almost there in `tokenizers` so we will soon be able to use fast tokenizers for almost all the models.", "Thanks! That's super helpful.\r\nI also found out I can iterate over the batch which is really nice. \r\n\r\nI did find a bug and [opened an issue](https://github.com/huggingface/tokenizers/issues/404) \r\n```python\r\nfrom transformers import BertTokenizerFast,GPT2TokenizerFast\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-cased',)\r\n\r\n\r\nfor i in range(1,5):\r\n txt = \"💩\"*i\r\n enc = tokenizer(txt,return_offsets_mapping=True)\r\n token_at_i = enc.char_to_token(i-1)\r\n dec = tokenizer.decode(enc['input_ids'])\r\n \r\n print (f\" I wrote {txt} but got back '{dec}' and char_to_tokens({i-1}) returned {token_at_i}\")\r\n```\r\n```\r\n I wrote 💩 but got back '[CLS] [UNK] [SEP]' and char_to_tokens(0) returned 1\r\n I wrote 💩💩 but got back '[CLS] [UNK] [SEP]' and char_to_tokens(1) returned 1\r\n I wrote 💩💩💩 but got back '[CLS] [UNK] [SEP]' and char_to_tokens(2) returned 1\r\n I wrote 💩💩💩💩 but got back '[CLS] [UNK] [SEP]' and char_to_tokens(3) returned 1\r\n```", "## Progress - But Am I doing this right ? \r\nHi,\r\nI made some progress and got some logic that aligns the tokens to offset annotations. A function call looks like this\r\n```python\r\nbatch,labels = tokenize_with_labels(texts,annotations,tokenizer,label_set=label_set)\r\n```\r\n\r\nAnd then a visualization of the alignment looks like this (which is trying to show annotations across multiple tokens) \r\n![image](https://user-images.githubusercontent.com/5352830/92932766-a1a3aa80-f445-11ea-8efa-c76fdb96f8e4.png)\r\n\r\n## Question\r\n\r\nSo I'm trying to get the padding /batching working. I think I've got something good but would love some input on how this might subtly fail. \r\n\r\n```python\r\nbatch,labels = tokenize_with_labels(texts,annotations,tokenizer,label_set=label_set)\r\nbatch.data['labels'] =labels # Put the labels in the BatchEncoding dict\r\n\r\npadded = tokenizer.pad(batch,padding='longest') # Call pad, which ignores the labels and offset_mappings\r\nbatch_size = len(padded['input_ids'][0]) # Get the padded sequence size\r\nfor i in range(len(padded['labels'])): #for each label\r\n ls = padded['labels'][i]\r\n difference = batch_size - len(ls) # How much do we need to pad ? 
\r\n padded['labels'][i] = padded['labels'][i] +[0] *difference # Pad \r\n padded['offset_mapping'][i]+=[(0,0)]*difference #pad the offset mapping so we can call convert_to_tensors\r\n \r\n \r\ntensors = padded.convert_to_tensors(tensor_type='pt') #convert to a tensor\r\n```\r\n\r\n\r\n", "Hmm I think we should have an option to pad the labels in `tokenizer.pad`.\r\nEither based on the shape of the labels or with an additional flag.\r\nI'll work on the tokenizers this week. Will add this to the stack.", "I think that presupposes that the user has labels aligned to tokens, or that their is one and only one right way to align labels and tokens, which isn't consistent with the original issue. \r\n\r\nWhen that's not the case, then we need to tokenize, then align labels and finally pad. (Also need to deal with overflow, but I haven't gotten that far yet) . Notably, the user may want to use a BIO,BILSO or other schema and needs access to the tokens to modify the labels accordingly. \r\n\r\nSomething that confused me as I've been working on this is that the _pad function [operates explicitly on named attributes of the batch encoding dict](https://github.com/huggingface/transformers/blob/15478c1287a4e7b52c01730ffb0718243d153600/src/transformers/tokenization_utils_base.py#L2642-L2663) whereas as a user I'd expect it to operate on everything in the underlying ```encoding.data``` dict. That however doesn't work because the dict includes offset_mappings which don't tensorize nicely. \r\n\r\n\r\nBecause of the logic involved in alignment, I think that padding of the tokens and labels might be better done outside of the tokenizer, probably with a specialized function / module. \r\nThe upside of padding in one go is the efficiency of doing so in Rust, but I'd speculate that for token classification, the running time would be dominated by training anyway, and the efficiency gains wouldn't justify the API complexity or documentation burdon of doing it all in one place. \r\n\r\nAlso, I think that's a theoretical point because it seems that the padding is [done in python anyway](https://github.com/huggingface/transformers/blob/15478c1287a4e7b52c01730ffb0718243d153600/src/transformers/tokenization_utils_base.py#L2646) ? 
\r\n\r\nI ended up doing\r\n``` python\r\ndef tokenize_with_labels(\r\n texts: List[str],\r\n raw_labels: List[List[SpanAnnotation]],\r\n tokenizer: PreTrainedTokenizerFast,\r\n label_set: LabelSet, #Basically the alignment strategy\r\n):\r\n batch_encodings = tokenizer(\r\n texts,\r\n return_offsets_mapping=True,\r\n padding=\"longest\",\r\n max_length=256,\r\n truncation=True,\r\n )\r\n batch_labels: IntListList = []\r\n for encoding, annotations in zip(batch_encodings.encodings, raw_labels):\r\n batch_labels.append(label_set.align_labels_to_tokens(encoding, annotations))\r\n return batch_encodings, batch_labels\r\n```\r\n\r\nwhere align_labels_to_tokens operates on already padded tokens.\r\n\r\nI found this the most convenient way to get dynamic batches with a collator \r\n\r\n```python\r\n@dataclass\r\nclass LTCollator:\r\n tokenizer: PreTrainedTokenizerFast\r\n label_set: LabelSet\r\n padding: PaddingStrategy = True\r\n max_length: Optional[int] = None\r\n\r\n def __call__(self, texts_and_labels: Example) -> BatchEncoding:\r\n texts: List[str] = []\r\n annotations: List[List[SpanAnnotation]] = []\r\n for (text, annos) in texts_and_labels:\r\n texts.append(text)\r\n annotations.append(annos)\r\n\r\n batch, labels = tokenize_with_labels(\r\n texts, annotations, self.tokenizer, label_set=self.label_set\r\n )\r\n del batch[\"offset_mapping\"]\r\n batch.data[\"labels\"] = labels # Put the labels in the BatchEncoding dict\r\n tensors = batch.convert_to_tensors(tensor_type=\"pt\") # convert to a tensor\r\n return tensors\r\n```\r\n", "As an example of the end to end flow, (and please No one use this it's a probably buggy work in progress)\r\n```python\r\nfrom typing import Any, Optional, List, Tuple\r\nfrom transformers import (\r\n BertTokenizerFast,\r\n BertModel,\r\n BertForMaskedLM,\r\n BertForTokenClassification,\r\n TrainingArguments,\r\n)\r\nimport torch\r\nfrom transformers import AdamW, Trainer\r\n\r\nfrom dataclasses import dataclass\r\nfrom torch.utils.data import Dataset\r\nimport json\r\n\r\nfrom torch.utils.data.dataloader import DataLoader\r\nfrom transformers import PreTrainedTokenizerFast, DataCollatorWithPadding, BatchEncoding\r\nfrom transformers.tokenization_utils_base import PaddingStrategy\r\n\r\nfrom labelset import LabelSet\r\nfrom token_types import IntListList, SpanAnnotation\r\nfrom tokenize_with_labels import tokenize_with_labels\r\n\r\nExample = Tuple[str, List[List[SpanAnnotation]]]\r\n\r\n\r\n@dataclass\r\nclass LTCollator:\r\n tokenizer: PreTrainedTokenizerFast\r\n label_set: LabelSet\r\n padding: PaddingStrategy = True\r\n max_length: Optional[int] = None\r\n\r\n def __call__(self, texts_and_labels: Example) -> BatchEncoding:\r\n texts: List[str] = []\r\n annotations: List[List[SpanAnnotation]] = []\r\n for (text, annos) in texts_and_labels:\r\n texts.append(text)\r\n annotations.append(annos)\r\n\r\n batch, labels = tokenize_with_labels(\r\n texts, annotations, self.tokenizer, label_set=self.label_set\r\n )\r\n del batch[\"offset_mapping\"]\r\n batch.data[\"labels\"] = labels # Put the labels in the BatchEncoding dict\r\n tensors = batch.convert_to_tensors(tensor_type=\"pt\") # convert to a tensor\r\n return tensors\r\n\r\n\r\nclass LTDataset(Dataset):\r\n def __init__(\r\n self, data: Any, tokenizer: PreTrainedTokenizerFast,\r\n ):\r\n self.tokenizer = tokenizer\r\n for example in data[\"examples\"]:\r\n for a in example[\"annotations\"]:\r\n a[\"label\"] = a[\"tag\"]\r\n self.texts = []\r\n self.annotations = []\r\n for example in data[\"examples\"]:\r\n 
self.texts.append(example[\"content\"])\r\n self.annotations.append(example[\"annotations\"])\r\n\r\n def __len__(self):\r\n return len(self.texts)\r\n\r\n def __getitem__(self, idx) -> Example:\r\n\r\n return self.texts[idx], self.annotations[idx]\r\n\r\n\r\n@dataclass\r\nclass LTDataControls:\r\n dataset: LTDataset\r\n collator: LTCollator\r\n label_set: LabelSet\r\n\r\n\r\ndef lt_data_factory(\r\n json_path: str, tokenizer: PreTrainedTokenizerFast, max_length=None\r\n):\r\n data = json.load(open(json_path))\r\n dataset = LTDataset(data=data, tokenizer=tokenizer)\r\n tags = list(map(lambda x: x[\"name\"], data[\"schema\"][\"tags\"]))\r\n label_set = LabelSet(tags)\r\n collator = LTCollator(\r\n max_length=max_length, label_set=label_set, tokenizer=tokenizer\r\n )\r\n return LTDataControls(dataset=dataset, label_set=label_set, collator=collator)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n from transformers import BertTokenizerFast, GPT2TokenizerFast\r\n\r\n tokenizer = BertTokenizerFast.from_pretrained(\"bert-base-cased\",)\r\n data_controls = lt_data_factory(\r\n \"/home/tal/Downloads/small_gold_no_paragr_location_types_false_5_annotations.json\",\r\n tokenizer=tokenizer,\r\n max_length=256,\r\n )\r\n dl = DataLoader(\r\n data_controls.dataset, collate_fn=data_controls.collator, batch_size=10\r\n )\r\n model = BertForTokenClassification.from_pretrained(\r\n \"bert-base-cased\", num_labels=len(data_controls.label_set.ids_to_label.values())\r\n )\r\n train = Trainer(\r\n model=model,\r\n data_collator=data_controls.collator,\r\n train_dataset=data_controls.dataset,\r\n args=TrainingArguments(\"/tmp/trainer\", per_device_train_batch_size=2),\r\n )\r\n train.train()\r\n```", "Also, I found [this comment](https://github.com/huggingface/transformers/issues/6860#issuecomment-684118716) by @sgugger about the trainer\r\n>Note that there are multiple frameworks that provide generic training loops. The goal of Trainer (I'm assuming you're talking about it since there is no train.py file) is **not to replace them or compete with them but to provide an easy way to train and finetune Transformers models**. Those models don't take nested inputs, so Trainer does not support this. Those models are expected to return the loss as the first item of their output, so Trainer expects it too.\r\n\r\nI think that sentiment might make sense here, that what I'm looking for is outside the scope of the library. If that's the case I would have preferred it be written in big bold letters, rather than the library trying to cater to this use case\r\n \r\n", "So,\r\nAfter much rabbit hole, I've written a blog post about the [considerations when doing alignment/padding/batching](https://www.lighttag.io/blog/sequence-labeling-with-transformers/) and another [walking through an implementation](https://www.lighttag.io/blog/sequence-labeling-with-transformers/example). \r\n\r\nIt even [comes with a repo](https://github.com/LightTag/sequence-labeling-with-transformers) \r\n\r\nso \r\nIf we have annotated data like this\r\n```python\r\n[{'annotations': [],\r\n 'content': 'No formal drug interaction studies of Aranesp? 
have been '\r\n 'performed.',\r\n 'metadata': {'original_id': 'DrugDDI.d390.s0'}},\r\n {'annotations': [{'end': 13, 'label': 'drug', 'start': 6, 'tag': 'drug'},\r\n {'end': 60, 'label': 'drug', 'start': 43, 'tag': 'drug'},\r\n {'end': 112, 'label': 'drug', 'start': 105, 'tag': 'drug'},\r\n {'end': 177, 'label': 'drug', 'start': 164, 'tag': 'drug'},\r\n {'end': 194, 'label': 'drug', 'start': 181, 'tag': 'drug'},\r\n {'end': 219, 'label': 'drug', 'start': 211, 'tag': 'drug'},\r\n {'end': 238, 'label': 'drug', 'start': 227, 'tag': 'drug'}],\r\n 'content': 'Since PLETAL is extensively metabolized by cytochrome P-450 '\r\n 'isoenzymes, caution should be exercised when PLETAL is '\r\n 'coadministered with inhibitors of C.P.A. such as ketoconazole '\r\n 'and erythromycin or inhibitors of CYP2C19 such as omeprazole.',\r\n 'metadata': {'original_id': 'DrugDDI.d452.s0'}},\r\n {'annotations': [{'end': 58, 'label': 'drug', 'start': 47, 'tag': 'drug'},\r\n {'end': 75, 'label': 'drug', 'start': 62, 'tag': 'drug'},\r\n {'end': 135, 'label': 'drug', 'start': 124, 'tag': 'drug'},\r\n {'end': 164, 'label': 'drug', 'start': 152, 'tag': 'drug'}],\r\n 'content': 'Pharmacokinetic studies have demonstrated that omeprazole and '\r\n 'erythromycin significantly increased the systemic exposure of '\r\n 'cilostazol and/or its major metabolites.',\r\n 'metadata': {'original_id': 'DrugDDI.d452.s1'}}]\r\n```\r\nWe can do this\r\n```python\r\nfrom sequence_aligner.labelset import LabelSet\r\nfrom sequence_aligner.dataset import TrainingDataset\r\nfrom sequence_aligner.containers import TraingingBatch\r\nimport json\r\nraw = json.load(open('./data/ddi_train.json'))\r\nfor example in raw:\r\n for annotation in example['annotations']:\r\n #We expect the key of label to be label but the data has tag\r\n annotation['label'] = annotation['tag']\r\n\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import BertForTokenClassification,AdamW\r\nmodel = BertForTokenClassification.from_pretrained(\r\n \"bert-base-cased\", num_labels=len(dataset.label_set.ids_to_label.values())\r\n)\r\noptimizer = AdamW(model.parameters(), lr=5e-6)\r\n\r\ndataloader = DataLoader(\r\n dataset,\r\n collate_fn=TraingingBatch,\r\n batch_size=4,\r\n shuffle=True,\r\n)\r\nfor num, batch in enumerate(dataloader):\r\n loss, logits = model(\r\n input_ids=batch.input_ids,\r\n attention_mask=batch.attention_masks,\r\n labels=batch.labels,\r\n )\r\n loss.backward()\r\n optimizer.step()\r\n\r\n\r\n-------------------------------\r\n\r\nI think most of this is out of scope for the transformers library itself, so am all for closing this issue if no one objects", "(I attempted to fix the links above, let me know if this is correct @talolard)", "> (I attempted to fix the links above, let me know if this is correct @talolard)\r\n\r\nLinks seem kosher, thanks", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
# 🚀 Feature request

Hi. So we work a lot with span annotations on text that isn't tokenized and want a "canonical" way to work with that. I have some ideas and rough implementations, so I'm looking for feedback on whether this belongs in the library, and whether the proposed implementation is more or less good. I also think there is a good chance that everything I want exists, and the only solution needed is slightly clearer documentation. I should hope that's the case and am happy to document it if someone can point me in the right direction.

## The Desired Capabilities

What I'd like is a canonical way to:

* Tokenize the examples in the dataset
* Align my annotations with the output tokens (see notes below)
* Have the tokens and labels correctly padded to the max length of an example in the batch or max_sequence_length
* Have a convenient function that returns predicted offsets

### Some Nice To Haves

* It would be nice if such a utility internally handled tagging schemes like IOB and BIOES and optionally exposed them in the output or "folded" them to the core entities.
* It would be nice if there was a recommended/default strategy implemented for handling examples that are longer than the max_sequence_length.
* It would be amazing if we could pass labels to the tokenizer and have the alignment happen in Rust (in parallel). But I don't know Rust and I have a sense this is complicated, so I won't be taking that on myself, and am assuming that this happens in Python.

## Current State and what I'm missing

* The docs and examples for Token Classification assume that the [text is pre-tokenized](https://github.com/huggingface/transformers/blob/1b76936d1a9d01cf99a086a3718060a64329afaa/examples/token-classification/utils_ner.py#L110).
* For a word that has a label and is tokenized to multiple tokens, [it is recommended](https://github.com/huggingface/transformers/blob/1b76936d1a9d01cf99a086a3718060a64329afaa/examples/token-classification/utils_ner.py#L116) to place the label on the first token and "ignore" the following tokens.
* However, it is not clear where that recommendation came from, and it has [edge cases that seem quite nasty](https://github.com/huggingface/transformers/issues/5077#issuecomment-668384357).
* The [example pads all examples to max_sequence_length](https://github.com/huggingface/transformers/blob/1b76936d1a9d01cf99a086a3718060a64329afaa/examples/token-classification/utils_ner.py#L167), which is a big performance hit (as opposed to bucketing by length and padding dynamically).
* The example loads the entire dataset at once in memory. I'm not sure if this is a real problem or I'm being nitpicky, but I think "the right way" to do this would be to lazy-load a batch or a few batches.

### Alignment

The path to align tokens to span annotations is by using the return_offsets_mapping flag on the tokenizer (which is awesome!). There are probably a few strategies; I've been using logic like this:

```python
def align_tokens_to_annos(offsets, annos):
    anno_ix = 0
    results = []
    done = len(annos) == 0
    for offset in offsets:
        if done == True:
            results.append(dict(offset=offset, tag='O',))
        else:
            anno = annos[anno_ix]
            start, end = offset
            if end < anno['start']:  # the offset is before the next annotation
                results.append(dict(offset=offset, tag='O',))
            elif start <= anno['start'] and end <= anno['end']:
                results.append(dict(offset=offset, tag=f'B-{anno["tag"]}',))
            elif start >= anno['start'] and end <= anno['end']:
                results.append(dict(offset=offset, tag=f'I-{anno["tag"]}',))
            elif start >= anno['start'] and end > anno['end']:
                anno_ix += 1
                results.append(dict(offset=offset, tag=f'E-{anno["tag"]}',))
            else:
                raise Exception(f"Funny Overlap {offset},{anno}",)
        if anno_ix >= len(annos):
            done = True
    return results
```

And then call that function inside add_labels here:

```python
res_batch = tokenizer([s['text'] for s in pre_batch], return_offsets_mapping=True, padding=True)
offsets_batch = res_batch.pop('offset_mapping')
res_batch['labels'] = []
for i in range(len(offsets_batch)):
    labels = add_labels(res_batch['input_ids'][i], offsets_batch[i], pre_batch[i]['annotations'])
    res_batch['labels'].append(labels)
```

This works, and it's nice because the padding is consistent with the longest sentence, so bucketing gives a big boost. But the add_labels stuff is in Python and thus sequential over the examples and not super fast. I haven't measured this to confirm it's a problem, just bringing it up.

## Desired Solution

I need most of this stuff so I'm going to make it. The current "NER" examples and issues assume that text is pre-tokenized. Our use case is such that the full text is not tokenized and the labels for "NER" come as offsets. I propose a utility/example to handle that scenario because I haven't been able to find one. In practice, most values of X don't need any modification, and doing what I propose (below) in Rust is beyond me, so this might boil down to a utility class and documentation.

## Motivation

I make [text annotation tools](https://lighttag.io) and our output is span annotations on untokenized text. I want our users to be able to easily use transformers. I suspect from my (limited) experience that in many non-academic use cases, span annotations on untokenized text are the norm and that others would benefit from this as well.

## Possible ways to address this

I can imagine a few scenarios here:

* **This is out of scope** Maybe this isn't something that should be handled by transformers at all, and should be delegated to a library and blog post.
* **This is in scope and just needs documentation** e.g. all the things I mentioned are things transformers should and can already do. In that case the solution would be pointing someone (me) to the right functions and adding some documentation.
* **This is in scope and should be a set of utilities** Solving this could be as simple as making a file similar to [utils_ner.py](https://github.com/huggingface/transformers/blob/1b76936d1a9d01cf99a086a3718060a64329afaa/examples/token-classification/utils_ner.py). I think that would be the simplest way to get something usable and gather feedback to see if anyone else cares.
* **This is in scope but should be done in Rust soon** If we want to be performance purists, it would make sense to handle the alignment of span-based labels in Rust. I don't know Rust so I can't help much, and I don't know if there is any appetite or capacity from someone who does, or if it's worth the (presumably) additional effort.

## Your contribution

I'd be happy to implement and submit a PR, or make an external library or add to a relevant existing one.

## Related issues

* #5297
* [This PR](https://github.com/huggingface/transformers/pull/3957)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7019/reactions", "total_count": 38, "+1": 23, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 13, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/7019/timeline
completed
null
null
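The thread above points to `BatchEncoding.char_to_token` as the fast-tokenizer primitive for this alignment. A minimal sketch of that approach, mapping character-offset span annotations onto tokens; the example text and span are taken from the drug data quoted later in the thread, while the B-/I- assignment logic here is my own illustration, not the library's:

```python
# Sketch: align character-offset span annotations to fast-tokenizer tokens
# using BatchEncoding.char_to_token, as suggested in the thread.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

text = "Since PLETAL is extensively metabolized"
annotations = [{"start": 6, "end": 13, "label": "drug"}]  # "PLETAL"

enc = tokenizer(text, return_offsets_mapping=True)
labels = ["O"] * len(enc["input_ids"])
for anno in annotations:
    # Collect the token index of every character inside the span.
    token_ixs = {enc.char_to_token(i) for i in range(anno["start"], anno["end"])}
    token_ixs.discard(None)  # characters the tokenizer dropped (e.g. whitespace)
    for n, ix in enumerate(sorted(token_ixs)):
        labels[ix] = ("B-" if n == 0 else "I-") + anno["label"]

print(list(zip(enc.tokens(), labels)))
```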
https://api.github.com/repos/huggingface/transformers/issues/7018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7018/comments
https://api.github.com/repos/huggingface/transformers/issues/7018/events
https://github.com/huggingface/transformers/pull/7018
696,119,529
MDExOlB1bGxSZXF1ZXN0NDgyMjQ4ODg4
7,018
[s2s] --eval_max_generate_length
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=h1) Report\n> Merging [#7018](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6c08b07a087e83915b4b3156bbf464cebc7b9b5?el=desc) will **increase** coverage by `0.26%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7018/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7018 +/- ##\n==========================================\n+ Coverage 78.74% 79.01% +0.26% \n==========================================\n Files 168 164 -4 \n Lines 32172 30987 -1185 \n==========================================\n- Hits 25335 24483 -852 \n+ Misses 6837 6504 -333 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <0.00%> (-10.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `65.89% <0.00%> (-3.50%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `90.80% <0.00%> (-2.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.93%)` | :arrow_down: |\n| ... 
and [40 more](https://codecov.io/gh/huggingface/transformers/pull/7018/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=footer). Last update [d6c08b0...6d5adc4](https://codecov.io/gh/huggingface/transformers/pull/7018?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7018", "html_url": "https://github.com/huggingface/transformers/pull/7018", "diff_url": "https://github.com/huggingface/transformers/pull/7018.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7018.patch", "merged_at": 1599761494000 }
https://api.github.com/repos/huggingface/transformers/issues/7017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7017/comments
https://api.github.com/repos/huggingface/transformers/issues/7017/events
https://github.com/huggingface/transformers/pull/7017
696,040,023
MDExOlB1bGxSZXF1ZXN0NDgyMTgyNzU1
7,017
pegasus.rst: fix expected output
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=h1) Report\n> Merging [#7017](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c4eb4b1ac45291e89c1be0fb1fdacd841b19a47?el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7017/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7017 +/- ##\n==========================================\n- Coverage 80.39% 80.23% -0.16% \n==========================================\n Files 164 164 \n Lines 30986 30986 \n==========================================\n- Hits 24910 24863 -47 \n- Misses 6076 6123 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.14% <0.00%> (-3.07%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `81.00% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+0.67%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.87% <0.00%> (+2.12%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7017/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=footer). Last update [5c4eb4b...4de36a2](https://codecov.io/gh/huggingface/transformers/pull/7017?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7017/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7017", "html_url": "https://github.com/huggingface/transformers/pull/7017", "diff_url": "https://github.com/huggingface/transformers/pull/7017.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7017.patch", "merged_at": 1599586157000 }
https://api.github.com/repos/huggingface/transformers/issues/7016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7016/comments
https://api.github.com/repos/huggingface/transformers/issues/7016/events
https://github.com/huggingface/transformers/pull/7016
696,008,142
MDExOlB1bGxSZXF1ZXN0NDgyMTU2Njg5
7,016
[Longformer] Fix longformer documentation
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=h1) Report\n> Merging [#7016](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d31031f603043281d4fbac6cbdcfb6497fd500ab?el=desc) will **decrease** coverage by `3.56%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7016/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7016 +/- ##\n==========================================\n- Coverage 80.03% 76.47% -3.57% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24108 23033 -1075 \n- Misses 6012 7087 +1075 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.02% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `26.47% <0.00%> (-70.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `23.49% <0.00%> (-65.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.72% <0.00%> (-7.32%)` | :arrow_down: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7016/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=footer). Last update [d31031f...2f15fa5](https://codecov.io/gh/huggingface/transformers/pull/7016?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7015
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7016/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7016", "html_url": "https://github.com/huggingface/transformers/pull/7016", "diff_url": "https://github.com/huggingface/transformers/pull/7016.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7016.patch", "merged_at": 1599583889000 }
https://api.github.com/repos/huggingface/transformers/issues/7015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7015/comments
https://api.github.com/repos/huggingface/transformers/issues/7015/events
https://github.com/huggingface/transformers/issues/7015
695,991,609
MDU6SXNzdWU2OTU5OTE2MDk=
7,015
Longformer global attention mask, 2 or 1?
{ "login": "kakeith", "id": 18640437, "node_id": "MDQ6VXNlcjE4NjQwNDM3", "avatar_url": "https://avatars.githubusercontent.com/u/18640437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kakeith", "html_url": "https://github.com/kakeith", "followers_url": "https://api.github.com/users/kakeith/followers", "following_url": "https://api.github.com/users/kakeith/following{/other_user}", "gists_url": "https://api.github.com/users/kakeith/gists{/gist_id}", "starred_url": "https://api.github.com/users/kakeith/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kakeith/subscriptions", "organizations_url": "https://api.github.com/users/kakeith/orgs", "repos_url": "https://api.github.com/users/kakeith/repos", "events_url": "https://api.github.com/users/kakeith/events{/privacy}", "received_events_url": "https://api.github.com/users/kakeith/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That is 100% correct - thanks for notifying us :-) PR is linked.", "Also what is the 1, 4, 21 in that example? Can someone answer this?", "Just random indices that will be attended and will attend globally", "So if we are passing a sentence during inferencing `I love Hugging face` and I give value `attention_mask[:, [0,-1]] = 2` then ` <s> and </s>` will be attended globally?? And then I can fetch `<s>` embedding to get embedding of the sentence?" ]
1,599
1,628
1,599
NONE
null
This example in the LongFormer documentation (https://huggingface.co/transformers/model_doc/longformer.html ) ``` attention_mask[:, [1, 4, 21,]] = 2 # Set global attention based on the task. For example, ... # classification: the <s> token ... # QA: question tokens ... # LM: potentially on the beginning of sentences and paragraphs ``` seems to mislead the user into putting a 2 in the global attention mask instead of the 1 that is described in the actual documentation. I'm guessing this is a mistake because (a) the example is directly copied from the AllenAI repo (https://github.com/allenai/longformer#how-to-use), and (b) the source code actually adds a +1 to the attention mask in the function `_merge_to_attention_mask` (https://huggingface.co/transformers/_modules/transformers/modeling_longformer.html#LongformerModel.forward). Could the documentation please be fixed or clarified? Thanks! Also, what are the 1, 4, 21 in that example? ### Who can help Longformer/Reformer: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): LongFormer
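For reference, a minimal sketch of setting global attention through the separate `global_attention_mask` argument, which the model merges into the attention mask internally (the +1 mentioned above); the checkpoint name and the choice of global token are illustrative, not taken from this issue:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("I love Hugging Face", return_tensors="pt")
# attention_mask stays 0/1 as usual; global attention gets its own 0/1 mask.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # e.g. the <s> token for classification
outputs = model(**inputs, global_attention_mask=global_attention_mask)
```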
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7015/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7014
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7014/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7014/comments
https://api.github.com/repos/huggingface/transformers/issues/7014/events
https://github.com/huggingface/transformers/pull/7014
695,954,397
MDExOlB1bGxSZXF1ZXN0NDgyMTEyNDI4
7,014
[wip] Pegasus: Hack to never generate unk
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=h1) Report\n> Merging [#7014](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce37be9d94da57897cce9c49b3421e6a8a927d4a?el=desc) will **increase** coverage by `2.40%`.\n> The diff coverage is `20.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7014/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7014 +/- ##\n==========================================\n+ Coverage 77.60% 80.01% +2.40% \n==========================================\n Files 161 161 \n Lines 30120 30124 +4 \n==========================================\n+ Hits 23374 24103 +729 \n+ Misses 6746 6021 -725 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `69.23% <20.00%> (-30.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+2.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <0.00%> (+20.74%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7014/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=footer). Last update [ce37be9...f841173](https://codecov.io/gh/huggingface/transformers/pull/7014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,602
1,602
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7014/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7014", "html_url": "https://github.com/huggingface/transformers/pull/7014", "diff_url": "https://github.com/huggingface/transformers/pull/7014.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7014.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7013/comments
https://api.github.com/repos/huggingface/transformers/issues/7013/events
https://github.com/huggingface/transformers/pull/7013
695,916,979
MDExOlB1bGxSZXF1ZXN0NDgyMDgwODU3
7,013
Fixing FLOPS merge by checking if torch is available
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7013/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7013", "html_url": "https://github.com/huggingface/transformers/pull/7013", "diff_url": "https://github.com/huggingface/transformers/pull/7013.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7013.patch", "merged_at": 1599576719000 }
https://api.github.com/repos/huggingface/transformers/issues/7012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7012/comments
https://api.github.com/repos/huggingface/transformers/issues/7012/events
https://github.com/huggingface/transformers/issues/7012
695,793,824
MDU6SXNzdWU2OTU3OTM4MjQ=
7,012
Error in run_language_modeling on TPU: Transferring data with element type U8 has not been implemented on TPUs
{ "login": "akoksal", "id": 10994107, "node_id": "MDQ6VXNlcjEwOTk0MTA3", "avatar_url": "https://avatars.githubusercontent.com/u/10994107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akoksal", "html_url": "https://github.com/akoksal", "followers_url": "https://api.github.com/users/akoksal/followers", "following_url": "https://api.github.com/users/akoksal/following{/other_user}", "gists_url": "https://api.github.com/users/akoksal/gists{/gist_id}", "starred_url": "https://api.github.com/users/akoksal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akoksal/subscriptions", "organizations_url": "https://api.github.com/users/akoksal/orgs", "repos_url": "https://api.github.com/users/akoksal/repos", "events_url": "https://api.github.com/users/akoksal/events{/privacy}", "received_events_url": "https://api.github.com/users/akoksal/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-4.19.0-10-cloud-amd64-x86_64-with-debian-9.13 - Python version: 3.7.7 - PyTorch version (GPU?): 1.7.0a0+626e410 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No (TPU) - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): GPT2 - (Not pretrained) The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce I am trying to train a GPT model from scratch with TPU v2-8 in GCP. I have tried several setups but consistently getting the error below. I have simplified the arguments as much as I can do to increase the reproducibility. Steps to reproduce the behavior: 1. Pytorch/XLA installed through docker: gcr.io/tpu-pytorch/xla:nightly_3.7 2. Transformers 3.1.0 installed through pip. 3. I used the latest xla_spawn.py with previous version of [run_language_modeling.py](https://github.com/huggingface/transformers/blob/a75c64d80c76c3dc71f735d9197a4a601847e0cd/examples/language-modeling/run_language_modeling.py) because the latest version of run_language_modeling gives error about cache_dir. 4. Simple txt file with 10k lines given as input. 
```bash python xla_spawn.py --num_cores=1 run_language_modeling.py --output_dir=/home/GPT/output --model_type=gpt2 --tokenizer_name=gpt2 --do_train --train_data_file=/home/GPT/corpus/oscar_wiki_opus_eval.txt --per_device_train_batch_size=128 ``` Output: ```bash 09/08/2020 11:28:50 - WARNING - run_language_modeling - Process rank: -1, device: xla:1, n_gpu: 0, distributed training: False, 16-bits training: False 09/08/2020 11:28:50 - INFO - run_language_modeling - Training/evaluation parameters TrainingArguments(output_dir='/home/GPT/output', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, prediction_loss_only=False, per_device_train_batch_size=128, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep08_11-28-39_b0b61c9b1d63', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=1, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True) 09/08/2020 11:28:50 - WARNING - run_language_modeling - You are instantiating a new config instance from scratch. 09/08/2020 11:28:50 - INFO - run_language_modeling - Training new model from scratch /root/anaconda3/envs/pytorch/lib/python3.7/site-packages/transformers/modeling_auto.py:732: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, /root/anaconda3/envs/pytorch/lib/python3.7/site-packages/transformers/tokenization_utils_base.py:1321: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. 
FutureWarning, 09/08/2020 11:28:54 - INFO - filelock - Lock 140191457088144 acquired on /home/GPT/corpus/cached_lm_GPT2Tokenizer_1024_oscar_wiki_opus_eval.txt.lock 09/08/2020 11:29:09 - INFO - filelock - Lock 140191457088144 released on /home/GPT/corpus/cached_lm_GPT2Tokenizer_1024_oscar_wiki_opus_eval.txt.lock Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 387, in spawn _start_fn(0, pf_cfg, fn, args) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/GPT/run_language_modeling.py", line 294, in _mp_fn main() File "/home/GPT/run_language_modeling.py", line 252, in main prediction_loss_only=True, File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/transformers/trainer.py", line 229, in __init__ self.model = model.to(args.device) if model is not None else None File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 612, in to return self._apply(convert) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 359, in _apply module._apply(fn) [Previous line repeated 1 more time] File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 402, in _apply self._buffers[key] = fn(buf) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 610, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:383 : Check failed: session->session()->Run( session_work->feed_inputs, session_work->outputs_handles, &outputs) == ::tensorflow::Status::OK() (Unimplemented: From /job:tpu_worker/replica:0/task:0: Attempted to transfer array of shape u8[1,1,1024,1024] to a TPU device. Transferring data with element type U8 has not been implemented on TPUs. [[{{node XRTAllocateFromTensor_6}}]] vs. OK) *** Begin stack trace *** tensorflow::CurrentStackTrace() xla::util::MultiWait::Complete(std::function<void ()> const&) clone *** End stack trace *** terminate called after throwing an instance of 'std::runtime_error' what(): tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1110 : Check failed: session->session()->Run( feed_inputs, {}, {cached_node.operations[0]}, &outputs) == ::tensorflow::Status::OK() (Not found: Op type not registered 'XRTMemoryInfo' in binary running on n-cec96fe8-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed. vs. 
OK) *** Begin stack trace *** tensorflow::CurrentStackTrace() xla::XrtComputationClient::ReleaseHandles(std::vector<xla::XrtComputationClient::DeviceHandle, std::allocator<xla::XrtComputationClient::DeviceHandle> >*, std::function<xla::XrtSession::CachedNode const& (xla::XrtSession*, tensorflow::Scope const&, std::string const&)> const&, xla::metrics::Metric*, xla::metrics::Counter*) xla::XrtComputationClient::HandleReleaser() xla::util::TriggeredTask::Runner() clone *** End stack trace *** Aborted (core dumped) ``` ## Expected behavior I would expect it to train a GPT model on TPU from scratch with the given txt file. P.S.: I have also tried the stable version of XLA (gcr.io/tpu-pytorch/xla:r1.6) and a different model_type and tokenizer_name (bert). I am getting the same error in both cases. I have also tried several things to change the input type to float in the source code of transformers, but I was not successful. Any tips in that direction would be welcome, too.
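A hedged workaround sketch, not a fix confirmed in this thread: the rejected u8[1,1,1024,1024] buffer is GPT-2's causal-mask `bias`, registered as `torch.uint8`, so re-registering uint8 buffers as `bool` before moving the model to the XLA device may sidestep the transfer error; `model` and `device` are assumed from the training setup above:

```python
import torch

# Hypothetical workaround: TPUs reject U8 transfers, so cast uint8 buffers
# (e.g. GPT-2's attention-mask "bias") to bool before model.to(device).
for module in model.modules():
    for name, buf in list(module.named_buffers(recurse=False)):
        if buf.dtype == torch.uint8:
            module.register_buffer(name, buf.bool())  # overwrites in place
model.to(device)  # device = xm.xla_device() in the XLA run
```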
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7012/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7012/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7011/comments
https://api.github.com/repos/huggingface/transformers/issues/7011/events
https://github.com/huggingface/transformers/issues/7011
695,754,716
MDU6SXNzdWU2OTU3NTQ3MTY=
7,011
getting 'ValueError-TextInputSequence must be str' in 'train_dataset = train_dataset.map(convert_to_features)'
{ "login": "hemantwani", "id": 34301590, "node_id": "MDQ6VXNlcjM0MzAxNTkw", "avatar_url": "https://avatars.githubusercontent.com/u/34301590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hemantwani", "html_url": "https://github.com/hemantwani", "followers_url": "https://api.github.com/users/hemantwani/followers", "following_url": "https://api.github.com/users/hemantwani/following{/other_user}", "gists_url": "https://api.github.com/users/hemantwani/gists{/gist_id}", "starred_url": "https://api.github.com/users/hemantwani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hemantwani/subscriptions", "organizations_url": "https://api.github.com/users/hemantwani/orgs", "repos_url": "https://api.github.com/users/hemantwani/repos", "events_url": "https://api.github.com/users/hemantwani/events{/privacy}", "received_events_url": "https://api.github.com/users/hemantwani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patil-suraj \r\nCan you help me with this?" ]
1,599
1,599
1,599
NONE
null
Hello friends, I want to fine-tune Longformer QnA for different data. I found this code very relevant to my requirement, so I was trying to understand it thoroughly, but I am getting an error. I am trying to run the same code that Suraj has on GitHub: **'ValueError: TextInputSequence must be str' in 'train_dataset = train_dataset.map(convert_to_features)'** `train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION) train_dataset = train_dataset.map(convert_to_features) valid_dataset = valid_dataset.map(convert_to_features, load_from_cache_file=False)` #patil_suraj
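For context, this error usually means a fast tokenizer received something other than a plain `str` (e.g. `None` or a nested list). A hedged sketch of a `convert_to_features` that coerces the SQuAD fields to strings; the original function is not shown in the issue, so the tokenizer instance and lengths here are assumptions:

```python
def convert_to_features(example):
    # Fast tokenizers raise "TextInputSequence must be str" on non-str inputs,
    # so make sure question/context are plain strings before encoding.
    return tokenizer(  # e.g. a LongformerTokenizerFast instance
        str(example["question"]),
        str(example["context"]),
        truncation=True,
        max_length=512,
        padding="max_length",
    )
```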
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7011/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7010
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7010/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7010/comments
https://api.github.com/repos/huggingface/transformers/issues/7010/events
https://github.com/huggingface/transformers/issues/7010
695,742,999
MDU6SXNzdWU2OTU3NDI5OTk=
7,010
Huggingface "sentiment-analysis" pipeline always output "POSITIVE" label even for negative sentences
{ "login": "abmitra84", "id": 24584702, "node_id": "MDQ6VXNlcjI0NTg0NzAy", "avatar_url": "https://avatars.githubusercontent.com/u/24584702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abmitra84", "html_url": "https://github.com/abmitra84", "followers_url": "https://api.github.com/users/abmitra84/followers", "following_url": "https://api.github.com/users/abmitra84/following{/other_user}", "gists_url": "https://api.github.com/users/abmitra84/gists{/gist_id}", "starred_url": "https://api.github.com/users/abmitra84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abmitra84/subscriptions", "organizations_url": "https://api.github.com/users/abmitra84/orgs", "repos_url": "https://api.github.com/users/abmitra84/repos", "events_url": "https://api.github.com/users/abmitra84/events{/privacy}", "received_events_url": "https://api.github.com/users/abmitra84/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Upon further updates, I think the issue was with using wrong AutoModel function. I replaced it with AutoModelforSequenceClassification at the time of download and it worked." ]
1,599
1,599
1,599
NONE
null
## Environment info - `transformers` version: 3.1 - Platform: AWS, Colab etc. - Python version: 3.6 - PyTorch version (GPU?): NA - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help examples/token-classification: @stefan-it ## Information Model I am using (Bert: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english): The problem arises when using: * [x] the official example scripts: (give details below) from transformers import pipeline classifier = pipeline('sentiment-analysis') classifier("I hate this thing") --> Returns POSITIVE label. In fact, when tested with a large batch it never returned the NEGATIVE label at all * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Get transformers 3.1 2. Use pipeline("sentiment-analysis") 3. Get label and score <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior from transformers import pipeline classifier = pipeline('sentiment-analysis') classifier("I hate this thing") --> Returns NEGATIVE label Updates after some more experiments: - I am using distilbert-base-uncased-finetuned-sst-2-english following the instructions from: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english - When I download the model and save it locally before using it through AutoModel and AutoTokenizer, the score for nlp("i dislike it") is 0.5453 - when I use it directly, without manually downloading the model, the same score jumps to 0.99 [same score in the hosted model hub inference API as well] - another observation is that the tokenizer_config.json has both special_tokens_map_file and full_tokenizer_file as null. Not sure if it is supposed to be like this - for my purpose I have to use the downloaded and manually loaded style of inference <!-- A clear and concise description of what you would expect to happen. -->
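Following up on the resolution in the comment above, a minimal sketch of loading the checkpoint with its sequence-classification head, so the fine-tuned classifier weights are actually used (plain `AutoModel` drops the head, leaving it randomly initialized):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
# AutoModel would discard the classification head; this class keeps it.
model = AutoModelForSequenceClassification.from_pretrained(name)
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("I hate this thing"))  # should now come back NEGATIVE
```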
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7010/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7009
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7009/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7009/comments
https://api.github.com/repos/huggingface/transformers/issues/7009/events
https://github.com/huggingface/transformers/issues/7009
695,715,921
MDU6SXNzdWU2OTU3MTU5MjE=
7,009
Index out of range in Bart-large-xsum
{ "login": "tejareddy8888", "id": 58398337, "node_id": "MDQ6VXNlcjU4Mzk4MzM3", "avatar_url": "https://avatars.githubusercontent.com/u/58398337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tejareddy8888", "html_url": "https://github.com/tejareddy8888", "followers_url": "https://api.github.com/users/tejareddy8888/followers", "following_url": "https://api.github.com/users/tejareddy8888/following{/other_user}", "gists_url": "https://api.github.com/users/tejareddy8888/gists{/gist_id}", "starred_url": "https://api.github.com/users/tejareddy8888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tejareddy8888/subscriptions", "organizations_url": "https://api.github.com/users/tejareddy8888/orgs", "repos_url": "https://api.github.com/users/tejareddy8888/repos", "events_url": "https://api.github.com/users/tejareddy8888/events{/privacy}", "received_events_url": "https://api.github.com/users/tejareddy8888/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @sshleifer, the summarization master", "See https://discuss.huggingface.co/t/summarization-on-long-documents/920/2, and feel free to reply there!\r\nI don't have a code snippet but feel free to contribute one to that discussion! " ]
1,599
1,599
1,599
NONE
null
Questions & Help Hello to everyone!! I am facing a problem summarizing long articles. I mean very long text with larger vocab size than it is pre-trained already i guess. I see that many of the models have a limitation of maximum input and trying to execute results in error of index out of range. I am particularly using "BART-large-xsum". Please suggest what is the correct way of using these models with long documents shall I finetuning to increase the vocabsize or do anything else. A code snippet with an example of how to handle long documents with the "BART-large-xsum" would be perfect to start with! Thanks in advance, Teja
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7009/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7008/comments
https://api.github.com/repos/huggingface/transformers/issues/7008/events
https://github.com/huggingface/transformers/issues/7008
695,682,779
MDU6SXNzdWU2OTU2ODI3Nzk=
7,008
Diverse Beam Search decoding
{ "login": "dakshvar22", "id": 8708249, "node_id": "MDQ6VXNlcjg3MDgyNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/8708249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dakshvar22", "html_url": "https://github.com/dakshvar22", "followers_url": "https://api.github.com/users/dakshvar22/followers", "following_url": "https://api.github.com/users/dakshvar22/following{/other_user}", "gists_url": "https://api.github.com/users/dakshvar22/gists{/gist_id}", "starred_url": "https://api.github.com/users/dakshvar22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dakshvar22/subscriptions", "organizations_url": "https://api.github.com/users/dakshvar22/orgs", "repos_url": "https://api.github.com/users/dakshvar22/repos", "events_url": "https://api.github.com/users/dakshvar22/events{/privacy}", "received_events_url": "https://api.github.com/users/dakshvar22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @dakshvar22,\r\n\r\nThanks for posting this Diverse Beam Search paper. Looks cool :-) \r\nAt the moment, I won't find the time to take a deeper look, but feel free to open a PR to add it to our generate() method and I'm happy to discuss possible design choices :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "My understanding is that this should have been closed by https://github.com/huggingface/transformers/pull/9006?" ]
1,599
1,614
1,614
NONE
null
# 🚀 Feature request Currently, with all the decoding strategies available in this library, there isn't any decoding strategy which helps in producing diverse outputs across the beam of the decoder. Such diversity is useful for tasks such as noisy data augmentation where you want to generate multiple possible outputs. The [Diverse Beam Search](https://arxiv.org/abs/1610.02424) paper introduces an extremely simple trick to accomplish this, and it works really well. It is already implemented in the fairseq library and it would be cool to have it in transformers too. ## Motivation Having a decoding strategy which promotes more diversity across the beam. ## Your contribution I would like to submit a PR for it, but before that I would like to know if there has already been some work internally on exploring this. I would appreciate it if someone from the team could guide me on which parts of the code are relevant for this to work. cc @patrickvonplaten would love some feedback from you since you wrote this [blog](https://huggingface.co/blog/how-to-generate) and may have seen some work around this.
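As the comments above note, this eventually landed as group beam search in #9006; a minimal usage sketch against the later `generate()` API (transformers >= 4.0; the model choice is illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("A long article to augment ...", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,      # beams split into groups that penalize overlap
    diversity_penalty=1.0,  # strength of the Hamming-diversity term
    num_return_sequences=6,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```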
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7008/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7008/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7007/comments
https://api.github.com/repos/huggingface/transformers/issues/7007/events
https://github.com/huggingface/transformers/issues/7007
695,657,428
MDU6SXNzdWU2OTU2NTc0Mjg=
7,007
MLM performance difference between bert-base-cased and Conversational BERT
{ "login": "manueltonneau", "id": 29440170, "node_id": "MDQ6VXNlcjI5NDQwMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manueltonneau", "html_url": "https://github.com/manueltonneau", "followers_url": "https://api.github.com/users/manueltonneau/followers", "following_url": "https://api.github.com/users/manueltonneau/following{/other_user}", "gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}", "starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions", "organizations_url": "https://api.github.com/users/manueltonneau/orgs", "repos_url": "https://api.github.com/users/manueltonneau/repos", "events_url": "https://api.github.com/users/manueltonneau/events{/privacy}", "received_events_url": "https://api.github.com/users/manueltonneau/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik Authors of Conversational BERT: @dilyararimovna @yoptar from DeepPavlov ## Information I am using `bert-base-cased` and [Conversational BERT](https://huggingface.co/DeepPavlov/bert-base-cased-conversational) for MLM using the pipeline tool. The only difference between the two is that Conversational BERT was initialized with `bert-base-cased` and further pre-trained on social media text data (Twitter and the like) with a custom vocabulary (more info on the model card). As I'm using models that were not yet finetuned, I understand why I'm getting an output telling me that some weights were not used and that some weights are newly initialized (cf [this issue](https://github.com/huggingface/transformers/issues/5421)). What I don't understand is that even though the two models seem to have the same architecture and were trained on similar tasks (MLM and next sentence prediction), the performance on MLM is really poor for Conversational BERT and pretty good for `bert-base-cased`. This is all the more weird that I used Conversational BERT for tweet classification and found that it performed better than `bert-base-cased` on this downstream task. For example, when giving '[MASK] lost his job yesterday' as input, the output for `bert-base-cased` (`pipeline_bert('[MASK] lost his job yesterday')`) is [{'score': 0.8036763072013855, 'sequence': '[CLS] he lost his job yesterday [SEP]', 'token': 2002, 'token_str': 'he'}, {'score': 0.005656923167407513, 'sequence': '[CLS] i lost his job yesterday [SEP]', 'token': 1045, 'token_str': 'i'}, {'score': 0.005227738991379738, 'sequence': '[CLS] dad lost his job yesterday [SEP]', 'token': 3611, 'token_str': 'dad'}, {'score': 0.0032391520217061043, 'sequence': '[CLS] david lost his job yesterday [SEP]', 'token': 2585, 'token_str': 'david'}, {'score': 0.0028738391119986773, 'sequence': '[CLS] and lost his job yesterday [SEP]', 'token': 1998, 'token_str': 'and'}] and the output for Conversational BERT (`pipeline_convbert('[MASK] lost his job yesterday')`) is: [{'score': 0.0005837088683620095, 'sequence': '[CLS]As lost his job yesterday [SEP]', 'token': 23390, 'token_str': '##As'}, {'score': 0.0004703140293713659, 'sequence': '[CLS] rock lost his job yesterday [SEP]', 'token': 2067, 'token_str': 'rock'}, {'score': 0.0004569509474094957, 'sequence': '[CLS] ACT lost his job yesterday [SEP]', 'token': 21111, 'token_str': 'ACT'}, {'score': 0.0004535183834377676, 'sequence': '[CLS] colour lost his job yesterday [SEP]', 'token': 5922, 'token_str': 'colour'}, {'score': 0.0004286181356292218, 'sequence': '[CLS]inas lost his job yesterday [SEP]', 'token': 16924, 'token_str': '##inas'}] I think this has to do with weight initializations but I'm not grasping everything in the output. Some points that are unclear: - `['cls.seq_relationship.weight', 'cls.seq_relationship.bias']` are not used in the case of `bert-base-cased` (see output) but are used (or missing?) 
in the case of Conversational BERT - Only `['cls.predictions.decoder.bias']` is newly initialized in the case of `bert-base-cased` but all of the following weights are initialized in the case of Conversational BERT: `['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias']` ## To reproduce Steps to reproduce the behavior: ### bert-base-cased: #### Input: `pipeline_bert = pipeline('fill-mask', model='bert-base-uncased', tokenizer='bert-base-uncased', config='bert-base-uncased')` #### Output: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForMaskedLM were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ### Conversational BERT #### Input: `pipeline_convbert = pipeline('fill-mask', model='DeepPavlov/bert-base-cased-conversational', tokenizer='DeepPavlov/bert-base-cased-conversational', config='DeepPavlov/bert-base-cased-conversational')` #### Output: Some weights of BertForMaskedLM were not initialized from the model checkpoint at DeepPavlov/bert-base-cased-conversational and are newly initialized: ['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7007/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7006/comments
https://api.github.com/repos/huggingface/transformers/issues/7006/events
https://github.com/huggingface/transformers/issues/7006
695,606,141
MDU6SXNzdWU2OTU2MDYxNDE=
7,006
__init__() got an unexpected keyword argument 'cache_dir'
{ "login": "TikaToka", "id": 49054667, "node_id": "MDQ6VXNlcjQ5MDU0NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/49054667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TikaToka", "html_url": "https://github.com/TikaToka", "followers_url": "https://api.github.com/users/TikaToka/followers", "following_url": "https://api.github.com/users/TikaToka/following{/other_user}", "gists_url": "https://api.github.com/users/TikaToka/gists{/gist_id}", "starred_url": "https://api.github.com/users/TikaToka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TikaToka/subscriptions", "organizations_url": "https://api.github.com/users/TikaToka/orgs", "repos_url": "https://api.github.com/users/TikaToka/repos", "events_url": "https://api.github.com/users/TikaToka/events{/privacy}", "received_events_url": "https://api.github.com/users/TikaToka/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Hello! On what version of `transformers` are you running?", "> Hello! On what version of `transformers` are you running?\r\n\r\nHi @LysandreJik, It's 3.1.0", "@TikaToka Faced the same problem. Just installed everything again from master and it worked. \r\n", "> @TikaToka Faced the same problem. Just installed everything again from master and it worked.\r\n\r\nI'm still having a same issue :(", "You need an install from source to use the current examples (as stated in their [README](https://github.com/huggingface/transformers/tree/master/examples)). In colab you can do so by executing a cell with\r\n```\r\n! pip install git+git://github.com/huggingface/transformers/\r\n```\r\n\r\nAlternatively, you can find the version of the example that work with 3.1.0 [here](https://github.com/huggingface/transformers/tree/v3.1.0/examples/language-modeling).", "\r\n\r\n\r\n> @TikaToka Faced the same problem. Just installed everything again from master and it worked.\r\n\r\nThis Worked Thanks! I made a mistake while installing ", "> You need an install from source to use the current examples (as stated in their [README](https://github.com/huggingface/transformers/tree/master/examples)). In colab you can do so by executing a cell with\r\n> \r\n> ```\r\n> ! pip install git+git://github.com/huggingface/transformers/\r\n> ```\r\n> \r\n> Alternatively, you can find the version of the example that work with 3.1.0 [here](https://github.com/huggingface/transformers/tree/v3.1.0/examples/language-modeling).\r\n\r\nThank you for specific explanation! this worked!", "I still get the error:\r\nTypeError: __init__() got an unexpected keyword argument 'cache_dir'\r\nwhen running the latest version for transformers (3.1.0). \r\nI'm also running on Colab environment:\r\nCommand:\r\n!pip3 install transformers\r\n(also tried #! pip3 install git+git://github.com/huggingface/transformers/)\r\n\r\n!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/language-modeling/run_language_modeling.py\r\n\r\n%%bash\r\nexport TRAIN_FILE=train_path\r\nexport TEST_FILE=valid_path\r\nexport MODEL_NAME=gpt2\r\nexport OUTPUT_DIR=output\r\n\r\npython run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --cache_dir=None\r\n\r\nOutput:\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 313, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 242, in main\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n File \"run_language_modeling.py\", line 143, in get_dataset\r\n cache_dir=cache_dir,\r\nTypeError: __init__() got an unexpected keyword argument 'cache_dir'" ]
1,599
1,600
1,599
NONE
null
I used command !python /content/transformers/examples/language-modeling/run_language_modeling.py \ --output_dir=/content/output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=/content/input.txt \ --do_eval \ --eval_data_file=/content/dev.txt and the error occurs 2020-09-08 06:02:43.113931: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 09/08/2020 06:02:45 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False 09/08/2020 06:02:45 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/content/output', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep08_06-02-45_58d9f15c989e', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True) 09/08/2020 06:02:45 - INFO - filelock - Lock 140608954702032 acquired on /root/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.db13c9bc9c7bdd738ec89e069621d88e05dc670366092d809a9cbcac6798e24e.lock Downloading: 100% 665/665 [00:00<00:00, 556kB/s] 09/08/2020 06:02:46 - INFO - filelock - Lock 140608954702032 released on /root/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.db13c9bc9c7bdd738ec89e069621d88e05dc670366092d809a9cbcac6798e24e.lock 09/08/2020 06:02:46 - INFO - filelock - Lock 140608954701528 acquired on /root/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71.lock Downloading: 100% 1.04M/1.04M [00:00<00:00, 2.47MB/s] 09/08/2020 06:02:47 - INFO - filelock - Lock 140608954701528 released on /root/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71.lock 09/08/2020 06:02:48 - INFO - filelock - Lock 140608954701640 acquired on /root/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda.lock Downloading: 100% 456k/456k [00:00<00:00, 1.37MB/s] 09/08/2020 06:02:48 - INFO - filelock - Lock 140608954701640 released on /root/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda.lock /usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:821: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. 
FutureWarning, 09/08/2020 06:02:48 - INFO - filelock - Lock 140608954702312 acquired on /root/.cache/torch/transformers/d71fd633e58263bd5e91dd3bde9f658bafd81e11ece622be6a3c2e4d42d8fd89.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1.lock Downloading: 100% 548M/548M [00:16<00:00, 33.1MB/s] 09/08/2020 06:03:06 - INFO - filelock - Lock 140608954702312 released on /root/.cache/torch/transformers/d71fd633e58263bd5e91dd3bde9f658bafd81e11ece622be6a3c2e4d42d8fd89.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1.lock /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py:1321: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. FutureWarning, Traceback (most recent call last): File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 313, in <module> main() File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 242, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 143, in get_dataset cache_dir=cache_dir, TypeError: __init__() got an unexpected keyword argument 'cache_dir' I'm working on the Google Colab environment
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7006/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7006/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7005/comments
https://api.github.com/repos/huggingface/transformers/issues/7005/events
https://github.com/huggingface/transformers/pull/7005
695,597,720
MDExOlB1bGxSZXF1ZXN0NDgxODE3NDA3
7,005
[Community notebooks] Add notebook on fine-tuning GPT-2 Model with Trainer Class
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=h1) Report\n> Merging [#7005](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c18f5916a03d0a161d003e95ffff8120d8addc0c?el=desc) will **decrease** coverage by `0.59%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7005/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7005 +/- ##\n==========================================\n- Coverage 80.63% 80.03% -0.60% \n==========================================\n Files 161 161 \n Lines 30123 30123 \n==========================================\n- Hits 24289 24109 -180 \n- Misses 5834 6014 +180 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=footer). Last update [c18f591...0d1e538](https://codecov.io/gh/huggingface/transformers/pull/7005?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
MEMBER
null
Adding a link to a community notebook containing an example of fine-tuning a German GPT-2 Model with the Trainer Class
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7005/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7005", "html_url": "https://github.com/huggingface/transformers/pull/7005", "diff_url": "https://github.com/huggingface/transformers/pull/7005.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7005.patch", "merged_at": 1599550940000 }
https://api.github.com/repos/huggingface/transformers/issues/7004
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7004/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7004/comments
https://api.github.com/repos/huggingface/transformers/issues/7004/events
https://github.com/huggingface/transformers/pull/7004
695,568,081
MDExOlB1bGxSZXF1ZXN0NDgxNzkyMDU3
7,004
[seq2seq Examples] Use _step instead of generate for val, test
{ "login": "setu4993", "id": 1833708, "node_id": "MDQ6VXNlcjE4MzM3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/setu4993", "html_url": "https://github.com/setu4993", "followers_url": "https://api.github.com/users/setu4993/followers", "following_url": "https://api.github.com/users/setu4993/following{/other_user}", "gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}", "starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/setu4993/subscriptions", "organizations_url": "https://api.github.com/users/setu4993/orgs", "repos_url": "https://api.github.com/users/setu4993/repos", "events_url": "https://api.github.com/users/setu4993/events{/privacy}", "received_events_url": "https://api.github.com/users/setu4993/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sshleifer : Does it make sense to still add this? If yes, I can rebase and update based on Suraj's comments earlier. If not, will close.", "Let's hold off for now. I think passing --eval_num_beams=1 is close enough to equivalent.\r\nThanks for trying!" ]
1,599
1,600
1,600
CONTRIBUTOR
null
Addresses a workaround I proposed in #6589 for limiting the memory consumption in `.generate` steps. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7004/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7004", "html_url": "https://github.com/huggingface/transformers/pull/7004", "diff_url": "https://github.com/huggingface/transformers/pull/7004.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7004.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7003/comments
https://api.github.com/repos/huggingface/transformers/issues/7003/events
https://github.com/huggingface/transformers/issues/7003
695,505,929
MDU6SXNzdWU2OTU1MDU5Mjk=
7,003
On the GPU sharing mechanism: model.to('cpu') is specified, but the GPU is still used
{ "login": "changquanyou", "id": 9205633, "node_id": "MDQ6VXNlcjkyMDU2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9205633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/changquanyou", "html_url": "https://github.com/changquanyou", "followers_url": "https://api.github.com/users/changquanyou/followers", "following_url": "https://api.github.com/users/changquanyou/following{/other_user}", "gists_url": "https://api.github.com/users/changquanyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/changquanyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changquanyou/subscriptions", "organizations_url": "https://api.github.com/users/changquanyou/orgs", "repos_url": "https://api.github.com/users/changquanyou/repos", "events_url": "https://api.github.com/users/changquanyou/events{/privacy}", "received_events_url": "https://api.github.com/users/changquanyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's my issue" ]
1,599
1,599
1,599
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> You know that The GPU device(K8s) only supports one container exclusive GPU, In the inferencing stage, it is extremely wasteful. For Tiny-Albert model,It's only using about 500MiB。We try to use GPU share device, support more containers use one GPU device。We expect using torch.cuda.is_available() to control Using CUDA or Not. Here is My Code: ` use_cuda = bool(os.environ.get("USE_CUDA")) if use_cuda and torch.cuda.is_available(): self._device = torch.device("cuda") else: self._device = torch.device("cpu") ` 1. USE_CUDA== False and torch.cuda.is_available() is True: +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |============================================================| | 0 11538 C /usr/bin/python3 **342MiB** | +-----------------------------------------------------------------------------+ 2. USE_CUDA== True and torch.cuda.is_available() is True: +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |============================================================| | 0 11538 C /usr/bin/python3 **342MiB** | +-----------------------------------------------------------------------------+ <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7003/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7002/comments
https://api.github.com/repos/huggingface/transformers/issues/7002/events
https://github.com/huggingface/transformers/issues/7002
695,492,136
MDU6SXNzdWU2OTU0OTIxMzY=
7,002
How to bypass "Special tokens have been added in the vocabulary..." warning?
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @stas00 ,\r\n\r\nCan you show us a sample of your code ?\r\n\r\nDid you explicitely add special tokens to your tokenizer ?\r\n\r\nFrom my understanding this warning appears when the method _sanitize_special_tokens_ of the tokenizer returns a strictly positive integer.\r\n\r\nThe docstring of the method is:\r\n```\r\n\"\"\"\r\n Make sure that all the special tokens attributes of the tokenizer (:obj:`tokenizer.mask_token`,\r\n :obj:`tokenizer.cls_token`, etc.) are in the vocabulary.\r\n Add the missing ones to the vocabulary if needed.\r\n Return:\r\n :obj:`int`: The number of tokens added in the vocaulary during the operation.\r\n\"\"\"\r\n```\r\n\r\nSo this warning appears when you add special tokens to the vocabulary **after** loading the tokenizer. If you use a model trained on the first version of the tokenizer (before adding the new tokens), you might feed it tokens it has not been trained on, which would lead to a random embedding and worse performance.\r\n\r\nIf you load your model and your tokenizer with the same training, for example:\r\n```\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\n```\r\nthen I suggest not providing special tokens, as the basic ones are already present.\r\n\r\nPlease let me know if it helps.", "You're absolutely correct, the tokenizer was adding special tokens (copied from another tokenizer, but wasn't really needing them), so I removed them now and the warning is gone.\r\n\r\nAnd, yes, I forgot to add the code - wasn't my best!\r\n\r\nMuch appreciating your follow up, @nassim-yagoub ", "The same warning also happens with this code:\r\n```\r\nimport torch\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('vinai/bertweet-base')\r\nmodel = AutoModel.from_pretrained('vinai/bertweet-base')\r\n```\r\n```\r\npython --version\r\nPython 3.9.13\r\n```\r\nI do not understand where tokens are added to the vocabulary after loading the tokenizer.", "@sbocconi Any idea how to solve the warnings in the case of BERTTweet?\r\n", "No unfortunately @codepujan, it has been a while I have not used this functionality", "This worked for me:\r\n\r\ntransformers.logging.set_verbosity_error()" ]
1,599
1,703
1,599
CONTRIBUTOR
null
Is there a way to avoid always getting: > Special tokens have been added in the vocabulary, make sure the associated word embedding are fine-tuned or trained @ https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1614 (other than turning logging off) As far as I can see from stepping through that code, there are always special tokens (e.g. eos, pad, etc.), i.e. there is nothing special about it. What purpose does this warning serve when loading a tokenizer? I'm not sure how the end user can act on the suggestion: > make sure the associated word embedding are fine-tuned or trained when they just want to run, say, the `generate` function on a pre-trained model, other than just learning to ignore this warning and not paying heed to when a warning is really saying something crucial. Thoughts?
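For readers who only want the message gone, the suppression mentioned in the final comment above looks like this (note it silences all transformers warnings, not just this one):

```python
import transformers

# raise the library's log level so warnings (including this one) are not printed
transformers.logging.set_verbosity_error()
```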
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7002/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7001
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7001/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7001/comments
https://api.github.com/repos/huggingface/transformers/issues/7001/events
https://github.com/huggingface/transformers/pull/7001
695,486,293
MDExOlB1bGxSZXF1ZXN0NDgxNzI0MTYx
7,001
typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
Apologies for the tiny PRs; I'm just sending them as I find them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7001/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7001", "html_url": "https://github.com/huggingface/transformers/pull/7001", "diff_url": "https://github.com/huggingface/transformers/pull/7001.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7001.patch", "merged_at": 1599542541000 }
https://api.github.com/repos/huggingface/transformers/issues/7000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7000/comments
https://api.github.com/repos/huggingface/transformers/issues/7000/events
https://github.com/huggingface/transformers/issues/7000
695,398,484
MDU6SXNzdWU2OTUzOTg0ODQ=
7,000
access to the embeddings for query and text used in a downstream NLP task
{ "login": "mchari", "id": 30506151, "node_id": "MDQ6VXNlcjMwNTA2MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchari", "html_url": "https://github.com/mchari", "followers_url": "https://api.github.com/users/mchari/followers", "following_url": "https://api.github.com/users/mchari/following{/other_user}", "gists_url": "https://api.github.com/users/mchari/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchari/subscriptions", "organizations_url": "https://api.github.com/users/mchari/orgs", "repos_url": "https://api.github.com/users/mchari/repos", "events_url": "https://api.github.com/users/mchari/events{/privacy}", "received_events_url": "https://api.github.com/users/mchari/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I realized that in QA, word/token embeddings are used, while I was looking for multi-sentence level embeddings. Pl. ignore my question." ]
1,599
1,599
1,599
NONE
null
Newbie question - is there any way to access the embeddings that are generated for the passage (and query) that is fed to a BertForQuestionAnswering model?
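One way to get at those representations is to request the hidden states; a minimal sketch (the question/passage strings are placeholders, and the exact output container varies across library versions):

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer("Who wrote the book?", "The book was written by Jane.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, return_dict=True)

hidden_states = outputs.hidden_states  # tuple: input embeddings + one tensor per layer
input_embeddings = hidden_states[0]    # shape (batch, seq_len, hidden_size)
last_layer = hidden_states[-1]         # contextual embeddings of query + passage tokens
```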
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7000/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6999/comments
https://api.github.com/repos/huggingface/transformers/issues/6999/events
https://github.com/huggingface/transformers/pull/6999
695,388,744
MDExOlB1bGxSZXF1ZXN0NDgxNjQwNjI1
6,999
fixed trainer tr_loss memory leak
{ "login": "StuartMesham", "id": 28049022, "node_id": "MDQ6VXNlcjI4MDQ5MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/28049022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StuartMesham", "html_url": "https://github.com/StuartMesham", "followers_url": "https://api.github.com/users/StuartMesham/followers", "following_url": "https://api.github.com/users/StuartMesham/following{/other_user}", "gists_url": "https://api.github.com/users/StuartMesham/gists{/gist_id}", "starred_url": "https://api.github.com/users/StuartMesham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StuartMesham/subscriptions", "organizations_url": "https://api.github.com/users/StuartMesham/orgs", "repos_url": "https://api.github.com/users/StuartMesham/repos", "events_url": "https://api.github.com/users/StuartMesham/events{/privacy}", "received_events_url": "https://api.github.com/users/StuartMesham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=h1) Report\n> Merging [#6999](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `1.44%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6999/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6999 +/- ##\n==========================================\n- Coverage 80.58% 79.14% -1.45% \n==========================================\n Files 161 161 \n Lines 30123 30123 \n==========================================\n- Hits 24276 23841 -435 \n- Misses 5847 6282 +435 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.95% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+5.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=footer). Last update [90ec78b...240b7da](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think this was touched by @jysohn23 recently so pinging him here", "I think the problem is not that we don't call `.item()` but that we don't call `.detach()`, which means some variables are kept forever for a backward pass (that is never called).\r\nThe `.item()` calls were removed because that's needed for faster TPU training.", "This ends up calling `.item()` every step, which ends up hurting performance by like 2X. What @sgugger said sounds promising. Can we try that out instead?", "I have updated it to create a detached tensor instead of calling item().", "Can you confirm it fixes the memory leak? This is the right fix IMO (@LysandreJik this might be the fix for the TPU memory leak we have in another issue too.)", "> Can you confirm it fixes the memory leak? This is the right fix IMO (@LysandreJik this might be the fix for the TPU memory leak we have in another issue too.)\r\n\r\nGreat! Yes, I have tested to make sure that this fixes the leak." ]
1,599
1,599
1,599
CONTRIBUTOR
null
Fixes #6939. The Trainer class contains the memory leak described [here](https://discuss.pytorch.org/t/cpu-ram-usage-increasing-for-every-epoch/24475/6). It is not specific to any particular model type and will occur with any model trained using the Trainer class; it is fixed in this pull request. The issue is demonstrated in [this Colab notebook](https://colab.research.google.com/drive/1KQZEiZtfY14sDiAQnfw0gkKj9eC6XpTm?usp=sharing). The model trains for around 20,000 steps (this takes around 10 minutes on a T4) before using up all 12GB of RAM.
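The pattern behind the fix, sketched from the discussion above (simplified; `training_step`, `model` and `dataloader` are placeholders, not the trainer's actual names):

```python
import torch

tr_loss = torch.tensor(0.0, device="cuda")  # running loss kept as a tensor

for step, batch in enumerate(dataloader):   # placeholder training loop
    loss = training_step(model, batch)      # tensor still attached to the graph
    # leaky: `tr_loss += loss` keeps every step's autograd graph alive
    tr_loss += loss.detach()                # detach so the graph can be freed
    # note: `loss.item()` would also break the reference, but it forces a
    # host-device sync every step, which is what slowed TPU training before
```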
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6999/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6999", "html_url": "https://github.com/huggingface/transformers/pull/6999", "diff_url": "https://github.com/huggingface/transformers/pull/6999.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6999.patch", "merged_at": 1599566854000 }
https://api.github.com/repos/huggingface/transformers/issues/6998
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6998/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6998/comments
https://api.github.com/repos/huggingface/transformers/issues/6998/events
https://github.com/huggingface/transformers/pull/6998
695,362,039
MDExOlB1bGxSZXF1ZXN0NDgxNjE3Njk0
6,998
Fix TF Trainer loss calculation
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=h1) Report\n> Merging [#6998](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0054a48cdd64e7309184a64b399ab2c58d75d4e5?el=desc) will **increase** coverage by `0.19%`.\n> The diff coverage is `20.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6998/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6998 +/- ##\n==========================================\n+ Coverage 80.53% 80.72% +0.19% \n==========================================\n Files 168 168 \n Lines 32179 32197 +18 \n==========================================\n+ Hits 25915 25991 +76 \n+ Misses 6264 6206 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.46% <20.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `93.26% <0.00%> (+4.84%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=footer). Last update [0054a48...b5c5bdc](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> \r\n> \r\n> Thanks! It still needs some refactoring but overall it looks OK. Can you also try the example scripts for sequence classification and multiple choice to see whether there is a drop in performance, and make sure it does not introduce any inconsistency.\r\n> \r\n> Optionally, if you have the possibility to run the example scripts on a multi-GPU environment as well and check the same thing, it would be appreciated; otherwise, no worries, I will do it before merging :)\r\n\r\nI will try to (and learn to) run the other scripts later today, but I can only run on a single-GPU environment.", "No problem, I can do it in a multi-GPU env.", "The trainer will still be task-agnostic, the goal is just to add a new parameter to the training_step function (or possibly a class field) to handle the value that will be used to compute the scaled loss instead. It should work for all the tasks.\r\n\r\nThis computation cannot be done in the loss, because the loss computation is done over a per-replica batch size and not over the global batch size.\r\n\r\nI'm not in favor of having different trainers. I don't mind having a few differences between the two trainers as long as the external usage is the same.", "I don't understand your last comment. Having users subclass Trainer for specific behavior is what is indicated in the documentation. We cannot offer everything any user can think of in the training loop, so this is the way of customizing one. There is a `Seq2SeqTrainer` in preparation on the PyTorch side for instance, for code that is relevant to this only.\r\n\r\nOpening the door to having task-specific components in the main tf_trainer file will make it unreadable in a few months, when every user will have added their own, and then users that rely on custom Trainers won't use that class anymore, because they won't understand it.\r\n\r\nTagging @julien-c for his advice.", "@sgugger, in fact, I also suspect that the PyTorch trainer, while working with token-level tasks, has some inaccurate loss computation - when we have distributed training and/or gradient accumulation. For example, [in DistilBertForTokenClassification](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L820), we use\r\n\r\n loss_fct = CrossEntropyLoss()\r\n\r\nand \r\n \r\n loss = loss_fct(active_logits, active_labels)\r\n\r\n[CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#crossentropyloss) has default reduction `mean`, so basically we compute the averaged loss over the tokens (label != -100) on each single batch. Then we accumulate it for `gradient_accumulation_steps` steps, and finally average again by dividing by `gradient_accumulation_steps`, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1041).\r\n\r\nHowever, this is no longer the same as the `per-example losses over the global batch divided by the number of tokens on that global batch`. In practice, it may not harm the training, but theoretically, it is not exactly what gradient accumulation is meant to be.\r\n\r\nBut this should be discussed in another thread, not here.", "> Opening the door to having task-specific components in the main tf_trainer file will make it unreadable in a few months, when every user will have added their own, and then users that rely on custom Trainers won't use that class anymore, because they won't understand it.\r\n\r\nThe current draft code requires a task to compute the number of `instances` in each `example`. For token-level tasks, it will be the number of tokens with labels != -100. For sentence-level tasks, in general, it will just be the number of sentences. The TF trainer will just use this information for further calculation, but it doesn't need to know what the tasks are.", "@jplu, this is not a finalized version. However, I want to have some feedback from you about whether this approach is OK. Thanks.", "For the sake of simplicity, I suspect this would be best left as a user-space extension/custom loss (via a subclass of TFTrainer for instance)\r\n\r\nIf needed we can add/improve extension points in TFTrainer to make it easier.\r\n\r\nWhat do you think?", "@julien-c \r\n\r\nMy thoughts are:\r\n\r\n- Hugging Face's `transformers` has `run_tf_ner.py`, which is assumed to give the correct training/evaluation losses\r\n- (maybe I misunderstand the role of the scripts in the examples dir?)\r\n- If they are meant to be correct, and let's say the right way for token-level tasks is to count the number of tokens (not ignored), then the current script won't give the correct results. In this case, the majority of users who use it won't know they are supposed to subclass the TFTrainer class\r\n- Even if they are aware of the necessity of counting tokens rather than examples, in TF it is not easy to get right - if it has to work correctly with a distributed strategy and/or gradient accumulation. It is not just about modifying the loss calculation - the token counting has to be done on a global batch, not a batch already distributed to a single replica. Then in a single replica, the per-example losses (on that small batch) have to be divided by the number of tokens computed on the big batch before being distributed.\r\n- If the scripts in the examples dir have only the purpose of demonstrating the library's usage - leaving users to customize trainers is fine, and adding/improving extension points seems good to me (although I don't know what it looks like for now). Maybe it is also good to have some warning and a brief tutorial to let users know how to do things.\r\n\r\nPlease let me know the team's decision about this issue (how to continue or whether to close it). Thanks.", "@chiapas Now it looks great! I really like it. Did you try it in the context of single and multi-replica?\r\n\r\n@julien-c @sgugger I think there is a misunderstanding here, this PR is not to add a new feature or any refactoring, this is a **bugfix** in the loss computation; that means that anybody currently using the current version of the trainer gets a wrong loss value for token classification tasks. And as @chiapas said, the way it is computed in the PyTorch trainer might need a fix as well. There are only a few changes, and they do not impact the readability for TF users.", "> \r\n> \r\n> @chiapas Now it looks great! I really like it. Did you try it in the context of single and multi-replica?\r\n\r\n@jplu Thanks. I haven't tried testing yet. I preferred to have your feedback about this new way of fixing it before finalizing and testing. By the way, for multi-replica, I can only run on Kaggle or Google Colab. I will let you know once the test is done.\r\n\r\n", "This is ok, no worries; TPU is fine as well 👍 ", "@chiapas As far as I can tell, this PR should also fix issue #6969, right? As we compute the total number of examples at every step.", "> \r\n> \r\n> @chiapas As far as I can tell, this PR should also fix issue #6969, right? As we compute the total number of examples at every step.\r\n\r\nYes. However, after I opened that issue and worked on this PR, I found that we have\r\n\r\n ds = (\r\n self.train_dataset.repeat()\r\n .shuffle(self.num_train_examples, seed=self.args.seed)\r\n .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)\r\n .prefetch(tf.data.experimental.AUTOTUNE)\r\n )\r\n\r\nSince the dataset is repeated first, it has no ending, and the drop_remainder has no actual effect (other than setting the batch shape from None to a fixed number along the batch dimension), so issue #6969 will never occur. However, in this case, `args.dataloader_drop_last` is somewhat confusing.", "Indeed, having the `repeat` has the advantage of avoiding the potential last partial batch in each epoch, so users don't need to think about scaling the gradients based on the actual batch size, and it makes `dataloader_drop_last` useless.", "> \r\n> \r\n> Indeed, having the `repeat` has the advantage of avoiding the potential last partial batch in each epoch, so users don't need to think about scaling the gradients based on the actual batch size, and it makes `dataloader_drop_last` useless.\r\n\r\nYes, I am ok with this. BTW, `dataloader_drop_last` might still have an effect - if `True`, the batch dimension will be fixed in the compiled graph. If it is set to `False`, even if we repeat the dataset first, the batch dimension will still be `None`. In this case, while working with TPU with the way we do gradient accumulation in trainer_tf.py, we will get an error message that complains TPU can't handle the slice or shape - I don't remember the exact message, but I can reproduce one quickly.", "The default value of `drop_remainder` is False, which results in an unknown batch size because the last batch may not be full; this is exactly why `drop_remainder` on TPU has to be set to `True` if no repeat is applied, otherwise we can leave it as False.\r\n\r\nSmall example:\r\n\r\n```python\r\n>>> dataset = tf.data.Dataset.range(100)\r\n>>> dataset.batch(4)\r\n<BatchDataset shapes: (None,), types: tf.int64>\r\n>>> dataset = tf.data.Dataset.range(100)\r\n>>> dataset.batch(4, drop_remainder=True)\r\n<BatchDataset shapes: (4,), types: tf.int64>\r\n```", "> \r\n> \r\n> The default value of `drop_remainder` is False, which results in an unknown batch size because the last batch may not be full; this is exactly why `drop_remainder` on TPU has to be set to `True` if no repeat is applied, otherwise we can leave it as False.\r\n\r\nWhat I am saying is: we can't leave it as `False` even if repeat is applied in `trainer_tf.py`. In general, if `repeat` is used, we don't have to drop. But due to the gradient accumulation implementation, if TPU is used and we set `drop_remainder=False`, even if `repeat` is applied, we will still get\r\n\r\n\t<PrefetchDataset shapes: ((None, 512, 512, 3), (None,)), types: (tf.float32, tf.int32)>\r\n\r\n\r\n\tNotFoundError: 3 root error(s) found.\r\n\t (0) Not found: {{function_node __inference_train_step_1_epoch_192920}} No proto found for key <<NO PROGRAM AS COMPILATION FAILED>>\r\n\t\t [[{{node TPUVariableReshard/reshard/_16819633198340046116/_31}}]]\r\n\t (1) Not found: {{function_node __inference_train_step_1_epoch_192920}} No proto found for key <<NO PROGRAM AS COMPILATION FAILED>>\r\n\t\t [[{{node TPUVariableReshard/reshard/_17949385379616849075/_19}}]]\r\n\t (2) Unimplemented: {{function_node __inference_train_step_1_epoch_192920}} Compilation failure: Dynamic input dimension to reshape that is both splitted and combined is not supported: output: f32[0,512,512,3], input: f32[<=0,512,512,3], input_dim: 0\r\n\t\t [[{{node strided_slice_2}}]]\r\n\t\t [[while/body/_1/while]]\r\n\t\tTPU compilation failed\r\n\t\t [[tpu_compile_succeeded_assert/_17625000101377989734/_5]]\r\n\t0 successful operations.\r\n\t6 derived errors ignored.\r\n\r\nYou can test it if you want on\r\n\r\nhttps://www.kaggle.com/yihdarshieh/tpu-gradient-accumulation?scriptVersionId=41324202\r\n\r\nby changing one line in\r\n\r\n def get_training_dataset(batch_size):", "Hummm, nice catch, I haven't tested this case with gradient accumulation, thanks!", "@chiapas for me it looks ok, do you want to add anything else? If not, can you switch the PR to open, so that we are able to merge it?", "> \r\n> \r\n> @chiapas for me it looks ok, do you want to add anything else? If not, can you switch the PR to open, so that we are able to merge it?\r\n\r\n@jplu, I haven't done any tests yet - I pushed the code immediately after writing it (in order to have your feedback), so from my side, I feel more comfortable checking a few things before it is merged to master. Unless you have done some testing and are eager to merge, maybe wait a bit, please? I think it would be ready at some point tomorrow. ", "I have tested it on single- and 4-GPU training with the example script for NER and it was ok, but take the time you need; the more we can test, the better. I will be happy to know how it works on TPU as well, even through a Colab.", "@jplu I had to fix bugs, and now I have a working version - I checked some intermediate values to make sure the calculation of the number of `instances` is correct and sent to the replica(s).\r\n\r\nDue to the bugs I found, I think there might be a chance that your test yesterday didn't use my code - you probably forgot to uninstall the original `transformers` and reinstall the `transformers` that is based on my version? I have this doubt because those bugs would throw errors, and the training wouldn't be successful. Sorry about this. If possible, could you test it again with the latest version on a multiple-GPU env.? Thanks.\r\n\r\nOtherwise, I tested with CPU and 1 GPU on Colab. It works fine. You can check here\r\n\r\n https://colab.research.google.com/drive/148whpTObbF53qU_ec0bVkyVWUsLlwppn?usp=sharing\r\n\r\nHowever, using TPU with tf 2.2 or 2.3, I had different errors, which I think are irrelevant to this PR's code. See below for the error messages. We can probably open a bug report and fix it later.\r\n\r\nAlso, the CI is not green because of some problems from the master branch. 
From my side, the code is ready to be merged (if you can test on multiple GPU again would be better). Thanks.\r\n\r\n Traceback (most recent call last):\r\n File \"run_tf_ner.py\", line 299, in <module>\r\n main()\r\n File \"run_tf_ner.py\", line 128, in main\r\n training_args.n_replicas,\r\n File \"/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py\", line 926, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/content/transformers/examples/token-classification/transformers/src/transformers/training_args_tf.py\", line 161, in n_replicas\r\n return self._setup_strategy.num_replicas_in_sync\r\n File \"/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py\", line 904, in __get__\r\n cached = self.fget(obj)\r\n File \"/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py\", line 926, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/content/transformers/examples/token-classification/transformers/src/transformers/training_args_tf.py\", line 132, in _setup_strategy\r\n tf.tpu.experimental.initialize_tpu_system(tpu)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/tpu_strategy_util.py\", line 103, in initialize_tpu_system\r\n serialized_topology = output.numpy()\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py\", line 961, in numpy\r\n maybe_arr = self._numpy() # pylint: disable=protected-access\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py\", line 929, in _numpy\r\n six.raise_from(core._status_to_exception(e.code, e.message), None)\r\n File \"<string>\", line 3, in raise_from\r\n tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string ..........\r\n\r\nand TPU with tf 2.3 gives different error\r\n\r\n Traceback (most recent call last):\r\n File \"run_tf_ner.py\", line 299, in <module>\r\n main()\r\n File \"run_tf_ner.py\", line 230, in main\r\n trainer.train()\r\n File \"/content/transformers/src/transformers/trainer_tf.py\", line 474, in train\r\n train_ds = self.get_train_tfdataset()\r\n File \"/content/transformers/src/transformers/trainer_tf.py\", line 137, in get_train_tfdataset\r\n self.num_train_examples = tf.data.experimental.cardinality(self.train_dataset).numpy()\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py\", line 1063, in numpy\r\n maybe_arr = self._numpy() # pylint: disable=protected-access\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py\", line 1031, in _numpy\r\n six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access\r\n File \"<string>\", line 3, in raise_from\r\n tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme '[local]' not implemented (file: 'runs/Sep11_12-45-28_1d2f5ee8ee35')\r\n Encountered when executing an operation using EagerExecutor. 
This error cancels all future operations and poisons their output tensors.\r\n Error in atexit._run_exitfuncs:\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/tpu_strategy.py\", line 540, in async_wait\r\n context.async_wait()\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py\", line 2319, in async_wait\r\n context().sync_executors()\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py\", line 658, in sync_executors\r\n pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)\r\n tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme '[local]' not implemented (file: 'runs/Sep11_12-45-28_1d2f5ee8ee35')\r\n Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.\r\n 2020-09-11 12:46:28.864320: W ./tensorflow/core/distributed_runtime/eager/destroy_tensor_handle_node.h:57] Ignoring an error encountered when deleting remote tensors handles: Invalid argument: Unable to find the relevant tensor remote_handle: Op ID: 9416, Output num: 0\r\n Additional GRPC error information from remote target /job:worker/replica:0/task:0:\r\n :{\"created\":\"@1599828388.860894650\",\"description\":\"Error received from peer ipv4:10.42.193.26:8470\",\"file\":\"external/com_github_grpc_grpc/src/core/lib/surface/call.cc\",\"file_line\":1056,\"grpc_message\":\"Unable to find the relevant tensor remote_handle: Op ID: 9416, Output num: 0\",\"grpc_status\":3}", ">Due to the bugs I found - I think there might be a chance that your test yesterday didn't use my code - probably forgot to uninstall the original transformers and reinstall transformers that is based on my version? I have this doubt because those bugs would throw errors, and the training wouldn't be successful. Sorry about this. If possible, could you test it again with the latest version on multiple GPU env.? Thanks.\r\n\r\nAh! Might be possible I forgot to install your version. I will re-test it, to be sure.\r\n\r\n> However, using TPU with tf 2.2 or 2.3, I had different errors, for which I think irrelevant to this PR code See below for the error messages. We can probably open a bug report and fix it later.\r\n\r\nThe second error you get means that you cannot load data from localhost, the files have to be hosted on a GCS to make it works.\r\n\r\n> Also, the CI is not green because of some problems from the master branch. \r\n\r\nCan you try to rebase on the current master and see if the CI error still occurs?\r\n\r\n> From my side, the code is ready to be merged (if you can test on multiple GPU again would be better). Thanks.\r\n\r\nI will test that ASAP today and will let you know.", "I have been able to run a NER task over 4 GPUs. Without gradient accumulation:\r\n\r\n```\r\n***** Running training *****\r\n Num examples = 24000\r\n Num Epochs = 3\r\n Instantaneous batch size per device = 32\r\n Total train batch size (w. 
parallel, distributed & accumulation) = 128\r\n Gradient Accumulation steps = 1\r\n Steps per epoch = 188\r\n Total optimization steps = 564\r\n{'loss': 18.6387, 'learning_rate': 4.9113474e-05, 'epoch': 0.05851063829787234, 'step': 10}\r\n{'loss': 14.222743, 'learning_rate': 4.822695e-05, 'epoch': 0.11170212765957446, 'step': 20}\r\n{'loss': 12.079448, 'learning_rate': 4.734042e-05, 'epoch': 0.16489361702127658, 'step': 30}\r\n{'loss': 10.455857, 'learning_rate': 4.6453897e-05, 'epoch': 0.21808510638297873, 'step': 40}\r\n{'loss': 9.295527, 'learning_rate': 4.5567373e-05, 'epoch': 0.2712765957446808, 'step': 50}\r\n{'loss': 8.374706, 'learning_rate': 4.468085e-05, 'epoch': 0.324468085106383, 'step': 60}\r\n{'loss': 7.674021, 'learning_rate': 4.379432e-05, 'epoch': 0.3776595744680851, 'step': 70}\r\n{'loss': 7.106626, 'learning_rate': 4.29078e-05, 'epoch': 0.4308510638297872, 'step': 80}\r\n{'loss': 6.6170573, 'learning_rate': 4.202128e-05, 'epoch': 0.48404255319148937, 'step': 90}\r\n{'loss': 6.226298, 'learning_rate': 4.113475e-05, 'epoch': 0.5372340425531915, 'step': 100}\r\n{'loss': 5.859087, 'learning_rate': 4.0248226e-05, 'epoch': 0.5904255319148937, 'step': 110}\r\n{'loss': 5.560567, 'learning_rate': 3.93617e-05, 'epoch': 0.6436170212765957, 'step': 120}\r\n{'loss': 5.2810636, 'learning_rate': 3.8475173e-05, 'epoch': 0.6968085106382979, 'step': 130}\r\n{'loss': 5.040142, 'learning_rate': 3.758865e-05, 'epoch': 0.75, 'step': 140}\r\n{'loss': 4.830164, 'learning_rate': 3.6702124e-05, 'epoch': 0.8031914893617021, 'step': 150}\r\n{'loss': 4.6353145, 'learning_rate': 3.5815603e-05, 'epoch': 0.8563829787234043, 'step': 160}\r\n{'loss': 4.446635, 'learning_rate': 3.492908e-05, 'epoch': 0.9095744680851063, 'step': 170}\r\n{'loss': 4.300565, 'learning_rate': 3.4042554e-05, 'epoch': 0.9627659574468085, 'step': 180}\r\n2020-09-11 23:08:59.187632: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23119 of 24000\r\n2020-09-11 23:08:59.564647: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.\r\n{'loss': 1.8294506, 'learning_rate': 3.3156026e-05, 'epoch': 1.0106382978723405, 'step': 190}\r\n{'loss': 1.576384, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 200}\r\n{'loss': 1.4572238, 'learning_rate': 3.1382977e-05, 'epoch': 1.1170212765957448, 'step': 210}\r\n{'loss': 1.4322406, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 220}\r\n{'loss': 1.3880556, 'learning_rate': 2.960993e-05, 'epoch': 1.2234042553191489, 'step': 230}\r\n{'loss': 1.3613675, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 240}\r\n{'loss': 1.3514798, 'learning_rate': 2.7836879e-05, 'epoch': 1.3297872340425532, 'step': 250}\r\n{'loss': 1.3266419, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 260}\r\n{'loss': 1.3012911, 'learning_rate': 2.606383e-05, 'epoch': 1.4361702127659575, 'step': 270}\r\n{'loss': 1.2993147, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 280}\r\n{'loss': 1.2913059, 'learning_rate': 2.4290777e-05, 'epoch': 1.5425531914893615, 'step': 290}\r\n{'loss': 1.2822802, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 300}\r\n{'loss': 1.2839314, 'learning_rate': 2.2517732e-05, 'epoch': 1.648936170212766, 'step': 310}\r\n{'loss': 1.2641081, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 320}\r\n{'loss': 1.2524884, 'learning_rate': 2.0744681e-05, 'epoch': 1.7553191489361701, 'step': 
330}\r\n{'loss': 1.2450953, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 340}\r\n{'loss': 1.2448001, 'learning_rate': 1.897163e-05, 'epoch': 1.8617021276595744, 'step': 350}\r\n{'loss': 1.2407304, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 360}\r\n{'loss': 1.2282307, 'learning_rate': 1.719858e-05, 'epoch': 1.9680851063829787, 'step': 370}\r\n2020-09-11 23:14:15.677977: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23228 of 24000\r\n2020-09-11 23:14:16.010139: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.\r\n{'loss': 0.85539556, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 380}\r\n{'loss': 0.94433624, 'learning_rate': 1.542553e-05, 'epoch': 2.074468085106383, 'step': 390}\r\n{'loss': 0.93659353, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 400}\r\n{'loss': 0.8978215, 'learning_rate': 1.365248e-05, 'epoch': 2.1808510638297873, 'step': 410}\r\n{'loss': 0.9126247, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 420}\r\n{'loss': 0.9438525, 'learning_rate': 1.1879432e-05, 'epoch': 2.2872340425531914, 'step': 430}\r\n{'loss': 0.9655043, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 440}\r\n{'loss': 0.9694119, 'learning_rate': 1.0106382e-05, 'epoch': 2.393617021276596, 'step': 450}\r\n{'loss': 0.95613927, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 460}\r\n{'loss': 0.9483009, 'learning_rate': 8.333333e-06, 'epoch': 2.5, 'step': 470}\r\n{'loss': 0.93453395, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 480}\r\n{'loss': 0.92573655, 'learning_rate': 6.5602835e-06, 'epoch': 2.6063829787234045, 'step': 490}\r\n{'loss': 0.9156919, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 500}\r\n{'loss': 0.9098517, 'learning_rate': 4.787233e-06, 'epoch': 2.7127659574468086, 'step': 510}\r\n{'loss': 0.9110744, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 520}\r\n{'loss': 0.89820355, 'learning_rate': 3.014183e-06, 'epoch': 2.8191489361702127, 'step': 530}\r\n{'loss': 0.8982555, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 540}\r\n{'loss': 0.8997822, 'learning_rate': 1.2411356e-06, 'epoch': 2.925531914893617, 'step': 550}\r\n{'loss': 0.89611685, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 560}\r\nTraining took: 0:17:32.846350\r\nSaving model in /home/jplu/model\r\nConfiguration saved in /home/jplu/model/config.json\r\nModel weights saved in /home/jplu/model/tf_model.h5\r\n***** Running Evaluation *****\r\n Num examples = 2200\r\n Batch size = 32\r\n{'eval_loss': 1.4825085626132246, 'eval_precision': 0.8298914945747288, 'eval_recall': 0.8713708195516354, 'eval_f1': 0.8501254930082466, 'epoch': 3.0, 'step': 564}\r\n```\r\n\r\nWith gradient accumulation:\r\n```\r\n***** Running training *****\r\n Num examples = 24000\r\n Num Epochs = 3\r\n Instantaneous batch size per device = 32\r\n Total train batch size (w. 
parallel, distributed & accumulation) = 256\r\n Gradient Accumulation steps = 2\r\n Steps per epoch = 94\r\n Total optimization steps = 282\r\n{'loss': 17.029345, 'learning_rate': 4.822695e-05, 'epoch': 0.11702127659574468, 'step': 10}\r\n{'loss': 13.583568, 'learning_rate': 4.6453897e-05, 'epoch': 0.22340425531914893, 'step': 20}\r\n{'loss': 11.413903, 'learning_rate': 4.468085e-05, 'epoch': 0.32978723404255317, 'step': 30}\r\n{'loss': 9.977048, 'learning_rate': 4.29078e-05, 'epoch': 0.43617021276595747, 'step': 40}\r\n{'loss': 8.904137, 'learning_rate': 4.113475e-05, 'epoch': 0.5425531914893617, 'step': 50}\r\n{'loss': 8.056796, 'learning_rate': 3.93617e-05, 'epoch': 0.648936170212766, 'step': 60}\r\n{'loss': 7.339738, 'learning_rate': 3.758865e-05, 'epoch': 0.7553191489361702, 'step': 70}\r\n{'loss': 6.7678766, 'learning_rate': 3.5815603e-05, 'epoch': 0.8617021276595744, 'step': 80}\r\n{'loss': 6.2809086, 'learning_rate': 3.4042554e-05, 'epoch': 0.9680851063829787, 'step': 90}\r\n2020-09-11 23:36:37.142279: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000\r\n2020-09-11 23:36:37.311979: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.\r\n{'loss': 2.2556372, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 100}\r\n{'loss': 2.0573573, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 110}\r\n{'loss': 1.9544038, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 120}\r\n{'loss': 1.8848253, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 130}\r\n{'loss': 1.837054, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 140}\r\n{'loss': 1.7885295, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 150}\r\n{'loss': 1.7535, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 160}\r\n{'loss': 1.7068337, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 170}\r\n{'loss': 1.6874169, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 180}\r\n2020-09-11 23:41:05.123380: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000\r\n2020-09-11 23:36:05.332080: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.\r\n{'loss': 1.189175, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 190}\r\n{'loss': 1.2820004, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 200}\r\n{'loss': 1.2558761, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 210}\r\n{'loss': 1.2865627, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 220}\r\n{'loss': 1.2853887, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 230}\r\n{'loss': 1.2650005, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 240}\r\n{'loss': 1.2414478, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 250}\r\n{'loss': 1.243986, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 260}\r\n{'loss': 1.2295542, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 270}\r\n{'loss': 1.2296557, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 280}\r\nTraining took: 0:19:10.070496\r\nSaving model in /home/jplu/model\r\nConfiguration saved in /home/jplu/model/config.json\r\nModel weights saved in /home/jplu/model/tf_model.h5\r\n***** Running 
Evaluation *****\r\n  Num examples = 2200\r\n  Batch size = 32\r\n{'eval_loss': 1.5663623533387114, 'eval_precision': 0.8107354478912513, 'eval_recall': 0.8548327820654171, 'eval_f1': 0.8322003577817532, 'epoch': 3.0, 'step': 282}\r\n```\r\nLooks ok to me. I have also tested for text classification and question answering without any error.\r\n", "> I have been able to run a NER task over 4 GPUs. Without gradient accumulation: [...]\r\n> Looks ok to me. I have also tested for text classification and question answering without any error.\r\n\r\nWow! Thank you @jplu, a test and a reply on Friday night :) \r\nI might need to get some gpu though if I continue to contribute - can't let you do all such tests all the time.\r\nGreat to see it works.", "Ahah no worries it is ok, not everybody can have such setup.\n\n@LysandreJik looks ok to merge." ]
1,599
1,651
1,600
COLLABORATOR
null
<!-- This line specifies which issue to close after the pull request is merged. -->\r\nFixes #6968\r\n\r\n## Description\r\n\r\nIssue #6968 is about the incorrect loss calculation caused by dividing the per-example losses by the number of sentences, rather than by the number of tokens (not ignored, i.e. label != -100), when the task is a token-level task.\r\n\r\n## Implementation\r\n\r\nBefore a whole batch is distributed to replicas, we compute the number of instances in that batch. Depending on the task type (sentence level or token level), the word `instance` means different things:\r\n\r\n- sentence-level task: it means examples\r\n- token-level task: it means the tokens with label != -100\r\n\r\nThis information (number of instances) is injected into the global batches. While each replica receives a small batch, it uses this information to correctly compute the scaled losses. If no information is provided in the dataset, the default behavior is to use the number of examples in a global batch. This way, the code change is minimal.\r\n\r\n## Test code\r\n\r\n```python\r\nimport os\r\nimport random\r\nimport shutil\r\n\r\nshutil.rmtree("./tmp/", ignore_errors=True)\r\nos.mkdir("./tmp/")\r\n\r\nnb_sentences = 70\r\nwords = ["i", "like", "dogs", "but", "you", "prefer", "cats"]\r\n\r\nwith open("./tmp/train.txt", "w", encoding="UTF-8") as fp:\r\n    for i in range(nb_sentences):\r\n        if i == 0:\r\n            for word in words:\r\n                fp.write(f"{word} O\n")\r\n        else:\r\n            fp.write(f" \n")\r\n        fp.write("\n")\r\n\r\nos.system("cp ./tmp/train.txt ./tmp/dev.txt")\r\n\r\nlabels = ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]\r\n\r\nwith open("./tmp/test.txt", "w", encoding="UTF-8") as fp:\r\n    for i in range(nb_sentences):\r\n        for word in words:\r\n            label = random.choice(labels)\r\n            fp.write(f"{word} {label}\n")\r\n        fp.write("\n")\r\n\r\ncommand = (\r\n    "python run_tf_ner.py "\r\n    + "--model_name_or_path distilbert-base-uncased "\r\n    + "--data_dir ./tmp/ --seed 2020 --output_dir ./tmp/ "\r\n    + "--overwrite_output_dir --logging_steps 1 "\r\n    + "--do_train --do_eval --do_predict "\r\n    + "--num_train_epochs 1 "\r\n    + f"--per_device_train_batch_size {nb_sentences} "\r\n    + f"--per_device_eval_batch_size {nb_sentences} --labels '' "\r\n    + "--max_seq_length 16"\r\n)\r\nprint(command)\r\nos.system(command)\r\n```\r\n\r\nTesting against master, you will see smaller loss values (~0.3) than when testing against this PR's code (1.0 ~ 2.0), because on master the denominator is `70` (even though 69 of the sentences contain only ignored tokens), while on this PR the denominator is `7` (the number of non-ignored tokens).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6998/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6998", "html_url": "https://github.com/huggingface/transformers/pull/6998", "diff_url": "https://github.com/huggingface/transformers/pull/6998.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6998.patch", "merged_at": 1600162860000 }
https://api.github.com/repos/huggingface/transformers/issues/6997
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6997/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6997/comments
https://api.github.com/repos/huggingface/transformers/issues/6997/events
https://github.com/huggingface/transformers/issues/6997
695,357,264
MDU6SXNzdWU2OTUzNTcyNjQ=
6,997
run_squad.py not working on 3.1.0 version
{ "login": "deepanshu650", "id": 65191985, "node_id": "MDQ6VXNlcjY1MTkxOTg1", "avatar_url": "https://avatars.githubusercontent.com/u/65191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepanshu650", "html_url": "https://github.com/deepanshu650", "followers_url": "https://api.github.com/users/deepanshu650/followers", "following_url": "https://api.github.com/users/deepanshu650/following{/other_user}", "gists_url": "https://api.github.com/users/deepanshu650/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepanshu650/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepanshu650/subscriptions", "organizations_url": "https://api.github.com/users/deepanshu650/orgs", "repos_url": "https://api.github.com/users/deepanshu650/repos", "events_url": "https://api.github.com/users/deepanshu650/events{/privacy}", "received_events_url": "https://api.github.com/users/deepanshu650/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Indeed, it seems it hasn't been up to date. Did you try running `run_squad_trainer.py`? It should be more up to date.\r\n\r\n@sgugger we should probably deprecate `run_squad.py` now that we have a Trainer-based SQuAD script.", "The missing part was the eval which shouldn't be too hard to add (https://github.com/huggingface/transformers/pull/4829#issuecomment-645994130)\r\n\r\nAnd then we can rename files as in https://github.com/huggingface/transformers/pull/5582", "> Indeed, it seems it hasn't been up to date. Did you try running `run_squad_trainer.py`? It should be more up to date.\r\n> \r\n> @sgugger we should probably deprecate `run_squad.py` now that we have a Trainer-based SQuAD script.\r\n\r\nNo run_squad_trainer.py also gives error in 3.1.0\r\n`!python transformers/examples/question-answering/run_squad_trainer.py --help\r\n`\r\nError is \r\n`python3: can't open file 'transformers/examples/question-answering/run_squad_trainer.py': [Errno 2] No such file or directory`", "@deepanshu650 the file exists, it's [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py). Are you sure you cloned the v3.1.0 repo?", "Yes it's working , initially I was working on copy of someone's notebook which was installing v2.3.0 and I had to pip install v3.1.0, so after changing to new notebook it starts running.\r\nBut it doesn't give evaluation results(which `run_squad.py` does) as `run_squad_trainer.py` not calling trainer.evaluate() though it makes eval_dataset .\r\nSo how do I evaluate . Thanks for replying.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->\r\n\r\n- `transformers` version: 3.0.2\r\n- Platform: Linux-4.15.0-91-generic-x86_64-with-debian-buster-sid\r\n- Python version: 3.7.6\r\n- PyTorch version (GPU?): 1.5.0 (False)\r\n- Tensorflow version (GPU?): 2.1.0 (False)\r\n- Using GPU in script?: True\r\n- Using distributed or parallel set-up in script?: True\r\n\r\n### Who can help\r\n<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @\r\nIf you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.\r\nPlease tag fewer than 3 people.\r\n\r\nalbert, bert, GPT2, XLM: @LysandreJik\r\ntokenizers: @mfuntowicz\r\nTrainer: @sgugger\r\nSpeed and Memory Benchmarks: @patrickvonplaten\r\nModel Cards: @julien-c\r\nTranslation: @sshleifer\r\nSummarization: @sshleifer\r\nTextGeneration: @TevenLeScao\r\nexamples/distillation: @VictorSanh\r\nnlp datasets: [different repo](https://github.com/huggingface/nlp)\r\nrust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\nText Generation: @TevenLeScao\r\nblenderbot: @mariamabarham\r\nBart: @sshleifer\r\nMarian: @sshleifer\r\nT5: @patrickvonplaten\r\nLongformer/Reformer: @patrickvonplaten\r\nTransfoXL/XLNet: @TevenLeScao\r\nexamples/seq2seq: @sshleifer\r\nexamples/bert-loses-patience: @JetRunner\r\ntensorflow: @jplu\r\nexamples/token-classification: @stefan-it\r\ndocumentation: @sgugger\r\n-->\r\n@LysandreJik @sshleifer\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Bert\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below)\r\nrun_squad.py is working with 2.9.1 but when I update it to 3.1.0 it gives an error\r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [x] an official GLUE/SQUaD task: SQUaD\r\n* [ ] my own task or dataset: (give details below)\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. !pip install transformers==3.1.0\r\nIt works when using 2.9.1 but not with this.\r\n2. !mkdir dataset \\r\n&& cd dataset \\r\n&& wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json \\r\n&& wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json\r\n3. !export SQUAD_DIR=/content/dataset \\r\n&& python transformers/examples/run_squad.py \\r\n--model_type bert \\r\n--model_name_or_path bert-base-uncased \\r\n--do_train \\r\n--do_eval \\r\n--do_lower_case \\r\n--train_file $SQUAD_DIR/train-v2.0.json \\r\n--predict_file $SQUAD_DIR/dev-v2.0.json \\r\n--per_gpu_train_batch_size 12 \\r\n--learning_rate 3e-5 \\r\n--num_train_epochs 1.0 \\r\n--max_seq_length 384 \\r\n--doc_stride 128 \\r\n--output_dir /content/model_output \\r\n--save_steps 1000 \\r\n--threads 4 \\r\n--version_2_with_negative\r\n\r\nError message is:\r\n```\r\n2020-09-07 19:33:26.850641: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n  File "run_squad.py", line 74, in <module>\r\n    (),\r\n  File "run_squad.py", line 73, in <genexpr>\r\n    (tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, RobertaConfig, XLNetConfig, XLMConfig)),\r\nAttributeError: type object 'BertConfig' has no attribute 'pretrained_config_archive_map'\r\n```\r\n\r\n<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting\r\nDo not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you would expect to happen. -->\r\nIt should start training without error, just like older versions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6997/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6996
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6996/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6996/comments
https://api.github.com/repos/huggingface/transformers/issues/6996/events
https://github.com/huggingface/transformers/pull/6996
695,328,818
MDExOlB1bGxSZXF1ZXN0NDgxNTg5Mjc2
6,996
[generation] decoder priority for choosing decoder_start_token_id value
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=h1) Report\n> Merging [#6996](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `0.55%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6996/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6996 +/- ##\n==========================================\n- Coverage 80.58% 80.03% -0.56% \n==========================================\n Files 161 161 \n Lines 30123 30123 \n==========================================\n- Hits 24276 24109 -167 \n- Misses 5847 6014 +167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (+5.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=footer). Last update [90ec78b...fd199a5](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Shouldn't this be part of #6996? Or would this PR be needed either way?", "> Shouldn't this be part of #6996? Or would this PR be needed either way?\r\n\r\nDid you mean \"part of \" https://github.com/huggingface/transformers/pull/6940?\r\n\r\nIt's there already, and yes it's required to work. I just thought that it's the best to change core functions in separate PRs and not as a part of a much larger PR. 
Please advise if this is not the right approach.\r\n\r\nIn this particular case `config.decoder` is more specific than `config` and therefore should be checked first. I don't think there is currently any model that actively uses `config.decoder` (grep didn't find any), therefore it must have been untested since it was added in the first place, perhaps?\r\n\r\n", "Don't really agree with this PR - I think `bos_token_id` should have higher priority than `model.config.decoder.bos_token_id`. The `model.config.decoder.bos_token_id` was mainly added because of the `EncoderDecoderModel` framework", "Yes I meant \"part of \" #6940? \r\nIf two PRs do not exist/make sense without each other I think they should be together. \r\nOtherwise we can merge one without the other and have either broken or dead code", "@stas00 why can't you use `decoder_start_token_id` for FSMT?", "> why can't you use `decoder_start_token_id` for FSMT?\r\n\r\nThat works - thank you for the suggestion, @sshleifer \r\n", "> Don't really agree with this PR - I think `bos_token_id` should have higher priority than `model.config.decoder.bos_token_id`. The `model.config.decoder.bos_token_id` was mainly added because of the `EncoderDecoderModel` framework\r\n\r\nThe way I read the intention is that if it goes:\r\n```\r\n    if self.config.is_encoder_decoder:\r\n```\r\nwe are in encoder-decoder zone and as such `.decoder` should take priority\r\n\r\nI guess Bart and friends are semi-encoder-decoder as far as the framework goes.\r\n\r\nThank you for the feedback, @patrickvonplaten \r\n\r\nI suppose in the ideal world we should have tests that validate such scenarios." ]
1,599
1,603
1,599
CONTRIBUTOR
null
`config.decoder` needs to be checked before the model's `config` when setting `decoder_start_token_id`. This is needed for https://github.com/huggingface/transformers/pull/6940, where I think for the first time there is an actual `config.decoder`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6996/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6996", "html_url": "https://github.com/huggingface/transformers/pull/6996", "diff_url": "https://github.com/huggingface/transformers/pull/6996.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6996.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6995
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6995/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6995/comments
https://api.github.com/repos/huggingface/transformers/issues/6995/events
https://github.com/huggingface/transformers/pull/6995
695,293,815
MDExOlB1bGxSZXF1ZXN0NDgxNTU4ODk0
6,995
[from_pretrained] Allow tokenizer_type ≠ model_type
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Not sure I fully understand the use case, but nothing against the principle of it.\r\n\r\nThe idea is to prevent combinatorial explosion of \"model types\" when only the tokenizer is different (e.g. Flaubert, CamemBERT if we wanted to support them today)\r\n\r\nIn the future we might even want to have a few model-agnostic tokenizer classes like ByteLevelBPETokenizer (basically RobertaTokenizer), as they can be initialized pretty exhaustively from the init args stored in `tokenizer_config.json`\r\n\r\n\r\n\r\n", "Documented by @sgugger in https://github.com/huggingface/transformers/pull/8152" ]
1,599
1,604
1,599
MEMBER
null
For an example usage of this PR, see the `tokenizer_class` attribute in this config.json: https://s3.amazonaws.com/models.huggingface.co/bert/julien-c/dummy-diff-tokenizer/config.json\r\n\r\nInstead of a class, we could have used a `tokenizer_type` belonging to the set of all `model_type`s, like `"bert"`, etc., but it feels more restrictive, especially in case we start having tokenizer classes that are not obviously linked to a "model", like a potential "TweetTokenizer".\r\n\r\nContext: https://github.com/huggingface/transformers/pull/6129\r\n\r\n**Update: documented by @sgugger in https://github.com/huggingface/transformers/pull/8152**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6995/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6995", "html_url": "https://github.com/huggingface/transformers/pull/6995", "diff_url": "https://github.com/huggingface/transformers/pull/6995.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6995.patch", "merged_at": 1599639780000 }
https://api.github.com/repos/huggingface/transformers/issues/6994
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6994/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6994/comments
https://api.github.com/repos/huggingface/transformers/issues/6994/events
https://github.com/huggingface/transformers/pull/6994
695,268,507
MDExOlB1bGxSZXF1ZXN0NDgxNTM2NTQy
6,994
Fix typo
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=h1) Report\n> Merging [#6994](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `0.54%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6994/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6994 +/- ##\n==========================================\n- Coverage 80.58% 80.04% -0.55% \n==========================================\n Files 161 161 \n Lines 30123 30123 \n==========================================\n- Hits 24276 24111 -165 \n- Misses 5847 6012 +165 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+6.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=footer). Last update [90ec78b...09dda6e](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. -->\r\nFixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6994/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6994", "html_url": "https://github.com/huggingface/transformers/pull/6994", "diff_url": "https://github.com/huggingface/transformers/pull/6994.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6994.patch", "merged_at": 1599553378000 }
https://api.github.com/repos/huggingface/transformers/issues/6993
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6993/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6993/comments
https://api.github.com/repos/huggingface/transformers/issues/6993/events
https://github.com/huggingface/transformers/issues/6993
695,116,265
MDU6SXNzdWU2OTUxMTYyNjU=
6,993
PegasusForConditionalGeneration stops at unknown token
{ "login": "adjeiv", "id": 52001888, "node_id": "MDQ6VXNlcjUyMDAxODg4", "avatar_url": "https://avatars.githubusercontent.com/u/52001888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adjeiv", "html_url": "https://github.com/adjeiv", "followers_url": "https://api.github.com/users/adjeiv/followers", "following_url": "https://api.github.com/users/adjeiv/following{/other_user}", "gists_url": "https://api.github.com/users/adjeiv/gists{/gist_id}", "starred_url": "https://api.github.com/users/adjeiv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adjeiv/subscriptions", "organizations_url": "https://api.github.com/users/adjeiv/orgs", "repos_url": "https://api.github.com/users/adjeiv/repos", "events_url": "https://api.github.com/users/adjeiv/events{/privacy}", "received_events_url": "https://api.github.com/users/adjeiv/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "The easiest way I can think of is to avoid generating the unk token altogether.\r\n\r\nadd the following method to `PegasusForConditionalGeneration`\r\n\r\n```python\r\n def adjust_logits_during_generation(self, logits, cur_len, max_length):\r\n # Note, this will break with a tokenizer that is not PegasusTokenizer\r\n logits[:, list(range(2, 105))] = float(\"-inf\") # never predict unk tokens\r\n if cur_len == max_length - 1 and self.config.eos_token_id is not None:\r\n self._force_token_ids_generation(logits, self.config.eos_token_id)\r\n return logits\r\n```\r\n\r\nLet me know if that helps!\r\n\r\n", "> The easiest way I can think of is to avoid generating the unk token altogether.\r\n> \r\n> add the following method to `PegasusForConditionalGeneration`\r\n> \r\n> ```python\r\n> def adjust_logits_during_generation(self, logits, cur_len, max_length):\r\n> # Note, this will break with a tokenizer that is not PegasusTokenizer\r\n> logits[:, list(range(2, 105))] = float(\"-inf\") # never predict unk tokens\r\n> if cur_len == max_length - 1 and self.config.eos_token_id is not None:\r\n> self._force_token_ids_generation(logits, self.config.eos_token_id)\r\n> return logits\r\n> ```\r\n> \r\n> Let me know if that helps!\r\n\r\nThank you for the reply!\r\nI've added the code and checked it's being run, but unfortunately the output still stops at an unknown token.", "I can't replicate.\r\ngot \r\n> As a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful.\r\n\r\non the branch of #7014 \r\n", "Setting the min_length parameter to 100 yields the problem (as I should have mentioned). Might this be an issue with the minimum length being too long relative to the size of the input?", "Yeah. the `xsum` model especially is trained to generate very short summaries.\r\n`pegasus-arxiv` for example, can generate up to 256 tokens.\r\n\r\nyou can see each available checkpoint and it's maximum input and output sizes [here](https://github.com/huggingface/transformers/blob/0f58903bb62870342eae52f5a02c9105ec6f9b1e/src/transformers/configuration_pegasus.py#L50)\r\n\r\n+ `max_length`: max length to generate\r\n+ `max_position_embeddings`: max input size." ]
1,599
1,602
1,602
NONE
null
Hi all,\r\n\r\nWhen following the code snippet from the [huggingface documentation](https://huggingface.co/transformers/master/model_doc/pegasus.html) but replacing the text, I have found that the summary stops when it reaches an unknown token. Is there a way around this?\r\n\r\n```python\r\nsrc_text = [\r\n    """As a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful. Later, she learned about the ordeal the animals are subjected to. "So many elephants had ghastly wounds on their hips, massive tumours and blood oozing out of their ankles, because chains had cut into their flesh and many of them were blind," Iyer told the BBC. She has made a documentary, Gods in Shackles, in an attempt to draw attention to the treatment of temple elephants she saw in India. "They were so helpless and the chains were so heavy," she said. "It was absolutely heart-breaking for me to witness this."""\r\n]\r\n```\r\n\r\nUsing this sample text, for example, my summary is:\r\n\r\nAs a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful, but later, she learned about the ordeal the animals are subjected to in India, in an attempt to draw attention to the treatment of templeunk_9
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6993/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6992
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6992/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6992/comments
https://api.github.com/repos/huggingface/transformers/issues/6992/events
https://github.com/huggingface/transformers/issues/6992
695,053,049
MDU6SXNzdWU2OTUwNTMwNDk=
6,992
Mobile Bert Tiny model
{ "login": "borsork377", "id": 70897626, "node_id": "MDQ6VXNlcjcwODk3NjI2", "avatar_url": "https://avatars.githubusercontent.com/u/70897626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borsork377", "html_url": "https://github.com/borsork377", "followers_url": "https://api.github.com/users/borsork377/followers", "following_url": "https://api.github.com/users/borsork377/following{/other_user}", "gists_url": "https://api.github.com/users/borsork377/gists{/gist_id}", "starred_url": "https://api.github.com/users/borsork377/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borsork377/subscriptions", "organizations_url": "https://api.github.com/users/borsork377/orgs", "repos_url": "https://api.github.com/users/borsork377/repos", "events_url": "https://api.github.com/users/borsork377/events{/privacy}", "received_events_url": "https://api.github.com/users/borsork377/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "MobileBERT is supported, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_mobilebert.py) for the code, and [here](https://huggingface.co/transformers/model_doc/mobilebert.html) for the docs. [Here](https://huggingface.co/models?search=mobilebert) are all the available mobilebert models on the hub." ]
1,599
1,599
1,599
NONE
null
# 🚀 Feature request\r\n\r\nCan you add support for variants of MobileBERT?\r\n\r\n## Motivation\r\n\r\nThe package currently provides various variants of BERT - 'bert-base-cased', 'bert-base-uncased', 'bert-large'... Can you also similarly provide Mobile Bert Tiny as well?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6992/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6991/comments
https://api.github.com/repos/huggingface/transformers/issues/6991/events
https://github.com/huggingface/transformers/pull/6991
695,052,999
MDExOlB1bGxSZXF1ZXN0NDgxMzQ4MjQ5
6,991
Conversion scripts shouldn't have relative imports
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "https://github.com/huggingface/transformers/blob/0203ad43bcd0b29423dec6ca1a58ed58300f0d61/src/transformers/convert_mbart_original_checkpoint_to_pytorch.py#L7\r\n\r\nHello! Does this line also need to be changed?" ]
1,599
1,600
1,599
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6991/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6991", "html_url": "https://github.com/huggingface/transformers/pull/6991", "diff_url": "https://github.com/huggingface/transformers/pull/6991.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6991.patch", "merged_at": 1599481867000 }
https://api.github.com/repos/huggingface/transformers/issues/6990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6990/comments
https://api.github.com/repos/huggingface/transformers/issues/6990/events
https://github.com/huggingface/transformers/pull/6990
695,033,143
MDExOlB1bGxSZXF1ZXN0NDgxMzMwNzI5
6,990
README for HooshvareLab/bert-fa-base-uncased
{ "login": "m3hrdadfi", "id": 2601833, "node_id": "MDQ6VXNlcjI2MDE4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3hrdadfi", "html_url": "https://github.com/m3hrdadfi", "followers_url": "https://api.github.com/users/m3hrdadfi/followers", "following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}", "gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}", "starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions", "organizations_url": "https://api.github.com/users/m3hrdadfi/orgs", "repos_url": "https://api.github.com/users/m3hrdadfi/repos", "events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}", "received_events_url": "https://api.github.com/users/m3hrdadfi/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
ParsBERT v2.0 is a fine-tuned and vocab-reconstructed version of ParsBERT, and it's able to be used in other scopes!\r\n\r\nSome features:\r\n\r\n- We added some unused-vocab for use in summarization and other scopes.\r\n- We fine-tuned the model on a vast range of writing styles in the Persian language.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6990/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6990/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6990", "html_url": "https://github.com/huggingface/transformers/pull/6990", "diff_url": "https://github.com/huggingface/transformers/pull/6990.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6990.patch", "merged_at": 1599511431000 }
https://api.github.com/repos/huggingface/transformers/issues/6989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6989/comments
https://api.github.com/repos/huggingface/transformers/issues/6989/events
https://github.com/huggingface/transformers/issues/6989
695,024,101
MDU6SXNzdWU2OTUwMjQxMDE=
6,989
TypeError: __init__() got an unexpected keyword argument 'cache_dir'
{ "login": "Abbyyan", "id": 12140508, "node_id": "MDQ6VXNlcjEyMTQwNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/12140508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abbyyan", "html_url": "https://github.com/Abbyyan", "followers_url": "https://api.github.com/users/Abbyyan/followers", "following_url": "https://api.github.com/users/Abbyyan/following{/other_user}", "gists_url": "https://api.github.com/users/Abbyyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abbyyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abbyyan/subscriptions", "organizations_url": "https://api.github.com/users/Abbyyan/orgs", "repos_url": "https://api.github.com/users/Abbyyan/repos", "events_url": "https://api.github.com/users/Abbyyan/events{/privacy}", "received_events_url": "https://api.github.com/users/Abbyyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved with https://github.com/huggingface/transformers/issues/319" ]
1,599
1,599
1,599
NONE
null
I'm fine-tuning with the example `run_language_modeling.py` as follows. ```shell python run_language_modeling.py --output_dir=output_dir --model_type gpt2 --model_name_or_path distilgpt2 --do_train --train_data_file=xxx.data.txt ``` It failed with the following error: ```shell Traceback (most recent call last): File "run_language_modeling.py", line 313, in <module> main() File "run_language_modeling.py", line 242, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "run_language_modeling.py", line 143, in get_dataset cache_dir=cache_dir, TypeError: __init__() got an unexpected keyword argument 'cache_dir' ``` Could you please tell me how to fix this? Thanks a lot.
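The linked resolution (#319) points at a version mismatch: the script passes `cache_dir` to a dataset class whose installed version predates that argument. Below is a minimal sketch of the workaround, assuming that mismatch is the cause; `get_dataset` here is a hypothetical stand-in for the helper in the script, and upgrading `transformers` so script and library agree is the other option.

```python
# Hypothetical patch: construct TextDataset without `cache_dir`, matching
# the older constructor signature.
from transformers import PreTrainedTokenizer, TextDataset

def get_dataset(tokenizer: PreTrainedTokenizer, file_path: str, block_size: int = 512):
    # No `cache_dir` keyword here; older TextDataset versions reject it.
    return TextDataset(tokenizer=tokenizer, file_path=file_path, block_size=block_size)
```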
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6989/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6988/comments
https://api.github.com/repos/huggingface/transformers/issues/6988/events
https://github.com/huggingface/transformers/issues/6988
694,968,749
MDU6SXNzdWU2OTQ5Njg3NDk=
6,988
t5 embed_tokens
{ "login": "agnesmm", "id": 14213975, "node_id": "MDQ6VXNlcjE0MjEzOTc1", "avatar_url": "https://avatars.githubusercontent.com/u/14213975?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agnesmm", "html_url": "https://github.com/agnesmm", "followers_url": "https://api.github.com/users/agnesmm/followers", "following_url": "https://api.github.com/users/agnesmm/following{/other_user}", "gists_url": "https://api.github.com/users/agnesmm/gists{/gist_id}", "starred_url": "https://api.github.com/users/agnesmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agnesmm/subscriptions", "organizations_url": "https://api.github.com/users/agnesmm/orgs", "repos_url": "https://api.github.com/users/agnesmm/repos", "events_url": "https://api.github.com/users/agnesmm/events{/privacy}", "received_events_url": "https://api.github.com/users/agnesmm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6988/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6987/comments
https://api.github.com/repos/huggingface/transformers/issues/6987/events
https://github.com/huggingface/transformers/issues/6987
694,934,313
MDU6SXNzdWU2OTQ5MzQzMTM=
6,987
DefaultCPUAllocator: can't allocate memory: you tried to allocate 100663296 bytes
{ "login": "Abbyyan", "id": 12140508, "node_id": "MDQ6VXNlcjEyMTQwNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/12140508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abbyyan", "html_url": "https://github.com/Abbyyan", "followers_url": "https://api.github.com/users/Abbyyan/followers", "following_url": "https://api.github.com/users/Abbyyan/following{/other_user}", "gists_url": "https://api.github.com/users/Abbyyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abbyyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abbyyan/subscriptions", "organizations_url": "https://api.github.com/users/Abbyyan/orgs", "repos_url": "https://api.github.com/users/Abbyyan/repos", "events_url": "https://api.github.com/users/Abbyyan/events{/privacy}", "received_events_url": "https://api.github.com/users/Abbyyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This happens because you don't have sufficient memory in your machine, indeed. You can try reducing the batch size.", "> This happens because you don't have sufficient memory in your machine, indeed. You can try reducing the batch size.\r\n\r\nIn my opinion, the problem occurs may due to my dataset is too large(Relative to the memory of my machine). But the loading data part may be optimized referring to the following issue. \r\nhttps://stackoverflow.com/questions/51444059/how-to-iterate-over-two-dataloaders-simultaneously-using-pytorch/57890309#57890309 ", "There are two questions:\r\n(1) I've changed a machine to run the code. It started running normal, but will quit midway while training. Is this also related to my machine's memory? \r\nThis is the data of my machine during training.\r\n![image](https://user-images.githubusercontent.com/12140508/92390398-5d669080-f14d-11ea-9260-55f6ead36c72.png)\r\nThis is the exit interface. I don't know what's the matter. \r\n![image](https://user-images.githubusercontent.com/12140508/92390715-fdbcb500-f14d-11ea-98f5-576d29eb9fb2.png)\r\n\r\n(2) In addition , how can i use gpu to run `run_language_modeling.py` please. Thanks a lot.\r\n", "I have a similar problem. Was there any solution in your case?\r\n", "same problem", "same problem", "Were there any solutions to this? I am encountering the exact same issue trying to train Dolly on the original dataset.", "same question when i train bloom", "same issue, please let me know if any solution on this has been done\r\n" ]
1,599
1,707
1,599
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I'm using the `run_language_modeling.py` to fine tuning the model `distilgpt2`. The command i used as follows: ```shell python run_language_modeling.py --output_dir=output_dir --model_type gpt2 --model_name_or_path distilgpt2 --do_train --train_data_file=data/data.txt --overwrite_output_dir ``` But it core with following error ```shell File "/data1/xxx/transformers/src/transformers/activations.py", line 30, in gelu_new return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0)))) RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 100663296 bytes. Error code 12 (Cannot allocate memory) ``` Does it core duing to the insufficient memory on my machine? Hope for your help. Thanks a lot. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6987/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6986/comments
https://api.github.com/repos/huggingface/transformers/issues/6986/events
https://github.com/huggingface/transformers/pull/6986
694,908,711
MDExOlB1bGxSZXF1ZXN0NDgxMjIyODI5
6,986
Demoing LXMERT with raw images by incorporating the FRCNN model for roi-pooled extraction and bounding-box prediction on the GQA answer set.
{ "login": "eltoto1219", "id": 14030663, "node_id": "MDQ6VXNlcjE0MDMwNjYz", "avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltoto1219", "html_url": "https://github.com/eltoto1219", "followers_url": "https://api.github.com/users/eltoto1219/followers", "following_url": "https://api.github.com/users/eltoto1219/following{/other_user}", "gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}", "starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions", "organizations_url": "https://api.github.com/users/eltoto1219/orgs", "repos_url": "https://api.github.com/users/eltoto1219/repos", "events_url": "https://api.github.com/users/eltoto1219/events{/privacy}", "received_events_url": "https://api.github.com/users/eltoto1219/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,600
1,600
CONTRIBUTOR
null
This is a follow-up PR to the initial LXMERT integration. It includes the Faster-RCNN code to convert raw images into usable roi-pooled features for downstream use in LXMERT or any other suitable vision model.
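The FRCNN extractor itself ships with the examples, so its API is not reproduced here; the sketch below only shows where its roi-pooled output plugs into `LxmertModel` through the documented `visual_feats`/`visual_pos` inputs. Random tensors stand in for real features, and the region count of 36 is an arbitrary illustrative choice.

```python
# Random tensors stand in for FRCNN output; shapes follow LXMERT's defaults
# (2048-d roi features, 4-d normalized bounding boxes per region).
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("what color is the sky?", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)  # (batch, regions, feat_dim)
visual_pos = torch.rand(1, 36, 4)        # (batch, regions, box coords)
outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
```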
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6986/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6986/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6986", "html_url": "https://github.com/huggingface/transformers/pull/6986", "diff_url": "https://github.com/huggingface/transformers/pull/6986.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6986.patch", "merged_at": 1600092424000 }
https://api.github.com/repos/huggingface/transformers/issues/6985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6985/comments
https://api.github.com/repos/huggingface/transformers/issues/6985/events
https://github.com/huggingface/transformers/issues/6985
694,896,158
MDU6SXNzdWU2OTQ4OTYxNTg=
6,985
Enhance a MarianMT pretrained model from HuggingFace with more training data
{ "login": "stelmath", "id": 38814495, "node_id": "MDQ6VXNlcjM4ODE0NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/38814495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stelmath", "html_url": "https://github.com/stelmath", "followers_url": "https://api.github.com/users/stelmath/followers", "following_url": "https://api.github.com/users/stelmath/following{/other_user}", "gists_url": "https://api.github.com/users/stelmath/gists{/gist_id}", "starred_url": "https://api.github.com/users/stelmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stelmath/subscriptions", "organizations_url": "https://api.github.com/users/stelmath/orgs", "repos_url": "https://api.github.com/users/stelmath/repos", "events_url": "https://api.github.com/users/stelmath/events{/privacy}", "received_events_url": "https://api.github.com/users/stelmath/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Have you tried the finetune.sh script shown [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.sh)? In addition to the short list of CLI flags listed there, you could try adding:\r\n\r\n```\r\n--src_lang \"en\" \\\r\n--tgt_lang \"de\" \\\r\n--num_train_epochs 400 \\\r\n--warmup_steps 20 \\\r\n--train_batch_size 32 \\\r\n--eval_batch_size 32 \\\r\n--data_dir \"/data/dir\" \\\r\n--output_dir \"/path/to/store/model/etc\" \\\r\n--cache_dir \"/path/for/misc/files\" \\\r\n--max_source_length 128 \\\r\n--max_target_length 128 \\\r\n--val_max_target_length 128 \\\r\n--test_max_target_length 128 \\\r\n--model_name_or_path \"</path/to/pretrained>\"\r\n```\r\n\r\nwhere the \"/path/to/pretrained\" could be either a local path on your machine or MarianMT model (Opus-en-de or equivalent). The \"data/dir\" has a \"train.source\" and \"train.target\" for the source & target languages, such that line number x of the target is a translation of line x in the source (and same with \"val.source\" and \"val.target\"). I have changed the finetune.py script [here](https://github.com/huggingface/transformers/blob/77cd0e13d2d09f60d2f6d8fb8b08f493d7ca51fe/examples/seq2seq/finetune.py#L415) to \r\n```\r\nparser = TranslationModule.add_model_specific_args(parser, os.getcwd())\r\n\r\n```\r\nand then ran the finetune.sh script.\r\n\r\n\r\nNote: The gradients blew up when I used the \"fp16\" flag (with Pytorch 1.6), so I had removed it. Also, you might want to check on the \"val_check_interval\", \"check_val_every_n_epoch\", and probably check [this issue](https://github.com/huggingface/transformers/issues/3447) on how to save multiple checkpoints.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63774619/enhance-a-marianmt-pretrained-model-from-huggingface-with-more-training-data
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6985/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6984
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6984/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6984/comments
https://api.github.com/repos/huggingface/transformers/issues/6984/events
https://github.com/huggingface/transformers/pull/6984
694,868,239
MDExOlB1bGxSZXF1ZXN0NDgxMTg4Nzg5
6,984
Cannot index `None`
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=h1) Report\n> Merging [#6984](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/995a958dd18d4326e608efc3bfc4005acfef8e56?el=desc) will **increase** coverage by `0.26%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6984/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6984 +/- ##\n==========================================\n+ Coverage 80.03% 80.30% +0.26% \n==========================================\n Files 161 161 \n Lines 30122 30123 +1 \n==========================================\n+ Hits 24108 24190 +82 \n+ Misses 6014 5933 -81 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=footer). Last update [995a958...8ecbd15](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6950
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6984/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6984", "html_url": "https://github.com/huggingface/transformers/pull/6984", "diff_url": "https://github.com/huggingface/transformers/pull/6984.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6984.patch", "merged_at": 1599468968000 }
https://api.github.com/repos/huggingface/transformers/issues/6983
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6983/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6983/comments
https://api.github.com/repos/huggingface/transformers/issues/6983/events
https://github.com/huggingface/transformers/issues/6983
694,782,473
MDU6SXNzdWU2OTQ3ODI0NzM=
6,983
[generation] multiple eos/pad asserts/ifs in generate search functions
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I think eos is always defined, but I think this (or just checking `pad_token_id` was one of Patrick's first PRs. He would know more.", "Thank you for that feedback, @sshleifer.\r\n\r\nIf it makes things simpler, I could re-work both functions wrt these 2 tokens' definition checks and you can review a PR instead. \r\n\r\nI just wanted to validate that the issue is real and I'm not missing something obvious before I invest time into doing that.", "Hey @stas00,\r\n\r\nThis is definitely part of the code that should be refactored :D Super hard to follow the logic there :-/\r\n\r\nAs a start, this PR is probably quite useful for context: https://github.com/huggingface/transformers/pull/2885. So there are a couple of models where EOS token is not defined and I'm quite sure that the code you linked does not always get hit. It can very well be that we apply beam search to `OpenAIGPT` - with a given `max_length`. `OpenAIGPT` does not have an EOS token, but beam search should work nevertheless. \r\n\r\nIt's quite a tricky pad token / eos token / ... logic that is implemented there. I think we have to be super careful to not break anything here - even if all the slow tests pass, it might not be enough (`OpenAIGPT` beam search is not integration tested...)\r\n\r\n Also, I'm currently working on refactoring the generate function, will ping you guys in a couple of days with a first design proposition. My idea is to pull apart beam search + greedy / beam search + sampling / no beam search + greedy / no beam searh + greedy to make everything more readable. I'm not sure whether it's worth diving deep into the generate() logic before we have a more readable code", "That sounds like a fantastic plan, @patrickvonplaten!\r\n\r\n> So there are a couple of models where EOS token is not defined and I'm quite sure that the code you linked does not always get hit. \r\n\r\nI stand corrected, that's good to know, thank you!. \r\n\r\nThat means that the code is very tricky, since a reader will expect that at some point the generation should be complete and `done` set to True, which currently absolutely requires eos. I haven't considered the case where it'll go through that loop and not hit done. If I follow it carefully it only happens if `max_length` is reached and there is no `done` yet, and moreover it has to be that the hypos are exactly of the same length. if they aren't the same, eos is almost always required.\r\n\r\nAs you are saying there isn't really a test that covers that (odd?) case. Actually, PR https://github.com/huggingface/transformers/pull/6982 is very likely to break it then, since now it requires eos for both situations where hypos are of the same length and are not. But if it breaks that very special case, then the issue lies elsewhere and it just happened to work. (As I suggested I changed \"is\" for \"was\" in an input and suddenly eos was gone from all of the hypos.) \r\n\r\nNote: I have only run the code in my head and haven't validated that in fact it'd break something. It's possible that you're talking about a completely different case.\r\n", "I think your PR is fine because if no `eos_token_id` is defined, this condition can never happen: `sent_lengths[i] < max_length:`. 
\r\nWhat I mean is that if no `eos_token_id` is defined no matter what `generate()` method is used, all sent_length will always be == `max_length` and the condition will not be hit.", "ah, yes, you're absolutely correct, Patrick - you definitely have been holding that generation code in your head for much longer than I - I don't have the full coverage yet :)", "Reopen if this was a mistake!" ]
1,599
1,602
1,602
CONTRIBUTOR
null
In `_generate_no_beam_search`, `eos_token_id` is required: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L731 (that code always gets hit) ``` assert ( eos_token_id is not None and pad_token_id is not None ), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined" ``` Why do we assert and check `eos_token_id is not None` multiple times through the code? Why not assert once at the top of the function and then just use it? Moreover, all those `if eos_token_id is not None` checks could then be removed (or reduced, if there are other parts to them). Also, a larger question: is there a model where `eos_token_id` is not defined? If there is none, then why not assert once at the top of `generate` and then just use it everywhere in sub-calls without testing its definition? Oh, I also see `pad_token_id` is used in `_generate_no_beam_search` without testing whether it's defined: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L571 ``` tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents) ``` Is it the same situation as `eos_token_id`, that is, is it always needed? I see it may be defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L355 but only if `eos_token_id` is defined. ``` if pad_token_id is None and eos_token_id is not None: logger.warning( "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id) ) pad_token_id = eos_token_id ``` My thinking is that if this worked until now for all models, it's more evidence that `eos_token_id` effectively has to be required. In `_generate_no_beam_search`, `pad_token_id` is required and, like `eos_token_id`, could be asserted once at the top rather than multiple times through the code. Thank you for reviewing my observations. It's possible that some (or all?) are incorrect if I missed something.
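To make the proposal concrete, here is an illustrative-only sketch (the helper name is hypothetical, not library code) of validating both tokens once up front. Note that the replies below establish that some models, e.g. `OpenAIGPT`, legitimately have no EOS token, so the unconditional assert would be too strict in practice.

```python
# Hypothetical consolidation of the scattered checks; not the library code.
def validate_special_tokens(eos_token_id, pad_token_id):
    if pad_token_id is None and eos_token_id is not None:
        pad_token_id = eos_token_id  # mirrors the existing fallback
    assert eos_token_id is not None, "eos_token_id must be defined"
    assert pad_token_id is not None, "pad_token_id must be defined"
    return eos_token_id, pad_token_id
```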
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6983/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6982
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6982/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6982/comments
https://api.github.com/repos/huggingface/transformers/issues/6982/events
https://github.com/huggingface/transformers/pull/6982
694,748,013
MDExOlB1bGxSZXF1ZXN0NDgxMDg0MDM1
6,982
[generation] consistently add eos tokens
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "IMO, `eos` should not always be there. The reason is that if the user defines `max_length=30` and the EOS token was not generated by the model, then no EOS token should be added. I think EOS token should only be added if it is produced by the model. *E.g.* a generated sentence like \"I will go to the office and\" should not have an added EOS token at the end.", "> [...] _E.g._ a generated sentence like \"I will go to the office and\" should not have an added EOS token at the end.\r\n\r\nThank you for explaining that, @patrickvonplaten.\r\n\r\nCould you also please review this PR, as it's unrelated to max_length or undefined EOS, that was a related question that I wasn't sure about.\r\n", "@stas00\r\nHave you run the slow tests that might be effected? (will take 10-30 mins)\r\n\r\n```\r\nrun_generation_integration_tests () {\r\n\t# assumes USE_CUDA is exported, rather than passed\r\n\tRUN_SLOW=1 pytest tests/test_modeling_pegasus.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_bart.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_t5.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_marian.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_mbart.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_encoder_decoder.py\r\n\tRUN_SLOW=1 pytest tests/test_pipelines.py\r\n\tRUN_SLOW=1 pytest tests/test_modeling_gpt2.py\r\n}\r\n```", "Good call, @sshleifer! (I edited the last one to `tests/test_modeling_gpt2.py`)\r\n\r\n```\r\nRUN_SLOW=1 pytest --disable-warnings tests/test_modeling_pegasus.py tests/test_modeling_bart.py tests/test_modeling_t5.py tests/test_modeling_marian.py tests/test_modeling_mbart.py tests/test_modeling_encoder_decoder.py tests/test_pipelines.py tests/test_modeling_gpt2.py\r\n====================================================================== test session starts =======================================================================\r\nplatform linux -- Python 3.7.5, pytest-5.4.1, py-1.8.1, pluggy-0.13.1\r\nrootdir: /mnt/nvme1/code/huggingface/transformers\r\nplugins: hypothesis-5.5.4, timeout-1.4.2, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0, repeat-0.8.0, flakefinder-1.0.0\r\ncollected 211 items\r\n\r\ntests/test_modeling_pegasus.py .. [ 0%]\r\ntests/test_modeling_bart.py ..............s....s............................ [ 23%]\r\ntests/test_modeling_t5.py ......................s..s........... [ 41%]\r\ntests/test_modeling_marian.py ................ [ 48%]\r\ntests/test_modeling_mbart.py s...s. [ 51%]\r\ntests/test_modeling_encoder_decoder.py ............................... [ 66%]\r\ntests/test_pipelines.py ....................................... [ 84%]\r\ntests/test_modeling_gpt2.py .......................s........ 
[100%]\r\n\r\n==================================================== 204 passed, 7 skipped, 45 warnings in 980.34s (0:16:20) =====================================================\r\n```", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=h1) Report\n> Merging [#6982](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce37be9d94da57897cce9c49b3421e6a8a927d4a?el=desc) will **increase** coverage by `2.39%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6982/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6982 +/- ##\n==========================================\n+ Coverage 77.60% 80.00% +2.39% \n==========================================\n Files 161 161 \n Lines 30120 30119 -1 \n==========================================\n+ Hits 23374 24096 +722 \n+ Misses 6746 6023 -723 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <100.00%> (-0.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <0.00%> (+20.74%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=footer). Last update [ce37be9...85dd09d](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
Currently beam search returns inconsistent outputs: if hypotheses have different lengths we get eos; if they are the same length, we don't. I found a sentence where changing one letter in one of the input words makes all the beam search outputs suddenly lack eos, which is inconsistent behavior. This PR makes the output more consistent. (But not 100%, please see below.) --------- Also, why not replace: ``` if sent_lengths[i] < max_length: decoded[i, sent_lengths[i]] = eos_token_id ``` with: ``` decoded[i, sent_lengths[i]] = eos_token_id ``` Shouldn't eos always be there? If the generated data gets truncated, the caller needs to use a larger `max_length`. Currently, if the hypothesis lengths are on the cusp of `max_length`, some of them will get eos whereas others won't, which is again inconsistent. Please correct me if my logic is flawed. ----- I also looked at `_generate_no_beam_search`: its eos-adding logic is somewhat different. Should the two functions (beam/no_beam) be consistent eos-injection-wise?
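A toy reproduction of the finalization step under discussion, to make the inconsistency concrete; this is not the library code, just a sketch of padding finished hypotheses and writing eos when a hypothesis ends before `max_length`.

```python
# Toy finalization: pad short hypotheses and inject eos consistently.
import torch

max_length, pad_token_id, eos_token_id = 8, 0, 2
hypos = [[5, 6, 7], [5, 6, 7, 8, 9]]  # two finished hypotheses
sent_lengths = [len(h) for h in hypos]

decoded = torch.full((len(hypos), max_length), pad_token_id, dtype=torch.long)
for i, h in enumerate(hypos):
    decoded[i, : sent_lengths[i]] = torch.tensor(h)
    if sent_lengths[i] < max_length:
        decoded[i, sent_lengths[i]] = eos_token_id  # eos for every short hypo
print(decoded)  # each row: tokens, then eos, then padding
```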
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6982", "html_url": "https://github.com/huggingface/transformers/pull/6982", "diff_url": "https://github.com/huggingface/transformers/pull/6982.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6982.patch", "merged_at": 1599638917000 }
https://api.github.com/repos/huggingface/transformers/issues/6981
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6981/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6981/comments
https://api.github.com/repos/huggingface/transformers/issues/6981/events
https://github.com/huggingface/transformers/issues/6981
694,702,347
MDU6SXNzdWU2OTQ3MDIzNDc=
6,981
LongformerForQuestionAnswering sample code error
{ "login": "prince14322", "id": 19497571, "node_id": "MDQ6VXNlcjE5NDk3NTcx", "avatar_url": "https://avatars.githubusercontent.com/u/19497571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prince14322", "html_url": "https://github.com/prince14322", "followers_url": "https://api.github.com/users/prince14322/followers", "following_url": "https://api.github.com/users/prince14322/following{/other_user}", "gists_url": "https://api.github.com/users/prince14322/gists{/gist_id}", "starred_url": "https://api.github.com/users/prince14322/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prince14322/subscriptions", "organizations_url": "https://api.github.com/users/prince14322/orgs", "repos_url": "https://api.github.com/users/prince14322/repos", "events_url": "https://api.github.com/users/prince14322/events{/privacy}", "received_events_url": "https://api.github.com/users/prince14322/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`tokenizer` is not a callable in v.2.11.0, which is why you are getting this error. Also the example that you posted is from stable docs not from 2.11.0, upgrading to latest stable version should resolve the issue.", "@patil-suraj\r\nSorry for asking this naive ques but how to upgrade to latest stable version?\r\nand\r\nWhat about the first error?\r\nTypeError: init() got an unexpected keyword argument 'return_dict'\r\n\r\nCan you please point to some reference or any link?", "Upgrading to latest version should also resolve the first issue. To upgrade\r\n\r\n`pip install -U transformers`", "Thank You. It worked." ]
1,599
1,599
1,599
NONE
null
## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Longformer/Reformer: @patrickvonplaten ## Information Model I am using: LongformerForQuestionAnswering The problem arises when using: https://huggingface.co/transformers/v2.11.0/model_doc/longformer.html#longformerforquestionanswering ## To reproduce Steps to reproduce the behavior: 1. I ran the following code in a Kaggle kernel: from transformers import LongformerTokenizer, LongformerForQuestionAnswering import torch tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa") model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa", return_dict=True) question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors="pt") input_ids = encoding["input_ids"] attention_mask = encoding["attention_mask"] outputs = model(input_ids, attention_mask=attention_mask) start_logits = outputs.start_logits end_logits = outputs.end_logits all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist()) answer_tokens = all_tokens[torch.argmax(start_logits) :torch.argmax(end_logits)+1] answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)) # remove space prepending space token **The following error occurred:** --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-7-70f3a4bf4161> in <module> 2 import torch 3 tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa") ----> 4 model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa", return_dict=True) 5 question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" 6 encoding = tokenizer(question, text, return_tensors="pt") /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 653 654 # Instantiate model. --> 655 model = cls(config, *model_args, **model_kwargs) 656 657 if state_dict is None and not from_tf: TypeError: __init__() got an unexpected keyword argument 'return_dict' > and after making the following change: _encoding = tokenizer(question, text, return_tensors="pt") -> encoding = tokenizer(question, text)_ the following error occurred: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-10-54aabf4ba7c0> in <module> 1 question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" ----> 2 encoding = tokenizer(question, text, return_tensors="pt") TypeError: 'LongformerTokenizer' object is not callable
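As the comments note, the resolution is upgrading (`pip install -U transformers`); under 3.1+ both `return_dict=True` and calling the tokenizer directly work. A cleaned-up version of the same example for reference, with the checkpoint name taken from the issue:

```python
# Works on transformers >= 3.1, where the tokenizer is callable and
# from_pretrained accepts return_dict.
import torch
from transformers import LongformerTokenizer, LongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = LongformerForQuestionAnswering.from_pretrained(
    "allenai/longformer-large-4096-finetuned-triviaqa", return_dict=True
)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")
outputs = model(**encoding)

all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
answer_tokens = all_tokens[torch.argmax(outputs.start_logits): torch.argmax(outputs.end_logits) + 1]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)))
```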
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6981/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6980
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6980/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6980/comments
https://api.github.com/repos/huggingface/transformers/issues/6980/events
https://github.com/huggingface/transformers/pull/6980
694,646,428
MDExOlB1bGxSZXF1ZXN0NDgwOTk2MTIx
6,980
[gen utils] missing else case
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=h1) Report\n> Merging [#6980](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce37be9d94da57897cce9c49b3421e6a8a927d4a?el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6980/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6980 +/- ##\n==========================================\n+ Coverage 77.60% 77.99% +0.39% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 23374 23492 +118 \n+ Misses 6746 6628 -118 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-46.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=footer). Last update [ce37be9...2b9171e](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
1. An `else` branch is missing - I hit that case while porting a model. It probably needs to assert there? 2. Also, the comment on top seems to be outdated (only vocab_size is being set there).
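For illustration only (the function and attribute names here are hypothetical, not the actual `generation_utils` code): the pattern being fixed is an `if`/`elif` chain that silently falls through, where an explicit `else` turns the unexpected case into a loud error.

```python
# Hypothetical example of closing an if/elif chain with an explicit else.
def get_vocab_size(config):
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    elif hasattr(config, "decoder") and hasattr(config.decoder, "vocab_size"):
        return config.decoder.vocab_size
    else:
        raise AttributeError("config defines no vocab_size anywhere")
```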
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6980/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6980", "html_url": "https://github.com/huggingface/transformers/pull/6980", "diff_url": "https://github.com/huggingface/transformers/pull/6980.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6980.patch", "merged_at": 1599478087000 }
https://api.github.com/repos/huggingface/transformers/issues/6979
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6979/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6979/comments
https://api.github.com/repos/huggingface/transformers/issues/6979/events
https://github.com/huggingface/transformers/issues/6979
694,645,289
MDU6SXNzdWU2OTQ2NDUyODk=
6,979
RunTime Error: CUDA out of memory when running trainer.train()
{ "login": "seyonechithrananda", "id": 46096704, "node_id": "MDQ6VXNlcjQ2MDk2NzA0", "avatar_url": "https://avatars.githubusercontent.com/u/46096704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seyonechithrananda", "html_url": "https://github.com/seyonechithrananda", "followers_url": "https://api.github.com/users/seyonechithrananda/followers", "following_url": "https://api.github.com/users/seyonechithrananda/following{/other_user}", "gists_url": "https://api.github.com/users/seyonechithrananda/gists{/gist_id}", "starred_url": "https://api.github.com/users/seyonechithrananda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seyonechithrananda/subscriptions", "organizations_url": "https://api.github.com/users/seyonechithrananda/orgs", "repos_url": "https://api.github.com/users/seyonechithrananda/repos", "events_url": "https://api.github.com/users/seyonechithrananda/events{/privacy}", "received_events_url": "https://api.github.com/users/seyonechithrananda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @seyonechithrananda , I'm facing the same problem, how did you solve this issue?", "Ping here about that, I'm having the same problem", "I am suffering from the same problem, would like to know the solution if there is any", "Same problem here. When I run this code on my machine:\r\nhttps://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb\r\nhttps://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing\r\n\r\nI get:\r\nRuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 1.96 GiB total capacity; 785.01 MiB already allocated; 111.25 MiB free; 832.00 MiB reserved in total by PyTorch)\r\n\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 GeForce MX250 Off | 00000000:02:00.0 Off | N/A |\r\n| N/A 56C P3 N/A / N/A | 1897MiB / 2002MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n```\r\n\r\n", "Hi ,I'm too facing this issue, any solution found on this ?", "I am having the same issue and it is recurring. Any solution?" ]
1,599
1,624
1,599
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: P100 GPU instance (Google Colab) - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @julien-c @LysandreJik ## Information Model I am using (Bert, XLNet ...): RoBERTa with a byte-pair encoder (loading from the checkpointed pre-trained model on the Hugging Face model hub). The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) I modified a variant of [`01_how_to_train.ipynb`](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb), replacing the `LineByLineTextDataset` as it results in out-of-memory issues with my text corpus. I built a Hugging Face `nlp` dataset which tokenizes the corpus. The task I am working on is: * [ ] an official GLUE/SQuAD task: (give the name) * [x] my own task or dataset: (give details below): the PubChem 1M SELFIES set, a set of one million SELFIES strings. "SELFIES" is a 100% chemically valid molecular string representation. You can view the library [here](https://github.com/aspuru-guzik-group/selfies) (I'm one of the developers). ## To reproduce Steps to reproduce the behavior: Reproducing requires a copy of the `shard_00_selfies.txt` dataset (click [here](https://drive.google.com/file/d/1DRq8UgBaKSNfyYtqNMyQ4WimG64sSOFR/view?usp=sharing) for a Drive link), as well as the tokenizer's files, which can be loaded from the Hugging Face hub under the following name: `seyonec/BPE_SELFIES_PubChem_shard00_50k`. From there, you can run a variant of the following Colab file, with modified file paths, of course: https://colab.research.google.com/drive/1a4edCW1b2rSVA_bkqEaywhvGMM_nQzRC?usp=sharing <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Strangely, the Trainer doesn't output a link to the Weights and Biases run page like it normally did before (with a near-identical script only a couple of days ago). It throws a CUDA out-of-memory error once I run the `trainer.train()` command: ![Screen Shot 2020-09-06 at 10 42 44 PM](https://user-images.githubusercontent.com/46096704/92343684-621a3e80-f092-11ea-8163-3cff1cd03e6c.png) <img width="1200" alt="Screen Shot 2020-09-06 at 10 45 58 PM" src="https://user-images.githubusercontent.com/46096704/92343851-d8b73c00-f092-11ea-8c89-a0034191975d.png"> Thanks for the help! Any advice or help is very welcome; I've been stuck for the past few days with various memory issues with tokenization, and now with running the Trainer class for some reason. 😄 You can check out the main public repository for this project, along with the abstract and more, [here](https://github.com/seyonechithrananda/bert-loves-chemistry)!
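Since the report above is about `trainer.train()` exhausting GPU memory, a common first mitigation (not part of the original notebook; the argument values and output path here are illustrative assumptions) is to shrink the per-device batch size and compensate with gradient accumulation:

```python
# Hypothetical OOM mitigation for the report above: smaller per-device
# batches plus gradient accumulation keep the effective batch size the
# same while lowering peak GPU memory. All values are illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",             # illustrative output path
    per_device_train_batch_size=8,  # smaller batches use less GPU memory
    gradient_accumulation_steps=4,  # 8 * 4 = effective batch size of 32
    fp16=True,                      # mixed precision roughly halves activation memory
)
```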
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6979/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6978
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6978/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6978/comments
https://api.github.com/repos/huggingface/transformers/issues/6978/events
https://github.com/huggingface/transformers/issues/6978
694,644,712
MDU6SXNzdWU2OTQ2NDQ3MTI=
6,978
[gen utils] missing else case
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "sent PR https://github.com/huggingface/transformers/pull/6980" ]
1,599
1,599
1,599
CONTRIBUTOR
null
This code: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L361-L369 ``` # current position and vocab size if hasattr(self.config, "vocab_size"): vocab_size = self.config.vocab_size elif ( self.config.is_encoder_decoder and hasattr(self.config, "decoder") and hasattr(self.config.decoder, "vocab_size") ): vocab_size = self.config.decoder.vocab_size ``` 1. The `else` case is missing - I hit that case while porting a model. It should probably raise an error there: ``` raise ValueError("either self.config.vocab_size or self.config.decoder.vocab_size needs to be defined") ``` 2. Also, the comment on top seems to be outdated (only `vocab_size` is being set there)
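Putting the quoted snippet and the suggested error together, the complete guarded lookup would look like the sketch below, written as a standalone helper; the helper name `_get_vocab_size` is hypothetical, and the exact message merged in PR #6980 may differ:

```python
# Hypothetical helper sketching the fixed lookup: the missing else
# branch now raises instead of silently leaving vocab_size undefined.
def _get_vocab_size(config):
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    if (
        config.is_encoder_decoder
        and hasattr(config, "decoder")
        and hasattr(config.decoder, "vocab_size")
    ):
        return config.decoder.vocab_size
    raise ValueError(
        "either config.vocab_size or config.decoder.vocab_size needs to be defined"
    )
```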
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6978/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6977
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6977/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6977/comments
https://api.github.com/repos/huggingface/transformers/issues/6977/events
https://github.com/huggingface/transformers/pull/6977
694,569,859
MDExOlB1bGxSZXF1ZXN0NDgwOTI5OTg0
6,977
[s2s] warn if --fp16 for torch 1.6
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=h1) Report\n> Merging [#6977](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.59%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6977/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6977 +/- ##\n==========================================\n+ Coverage 79.45% 80.04% +0.59% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 23931 24109 +178 \n+ Misses 6189 6011 -178 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.02% <0.00%> (-5.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.16%)` | :arrow_up: |\n| ... 
and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=footer). Last update [f72fe1f...6b88230](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6977/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6977", "html_url": "https://github.com/huggingface/transformers/pull/6977", "diff_url": "https://github.com/huggingface/transformers/pull/6977.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6977.patch", "merged_at": 1599439290000 }
https://api.github.com/repos/huggingface/transformers/issues/6976
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6976/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6976/comments
https://api.github.com/repos/huggingface/transformers/issues/6976/events
https://github.com/huggingface/transformers/issues/6976
694,523,687
MDU6SXNzdWU2OTQ1MjM2ODc=
6,976
LXMERT imports
{ "login": "ecekt", "id": 16474496, "node_id": "MDQ6VXNlcjE2NDc0NDk2", "avatar_url": "https://avatars.githubusercontent.com/u/16474496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ecekt", "html_url": "https://github.com/ecekt", "followers_url": "https://api.github.com/users/ecekt/followers", "following_url": "https://api.github.com/users/ecekt/following{/other_user}", "gists_url": "https://api.github.com/users/ecekt/gists{/gist_id}", "starred_url": "https://api.github.com/users/ecekt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ecekt/subscriptions", "organizations_url": "https://api.github.com/users/ecekt/orgs", "repos_url": "https://api.github.com/users/ecekt/repos", "events_url": "https://api.github.com/users/ecekt/events{/privacy}", "received_events_url": "https://api.github.com/users/ecekt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @ecekt, \r\n\r\nThanks for your issue!\r\nYes, you are correct ```\"unc-nlp/lxmert-base-uncased\"``` should only be imported with the `AutoModel` or `LxmertModel` class.\r\n\r\n@julien-c - the config looks correct I'm not sure why it states `AutoModelWithLMHeadModel` in the examlpe here: https://huggingface.co/unc-nlp/lxmert-base-uncased", "Hi @patrickvonplaten, thank you for the reply! I will explore its use with the LxmertModel import then.\r\n\r\nBest.", "This is now fixed on https://huggingface.co/unc-nlp/lxmert-base-uncased, thanks for the heads up" ]
1,599
1,602
1,602
NONE
null
# ❓ Questions & Help Hello, I was very happy to see that LXMERT is being integrated into this library and I wanted to try it out. I got a KeyError in configuration_auto.py, as in CONFIG_MAPPING there was no 'lxmert'. Then, I re-installed transformers from source. The KeyError went away, but this time I encountered issues in modeling_auto.py as below: ```model = AutoModelWithLMHead.from_pretrained("unc-nlp/lxmert-base-uncased")``` /transformers/src/transformers/modeling_auto.py", line 841, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class <class 'transformers.configuration_lxmert.LxmertConfig'> for this kind of AutoModel: AutoModelWithLMHead. Model type should be one of T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, ElectraConfig, EncoderDecoderConfig, ReformerConfig. ` In the example [here](https://huggingface.co/unc-nlp/lxmert-base-uncased), AutoModelWithLMHead is imported. However, because of the error above, I also tried LxmertPreTrainedModel, which led to some other error concerning the initialization of the weights. Importing LxmertModel instead seems to work out without errors. Would that be the correct way if I want to extract features from a pretrained model? I would appreciate any help regarding this issue! Best regards, Ece
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6976/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6975
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6975/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6975/comments
https://api.github.com/repos/huggingface/transformers/issues/6975/events
https://github.com/huggingface/transformers/pull/6975
694,492,082
MDExOlB1bGxSZXF1ZXN0NDgwODYxOTIw
6,975
Created README for labse_bert model card
{ "login": "pvl", "id": 3596, "node_id": "MDQ6VXNlcjM1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvl", "html_url": "https://github.com/pvl", "followers_url": "https://api.github.com/users/pvl/followers", "following_url": "https://api.github.com/users/pvl/following{/other_user}", "gists_url": "https://api.github.com/users/pvl/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvl/subscriptions", "organizations_url": "https://api.github.com/users/pvl/orgs", "repos_url": "https://api.github.com/users/pvl/repos", "events_url": "https://api.github.com/users/pvl/events{/privacy}", "received_events_url": "https://api.github.com/users/pvl/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=h1) Report\n> Merging [#6975](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.56%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6975/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6975 +/- ##\n==========================================\n+ Coverage 79.45% 80.01% +0.56% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 23931 24102 +171 \n+ Misses 6189 6018 -171 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=footer). Last update [f72fe1f...b65c486](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing this is great!" ]
1,599
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6975/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6975", "html_url": "https://github.com/huggingface/transformers/pull/6975", "diff_url": "https://github.com/huggingface/transformers/pull/6975.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6975.patch", "merged_at": 1600174453000 }
https://api.github.com/repos/huggingface/transformers/issues/6974
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6974/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6974/comments
https://api.github.com/repos/huggingface/transformers/issues/6974/events
https://github.com/huggingface/transformers/pull/6974
694,422,282
MDExOlB1bGxSZXF1ZXN0NDgwODAxNTcx
6,974
Create README.md
{ "login": "abedkhooli", "id": 11407254, "node_id": "MDQ6VXNlcjExNDA3MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abedkhooli", "html_url": "https://github.com/abedkhooli", "followers_url": "https://api.github.com/users/abedkhooli/followers", "following_url": "https://api.github.com/users/abedkhooli/following{/other_user}", "gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions", "organizations_url": "https://api.github.com/users/abedkhooli/orgs", "repos_url": "https://api.github.com/users/abedkhooli/repos", "events_url": "https://api.github.com/users/abedkhooli/events{/privacy}", "received_events_url": "https://api.github.com/users/abedkhooli/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=h1) Report\n> Merging [#6974](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.57%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6974/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6974 +/- ##\n==========================================\n+ Coverage 79.45% 80.02% +0.57% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 23931 24105 +174 \n+ Misses 6189 6015 -174 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=footer). Last update [f72fe1f...efa8495](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Model Card for https://huggingface.co/akhooli/mbart-large-cc25-ar-en
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6974/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6974", "html_url": "https://github.com/huggingface/transformers/pull/6974", "diff_url": "https://github.com/huggingface/transformers/pull/6974.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6974.patch", "merged_at": 1599478282000 }
https://api.github.com/repos/huggingface/transformers/issues/6973
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6973/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6973/comments
https://api.github.com/repos/huggingface/transformers/issues/6973/events
https://github.com/huggingface/transformers/pull/6973
694,314,652
MDExOlB1bGxSZXF1ZXN0NDgwNzA3NDY0
6,973
Fixed the default number of attention heads in Reformer Configuration
{ "login": "tznurmin", "id": 2726629, "node_id": "MDQ6VXNlcjI3MjY2Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/2726629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tznurmin", "html_url": "https://github.com/tznurmin", "followers_url": "https://api.github.com/users/tznurmin/followers", "following_url": "https://api.github.com/users/tznurmin/following{/other_user}", "gists_url": "https://api.github.com/users/tznurmin/gists{/gist_id}", "starred_url": "https://api.github.com/users/tznurmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tznurmin/subscriptions", "organizations_url": "https://api.github.com/users/tznurmin/orgs", "repos_url": "https://api.github.com/users/tznurmin/repos", "events_url": "https://api.github.com/users/tznurmin/events{/privacy}", "received_events_url": "https://api.github.com/users/tznurmin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm a bit indifferent to this change, but I'm ok with setting it to `12`" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Just a simple fix. The default number of attention heads was 2 instead of 12.
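A quick way to see the effect of the fix (a sketch; it assumes the parameter is exposed as `num_attention_heads` on the Reformer configuration):

```python
# After this change, a freshly constructed ReformerConfig should default
# to 12 attention heads rather than 2.
from transformers import ReformerConfig

config = ReformerConfig()
print(config.num_attention_heads)  # expected: 12
```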
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6973/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6973", "html_url": "https://github.com/huggingface/transformers/pull/6973", "diff_url": "https://github.com/huggingface/transformers/pull/6973.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6973.patch", "merged_at": 1599473543000 }
https://api.github.com/repos/huggingface/transformers/issues/6972
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6972/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6972/comments
https://api.github.com/repos/huggingface/transformers/issues/6972/events
https://github.com/huggingface/transformers/issues/6972
694,291,022
MDU6SXNzdWU2OTQyOTEwMjI=
6,972
The configuration of 3.0.2 and 3.1.0 is not compatible
{ "login": "franciszzj", "id": 16440889, "node_id": "MDQ6VXNlcjE2NDQwODg5", "avatar_url": "https://avatars.githubusercontent.com/u/16440889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franciszzj", "html_url": "https://github.com/franciszzj", "followers_url": "https://api.github.com/users/franciszzj/followers", "following_url": "https://api.github.com/users/franciszzj/following{/other_user}", "gists_url": "https://api.github.com/users/franciszzj/gists{/gist_id}", "starred_url": "https://api.github.com/users/franciszzj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franciszzj/subscriptions", "organizations_url": "https://api.github.com/users/franciszzj/orgs", "repos_url": "https://api.github.com/users/franciszzj/repos", "events_url": "https://api.github.com/users/franciszzj/events{/privacy}", "received_events_url": "https://api.github.com/users/franciszzj/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "issue #6950 is also this problem.", "I see what you mean! From `3.1.0` onwards every configuration has a `config.chunk_size_feed_forward` parameter. So as far as I can see whenever a config is loaded (whether via `.from_pretrained()` or with `BertConfig(....)`, this parameter is part of the config...). Can you give me an example where this would not be the case? ", "@patrickvonplaten Thanks for your replay.\r\n\r\nYes, I load the pretrained model with my own code, which cause this problem.\r\nFor some users who only use part of structures/classes/others in Transfomers, is it necessary to have certain compatibility?", "> I see what you mean! From `3.1.0` onwards every configuration has a `config.chunk_size_feed_forward` parameter. So as far as I can see whenever a config is loaded (whether via `.from_pretrained()` or with `BertConfig(....)`, this parameter is part of the config...). Can you give me an example where this would not be the case?\r\n\r\nI saw the `chunk_size_feed_forward` in BertConfig() after opened this issue.🤪", "@patrickvonplaten \r\nIn the future, I will make more normative to call modules.\r\nThanks for your replay.\r\nI close this issue." ]
1,599
1,599
1,599
NONE
null
The configurations of **3.0.2** and **3.1.0** are not compatible. For example, in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L388, `config.chunk_size_feed_forward` should change to `getattr(config, 'chunk_size_feed_forward', 0)` (configuration objects are not dicts, so `.get()` would not work), because the `chunk_size_feed_forward` attribute does not exist in configurations from the former version. ``` class BertLayer(nn.Module): def __init__(self, config): super().__init__() # self.chunk_size_feed_forward = config.chunk_size_feed_forward self.chunk_size_feed_forward = getattr(config, 'chunk_size_feed_forward', 0) self.seq_len_dim = 1 self.attention = BertAttention(config) self.is_decoder = config.is_decoder self.add_cross_attention = config.add_cross_attention if self.add_cross_attention: assert self.is_decoder, f"{self} should be used as a decoder model if cross attention is added" self.crossattention = BertAttention(config) self.intermediate = BertIntermediate(config) self.output = BertOutput(config) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6972/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6971/comments
https://api.github.com/repos/huggingface/transformers/issues/6971/events
https://github.com/huggingface/transformers/issues/6971
694,228,500
MDU6SXNzdWU2OTQyMjg1MDA=
6,971
distilled bart-large/bart-base
{ "login": "kkissmart", "id": 13355967, "node_id": "MDQ6VXNlcjEzMzU1OTY3", "avatar_url": "https://avatars.githubusercontent.com/u/13355967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kkissmart", "html_url": "https://github.com/kkissmart", "followers_url": "https://api.github.com/users/kkissmart/followers", "following_url": "https://api.github.com/users/kkissmart/following{/other_user}", "gists_url": "https://api.github.com/users/kkissmart/gists{/gist_id}", "starred_url": "https://api.github.com/users/kkissmart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kkissmart/subscriptions", "organizations_url": "https://api.github.com/users/kkissmart/orgs", "repos_url": "https://api.github.com/users/kkissmart/repos", "events_url": "https://api.github.com/users/kkissmart/events{/privacy}", "received_events_url": "https://api.github.com/users/kkissmart/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @kkissmart \r\nYes, distilbart models are available [here](https://huggingface.co/sshleifer/distilbart-cnn-6-6)", "@patil-suraj I meant a distill Bart ( a pretrain model) not a summarization model. Do I misunderstand the name?", "No, we don't have that.", "Ohh, AFAIK there is no pre-trained distilbart like distilbert. \r\nThere are two types of distillation \r\n1. No teacher distillation: which copies alternate layers from the pre-trained model and creates a small student model.\r\n2. With teacher distillation: enforce that the student and teacher produce similar encoder_outputs, logits, and hidden_states\r\n\r\nYou can easily create a student (no teacher) model using the scripts [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#no-teacher-distillation) , you'll just need to use `bart-large` instead of `bart-large-cnn`.\r\n\r\npre-training distilbart is still not included, however you can train the large model on the down-stream task and then do with teacher distillation for a smaller distilled model.", "Also, consider asking such non-bug questions on the forum https://discuss.huggingface.co/ -:)", "Is there a distilBART base model that does not have pretaining weights?", "There is no distilbart-base model.\r\nThere are only distilled models fine-tuned on summarization tasks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@sshleifer @patil-suraj \r\n```\r\nYou can easily create a student (no teacher) model using the scripts here , you'll just need to use bart-large instead of bart-large-cnn.\r\nhttps://github.com/huggingface/transformers/tree/master/examples/seq2seq#no-teacher-distillation or\r\nhttps://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq#no-teacher-distillation ?\r\n```\r\nI want to make a DistilBART model from my japanese BART-large mode. but, No script in this link. Has the script been kept private? I want to see the script.\r\n", "Hi @hisashi-ito \r\n\r\nThe seq2seq distillation scripts are now moved under `examples/research_projects/seq2seq-distillation` directory. You can find them here.\r\nhttps://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation", "Hi @patil-suraj\r\nThank you for teaching !! " ]
1,599
1,615
1,608
NONE
null
@sshleifer Is there a distilled BART model (not CNN/XSUM) available? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6971/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6970/comments
https://api.github.com/repos/huggingface/transformers/issues/6970/events
https://github.com/huggingface/transformers/issues/6970
694,198,148
MDU6SXNzdWU2OTQxOTgxNDg=
6,970
Error installing transformers 3.1.0
{ "login": "bluteaur", "id": 12074813, "node_id": "MDQ6VXNlcjEyMDc0ODEz", "avatar_url": "https://avatars.githubusercontent.com/u/12074813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bluteaur", "html_url": "https://github.com/bluteaur", "followers_url": "https://api.github.com/users/bluteaur/followers", "following_url": "https://api.github.com/users/bluteaur/following{/other_user}", "gists_url": "https://api.github.com/users/bluteaur/gists{/gist_id}", "starred_url": "https://api.github.com/users/bluteaur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bluteaur/subscriptions", "organizations_url": "https://api.github.com/users/bluteaur/orgs", "repos_url": "https://api.github.com/users/bluteaur/repos", "events_url": "https://api.github.com/users/bluteaur/events{/privacy}", "received_events_url": "https://api.github.com/users/bluteaur/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Yeah the latest release of tokenizers is 0.8.1 but the current code still references the rc2:\r\nhttps://github.com/huggingface/transformers/blob/master/setup.py#L113", "I was able to get this working by installing tokenizers 0.8.1. I then\ninstalled transformers 3.1.0 without dependencies using --no-dependencies\nflag (had to install a few other dependencies manually).\n\nYour mileage may vary.\n\nOn Fri, Sep 11, 2020 at 23:21 Fabrizio Milo <[email protected]>\nwrote:\n\n>\n>\n> Yeah the latest release of tokenizers is 0.8.1 but the current code still\n> references the rc2:\n>\n>\n> https://github.com/huggingface/transformers/blob/master/setup.py#L113\n>\n>\n>\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/6970#issuecomment-691394983>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACMFA4WL57SIWJT5U2LIBHDSFLSLZANCNFSM4Q3OYZ6Q>\n> .\n>\n>\n>\n", "I simply installed the transformer 3.0.0 version until they fix this problem. \r\n`python3 -m pip install transformers==3.0.0`", "> I simply installed the transformer 3.0.0 version until they fix this problem.\r\n> `python3 -m pip install transformers==3.0.0`\r\n\r\nI need version 3.1.0 for the latest 0-shot pipeline. But the following fixed the problem that @alexuadler mentioned:\r\n\r\npip3 install tokenizers==\"0.8.1\"\r\npip3 install transformers==\"3.1.0\" --no-dependencies", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): pipeline("zero-shot-classification") The problem arises when using: pip3 install transformers=="3.1.0" The task I am working on is: Just installing the package to use zero-shot-classification ## To reproduce Steps to reproduce the behavior: 1. pip3 install transformers=="3.1.0" Alternatively: 1. pip3 install tokenizers=="0.8.1.rc2" Notes: It seems that the tokenizers version '0.8.1.rc2' is the issue. I can install the package just fine on other systems by changing the version to '0.8.0' in transformers/setup.py. Alternatively, pip3 install transformers=="3.1.0" tokenizers=="0.8.0" seems to be a working method of installation, but tokenizers version "0.8.1.rc2" still has the error. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> error: build failed /tmp/pip-build-env-mey29riz/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2' warnings.warn(tmpl.format(**locals())) cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib error: cargo failed with code: 101 ERROR: Failed building wheel for tokenizers Running setup.py clean for tokenizers Failed to build tokenizers ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The expected behaviour would be to install properly without error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6970/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6970/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6969/comments
https://api.github.com/repos/huggingface/transformers/issues/6969/events
https://github.com/huggingface/transformers/issues/6969
694,191,802
MDU6SXNzdWU2OTQxOTE4MDI=
6,969
Incorrect loss calculation for the last batch in TFTrainer if dataloader_drop_last is False
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "Thanks @chiapas! Indeed if the batch size becomes lower for the last step we divide by a wrong number. I will investigate this to better handle this edge case.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
COLLABORATOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-4.15.0-115-generic-x86_64-with-debian-buster-sid - Python version: 3.6.7 - PyTorch version (GPU?): 1.5.1+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> tensorflow: @jplu ## Description In [training_step()](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L595) in `trainer_tf.py`, we have `scaled_loss = per_example_loss / self.total_train_batch_size`. However, if `dataloader_drop_last=False`, the last batch (before being distributed to the replicas) won't necessarily have `self.total_train_batch_size` examples. If we allow `dataloader_drop_last=False`, we need a way to dynamically calculate the actual number of examples in a global batch, and pass this information in some way to the replicas.
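One way to compute the scaling dynamically (a sketch, not the actual TFTrainer fix; the function name `scale_loss` is hypothetical, and it assumes the last global batch splits evenly across replicas - otherwise the size would need to be communicated between replicas):

```python
# Sketch: scale by the actual global batch size of the current step
# instead of the configured total_train_batch_size.
import tensorflow as tf

def scale_loss(per_example_loss, strategy):
    per_replica_bs = tf.shape(per_example_loss)[0]  # smaller on the last batch
    global_bs = per_replica_bs * strategy.num_replicas_in_sync
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=global_bs
    )
```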
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6969/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6968/comments
https://api.github.com/repos/huggingface/transformers/issues/6968/events
https://github.com/huggingface/transformers/issues/6968
694,181,663
MDU6SXNzdWU2OTQxODE2NjM=
6,968
Potential incorrect loss calculation for TFTokenClassification in TFTrainer
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "Hello @chiapas!\r\n\r\nThanks for having investigating this! Have you checked the way we compute the loss for Token classification? Right here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L161\r\n\r\nWe are already ignoring all the tokens that have -100 as label. So if there is an issue it might come from somewhere else.\r\n", "Hi, @jplu,\r\n\r\nYes, I checked that. But the issue in this bug report is not about ignoring -100 or not. The problem is that, the loss is calculated from the per example losses, then divided by `total_train_batch_size`. However, for token level tasks, it should be divided by `the number of actual tokens (i.e. tokens not ignored) in the global batch (i.e. the batch that having the size total_train_batch_size)`.", "I don't get what you mean by the number of actual token?\r\n\r\nYou mean the number of batches that contain actual tokens, no? In your example, just 1? ", "If you want a support for the argument about the denominator I claimed, we can look at the pytorch implementation in [DistilBert](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L820) about the loss for token classification, you will see\r\n\r\n loss_fct = CrossEntropyLoss() \r\n\r\nand \r\n\r\n loss = loss_fct(active_logits, active_labels)\r\n\r\nAnd from [torch's doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#crossentropyloss), the default reduction is `mean`. So this corresponds to the per example losses (-100 ignored) divided by the number of actual tokens (i.e. -100 ignored again).\r\n\r\n\r\n\r\n", "> \r\n> \r\n> I don't get what you mean by the number of actual token?\r\n> \r\n> You mean the number of batches that contain actual tokens, no? In your example, just 1?\r\n\r\nFor any global batch (that has size `total_train_batch_size`), after distributed to replicas, we compute the per example losses on the smaller batches (where the tokens with label -100 are ignored). Then in the current implementation, this per example losses are divided by `total_train_batch_size`.\r\n\r\nBy the actual tokens, it means `the tokens in a global batch with labels != -100`. And what I said is that, the per example losses should be divided by the number of tokens with label != -100 in that global batch.\r\n\r\nThis number of `actual` tokens will be varied for different global batch however.\r\n\r\nIn the example in the code snippet, they are only `4` actual tokens. By when `n_empty_string = 9`, the current implementation divided the per example losses by ` 10 (1 + 9 dummy sentences)`. \r\n", "So basically what you propose is to return a mean reduction in the `call` function instead of the effective per example loss? Something like to replace:\r\n\r\n```\r\nloss = None if labels is None else self.compute_loss(labels, logits)\r\n```\r\nBy\r\n```\r\nloss = None if labels is None else tf.reduce_mean(self.compute_loss(labels, logits))\r\n```\r\n\r\nAnd then dividing the result per the number of replica:\r\n```\r\nper_example_loss, _ = self.run_model(features, labels, True)\r\nscaled_loss = per_example_loss / self.args.n_replicas\r\n```\r\n\r\nOtherwise, please show me you notebook because I still don't get it.", "@jplu ,\r\n\r\nI will explain a bit more and also show my notebook later. 
But no, I am not suggesting using\r\n\r\n    tf.reduce_mean(self.compute_loss(labels, logits))  # If we do so, the average occurs on the small batch in each replica.\r\n\r\nI mentioned the pytorch version just to show that the loss should be averaged over the tokens, not over the sentences in the batch. However, the average shouldn't be over the small batches received by each replica, it should be over the global batches.", "@jplu \r\n\r\nIf you want to look at the code directly, here is my (kaggle) notebook\r\n\r\n[Masked, My Dear Watson - MLM with TPU](https://www.kaggle.com/yihdarshieh/masked-my-dear-watson-mlm-with-tpu#MLM-loss-calculation). Please check `def mlm_fine_tune_step(batch):` just below that markdown cell, which has\r\n\r\n    loss_mlm = loss_fn(\r\n        labels_at_masked_tokens,\r\n        logits_at_masked_tokens\r\n    )\r\n\r\n    # divide by the number of masked tokens in the global batch, i.e. the whole batch that is distributed to different replicas.\r\n    loss_mlm = loss_mlm / tf.cast(nb_tokens_masked[0], dtype=tf.float32)\r\n\r\nIf you prefer, I can work on this and make a PR. But I think it is better for us to agree that the loss calculation should be corrected. So I will try to explain:\r\n\r\n1. For token level tasks, the loss value is the per example (and here, example = tokens) losses in a batch, divided by the number of tokens in that batch.\r\n2. If we have tokens being ignored for loss calculation, the denominator above becomes the number of tokens not ignored in that batch.\r\n3. By `batch`, I mean the whole set used for 1 parameter update by gradients - which is called a `global batch`.\r\n4. Since we use a distributed strategy, and optionally gradient accumulation, while `training_step()` processes a batch, it is a small batch (i.e. a batch for `only 1 gradient accumulation step` on a `single replica`). However, the denominator in step `1.` or `2.` should be `the number of tokens, not being ignored, in a global batch`, even though the per example losses are still based on the small batch received by a replica.\r\n5. Since gradient accumulation will add the gradients, and the distributed strategy will sync across replicas by summing the gradients before applying them, the above steps will give us the `averaged losses over the tokens (not ignored) in a global batch`.\r\n\r\nHope it makes things a bit clearer.", "OK now with an example and the explanation I got it. Thank you very much!\n\nI prefer you do a PR and then you get the credit of this fix :) And if you can tag me as reviewer I will be able to help you if needed, as there is certainly a nicer way to do it. Maybe with a class field?\n\nThanks again, awaiting your PR ^^", "I am not able to assign a reviewer, since I am not a collaborator yet on the transformers repository.", "Nice! I will review that carefully tomorrow. I have assigned 2 other persons and myself as reviewers.", "Hi, sorry, I created the pull request as a draft. It is not ready to review at all. I will let you know when it is ready.", "No problem! Take the time you need and let me know." ]
1,599
1,600
1,600
COLLABORATOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-4.15.0-115-generic-x86_64-with-debian-buster-sid - Python version: 3.6.7 - PyTorch version (GPU?): 1.5.1+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. --> Trainer: @sgugger tensorflow: @jplu examples/token-classification: @stefan-it Mostly for @jplu, potentially for @stefan-it (because the workaround I have in mind requires a small change in the token classification dataset). ## Information The problem arises when using: * [x] The official example scripts: The involved scripts are: - https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py - https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py However, in order to demonstrate the issue in a clearer way, I use a minimal example which doesn't directly use these two scripts. See the description and code snippet below. The task I am working on is: * [x] Official token classification task in TensorFlow ## Description In [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L595), the loss is calculated from `per_example_loss` divided by `total_train_batch_size`. per_example_loss, _ = self.run_model(features, labels, True) scaled_loss = per_example_loss / self.total_train_batch_size Here `total_train_batch_size` is the size of a whole batch that will be distributed to (potentially) different replicas and optionally consisting of several smaller batches for accumulation steps. For sentence level tasks, where each example (i.e. sentence) corresponds to a label (for example, sentence classification), the above loss calculation is correct. However, for token level tasks like token classification, the above loss seems incorrect to me. For such tasks, the loss should be the per example losses **divided by the number of real tokens involved in the batch**. In [utils_ner](https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py#L75), `convert_examples_to_features` sets labels to `-100` for padding tokens and other special tokens (`[CLS]`, `[SEP]`, etc), which are the places to be ignored for loss calculation. Therefore, the loss calculation should be the per example losses **divided by the number of labels that are not -100 in the \*_batch_\***. By **\*_batch_\***, note that it is not the batch received by a single replica, nor the smaller batch in a single accumulation step. It means `the whole batch that will be distributed to (potentially) different replicas and optionally consisting of several smaller batches for accumulation steps.` More precisely, it means a batch passed to [distributed_training_steps()](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L651) - for the same reason as we divide per example losses by `total_train_batch_size` for sentence level tasks, rather than dividing it by the size of the batch received by a single replica. 
In order to calculate the correct loss values, we have to pass the global information - the number of labels that are not `-100` in a `global batch` to each replica. I don't know a clean way to do it, but for my own personal projects, I inject this extra information into global batches as a constant, and each replica receiving a distributed smaller batch will have this information to calculate the correct scaled losses. (I have a notebook showing how to perform it, if you want to look it, let me know.) ## Code Snippets <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is a minimal example to demonstrate the issue. Here, we have only one real example (sentence) and `n_empty_string` empty sentences. Each empty sentence will give only [CLS], [SEP] and [PAD] tokens that will be ignored for token classification. import os os.environ['TF_DETERMINISTIC_OPS'] = '1' SEED = 42 name = 'distilbert-base-uncased' seq_len = 8 num_labels = 2 n_empty_string = 10 import tensorflow as tf tf.random.set_seed(SEED) strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0") from transformers import TFTrainer, AutoConfig, AutoTokenizer, TFAutoModelForTokenClassification from transformers.training_args_tf import TFTrainingArguments text = [ 'My dog is cute' ] text.extend([''] * n_empty_string) n_examples = len(text) config = AutoConfig.from_pretrained( name, num_labels=num_labels ) tokenizer = AutoTokenizer.from_pretrained(name) model = TFAutoModelForTokenClassification.from_pretrained( name ) training_args = TFTrainingArguments( output_dir='./tmp/', per_device_train_batch_size=n_examples, gradient_accumulation_steps=1, seed=SEED ) # Initialize our Trainer trainer = TFTrainer( model=model, args=training_args, train_dataset=None, eval_dataset=None, compute_metrics=None ) trainer.total_train_batch_size = strategy.num_replicas_in_sync \ * training_args.per_device_train_batch_size \ * training_args.gradient_accumulation_steps trainer.train_loss = tf.keras.metrics.Sum() features = tokenizer.batch_encode_plus(text, max_length=seq_len, padding='max_length', return_tensors='tf') # Set all labels to `1`, except for special tokens: cls/sep/pad, where the labels are `-100`. labels = tf.constant(1, shape=[n_examples, seq_len]) for token_id in [tokenizer.pad_token_id] + tokenizer.all_special_ids: labels = labels * tf.cast(features['input_ids'] != token_id, dtype=tf.int32) + \ -100 * tf.cast(features['input_ids'] == token_id, dtype=tf.int32) # Only the first example `features[0]` has real tokens, the other examples have only [PAD]. print(features['input_ids']) # Only the first example has labels that won't be ignored. print(labels) # Copy from: # https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L601 per_example_loss, _ = trainer.run_model(features, labels, True) scaled_loss = per_example_loss / trainer.total_train_batch_size print(scaled_loss) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. 
--> When `n_empty_string = 0`, we get `scaled_loss` tf.Tensor([0.56047076 0.46507886 0.51456743 0.50131255], shape=(4,), dtype=float32) When `n_empty_string = 9`, we get `scaled_loss` tf.Tensor([0.05604707 0.04650789 0.05145674 0.05013125], shape=(4,), dtype=float32) However, in both cases, we should get the same value, which should be tf.Tensor([0.56047076 0.46507886 0.51456743 0.50131255], shape=(4,), dtype=float32)
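For reference, a minimal sketch of the correction being argued for in this issue (function names are assumptions, not actual `TFTrainer` code): count the labels that are not `-100` once per global batch, before distribution, and divide the per-token losses by that count rather than by `total_train_batch_size`:

```python
import tensorflow as tf

def count_active_labels(labels):
    # number of tokens that actually contribute to the loss (label != -100)
    return tf.reduce_sum(tf.cast(tf.not_equal(labels, -100), tf.int32))

def scale_token_loss(per_example_loss, n_active_global):
    # n_active_global is computed on the full global batch *before* it is
    # split across replicas, then passed along with the batch (an assumption
    # of this sketch)
    return per_example_loss / tf.cast(n_active_global, per_example_loss.dtype)
```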
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6968/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6967/comments
https://api.github.com/repos/huggingface/transformers/issues/6967/events
https://github.com/huggingface/transformers/pull/6967
694,164,386
MDExOlB1bGxSZXF1ZXN0NDgwNTgxMjc4
6,967
hack to extract cross attention for bart decoder
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,601
1,601
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6967/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6967", "html_url": "https://github.com/huggingface/transformers/pull/6967", "diff_url": "https://github.com/huggingface/transformers/pull/6967.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6967.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6966/comments
https://api.github.com/repos/huggingface/transformers/issues/6966/events
https://github.com/huggingface/transformers/issues/6966
694,163,713
MDU6SXNzdWU2OTQxNjM3MTM=
6,966
SPM Tokenizer confusion with fairseq Roberta
{ "login": "hichiaty", "id": 21251528, "node_id": "MDQ6VXNlcjIxMjUxNTI4", "avatar_url": "https://avatars.githubusercontent.com/u/21251528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hichiaty", "html_url": "https://github.com/hichiaty", "followers_url": "https://api.github.com/users/hichiaty/followers", "following_url": "https://api.github.com/users/hichiaty/following{/other_user}", "gists_url": "https://api.github.com/users/hichiaty/gists{/gist_id}", "starred_url": "https://api.github.com/users/hichiaty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hichiaty/subscriptions", "organizations_url": "https://api.github.com/users/hichiaty/orgs", "repos_url": "https://api.github.com/users/hichiaty/repos", "events_url": "https://api.github.com/users/hichiaty/events{/privacy}", "received_events_url": "https://api.github.com/users/hichiaty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hichiaty ,\r\n\r\nin that case I would try the CamemBERT Tokenizer or the one from XLM-RoBERTa (last one contains some hacky workaround to align fairseq vocab with SPM...) :)", "hey @stefan-it thanks for the advice! I am still slightly confused though, I loaded my spm model with CamemBERT but for some reason it still doesn't match the tokens from fairseq's roberta.encode.\r\n\r\nUsing Fairseq's encode I get:\r\n\r\n```\r\nroberta.encode('HELLO')\r\n>tensor([0, 7, 4, 6, 2]) \r\n```\r\nWith CamemBERT I get:\r\n\r\n```\r\ntokenizer('HELLO')\r\n>{'input_ids': [5, 45, 36, 3863, 3595, 3595, 19, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}\r\n```\r\nI'm just trying to figure out how fairseq works at this point because I don't even pass the spm model to it but a dict.txt file", "Solved, created a custom tokenizer (based on camembert) that uses spm to encode as pieces, then fairseq's dict.txt to get ids." ]
1,599
1,599
1,599
NONE
null
Hi, I have pre-trained a custom Roberta Model from scratch with a unigram sentencepiece model (also trained from scratch). I have converted the model from fairseq to huggingface with this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py) successfully. I have tried loading the model in huggingface, which was successful, but the issue lies with the tokenizer: I tried using the Roberta tokenizer, but it screamed at me because it was looking for merges and vocab files. I then loaded my spm model with AlbertTokenizer, but when I try to test it out with a simple fill_mask, the answer tokens are incorrect. How do I correctly use my SPM model with Roberta? I also have dict.txt from fairseq. Any help would be appreciated!
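For readers hitting the same mismatch: the fix the reporter describes in the comments above (encode to pieces with the SPM model, then map pieces to ids through fairseq's `dict.txt`) could look roughly like the sketch below. The file names and the four reserved fairseq specials are assumptions about a default fairseq setup, not values from the original project:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("spm.model")  # assumed path to the trained unigram model

# fairseq's dict.txt lists one "token count" pair per line; in a default
# fairseq Dictionary, ids 0-3 are reserved for <s>, <pad>, </s>, <unk>.
piece_to_id = {}
with open("dict.txt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        piece_to_id[line.split(" ")[0]] = i + 4

def encode(text, bos=0, eos=2, unk=3):
    pieces = sp.EncodeAsPieces(text)
    return [bos] + [piece_to_id.get(p, unk) for p in pieces] + [eos]
```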
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6966/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6965
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6965/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6965/comments
https://api.github.com/repos/huggingface/transformers/issues/6965/events
https://github.com/huggingface/transformers/issues/6965
694,160,579
MDU6SXNzdWU2OTQxNjA1Nzk=
6,965
transformers-cli upload individual files simplification
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Well, I wrote a little script to generate the long commands that are required now - perhaps it'd be useful to someone:\r\n```\r\nperl -le 'for $f (@ARGV) { print qq[yes Y | transformers-cli upload $_/$f --filename $_/$f] for map { \"fsmt-wmt19-$_\" } (\"en-ru\", \"ru-en\", \"de-en\", \"en-de\")}' vocab-src.json vocab-tgt.json tokenizer_config.json\r\n```\r\n\r\ngenerated:\r\n```\r\nyes Y | transformers-cli upload fsmt-wmt19-en-ru/vocab-src.json --filename fsmt-wmt19-en-ru/vocab-src.json\r\nyes Y | transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json --filename fsmt-wmt19-ru-en/vocab-src.json\r\nyes Y | transformers-cli upload fsmt-wmt19-de-en/vocab-src.json --filename fsmt-wmt19-de-en/vocab-src.json\r\nyes Y | transformers-cli upload fsmt-wmt19-en-de/vocab-src.json --filename fsmt-wmt19-en-de/vocab-src.json\r\nyes Y | transformers-cli upload fsmt-wmt19-en-ru/vocab-tgt.json --filename fsmt-wmt19-en-ru/vocab-tgt.json\r\nyes Y | transformers-cli upload fsmt-wmt19-ru-en/vocab-tgt.json --filename fsmt-wmt19-ru-en/vocab-tgt.json\r\nyes Y | transformers-cli upload fsmt-wmt19-de-en/vocab-tgt.json --filename fsmt-wmt19-de-en/vocab-tgt.json\r\nyes Y | transformers-cli upload fsmt-wmt19-en-de/vocab-tgt.json --filename fsmt-wmt19-en-de/vocab-tgt.json\r\nyes Y | transformers-cli upload fsmt-wmt19-en-ru/tokenizer_config.json --filename fsmt-wmt19-en-ru/tokenizer_config.json\r\nyes Y | transformers-cli upload fsmt-wmt19-ru-en/tokenizer_config.json --filename fsmt-wmt19-ru-en/tokenizer_config.json\r\nyes Y | transformers-cli upload fsmt-wmt19-de-en/tokenizer_config.json --filename fsmt-wmt19-de-en/tokenizer_config.json\r\nyes Y | transformers-cli upload fsmt-wmt19-en-de/tokenizer_config.json --filename fsmt-wmt19-en-de/tokenizer_config.json\r\n```\r\n\r\nAs I have an easy workaround that works well, unless others feel the suggested improvements in the OP would be useful, I'd be happy to close this ticket.", "pinging @julien-c ", "@julien-c, Should this be closed or fixed? Thanks.", "Closing as we are migrating to a new system anyways (more info soon)" ]
1,599
1,603
1,603
CONTRIBUTOR
null
Currently it's not possible to upload an individual file in a simple: ``` transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json ``` Getting error: ``` Filename invalid, every file must be nested inside a "model_name" folder. ``` but, instead, have to do: ``` transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json --filename fsmt-wmt19-ru-en/vocab-src.json ``` But this is silly as the exact same input is repeated twice and I'm more likely to make an error while copy-n-pasting when providing an explicit destination filename. Why not look at the relative path and use that? And only give that error when there is no "model_name" folder in the args. i.e. definitely give error on: ``` transformers-cli upload vocab.json ``` I understand the ` --filename` is useful for renaming, but there is no renaming here. Additionally, if the first suggestion is acceptable, would it be OK to support multiple filenames? I want to be able to update several files in one go (just the config files, but not the whole folder, since the model is huge) ``` transformers-cli upload fsmt-wmt19-ru-en/vocab-*.json ``` gives error: ``` Transformers CLI tool: error: unrecognized arguments: fsmt-wmt19-ru-en/vocab-ru.json fsmt-wmt19-ru-en/vocab-src.json fsmt-wmt19-ru-en/vocab-tgt.json ``` Thanks.
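The same command list can also be generated without perl; a plain Python sketch equivalent to the workaround posted in the comments of this issue, using the paths from this issue:

```python
models = ["en-ru", "ru-en", "de-en", "en-de"]
files = ["vocab-src.json", "vocab-tgt.json", "tokenizer_config.json"]

for m in models:
    for f in files:
        path = f"fsmt-wmt19-{m}/{f}"
        # --filename repeats the path until the CLI can infer it automatically
        print(f"yes Y | transformers-cli upload {path} --filename {path}")
```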
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6965/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6964/comments
https://api.github.com/repos/huggingface/transformers/issues/6964/events
https://github.com/huggingface/transformers/pull/6964
694,144,913
MDExOlB1bGxSZXF1ZXN0NDgwNTY1MzE2
6,964
Create README.md model card
{ "login": "rbownes", "id": 58034524, "node_id": "MDQ6VXNlcjU4MDM0NTI0", "avatar_url": "https://avatars.githubusercontent.com/u/58034524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbownes", "html_url": "https://github.com/rbownes", "followers_url": "https://api.github.com/users/rbownes/followers", "following_url": "https://api.github.com/users/rbownes/following{/other_user}", "gists_url": "https://api.github.com/users/rbownes/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbownes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbownes/subscriptions", "organizations_url": "https://api.github.com/users/rbownes/orgs", "repos_url": "https://api.github.com/users/rbownes/repos", "events_url": "https://api.github.com/users/rbownes/events{/privacy}", "received_events_url": "https://api.github.com/users/rbownes/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=h1) Report\n> Merging [#6964](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d31031f603043281d4fbac6cbdcfb6497fd500ab?el=desc) will **decrease** coverage by `4.23%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6964/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6964 +/- ##\n==========================================\n- Coverage 80.03% 75.80% -4.24% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24108 22833 -1275 \n- Misses 6012 7287 +1275 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.30% <0.00%> (-55.16%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-46.52%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.00% <0.00%> (-20.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.57% <0.00%> (-14.29%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=footer). Last update [d31031f...4ff71ec](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is great! Added some custom prompts for the inference widget. Thanks for sharing!" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number} model card for https://huggingface.co/rjbownes/Magic-The-Generating?text=Once+upon+a+time%2C
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6964/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6964", "html_url": "https://github.com/huggingface/transformers/pull/6964", "diff_url": "https://github.com/huggingface/transformers/pull/6964.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6964.patch", "merged_at": 1599472901000 }
https://api.github.com/repos/huggingface/transformers/issues/6963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6963/comments
https://api.github.com/repos/huggingface/transformers/issues/6963/events
https://github.com/huggingface/transformers/issues/6963
694,131,377
MDU6SXNzdWU2OTQxMzEzNzc=
6,963
Longformer config - vocabulary size
{ "login": "blawok", "id": 41793223, "node_id": "MDQ6VXNlcjQxNzkzMjIz", "avatar_url": "https://avatars.githubusercontent.com/u/41793223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blawok", "html_url": "https://github.com/blawok", "followers_url": "https://api.github.com/users/blawok/followers", "following_url": "https://api.github.com/users/blawok/following{/other_user}", "gists_url": "https://api.github.com/users/blawok/gists{/gist_id}", "starred_url": "https://api.github.com/users/blawok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blawok/subscriptions", "organizations_url": "https://api.github.com/users/blawok/orgs", "repos_url": "https://api.github.com/users/blawok/repos", "events_url": "https://api.github.com/users/blawok/events{/privacy}", "received_events_url": "https://api.github.com/users/blawok/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @blawok, \r\n\r\nwhich model are you referring to exactly? This longformer model: https://s3.amazonaws.com/models.huggingface.co/bert/allenai/longformer-base-4096/config.json has vocab_size set to 50265 ", "Thanks for the answer @patrickvonplaten :) \r\n\r\nUsing the code from the docs (https://huggingface.co/transformers/model_doc/longformer.html#longformerconfig):\r\n\r\n```python\r\nfrom transformers import LongformerConfig, LongformerModel\r\n# Initializing a Longformer configuration\r\nconfiguration = LongformerConfig()\r\n# Initializing a model from the configuration\r\nmodel = LongformerModel(configuration)\r\n# Accessing the model configuration\r\nconfiguration = model.config\r\nprint(configuration)\r\n```\r\n\r\nI am getting this output:\r\n\r\n```\r\nLongformerConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attention_window\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"bos_token_id\": 0,\r\n \"eos_token_id\": 2,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"longformer\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 1,\r\n \"sep_token_id\": 2,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n```\r\n\r\nIt resulted in error while trying to train:\r\n```python\r\nconfig = LongformerConfig()\r\nmodel = TFLongformerModel.from_pretrained('allenai/longformer-base-4096', config=config) \r\n```\r\nLongformerTokenizer correctly used the 50265 vocab, but the model expected it to be 30522. 
However, I am not getting this error when I am not specifying the configuration.\r\n\r\nExample to reproduce:\r\n\r\n```python\r\nfrom transformers import LongformerTokenizer, TFLongformerModel, LongformerConfig\r\n\r\ntokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')\r\ninput = tokenizer('Hello world')\r\n\r\nconfig = LongformerConfig()\r\nmodel = TFLongformerModel.from_pretrained('allenai/longformer-base-4096',\r\n config=config) \r\n```\r\n\r\nError I am getting with the code above:\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-4-80be6efb2343> in <module>()\r\n 6 config = LongformerConfig()\r\n 7 model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096',\r\n----> 8 config=config) \r\n 9 print(model(input))\r\n\r\n2 frames\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch)\r\n 784 symbolic_weights[i])) +\r\n 785 ', but the saved weight has shape ' +\r\n--> 786 str(weight_values[i].shape) + '.')\r\n 787 \r\n 788 else:\r\n\r\nValueError: Layer #0 (named \"longformer\"), weight <tf.Variable 'tf_longformer_model/longformer/embeddings/word_embeddings/weight:0' shape=(30522, 768) dtype=float32, numpy=\r\narray([[-0.01930627, -0.01715518, -0.00557071, ..., -0.01202598,\r\n 0.01007012, -0.00184635],\r\n [ 0.00633493, 0.00123013, -0.0134872 , ..., -0.01304915,\r\n -0.00157391, 0.00082429],\r\n [-0.01581489, 0.01005882, -0.01242067, ..., 0.00555116,\r\n 0.02116241, 0.03123646],\r\n ...,\r\n [-0.01625618, 0.01438301, 0.03368756, ..., -0.02742909,\r\n 0.00300512, 0.00728624],\r\n [-0.0078434 , -0.01735217, 0.00178284, ..., -0.01191203,\r\n -0.01451435, 0.03031485],\r\n [-0.00814894, 0.01228636, 0.00573935, ..., 0.01143655,\r\n -0.00131886, -0.03910364]], dtype=float32)> has shape (30522, 768), but the saved weight has shape (50265, 768).\r\n```", "Hi @blawok, you're initializing a configuration using the default parameters, which may not be the same as the checkpoint's parameters you're initializing (it's not the case here).\r\n\r\nYou should initialize the configuration from the checkpoint here too:\r\n\r\n```py\r\nconfig = LongformerConfig.from_pretrained('allenai/longformer-base-4096')\r\nmodel = TFLongformerModel.from_pretrained('allenai/longformer-base-4096', config=config)\r\n```", "Great, thank you for the explanation @LysandreJik :) \r\n\r\nI am closing this issue." ]
1,599
1,599
1,599
NONE
null
## Environment info - `transformers` version: 3.1.0 ### Who can help Longformer/Reformer: @patrickvonplaten ## Information Why does the LongformerConfig's vocab_size default to 30522 while the Longformer has an embeddings matrix with first dimension 50265? It reuses the same config as RoBERTa, whose embeddings also have shape (50265, 768) due to the BPE tokenization (and which also defaults to 30522). ```python LongformerConfig { "attention_probs_dropout_prob": 0.1, "attention_window": 512, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "longformer", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "sep_token_id": 2, "type_vocab_size": 2, "vocab_size": 30522 } ```
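A short sketch that makes the mismatch visible, mirroring the resolution given in the comments of this issue (load the config from the checkpoint instead of relying on class defaults):

```python
from transformers import LongformerConfig

default_cfg = LongformerConfig()  # class defaults, vocab_size == 30522
checkpoint_cfg = LongformerConfig.from_pretrained("allenai/longformer-base-4096")
print(default_cfg.vocab_size, checkpoint_cfg.vocab_size)  # 30522 vs 50265
```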
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6963/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6962/comments
https://api.github.com/repos/huggingface/transformers/issues/6962/events
https://github.com/huggingface/transformers/issues/6962
694,111,029
MDU6SXNzdWU2OTQxMTEwMjk=
6,962
Tokenizers became slow compared to 2.8.0
{ "login": "LSinev", "id": 12072891, "node_id": "MDQ6VXNlcjEyMDcyODkx", "avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LSinev", "html_url": "https://github.com/LSinev", "followers_url": "https://api.github.com/users/LSinev/followers", "following_url": "https://api.github.com/users/LSinev/following{/other_user}", "gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}", "starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LSinev/subscriptions", "organizations_url": "https://api.github.com/users/LSinev/orgs", "repos_url": "https://api.github.com/users/LSinev/repos", "events_url": "https://api.github.com/users/LSinev/events{/privacy}", "received_events_url": "https://api.github.com/users/LSinev/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Working with special tokens attributes also became slower:\r\n```python\r\nimport timeit\r\nimport numpy as np\r\nfrom transformers import __version__ as trans_version\r\nfrom transformers import (\r\n CTRLTokenizer,\r\n GPT2Tokenizer,\r\n RobertaTokenizer,\r\n XLMTokenizer,\r\n XLNetTokenizer\r\n)\r\n\r\nTok_CLASSES = {\r\n \"gpt2\": GPT2Tokenizer,\r\n \"ctrl\": CTRLTokenizer,\r\n \"roberta-large\": RobertaTokenizer,\r\n \"xlnet-large-cased\": XLNetTokenizer,\r\n \"xlm-mlm-100-1280\": XLMTokenizer,\r\n}\r\n\r\nprint(trans_version)\r\nrounding = 4\r\nfor k, v in Tok_CLASSES.items():\r\n tokenizer_class = v\r\n tokenizer = tokenizer_class.from_pretrained(k, verbose=False)\r\n\r\n print(tokenizer.__class__)\r\n # print(tokenizer.encode(text))\r\n if tokenizer.eos_token is not None:\r\n r_eos_token = timeit.repeat(stmt=\"tokenizer.eos_token\", repeat=100, number=100000, globals=globals())\r\n r_eos_token_id = timeit.repeat(stmt=\"tokenizer.eos_token_id\", repeat=100, number=100000, globals=globals())\r\n print('Get eos_token, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_eos_token), rounding)) + '±' + str(np.round(3 * np.std(r_eos_token), rounding)))\r\n print('Get eos_token_id, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_eos_token_id), rounding)) + '±' + str(\r\n np.round(3 * np.std(r_eos_token_id), rounding)))\r\n if tokenizer.bos_token is not None:\r\n r_bos_token = timeit.repeat(stmt=\"tokenizer.bos_token\", repeat=100, number=100000, globals=globals())\r\n r_bos_token_id = timeit.repeat(stmt=\"tokenizer.bos_token_id\", repeat=100, number=100000, globals=globals())\r\n print('Get bos_token, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_bos_token), rounding)) + '±' + str(np.round(3 * np.std(r_bos_token), rounding)))\r\n print('Get bos_token_id, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_bos_token_id), rounding)) + '±' + str(\r\n np.round(3 * np.std(r_bos_token_id), rounding)))\r\n if tokenizer.unk_token is not None:\r\n r_unk_token = timeit.repeat(stmt=\"tokenizer.unk_token\", repeat=100, number=100000, globals=globals())\r\n r_unk_token_id = timeit.repeat(stmt=\"tokenizer.unk_token_id\", repeat=100, number=100000, globals=globals())\r\n print('Get unk_token, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_unk_token), rounding)) + '±' + str(np.round(3 * np.std(r_unk_token), rounding)))\r\n print('Get unk_token_id, time taken (mean ± 3std):',\r\n str(np.round(np.mean(r_unk_token_id), rounding)) + '±' + str(\r\n np.round(3 * np.std(r_unk_token_id), rounding)))\r\n```\r\ngives for 2.8.0\r\n```\r\n2.8.0\r\n<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.0104±0.0003\r\nGet eos_token_id, time taken (mean ± 3std): 0.0715±0.0059\r\nGet bos_token, time taken (mean ± 3std): 0.0101±0.0021\r\nGet bos_token_id, time taken (mean ± 3std): 0.0687±0.0155\r\nGet unk_token, time taken (mean ± 3std): 0.0101±0.0001\r\nGet unk_token_id, time taken (mean ± 3std): 0.0632±0.0004\r\n<class 'transformers.tokenization_ctrl.CTRLTokenizer'>\r\nGet unk_token, time taken (mean ± 3std): 0.0098±0.0002\r\nGet unk_token_id, time taken (mean ± 3std): 0.0639±0.0008\r\n<class 'transformers.tokenization_roberta.RobertaTokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.0099±0.0003\r\nGet eos_token_id, time taken (mean ± 3std): 0.0644±0.0017\r\nGet bos_token, time taken (mean ± 3std): 0.0093±0.0001\r\nGet bos_token_id, time taken (mean ± 3std): 0.064±0.0003\r\nGet unk_token, time taken (mean ± 3std): 0.0094±0.0002\r\nGet 
unk_token_id, time taken (mean ± 3std): 0.0727±0.0039\r\n<class 'transformers.tokenization_xlnet.XLNetTokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.01±0.0002\r\nGet eos_token_id, time taken (mean ± 3std): 0.0848±0.0021\r\nGet bos_token, time taken (mean ± 3std): 0.0104±0.0003\r\nGet bos_token_id, time taken (mean ± 3std): 0.0847±0.0072\r\nGet unk_token, time taken (mean ± 3std): 0.0097±0.0001\r\nGet unk_token_id, time taken (mean ± 3std): 0.084±0.0007\r\n<class 'transformers.tokenization_xlm.XLMTokenizer'>\r\nGet bos_token, time taken (mean ± 3std): 0.01±0.0001\r\nGet bos_token_id, time taken (mean ± 3std): 0.0646±0.001\r\nGet unk_token, time taken (mean ± 3std): 0.0098±0.0001\r\nGet unk_token_id, time taken (mean ± 3std): 0.0639±0.0004\r\n```\r\nand for 3.1.0 (2x...4x slower):\r\n```\r\n3.1.0\r\n<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.0422±0.0004\r\nGet eos_token_id, time taken (mean ± 3std): 0.1465±0.0015\r\nGet bos_token, time taken (mean ± 3std): 0.0418±0.0005\r\nGet bos_token_id, time taken (mean ± 3std): 0.1453±0.0009\r\nGet unk_token, time taken (mean ± 3std): 0.0417±0.0002\r\nGet unk_token_id, time taken (mean ± 3std): 0.1519±0.0186\r\n<class 'transformers.tokenization_ctrl.CTRLTokenizer'>\r\nGet unk_token, time taken (mean ± 3std): 0.0163±0.0003\r\nGet unk_token_id, time taken (mean ± 3std): 0.0821±0.0006\r\n<class 'transformers.tokenization_roberta.RobertaTokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.0419±0.0029\r\nGet eos_token_id, time taken (mean ± 3std): 0.1462±0.004\r\nGet bos_token, time taken (mean ± 3std): 0.042±0.0004\r\nGet bos_token_id, time taken (mean ± 3std): 0.1544±0.0311\r\nGet unk_token, time taken (mean ± 3std): 0.0449±0.0016\r\nGet unk_token_id, time taken (mean ± 3std): 0.1511±0.006\r\n<class 'transformers.tokenization_xlnet.XLNetTokenizer'>\r\nGet eos_token, time taken (mean ± 3std): 0.0165±0.0004\r\nGet eos_token_id, time taken (mean ± 3std): 0.0918±0.0043\r\nGet bos_token, time taken (mean ± 3std): 0.0164±0.0003\r\nGet bos_token_id, time taken (mean ± 3std): 0.0931±0.0034\r\nGet unk_token, time taken (mean ± 3std): 0.0166±0.0002\r\nGet unk_token_id, time taken (mean ± 3std): 0.0933±0.0004\r\n<class 'transformers.tokenization_xlm.XLMTokenizer'>\r\nGet bos_token, time taken (mean ± 3std): 0.0162±0.0003\r\nGet bos_token_id, time taken (mean ± 3std): 0.0801±0.0008\r\nGet unk_token, time taken (mean ± 3std): 0.016±0.0003\r\nGet unk_token_id, time taken (mean ± 3std): 0.0827±0.0002\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,606
1,606
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Ubuntu 20.04 - Python version: 3.7.9 - PyTorch version (GPU?): No - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help tokenizers: @mfuntowicz ## To reproduce Steps to reproduce the behavior: ```python import timeit import numpy as np from transformers import __version__ as trans_version from transformers import ( CTRLTokenizer, GPT2Tokenizer, RobertaTokenizer, XLMTokenizer, XLNetTokenizer ) Tok_CLASSES = { "gpt2": GPT2Tokenizer, "ctrl": CTRLTokenizer, "roberta-large": RobertaTokenizer, "xlnet-large-cased": XLNetTokenizer, "xlm-mlm-100-1280": XLMTokenizer, } print(trans_version) for k, v in Tok_CLASSES.items(): tokenizer_class = v tokenizer = tokenizer_class.from_pretrained(k) text = """<|endoftext|>🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch.</s> <eos>""" print(tokenizer.__class__) r = timeit.repeat(stmt="tokenizer.encode(text)", repeat=100, number=500, globals=globals()) rounding = 4 print('Time taken (mean ± 3std):', str(np.round(np.mean(r), rounding)) + '±' + str(np.round(3 * np.std(r), rounding))) ``` In 3.1.0 output is: ``` 3.1.0 <class 'transformers.tokenization_gpt2.GPT2Tokenizer'> Time taken (mean ± 3std): 0.1808±0.0115 <class 'transformers.tokenization_ctrl.CTRLTokenizer'> Time taken (mean ± 3std): 0.0678±0.0015 <class 'transformers.tokenization_roberta.RobertaTokenizer'> Time taken (mean ± 3std): 0.2051±0.0024 <class 'transformers.tokenization_xlnet.XLNetTokenizer'> Time taken (mean ± 3std): 0.1567±0.002 <class 'transformers.tokenization_xlm.XLMTokenizer'> Time taken (mean ± 3std): 0.3601±0.0248 ``` ## Expected behavior In 2.8.0 output is (and I hope, even these times can be improved without using Fast versions). ``` <class 'transformers.tokenization_gpt2.GPT2Tokenizer'> Time taken (mean ± 3std): 0.1808±0.0115 <class 'transformers.tokenization_ctrl.CTRLTokenizer'> Time taken (mean ± 3std): 0.0678±0.0015 <class 'transformers.tokenization_roberta.RobertaTokenizer'> Time taken (mean ± 3std): 0.2051±0.0024 <class 'transformers.tokenization_xlnet.XLNetTokenizer'> Time taken (mean ± 3std): 0.1567±0.002 <class 'transformers.tokenization_xlm.XLMTokenizer'> Time taken (mean ± 3std): 0.3601±0.0248 ``` ## TLDR With GPT2 and CTRL 3.1.0 tokenizer.encode (with default options) takes ~1.3x time compared to 2.8.0 code. I think this can be improved|solved if code like https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L254-L256 were not executed on every tokenization call but, for example, only once when tokens are added (and the result stored in self.all_special_tokens_extended). Maybe this is not the only place with unnecessary calculations per tokenization. For example `self.encoder.get(self.unk_token)` from https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L244 could be stored once in some property, updated when unk_token changes, to avoid recomputing it every time a token-to-id conversion happens. Same storage idea for https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L266
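A minimal sketch of the caching idea proposed in the TLDR; the class and attribute names here are hypothetical, not the library's actual internals. Derived values are recomputed only when the token changes, not on every encode call:

```python
class CachedUnkLookup:
    """Toy stand-in for a tokenizer that caches the unk token's id."""

    def __init__(self, encoder, unk_token):
        self.encoder = encoder                  # token -> id mapping
        self._unk_token = unk_token
        self._unk_id = encoder.get(unk_token)   # computed once, not per call

    @property
    def unk_token(self):
        return self._unk_token

    @unk_token.setter
    def unk_token(self, value):
        self._unk_token = value
        self._unk_id = self.encoder.get(value)  # refresh only on change

    def token_to_id(self, token):
        return self.encoder.get(token, self._unk_id)
```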
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6962/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6961/comments
https://api.github.com/repos/huggingface/transformers/issues/6961/events
https://github.com/huggingface/transformers/pull/6961
694,015,505
MDExOlB1bGxSZXF1ZXN0NDgwNDY2MjYz
6,961
adding TRANSFORMERS_VERBOSITY env var
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=h1) Report\n> Merging [#6961](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **increase** coverage by `0.65%`.\n> The diff coverage is `96.42%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6961/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6961 +/- ##\n==========================================\n+ Coverage 80.00% 80.65% +0.65% \n==========================================\n Files 161 161 \n Lines 30120 30147 +27 \n==========================================\n+ Hits 24097 24315 +218 \n+ Misses 6023 5832 -191 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `69.17% <94.11%> (+3.28%)` | :arrow_up: |\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `85.89% <100.00%> (+10.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.80%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=footer). Last update [56742e9...3c194b7](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "All the requested changes have been done.", "Great, thanks @stas00 " ]
1,599
1,599
1,599
CONTRIBUTOR
null
Per discussion at https://github.com/huggingface/transformers/pull/6816#issuecomment-686347433, this PR:
- adds `TRANSFORMERS_VERBOSITY` env var
- docs
- tests
- new test utils

I'm open to a different name if that one doesn't work. Thank you.
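A quick sketch of the intended usage (the accepted values are an assumption here - they are expected to mirror the `transformers.logging` level names):

```python
import os

# Must be set before transformers is imported, since the verbosity
# is read when the logging module is first configured (assumed
# behavior; see the PR diff for the authoritative details).
os.environ["TRANSFORMERS_VERBOSITY"] = "error"  # e.g. debug / info / warning / error / critical

import transformers  # noqa: E402
```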
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6961/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6961", "html_url": "https://github.com/huggingface/transformers/pull/6961", "diff_url": "https://github.com/huggingface/transformers/pull/6961.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6961.patch", "merged_at": 1599638881000 }
https://api.github.com/repos/huggingface/transformers/issues/6960
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6960/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6960/comments
https://api.github.com/repos/huggingface/transformers/issues/6960/events
https://github.com/huggingface/transformers/pull/6960
693,888,850
MDExOlB1bGxSZXF1ZXN0NDgwMzQ5Nzcz
6,960
create model card for astroGPT
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=h1) Report\n> Merging [#6960](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.50%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6960/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6960 +/- ##\n==========================================\n+ Coverage 77.81% 79.31% +1.50% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 22885 +433 \n+ Misses 6401 5968 -433 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.54% <0.00%> (-41.13%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-19.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=footer). Last update [4ebb52a...402e26a](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "really cool model, thanks for sharing:\r\n\r\n<img width=\"736\" alt=\"Screenshot 2020-09-05 at 18 49 27\" src=\"https://user-images.githubusercontent.com/326577/92309755-4883d480-ef76-11ea-90e5-27a97e7b2746.png\">\r\n", "ヘ( ^o^)ノ\(^_^ ) thanks @julien-c " ]
1,599
1,599
1,599
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6960/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6960", "html_url": "https://github.com/huggingface/transformers/pull/6960", "diff_url": "https://github.com/huggingface/transformers/pull/6960.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6960.patch", "merged_at": 1599324620000 }
https://api.github.com/repos/huggingface/transformers/issues/6959
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6959/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6959/comments
https://api.github.com/repos/huggingface/transformers/issues/6959/events
https://github.com/huggingface/transformers/pull/6959
693,851,189
MDExOlB1bGxSZXF1ZXN0NDgwMzE0MTcx
6,959
typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=h1) Report\n> Merging [#6959](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6959/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6959 +/- ##\n==========================================\n- Coverage 80.00% 79.98% -0.02% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24097 24092 -5 \n- Misses 6023 6028 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <ø> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=footer). Last update [56742e9...12a1792](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
there is no var `decoder_input_ids`, but there is `input_ids` for the decoder :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6959/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6959", "html_url": "https://github.com/huggingface/transformers/pull/6959", "diff_url": "https://github.com/huggingface/transformers/pull/6959.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6959.patch", "merged_at": 1599470185000 }
https://api.github.com/repos/huggingface/transformers/issues/6958
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6958/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6958/comments
https://api.github.com/repos/huggingface/transformers/issues/6958/events
https://github.com/huggingface/transformers/pull/6958
693,836,906
MDExOlB1bGxSZXF1ZXN0NDgwMzAwODEz
6,958
[testing] add dependency: parametrize
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=h1) Report\n> Merging [#6958](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **increase** coverage by `0.30%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6958/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6958 +/- ##\n==========================================\n+ Coverage 80.00% 80.30% +0.30% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 24097 24189 +92 \n+ Misses 6023 5931 -92 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=footer). Last update [56742e9...6a043c8](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
unittest doesn't support pytest's super-handy `@pytest.mark.parametrize`. I researched, and there are many proposed workarounds, most of them tedious at best. If we include https://pypi.org/project/parameterized/ in the dev testing dependencies, it will provide a very easy way to write parameterized tests. It provides the same functionality as pytest's fixture, plus quite a few other ways. Example:

```python
import math

from nose.tools import assert_equal
from parameterized import parameterized


@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert_equal(math.pow(base, exponent), expected)
```

(with an extra `self` arg if inside a test class)

As a reminder, the pytest style is slightly different:

```python
import pytest


@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```

More examples here: https://pypi.org/project/parameterized

May I suggest that it will make it much easier to write some types of tests? And I have an immediate use for it in the current PR I'm working on, so it's not just a nice-to-have request. Thank you.
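For completeness, inside a `unittest.TestCase` subclass the library uses `parameterized.expand` (a sketch; the test names and values below are illustrative):

```python
import math
import unittest

from parameterized import parameterized


class TestPow(unittest.TestCase):
    # the first string in each tuple is appended to the generated test name
    @parameterized.expand([
        ("two_to_two", 2, 2, 4),
        ("one_to_nine", 1, 9, 1),
    ])
    def test_pow(self, name, base, exponent, expected):
        self.assertEqual(math.pow(base, exponent), expected)
```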
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6958/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6958", "html_url": "https://github.com/huggingface/transformers/pull/6958", "diff_url": "https://github.com/huggingface/transformers/pull/6958.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6958.patch", "merged_at": 1599472219000 }
https://api.github.com/repos/huggingface/transformers/issues/6957
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6957/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6957/comments
https://api.github.com/repos/huggingface/transformers/issues/6957/events
https://github.com/huggingface/transformers/issues/6957
693,665,724
MDU6SXNzdWU2OTM2NjU3MjQ=
6,957
PRETRAINED_INIT_CONFIGURATION for local model path
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I found a sort of band-aid, I added this code in the model's tokenization code, right after init of `PRETRAINED_INIT_CONFIGURATION, PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES`:\r\n\r\n```\r\nLOCALIZE=1\r\nif LOCALIZE:\r\n old, new = (\"stas/\", \"/mnt/nvme1/code/huggingface/transformers-fair-wmt/data/\")\r\n\r\n def localize(buf): return buf.replace(old, new)\r\n\r\n for d in [PRETRAINED_INIT_CONFIGURATION, PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES]:\r\n for k, v in d.copy().items():\r\n d[localize(k)] = v\r\n\r\n for d in [PRETRAINED_VOCAB_FILES_MAP]:\r\n for tk, tv in d.items():\r\n for k, v in tv.copy().items():\r\n tv[localize(k)] = v\r\n```\r\n\r\nIt's still not great, since now I can't commit this file to repo as this leads to a problem of committing other changes in this file. But at least I can move forward.", "I dug dipper and found how to solve it - needed to create `model_dir/tokenizer_config.json` and put the special init params there. And do several more tweaks to make the vocab files to not include the language names in the filename, but use the generic 'vocab-src.txt', 'vocab-tgt.txt'." ]
1,599
1,599
1,599
CONTRIBUTOR
null
Tokenizers have a special dict, `PRETRAINED_INIT_CONFIGURATION`, which tells `tokenization_utils_base` which extra args to pass to the tokenizer's `__init__` - except it doesn't work for a local model path, since the keys are the online S3 model names. I have:

```python
PRETRAINED_INIT_CONFIGURATION = {
    "stas/fsmt-wmt19-ru-en": {
        "langs": ["ru", "en"],
    },
    "stas/fsmt-wmt19-en-ru": {
        "langs": ["en", "ru"],
    },
    "stas/fsmt-wmt19-de-en": {
        "langs": ["de", "en"],
    },
    "stas/fsmt-wmt19-en-de": {
        "langs": ["en", "de"],
    },
}
```

So in my own code I use:

```python
if LOCAL:
    path = "/code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-ru-en/"
    mname = path
    mname_tok = f"stas/fsmt-wmt19-{src}-{tgt}"
    tokenizer = FSMTTokenizer.from_pretrained(mname_tok)
    model = FSMTForConditionalGeneration.from_pretrained(mname)
else:
    # s3 uploaded model
    mname = f"stas/fsmt-wmt19-{src}-{tgt}"
    tokenizer = FSMTTokenizer.from_pretrained(mname)
    model = FSMTForConditionalGeneration.from_pretrained(mname)
```

So `mname_tok` overrides the name used for the dict lookup above, since the lookup fails for a local path. This, however, doesn't work in tools that aren't under my control - `run_eval.py` in seq2seq, for example.

**edit**: more to it - it doesn't resolve the `PRETRAINED_VOCAB_FILES_MAP` entries either, for the same reason: the lookup fails for a local path. I would need to change them all.

Any suggestions on how to fix this problem?
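For reference, a minimal sketch of the `tokenizer_config.json` workaround that the comments above describe as the eventual resolution; the directory and init args below are illustrative:

```python
import json
from pathlib import Path

# Writing the tokenizer's extra __init__ args into tokenizer_config.json
# inside the local model dir lets from_pretrained(local_path) pick them up
# without needing a PRETRAINED_INIT_CONFIGURATION entry for that path.
model_dir = Path("/code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-ru-en")
init_kwargs = {"langs": ["ru", "en"]}
(model_dir / "tokenizer_config.json").write_text(json.dumps(init_kwargs))
```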
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6957/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6956
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6956/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6956/comments
https://api.github.com/repos/huggingface/transformers/issues/6956/events
https://github.com/huggingface/transformers/pull/6956
693,592,422
MDExOlB1bGxSZXF1ZXN0NDgwMDY4MTQ2
6,956
[doc] remove the implied defaults to :obj:`None`, s/True/ :obj:`True/, etc.
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=h1) Report\n> Merging [#6956](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eff274d629c95fca459969b530b4ad0da5563918?el=desc) will **decrease** coverage by `6.41%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6956/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6956 +/- ##\n==========================================\n- Coverage 80.02% 73.61% -6.42% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24105 22172 -1933 \n- Misses 6015 7948 +1933 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <ø> (ø)` | |\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <ø> (-80.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <ø> (-78.38%)` | :arrow_down: |\n| [src/transformers/configuration\\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JldHJpYmVydC5weQ==) | `34.78% <ø> (ø)` | |\n| ... 
and [89 more](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=footer). Last update [eff274d...e93aa69](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks a lot!" ]
1,599
1,599
1,599
CONTRIBUTOR
null
as discussed at https://github.com/huggingface/transformers/pull/6932#issuecomment-687362952

**edit**: I also threw in :obj:`True`/:obj:`False` - anything else?

```bash
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to :obj:.None.||' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to True|, defaults to :obj:`True`|' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to False|, defaults to :obj:`False`|' {} \;
```

@sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6956/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6956", "html_url": "https://github.com/huggingface/transformers/pull/6956", "diff_url": "https://github.com/huggingface/transformers/pull/6956.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6956.patch", "merged_at": 1599258145000 }
https://api.github.com/repos/huggingface/transformers/issues/6955
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6955/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6955/comments
https://api.github.com/repos/huggingface/transformers/issues/6955/events
https://github.com/huggingface/transformers/pull/6955
693,525,269
MDExOlB1bGxSZXF1ZXN0NDgwMDA1NzE2
6,955
[WIP] Language modeling example for TF Trainer
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=h1) Report\n> Merging [#6955](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5d43a872f0e85ce069e921c5bda02374e5b9cbf?el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6955/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6955 +/- ##\n==========================================\n- Coverage 80.02% 80.00% -0.02% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24104 24098 -6 \n- Misses 6016 6022 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=footer). Last update [c5d43a8...8e24159](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,644
1,605
COLLABORATOR
null
To support language modeling with the TF Trainer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6955/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6955", "html_url": "https://github.com/huggingface/transformers/pull/6955", "diff_url": "https://github.com/huggingface/transformers/pull/6955.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6955.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6954
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6954/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6954/comments
https://api.github.com/repos/huggingface/transformers/issues/6954/events
https://github.com/huggingface/transformers/issues/6954
693,521,702
MDU6SXNzdWU2OTM1MjE3MDI=
6,954
How to insert a hidden output from GPT2 model directly into a BERT layer?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "how do you fix it ?" ]
1,599
1,646
1,599
NONE
null
Hello,

I am trying to do the following:

1. Feed the input_ids into the embedding layer of the pre-trained GPT2 model, and get the resulting embedding.
2. Directly feed the embedding from 1. to each layer of the pre-trained GPT2, by using `GPT2_model.transformer.h`. Store the resulting hidden output from each layer of the GPT-2 in a tensor named `layer_hidden_state`.
3. Directly input the `layer_hidden_state` into the 1st layer (which is on top of the embedding layer) of the pre-trained BertModel, and let BertModel process the `layer_hidden_state` until it reaches the uppermost layer of BertModel.

My attempts at carrying out the above steps are shown below, but I am getting an error when I try step 3... How can I fix it? The error is shown at the bottom of my code.

Thank you for the help,

```Python
# turn on the evaluation mode
# (to disable dropout for evaluation).
gpt2DoubleHeadsModel.eval()

len_input_ids = len(input_ids)

# get the hidden state vector from the embedding layer.
# we will use this hidden state vector as an input to each layer.
input_hidden_state = gpt2DoubleHeadsModel(input_ids=input_ids,
                                          mc_token_ids = mc_token_ids,
                                          token_type_ids = token_type_ids,
                                          attention_mask = attention_mask)[3][0][:,:,:].detach()

for j in range(num_layer_gpt2):
    # directly feed the embedding hidden state vector into each layer of the GPT2DoubleHeadsModel,
    # and retrieve the resulting hidden state vector from each layer.
    layer_hidden_state = \
        gpt2DoubleHeadsModel.transformer.h[j](input_hidden_state)[0][:,(len_input_ids-1),:]

    # store the hidden state vectors of the last token from each layer in `last_hidden_output_tensor`.
    last_hidden_output_tensor[:,j,:] = layer_hidden_state

last_hidden_output_tensor = tuple(last_hidden_output_tensor)

best_model_bert = BertModel.from_pretrained('bert-large-uncased', output_hidden_states=True)

# the error below is raised inside this loop; the message follows the code:
for k in range(nlayer_bert):
    last_hidden_output_tensor = best_model_bert.encoder.layer[k]((last_hidden_output_tensor)[0])

"""
error:

File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 239, in transpose_for_scores
    return x.permute(0, 2, 1, 3)

RuntimeError: number of dims don't match in permute
"""
```
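In case it helps future readers, a minimal sketch of one way to make step 3 shape-compatible. Two assumptions are baked in: `BertLayer.forward` expects a 3D `(batch, seq_len, hidden)` tensor (the 2D slices above are what make `permute(0, 2, 1, 3)` fail), and GPT-2's hidden size (768 for the base model) differs from bert-large-uncased's (1024), so a projection is needed; `last_hidden_output_tensor` here is assumed to be the 3D tensor from before the `tuple(...)` call:

```Python
import torch.nn as nn

# illustrative sizes: GPT-2 base -> 768, bert-large-uncased -> 1024
project = nn.Linear(768, 1024)

# (batch, num_layer_gpt2, 768) -> (batch, num_layer_gpt2, 1024);
# the per-layer GPT-2 states are treated as a "sequence" for BERT
hidden = project(last_hidden_output_tensor)

for k in range(nlayer_bert):
    # each BertLayer returns a tuple; index 0 holds the 3D hidden states
    hidden = best_model_bert.encoder.layer[k](hidden)[0]
```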
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6954/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6953
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6953/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6953/comments
https://api.github.com/repos/huggingface/transformers/issues/6953/events
https://github.com/huggingface/transformers/pull/6953
693,493,465
MDExOlB1bGxSZXF1ZXN0NDc5OTc2Njkw
6,953
[s2s] run_eval supports --prefix clarg.
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
- Useful for multilingual models.
- `model.config.prefix` will still be used if prefix not passed. This prefix is added to the beginning of each example from the source document before calling `generate`.
- `decoder_start_token_id` is a different thing and unaffected.

Usage:

```bash
export dd=wmt_en_de
python run_eval.py Helsinki-NLP/opus-mt-en-gem \
    $dd/val.source \
    $dd/marian_multi_val_gens.txt \
    --reference_path $dd/val.target \
    --task translation --fp16 --bs 128 \
    --score_path $dd/marian_multi_val_bleu.json \
    --prefix ">>deu<<"
```
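For context, roughly what the flag does internally (a sketch only - the actual variable names and whitespace handling in `run_eval.py` may differ):

```python
prefix = ">>deu<<"  # target-language token understood by the multilingual Marian model

with open("wmt_en_de/val.source") as f:
    src_lines = [line.rstrip() for line in f]

# the prefix is prepended to every source example before generate() is called
examples = [f"{prefix} {line}" for line in src_lines]
```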
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6953/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6953", "html_url": "https://github.com/huggingface/transformers/pull/6953", "diff_url": "https://github.com/huggingface/transformers/pull/6953.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6953.patch", "merged_at": 1599887302000 }
https://api.github.com/repos/huggingface/transformers/issues/6952
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6952/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6952/comments
https://api.github.com/repos/huggingface/transformers/issues/6952/events
https://github.com/huggingface/transformers/pull/6952
693,490,560
MDExOlB1bGxSZXF1ZXN0NDc5OTc0MDY0
6,952
typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=h1) Report\n> Merging [#6952](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4?el=desc) will **decrease** coverage by `2.08%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6952/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6952 +/- ##\n==========================================\n- Coverage 80.02% 77.94% -2.09% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24104 23477 -627 \n- Misses 6016 6643 +627 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <ø> (-7.19%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=footer). Last update [a4fc0c8...865e3b2](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6952/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6952", "html_url": "https://github.com/huggingface/transformers/pull/6952", "diff_url": "https://github.com/huggingface/transformers/pull/6952.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6952.patch", "merged_at": 1599250478000 }
https://api.github.com/repos/huggingface/transformers/issues/6951
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6951/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6951/comments
https://api.github.com/repos/huggingface/transformers/issues/6951/events
https://github.com/huggingface/transformers/issues/6951
693,462,886
MDU6SXNzdWU2OTM0NjI4ODY=
6,951
How to enable grad_fn when calling the generate() method of a T5 model
{ "login": "wmmxk", "id": 10648437, "node_id": "MDQ6VXNlcjEwNjQ4NDM3", "avatar_url": "https://avatars.githubusercontent.com/u/10648437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmmxk", "html_url": "https://github.com/wmmxk", "followers_url": "https://api.github.com/users/wmmxk/followers", "following_url": "https://api.github.com/users/wmmxk/following{/other_user}", "gists_url": "https://api.github.com/users/wmmxk/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmmxk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmmxk/subscriptions", "organizations_url": "https://api.github.com/users/wmmxk/orgs", "repos_url": "https://api.github.com/users/wmmxk/repos", "events_url": "https://api.github.com/users/wmmxk/events{/privacy}", "received_events_url": "https://api.github.com/users/wmmxk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In fact, in the implementation file of T5, [modeling_t5.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py), I did not see ```with torch.no_grad:```. So I was wondering how ```grad_fn``` is disabled in the ```generate()``` method.", "Although I think it has a low chance to work, I tried to add ```torch.set_grad_enabled(True)``` before calling the ```generate()``` method, but the ```grad_fn``` is still not enabled.", "It turned out there is a decorator before the [generate() method](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110). I was looking for it in the body of the method.", "> It turned out there is a decorator before the [generate() method](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110). I was looking for it in the body of the method.\r\n\r\nDid you solve this problem? I run into the same problem and need a solution. Thanks!", "> > It turned out there is a decorator before the [generate() method](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110). I was looking for it in the body of the method.\r\n> \r\n> Did you solve this problem? I run into the same problem and need a solution. Thanks!\r\n\r\nDid you guys solve the problem? \r\n\r\nI commented this line `@torch.no_grad()` in the transformers [this file](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110), and added `torch.set_grad_enabled(True)` inside `def generate(XXX)`. \r\n\r\nBut I am still not sure if it works. \r\n\r\n![image](https://github.com/huggingface/transformers/assets/31528604/2584f6ee-9657-4079-92b4-a69e9e7690cb)\r\n![image](https://github.com/huggingface/transformers/assets/31528604/96e85143-71c5-4477-bfb6-014acb25c591)\r\n\r\nPlease note that I only edited these two lines. " ]
1,599
1,701
1,599
NONE
null
I was trying to attribute a prediction by a T5 model to the words of an input via a gradient method. For a ```T5_model```, I call ```T5_model(**inputs)``` when training, and call ```T5_model.generate(**inputs)``` when doing inference. In training, grad_fn is enabled for the loss, but not in inference. So how do I enable grad_fn when calling the ```generate()``` method, so that I can get the gradient of the prediction with respect to each word of the input sentence?
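One way around the `@torch.no_grad()` decorator on `generate()` (without patching the library, as discussed in the comments) is to run a manual decoding loop through the model's forward pass, which is not wrapped in `no_grad`. A minimal greedy sketch, assuming a small T5 checkpoint; for attribution with respect to input words you would additionally feed `inputs_embeds` with `requires_grad_()` rather than `input_ids`:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: Hello", return_tensors="pt")
decoder_input_ids = torch.full(
    (1, 1), model.config.decoder_start_token_id, dtype=torch.long
)

for _ in range(10):  # fixed number of greedy steps, for illustration
    outputs = model(
        input_ids=enc["input_ids"],
        attention_mask=enc["attention_mask"],
        decoder_input_ids=decoder_input_ids,
    )
    next_token_logits = outputs[0][:, -1, :]  # logits keep their grad_fn here
    next_token = next_token_logits.argmax(dim=-1, keepdim=True)
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)

# e.g. backprop from the last step's chosen logit
next_token_logits[0, next_token.item()].backward()
```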
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6951/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6950
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6950/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6950/comments
https://api.github.com/repos/huggingface/transformers/issues/6950/events
https://github.com/huggingface/transformers/issues/6950
693,459,849
MDU6SXNzdWU2OTM0NTk4NDk=
6,950
head_mask in modeling_bert.py
{ "login": "franciszzj", "id": 16440889, "node_id": "MDQ6VXNlcjE2NDQwODg5", "avatar_url": "https://avatars.githubusercontent.com/u/16440889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franciszzj", "html_url": "https://github.com/franciszzj", "followers_url": "https://api.github.com/users/franciszzj/followers", "following_url": "https://api.github.com/users/franciszzj/following{/other_user}", "gists_url": "https://api.github.com/users/franciszzj/gists{/gist_id}", "starred_url": "https://api.github.com/users/franciszzj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/franciszzj/subscriptions", "organizations_url": "https://api.github.com/users/franciszzj/orgs", "repos_url": "https://api.github.com/users/franciszzj/repos", "events_url": "https://api.github.com/users/franciszzj/events{/privacy}", "received_events_url": "https://api.github.com/users/franciszzj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
NONE
null
Should change https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L487 `head_mask[i]` to `head_mask[i] if head_mask is not None else None`. full code: ``` class BertEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=False, ): all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if getattr(self.config, "gradient_checkpointing", False): def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, head_mask[i] if head_mask is not None else None, encoder_hidden_states, encoder_attention_mask, ) else: layer_outputs = layer_module( hidden_states, attention_mask, head_mask[i] if head_mask is not None else None, encoder_hidden_states, encoder_attention_mask, output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6950/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6949
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6949/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6949/comments
https://api.github.com/repos/huggingface/transformers/issues/6949/events
https://github.com/huggingface/transformers/pull/6949
693,412,136
MDExOlB1bGxSZXF1ZXN0NDc5OTAyNjA2
6,949
Refactoring the generate() function
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think this is a massive improvement and really hope it gets merged. Everything is much better encapsulated and it's super easy to find what you want.\r\n\r\nI have some naming ideas, but the code seems to not be finished so I will wait.", "I am very unfamiliar with the generate code because the code has scared me way, this one is way more inviting to read and I can actually understand it so I think that's a very nice achievement :-)\r\n\r\nI don't know if what's hidden in `# add necessary encoder decoder code` and `# add all post processing functions` is different for the four versions of decode, but might be nice to refactor it in some helper methods if possible.", "Seconding Sam and Sylvain. Really excited for the improved legibility but we should also make sure that code isn't copy-pasted four times or we'll start having inconsistencies soon :) \r\n\r\nWe should probably allow `dist_warper` in the beam search too: the current ones wouldn't make a difference but some use cases will (e.g. noisy channel which uses a backward distribution to reweigh the next token scores)", "Related: https://github.com/huggingface/transformers/issues/7626", "Can you please add this functionality too https://github.com/huggingface/transformers/issues/5164?", "This is really cool! I love the additional clarity and think you're building a really strong base for the generation code. Here's what I think we should still chat about before finalizing the PR:\r\n1. We still have inconsistencies in how we handle the end of generation. I really think that `max_length` should be handled the same way as `min_length` with a sampler in `pre_processor ` which forces a Dirac on `eos_token_id` when `cur_len==max_length`\r\n2. Similarly, we should get rid of `adjust_logits_during_generation` and have e.g. a Bart-specific sampler in `pre_processor `\r\n3. We're missing out on supporting some really interesting research by not giving a good option to return and backprop through the generation scores. Also, I think we should extend self-documenting outputs to the `generate` function (cc @sgugger ). My proposal would be to add a `return_dict` argument, with an `output_generation_scores` (and possibly `with_grad`) option. (And we just return the generated ids if `return_dict=False` to stay backwards compatible)\r\n4. Generation with `decoder_prefix_ids` for encoder-decoder models :D ! That can be a future PR though, and can also be handled with `pre_processor` to force the output (a bit wasteful but will do in a pinch)\r\n\r\nI also agree with @sgugger that we should come up with a better name than `Sampler`, will think about it", "@yjernite has great feature requests, but this PR is already huge and I don't see why they need to be handled here.", "> @yjernite has great feature requests, but this PR is already huge and I don't see why they need to be handled here.\r\n\r\nGood point, we can definitely look at 3. and 4. later, and I know from experience that 2. is probably the wrong kind of rabbit hole to get into right now.\r\n\r\nI am a little concerned about 1. though. The samplers will definitely need to consider more than just `input_ids` and `scores` in the future, and we should make sure that we don't need to rebuild them from the ground up when that happens. \r\n\r\n@patrickvonplaten what are your thoughts on changing e.g. 
[generation_utils.py#L451](https://github.com/huggingface/transformers/blob/5df79e2c41bf4b47ab4a36f903be163252714fe3/src/transformers/generation_utils.py#L451) when we need to also pass `cur_len` to enforce the max length or if we want the samples to look back at the `encoder_input_ids` ?", "Thanks for the feedback! @yjernite - regarding your points: \r\n\r\n1) I think I see your point that `max_length` should be treated the same way as `min_length`. If we follow this appoarch, we would replace the `while cur_len < max_length:` with `while True:` and then break if `max_length` is hit. I'm a bit worried the people that will use one of the four functions directly, such as: \r\n\r\n```python \r\n\r\npre_processor = # create your list of pre_processors here <= THIS MUST INCLUDE max_length\r\ndist_warper = # create your dist warper here\r\n\r\noutputs = model.sample(input_ids, pre_processor, dist_warper, max_length, pad_token_id, eos_token_id, **model_kwargs)\r\n```\r\n\r\nwill forget to put a `max_length` \"processor\" in `pre_processor` and then the `while True:` loop would run forever. \r\n\r\nFor me, the difference between `max_length` and and *e.g.* `min_length` is that `max_length` is a mandatory parameter for generation, which is why I left it as an input to the \"sub\" generation functions.\r\nFor you, what would be the big advantage of moving `max_length` to a preprocessor item (besides consistency?).\r\n\r\nI'm 100% fine with extending the pre-processors or warpers (trying to not use the word samplers anymore :D) to accept more input arguments, but I think we could also do this in a future PR as it would not break backwards compatibility.\r\n\r\nHappy to discuss what opinions the others have on this!\r\n\r\n2) Agree - will have to see how to make that non-breaking, but should be possible!\r\n\r\n3) Agree to add a `ModelOutputs` class to `generate()` that would include `attentions` and `hidden_states`. Regarding being able to backprop through `generate()` - I don't really think that this is super important. It would probably also require a lot of tweaking the way generate() is done currently so I'd prefer if ppl would just use the own fork/branch for these kind of things\r\n\r\n4) This should already be possible I think. `decoder_input_ids` can be passed to generate.", "## Speed comparison:\r\n\r\nsample search / greedy search yields equivalent results (TESTED on GPT2)\r\n\r\nbeam search yields ~5 % speed up thanks to the use of tensors instead of lists in `beam_scorer` (TESTED on BART and T5)", "All slow tests that pass on master, also pass in this PR now. 
Ran all those tests: \r\n\r\n```\r\nrun_generation_integration_tests () {\r\n RUN_SLOW=1 pytest tests/test_modeling_pegasus.py\r\n RUN_SLOW=1 pytest tests/test_modeling_bart.py\r\n RUN_SLOW=1 pytest tests/test_modeling_t5.py\r\n RUN_SLOW=1 pytest tests/test_modeling_reformer.py\r\n RUN_SLOW=1 pytest tests/test_modeling_marian.py\r\n RUN_SLOW=1 pytest tests/test_modeling_mbart.py\r\n RUN_SLOW=1 pytest tests/test_modeling_prophetnet.py\r\n RUN_SLOW=1 pytest tests/test_modeling_xlm_prophetnet.py\r\n RUN_SLOW=1 pytest tests/test_modeling_encoder_decoder.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_conversational.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text2text_generation.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text_generation.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_summarization.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_translation.py\r\n RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_dialog.py\r\n RUN_SLOW=1 pytest tests/test_modeling_gpt2.py\r\n RUN_SLOW=1 pytest tests/test_modeling_xlnet.py\r\n RUN_SLOW=1 pytest tests/test_modeling_transfo_xl.py\r\n RUN_SLOW=1 pytest tests/test_modeling_rag.py\r\n RUN_SLOW=1 pytest tests/test_modeling_fsmt.py\r\n RUN_SLOW=1 pytest tests/test_modeling_blenderbot.py\r\n}\r\n```\r\n\r\nThink I ran all of the important ones (cc @sshleifer )", "> I don't know if this PR was ready for review or not but I still went ahead ;-). I still like this a lot, I added suggestions for the docs and I think it would be great if all your awesome work was documented. `GenerationMixin` is already documented in `main_classes/model` so all its public methods should have nice doc.\r\n> \r\n> Then I think all you added warrants an `internal/generation` where we could document all the tools you added.\r\n> \r\n> Last thing: I'm not super fan of `DistProcessor` as a name, mainly because I don't know that dist means distribution (I think distributed but I'm obsessed with Trainer ;-) ). `DistributionProcessor` might be a bit long. The class seems to be mainly doing some preprocessing on the logits though, so why not `LogitsProcessor`?\r\n\r\nChanged to `LogitsProcessor` - like that name! Yeah the docs weren't ready yet, but thanks for the feedback :-) ", "Hmmm OK so after thinking about it for a bit I realize that having a `max_length` logits processor would change the current beam search behavior in some cases.\r\n\r\nCurrently, the beam gets re-ordered at `max_length` without penalizing sequences that aren't \"well-formed\" (haven't reached EOS within the allotted time). If we switched to a `LogitsProcessor` however, the beam search would up-weight sequences that were more likely to lead to an EOS at time step (`max_length`-1)\r\n\r\nI think this new behavior is better and definitely would like the option to implement it (@srush would love your input on that especially), but I'm not sure how breaking it is (conversely, it might also account for some of the different scores we've had from other libs). We can also just add a `BeamSearchMaxLengthLogitsProcessor` now or later.\r\n\r\nWhat do you think @thomwolf @patrickvonplaten @sshleifer ?", "> We can also just add a BeamSearchMaxLengthLogitsProcessor now or later.\r\n\r\nThis sounds useful! Bart forces all the generations of length `max_length -1` to end in EOS, which helps performance. Might help other models too. Would lean towards adding it later. ", "> This all looks great to me. 
I have final nits on the docs, some general rules not to forget:\r\n> \r\n> * no abbreviation in the documentation as we have all the space we want to explain things to the user\r\n> * no lines > 119 pretty please, the script takes care of everything except the examples and some of the examples have veeeeeery long lines.\r\n\r\n\r\n\r\n> This all looks great to me. I have final nits on the docs, some general rules not to forget:\r\n> \r\n> * no abbreviation in the documentation as we have all the space we want to explain things to the user\r\n> * no lines > 119 pretty please, the script takes care of everything except the examples and some of the examples have veeeeeery long lines.\r\n\r\nNoo, I broke the 119 rule again - I though the script would now save me from everything :D Will correct this! Should I also add the `>>>` for the example code or is this just for the model's forward function? ", "> Should I also add the `>>>` for the example code or is this just for the model's forward function?\r\n\r\nIt's only used as a marker for doctests, so you should do this for examples that are not slow. We still need to resuscitate the doctests with @LysandreJik for it to have any use, though.\r\n", "The doctest run as slow tests so you can still add them even if they're slow!", "> The doctest run as slow tests so you can still add them even if they're slow!\r\n\r\nOkey added `>>>` to all examples and made them pretty", "Hi, thanks for creating this PR and the code looks great! Is it possible to decode and meanwhile return the token probabilities? It would be very helpful in the following scenarios -\r\n\r\nI. Calculating perplexities of generated texts\r\nII. Reinforcement learning for text generation.", "Is it possible to backpropagate the conditional text generation model parameters on the loss defined via this .generate() function?", "Hi @patrickvonplaten\r\nCan you share how to calculate the output probabilities of each token given by generate() ?" ]
1,599
1,652
1,604
MEMBER
null
## Generation refactor This is a possible design that IMO would make generate much more readable and flexible for future code changes. The code shown in this PR is more or less pseudo code, but I'm sure that this design is fully backwards compatible (for the `generate()` function; I don't plan on keeping `beam_search_generation` and `_no_beam_search_generation` -> I don't think anybody directly accessed these functions). It's probably better to look at the code directly, here: https://github.com/huggingface/transformers/blob/3c8a12a3bf44aa7845675852e28c47d0a9cb808e/src/transformers/generation_utils.py than at the diff. The following major changes are made: ### Split the generate method into four generate functions: a) `greedy_search` (corresponds to num_beams = 1, do_sample=False) b) `sample` (corresponds to num_beams = 1, do_sample=True) c) `beam_search` (corresponds to num_beams > 1, do_sample = False) d) `beam_sample` (corresponds to num_beams > 1, do_sample = True) It is split in a way that the functions can be used on their own and don't necessarily have to be accessed by the `generate()` method. This allows for much more flexibility for users - they can decide what kind of distribution warper and what kind of "Beam scorer" they want to use (more on "distribution warper" and "beam scorer" later). Also, a model mostly uses only one of these methods: `EncoderDecoder` models usually use `beam_search()`, while `...ForCausalLM` models usually use `sample()`. Because `generate` is now split into the corresponding functions relevant for a model, the code becomes much more readable for users, because they will mostly only look into one of the four functions. Splitting `generate()` into four functions removes **a lot** of `if-else` statements. ### Create `LogitsProcessor` and `LogitsWarper` objects. Instead of adding each of these "logit warpers / processors", such as `bad_token_words`, with `if-else` statements, a list of these objects is created in the beginning and then called in the code. This is largely copied from this very nice PR: https://github.com/huggingface/transformers/pull/5420 . This is 1) easier to test and 2) adds more flexibility, because users can easily add their own "logit warpers / processors". Note that we need both a `pre_processor` and a `dist_warper` list to make `beam_sample` work correctly. This comment explains why in more detail: https://github.com/huggingface/transformers/pull/5420#discussion_r449779867 ### Create a `beam_scorer` class. The `generate_beam_search` function has become extremely hard to read because most of the beam search logic is written directly into the function. I would propose to move this code into a `beam_scorer` class that would also replace the `BeamHypotheses` class. The class would essentially expose the following functions: ```python beam_scorer.update(next_scores, next_tokens) next_beam_scores = beam_scorer.get_next_scores() next_beam_tokens = beam_scorer.get_next_tokens() next_beam_idx = beam_scorer.get_next_beam_idx() beam_scorer.is_done() ``` and IMO all the beam-search-relevant code can be handled in the `beam_scorer.update(...)` function. Besides better readability, such a class could also be replaced by another "Beam search scorer" logic, which makes the beam search code much easier to extend IMO. The con of this PR is obviously that some code would be copy-pasted across the four functions (but it should not be too much). Let me know if something is not clear in the design. 
Would be super happy to hear your feedback @LysandreJik @thomwolf @sshleifer @yjernite @JetRunner @sgugger ## 1st Review: The current state shows how the refactored code would look. All important tests are passing now (the slow GPT2, Bart & T5 tests), but the PR is not at all finished yet. It would be nice if @sshleifer @yjernite @thomwolf (and @LysandreJik @sgugger) could take a first look at the complete "new" architecture and give some feedback. If you guys are ok with the new design, I will add a bunch of tests and clean up the code more to make sure we have 100% backward compatibility. ## TODO: - [x] Make all slow tests pass - [x] Rename according to discussion - [x] Add tests for all processors - [x] Add more and better generation tests - [x] Do speed comparison - [x] Add docstring - [x] Final thoughts about design ## Final review: I'm happy with the changes I've made now. Better tests (more aggressive, and they should also be faster since only 3 tokens are generated instead of 10 now) have been added and docstrings have hopefully become more understandable. A complete explanation of the new "generation" philosophy is described in the forum: https://discuss.huggingface.co/t/big-generate-refactor/1857
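To make the processor design above concrete, here is a minimal sketch of what a user-defined processor could look like (the bare `(input_ids, scores)` call signature is an assumption based on this description, not necessarily the final merged API):

```python
# A minimal sketch of a custom logits processor under the design described
# above; the (input_ids, scores) call signature is an assumption.
import torch


class NoImmediateRepeatProcessor:
    """Forbids generating the same token twice in a row."""

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        last_tokens = input_ids[:, -1].unsqueeze(-1)
        # Setting a score to -inf removes the token from consideration for
        # greedy search, sampling, and beam search alike.
        return scores.scatter(1, last_tokens, float("-inf"))
```

Under this design, such processors would be collected in a list and applied to the next-token scores at every generation step, before sampling or beam selection.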
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6949/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6949/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6949", "html_url": "https://github.com/huggingface/transformers/pull/6949", "diff_url": "https://github.com/huggingface/transformers/pull/6949.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6949.patch", "merged_at": 1604415863000 }
https://api.github.com/repos/huggingface/transformers/issues/6948
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6948/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6948/comments
https://api.github.com/repos/huggingface/transformers/issues/6948/events
https://github.com/huggingface/transformers/pull/6948
693,403,651
MDExOlB1bGxSZXF1ZXN0NDc5ODk0OTIw
6,948
[s2s] run_eval.py parses generate_kwargs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=h1) Report\n> Merging [#6948](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6078b12098337bcb98c0540b07a623223ffdd1c8?el=desc) will **increase** coverage by `0.52%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6948/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6948 +/- ##\n==========================================\n+ Coverage 80.01% 80.54% +0.52% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 24102 24259 +157 \n+ Misses 6018 5861 -157 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=footer). Last update [6078b12...2f212d3](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, please and thank you!!! Needing it badly!", "Here you go!" ]
1,599
1,599
1,599
CONTRIBUTOR
null
You can now run
```bash
td=test_data/wmt_en_ro
python run_eval.py t5-base $td/val.source preds.txt --reference_path $td/val.target \
    --score_path metrics.json --task translation_en_to_ro \
    --num_beams 2 --n_obs 2 --bs 1 --length_penalty 0.6
```
h/t @stas00 in #6369
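For readers curious how a CLI script can accept arbitrary `generate()` flags, a minimal sketch of the general technique (this illustrates the idea only; it is not necessarily how run_eval.py implements it):

```python
# A minimal sketch of forwarding unknown CLI flags to model.generate() as
# keyword arguments, via argparse's parse_known_args. Illustration only.
import argparse


def parse_args_with_generate_kwargs(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("model_name")
    parser.add_argument("--bs", type=int, default=8)
    # parse_known_args returns the flags it does not recognize, untouched
    args, unknown = parser.parse_known_args(argv)

    generate_kwargs = {}
    for key, value in zip(unknown[::2], unknown[1::2]):
        assert key.startswith("--"), f"unexpected token: {key}"
        # naive type coercion: try int, then float, else keep the string
        for cast in (int, float, str):
            try:
                value = cast(value)
                break
            except ValueError:
                continue
        generate_kwargs[key[2:]] = value
    return args, generate_kwargs


args, gen_kwargs = parse_args_with_generate_kwargs(
    ["t5-base", "--num_beams", "2", "--length_penalty", "0.6"]
)
print(gen_kwargs)  # {'num_beams': 2, 'length_penalty': 0.6}
```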
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6948/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6948/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6948", "html_url": "https://github.com/huggingface/transformers/pull/6948", "diff_url": "https://github.com/huggingface/transformers/pull/6948.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6948.patch", "merged_at": 1599243572000 }
https://api.github.com/repos/huggingface/transformers/issues/6947
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6947/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6947/comments
https://api.github.com/repos/huggingface/transformers/issues/6947/events
https://github.com/huggingface/transformers/issues/6947
693,237,992
MDU6SXNzdWU2OTMyMzc5OTI=
6,947
Training script for languages other than English
{ "login": "Tortoise17", "id": 36593708, "node_id": "MDQ6VXNlcjM2NTkzNzA4", "avatar_url": "https://avatars.githubusercontent.com/u/36593708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tortoise17", "html_url": "https://github.com/Tortoise17", "followers_url": "https://api.github.com/users/Tortoise17/followers", "following_url": "https://api.github.com/users/Tortoise17/following{/other_user}", "gists_url": "https://api.github.com/users/Tortoise17/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tortoise17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tortoise17/subscriptions", "organizations_url": "https://api.github.com/users/Tortoise17/orgs", "repos_url": "https://api.github.com/users/Tortoise17/repos", "events_url": "https://api.github.com/users/Tortoise17/events{/privacy}", "received_events_url": "https://api.github.com/users/Tortoise17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You could try either https://huggingface.co/blog/how-to-train or https://discuss.huggingface.co\r\n\r\n^^ Forum is better for open-ended questions like those ones." ]
1,599
1,599
1,599
NONE
null
Dear friends, I have found this library very nice. I would like to ask: is there any script which can be used to train a model for another language? And what is the minimum GPU power required for such training? Any guidance is appreciated! I am thinking of testing this for German.
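For readers looking for a concrete starting point, a minimal sketch of masked-language-model training on a plain-text corpus (the checkpoint name and the file path "german_corpus.txt" are placeholder assumptions, not from the original question):

```python
# A minimal sketch of masked-language-model training on a plain-text file.
# The checkpoint name and "german_corpus.txt" are placeholder assumptions.
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-german-cased")

# one sentence per line; a short block_size keeps memory modest on small GPUs
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="german_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="./german-mlm",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)
trainer = Trainer(model=model, args=training_args, data_collator=collator, train_dataset=dataset)
trainer.train()
```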
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6947/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6946
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6946/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6946/comments
https://api.github.com/repos/huggingface/transformers/issues/6946/events
https://github.com/huggingface/transformers/pull/6946
693,200,748
MDExOlB1bGxSZXF1ZXN0NDc5NzEzMTcx
6,946
[LXMERT] Fix tests on gpu
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=h1) Report\n> Merging [#6946](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75e31981915cbd072be7c4050a4b58c63ca6d33?el=desc) will **increase** coverage by `1.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6946/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6946 +/- ##\n==========================================\n+ Coverage 80.01% 81.27% +1.25% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n+ Hits 24102 24479 +377 \n+ Misses 6018 5641 -377 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.76% <0.00%> (+6.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=footer). Last update [a75e319...b6fd572](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
MEMBER
null
Some tensors and models were created in tests without specifying the device.
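For context, a minimal sketch of the device-placement pattern such a fix enforces, using the `torch_device` helper from `transformers.testing_utils` (tensor values and the model class named in the comment are illustrative):

```python
# A minimal sketch of the device-placement pattern used across the test
# suite; the tensor values and model class are illustrative.
import torch
from transformers.testing_utils import torch_device

# Without `device=...` this tensor lives on CPU, and mixing it with a model
# on a CUDA test runner fails; creating it on `torch_device` is the fix.
input_ids = torch.tensor([[1, 2, 3]], device=torch_device)

# Models need the same treatment, e.g.: model = LxmertModel(config).to(torch_device)
```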
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6946/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6946", "html_url": "https://github.com/huggingface/transformers/pull/6946", "diff_url": "https://github.com/huggingface/transformers/pull/6946.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6946.patch", "merged_at": 1599228535000 }
https://api.github.com/repos/huggingface/transformers/issues/6945
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6945/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6945/comments
https://api.github.com/repos/huggingface/transformers/issues/6945/events
https://github.com/huggingface/transformers/issues/6945
693,198,951
MDU6SXNzdWU2OTMxOTg5NTE=
6,945
Restoring ELECTRA-Small checkpoint doesn't work properly
{ "login": "DevKretov", "id": 38000417, "node_id": "MDQ6VXNlcjM4MDAwNDE3", "avatar_url": "https://avatars.githubusercontent.com/u/38000417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DevKretov", "html_url": "https://github.com/DevKretov", "followers_url": "https://api.github.com/users/DevKretov/followers", "following_url": "https://api.github.com/users/DevKretov/following{/other_user}", "gists_url": "https://api.github.com/users/DevKretov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DevKretov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DevKretov/subscriptions", "organizations_url": "https://api.github.com/users/DevKretov/orgs", "repos_url": "https://api.github.com/users/DevKretov/repos", "events_url": "https://api.github.com/users/DevKretov/events{/privacy}", "received_events_url": "https://api.github.com/users/DevKretov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Who can help\r\n> Seems like nobody\r\n\r\nHaha hopefully we can help you with that :-) \r\n\r\nPinging our ELECTRA master @LysandreJik ", "Hello! Indeed I think I can help you :) \r\n\r\nI don't think I'm seeing what happened with the first case? Your first option was correct, you should use the conversion script. This should create a directory in which there is a `pytorch_model.bin` and a `config.json`.\r\n\r\nHowever, you should note that ELECTRA contains both a *discriminator* and a *generator*. Only the generator may be used for MLM, as the discriminator is trained with the ELECTRA objective and would output gibberish if used for MLM.\r\n\r\nIf you used the script with the option `--discriminator_or_generator=discriminator`, then you should load your checkpoint in `ElectraForPreTraining`. If you used the script with the option `--discriminator_or_generator=generator`, then you can load your checkpoint in `ElectraForMaskedLM` and should see sensible output when using it for MLM tasks.", "@LysandreJik Thank you for your reply! As I've said, I used the conversion script and specified generator as needed for the MLM. However, I still get gibberish results as shown above. That's why I guess there is a bug in the way how the weights are transferred from Tensorflow checkpoints to HF model. ", "Oh, okay. Let me check.", "I just did the exact following steps and got it to work:\r\n\r\n```bash\r\n# Link from the official google repo\r\nwget https://storage.googleapis.com/electra-data/electra_small.zip \r\nunzip electra_small.zip\r\ncd electra_small\r\n\r\n# If you're converting a different model you should make your own config.json file\r\nwget https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-generator/config.json\r\n\r\n# Use the conversion script\r\npython transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \\\r\n --tf_checkpoint_path=electra_small/electra_small \\\r\n --config_file=electra_small/config.json \\\r\n --pytorch_dump_path=electra_small/pytorch_model.bin \\\r\n --discriminator_or_generator=generator\r\n```\r\n\r\nThe last command outputs:\r\n\r\n```\r\n[...]\r\nInitialize PyTorch weight ['generator_predictions', 'LayerNorm', 'beta'] generator_predictions/LayerNorm/beta\r\nInitialize PyTorch weight ['generator_predictions', 'LayerNorm', 'gamma'] generator_predictions/LayerNorm/gamma\r\nInitialize PyTorch weight ['generator_predictions', 'dense', 'bias'] generator_predictions/dense/bias\r\nInitialize PyTorch weight ['generator_predictions', 'dense', 'kernel'] generator_predictions/dense/kernel\r\nInitialize PyTorch weight ['generator_lm_head', 'bias'] generator_predictions/output_bias\r\nSkipping generator_predictions/temperature\r\nSkipping global_step\r\nSave PyTorch model to electra_small/pytorch_model.bin\r\n```\r\n\r\nYou can then load the model you exported:\r\n\r\n```py\r\nfrom transformers import FillMaskPipeline, ElectraForMaskedLM, ElectraTokenizer\r\n\r\nfill_mask = FillMaskPipeline(\r\n model=ElectraForMaskedLM.from_pretrained(\"/path/to/model/and/config/electra_small\"),\r\n tokenizer=ElectraTokenizer.from_pretrained(\"google/electra-small-generator\")\r\n)\r\n\r\nprint(fill_mask(\"Filling the blanks using a pipeline is an [MASK] thing to do.\"))\r\n```\r\n\r\nwhich returns\r\n\r\n```\r\n[{'sequence': '[CLS] filling the blanks using a pipeline is an easy thing to do. 
[SEP]',\r\n 'score': 0.8874430060386658,\r\n 'token': 3733,\r\n 'token_str': 'easy'},\r\n {'sequence': '[CLS] filling the blanks using a pipeline is an easier thing to do. [SEP]',\r\n 'score': 0.024068119004368782,\r\n 'token': 6082,\r\n 'token_str': 'easier'},\r\n {'sequence': '[CLS] filling the blanks using a pipeline is an interesting thing to do. [SEP]',\r\n 'score': 0.016776252537965775,\r\n 'token': 5875,\r\n 'token_str': 'interesting'},\r\n {'sequence': '[CLS] filling the blanks using a pipeline is an important thing to do. [SEP]',\r\n 'score': 0.014077582396566868,\r\n 'token': 2590,\r\n 'token_str': 'important'},\r\n {'sequence': '[CLS] filling the blanks using a pipeline is an expensive thing to do. [SEP]',\r\n 'score': 0.012089359574019909,\r\n 'token': 6450,\r\n 'token_str': 'expensive'}]\r\n```", "Closing as the issue is resolved." ]
1,599
1,599
1,599
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Colab - Python version: 3.6.9 - PyTorch version (GPU?): default Colab - Tensorflow version (GPU?): default Colab (checkpoint from 1.15) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Seems like nobody <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Hello, I am trying to load the ELECTRA-small checkpoint from Google Research (https://github.com/google-research/electra) into HuggingFace's ElectraForMaskedLM object. There were several different ways I tried to achieve that: - Converted the checkpoint with the help of the CLI script **convert_electra_original_tf_checkpoint_to_pytorch.py** - Converted the checkpoint with the help of the .from_pretrained() method with the config.json provided here: https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-generator/config.json Both worked without any exceptions. The first one didn't write anything to the output except for the contents of the config.json file and the path the model would be saved to. 
The second one writes lots of information about skipping several variables and initialising others: `Initialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias Initialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma Initialize PyTorch weight ['electra', 'embeddings', 'position_embeddings'] electra/embeddings/position_embeddings Initialize PyTorch weight ['electra', 'embeddings', 'token_type_embeddings'] electra/embeddings/token_type_embeddings Initialize PyTorch weight ['electra', 'embeddings', 'word_embeddings'] electra/embeddings/word_embeddings Initialize PyTorch weight ['electra', 'embeddings_project', 'bias'] electra/embeddings_project/bias Initialize PyTorch weight ['electra', 'embeddings_project', 'kernel'] electra/embeddings_project/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_0/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_0/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_0/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_0/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_0/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_0/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_0/attention/self/query/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_0/attention/self/query/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_0/attention/self/value/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_0/attention/self/value/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] electra/encoder/layer_0/intermediate/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_0/intermediate/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_0/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_0/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'dense', 'bias'] electra/encoder/layer_0/output/dense/bias 
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'dense', 'kernel'] electra/encoder/layer_0/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_1/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_1/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_1/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_1/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_1/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_1/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_1/attention/self/query/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_1/attention/self/query/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_1/attention/self/value/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_1/attention/self/value/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias'] electra/encoder/layer_1/intermediate/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_1/intermediate/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_1/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_1/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'dense', 'bias'] electra/encoder/layer_1/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'dense', 'kernel'] electra/encoder/layer_1/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_10/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_10/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_10/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_10/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_10/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_10/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] 
electra/encoder/layer_10/attention/self/query/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_10/attention/self/query/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_10/attention/self/value/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_10/attention/self/value/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] electra/encoder/layer_10/intermediate/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_10/intermediate/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_10/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_10/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'dense', 'bias'] electra/encoder/layer_10/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] electra/encoder/layer_10/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_11/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_11/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_11/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_11/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_11/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_11/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_11/attention/self/query/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_11/attention/self/query/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_11/attention/self/value/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_11/attention/self/value/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] electra/encoder/layer_11/intermediate/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_11/intermediate/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_11/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_11/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'dense', 'bias'] 
electra/encoder/layer_11/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] electra/encoder/layer_11/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_2/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_2/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_2/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_2/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_2/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_2/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_2/attention/self/query/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_2/attention/self/query/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_2/attention/self/value/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_2/attention/self/value/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias'] electra/encoder/layer_2/intermediate/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_2/intermediate/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_2/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_2/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'dense', 'bias'] electra/encoder/layer_2/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] electra/encoder/layer_2/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_3/attention/output/LayerNorm/beta Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_3/attention/output/LayerNorm/gamma Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_3/attention/output/dense/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_3/attention/output/dense/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_3/attention/self/key/bias Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_3/attention/self/key/kernel Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] 
electra/encoder/layer_3/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_3/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_3/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_3/attention/self/value/kernel
[... identical "Initialize PyTorch weight ..." lines repeat for every remaining discriminator weight (attention output, intermediate, output dense, and LayerNorm) in electra/encoder/layer_3 through electra/encoder/layer_9 ...]
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] electra/encoder/layer_9/output/dense/kernel
Skipping generator/embeddings_project/bias ['generator', 'embeddings_project', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/embeddings_project/kernel ['generator', 'embeddings_project', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
[... identical "Skipping ... 'ElectraForPreTraining' object has no attribute 'generator'" lines repeat for every weight of all twelve generator encoder layers (generator/encoder/layer_0 through generator/encoder/layer_11) ...]
Skipping generator_predictions/LayerNorm/beta ['generator_predictions', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/gamma ['generator_predictions', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_lm_head'`

That seems OK, since Google's checkpoint contains both the generator and the discriminator. However, as soon as I try to make a prediction (e.g. "I love reading [MASK]."), the top-5 most likely words are:

- ᵃ
- fulfilled
- sal
- 1809
- drank

which look essentially random. On the other hand, as soon as I initialise the ElectraForMaskedLM model directly from https://huggingface.co/google/electra-small-generator , everything works fantastically!

So my hypothesis is that there is a bug in the checkpoint conversion to the HF format. Can anybody tell me how I can load my own checkpoint (or at least Google's, to check whether the whole thing works correctly)?

## To reproduce

Steps to reproduce the behavior:

1. Download the official ELECTRA-small checkpoint
2. Run the CLI script to convert the TF checkpoint to an HF .bin model
3. Run a standard masked-LM prediction and inspect the top-5 words (or import the HF pipeline and run it in "fill-mask" mode)

You will see that the model from the HF hub works correctly, whereas the model converted from Google's GitHub checkpoint gives random tokens.

## Expected behavior

I expected the model to be capable of making basic predictions, so that I know it has been restored and converted correctly.
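For reference, a minimal sketch of the sanity check I am running (the local path is a placeholder for wherever the conversion script wrote `pytorch_model.bin` and `config.json`; the tokenizer is taken from the hub checkpoint, on the assumption that it matches the converted model's vocab):

```python
from transformers import ElectraForMaskedLM, ElectraTokenizer, pipeline

# Placeholder path to the locally converted checkpoint.
model = ElectraForMaskedLM.from_pretrained("./electra_small_converted")
tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-generator")

# "fill-mask" returns the top-5 candidates by default.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("I love reading [MASK]."))
```

With the hub checkpoint this prints plausible words; with my converted checkpoint it prints the random tokens listed above.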
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6945/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6944
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6944/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6944/comments
https://api.github.com/repos/huggingface/transformers/issues/6944/events
https://github.com/huggingface/transformers/issues/6944
692,945,608
MDU6SXNzdWU2OTI5NDU2MDg=
6,944
Finetuning XLM-Roberta-2XLM-Roberta on custom dataset gives the following error:
{ "login": "laibamehnaz", "id": 36405283, "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laibamehnaz", "html_url": "https://github.com/laibamehnaz", "followers_url": "https://api.github.com/users/laibamehnaz/followers", "following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}", "gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions", "organizations_url": "https://api.github.com/users/laibamehnaz/orgs", "repos_url": "https://api.github.com/users/laibamehnaz/repos", "events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}", "received_events_url": "https://api.github.com/users/laibamehnaz/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This looks like an issue with tokenizer in evaluation.\r\nif you don't need generative metrics while Training then set `predict_from_generate` to `False` and don't pass `compute_metrics` function to `Trainer`\r\n\r\npinging @patrickvonplaten ", "Hi @patil-suraj , \r\nDid exactly that. This issue didn't show up. But now all generations on the test set are the same regardless of the input.", "Hey @laibamehnaz, \r\n\r\nCould you copy paste a working code example, so that we can reproduce the error? :-) Note that you have to use a different tokenizer than the one that is used in `bert2bert-cnn_dailymail-fp16`", "Sure, you can see the code here: [https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)", "Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking \"run\" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)", "Oh I am so sorry, this script won't give you the error because I have set `predict_from_generate` as `False` and `prediction_loss_only` to `True`. ", "Sure, I will share the script.\r\n> Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking \"run\" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)\r\n\r\n", "Here you go:\r\n[https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)\r\n\r\n> Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking \"run\" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)\r\n\r\n", "I'm sorry the colab does not work for me...I get install errors when running the second cell. Let me merge the \"more_general_trainer_metric\" PR into master next week and then we can work directly on master.", "Hi @patrickvonplaten , I have fixed the issue in the second cell. \r\n[https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)", "> I'm sorry the colab does not work for me...I get install errors when running the second cell. Let me merge the \"more_general_trainer_metric\" PR into master next week and then we can work directly on master.\r\n\r\nSure, that will be great!!", "@patrickvonplaten These types of metrics are included in #6769, and it's almost ready to merge. \r\nHow about we support `EncoderDecoder` models in `examples/seq2seq` ?\r\n\r\n@sshleifer does that make sense ?", "Depends how much complexity it adds, but on a high level I like that idea a lot!", "There is a more pressing issue of getting incremental decoding/use cache working for Roberta that I would probably prioritize higher.", "@patil-suraj @sshleifer - I like the idea of adding `EncoderDecoder` to your `Seq2SeqTrainer` a lot. This way I won't have to continue my hacky PR here: https://github.com/huggingface/transformers/pull/5840. I would place this actually as more important since people need to be able to quickly fine-tune any `EncoderDecoderModel`. \r\n\r\n@patil-suraj - After merging your PR, I'd be happy to work on adding `EncoderDecoder` to the Seq2Seq Trainer.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,606
1,606
NONE
null
```
Evaluation: 100% 30/30 [00:45<00:00, 1.53s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (index) >= (0):
```

I am using the following script: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16

Appreciate any help. Thank you.
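As noted in the comments above, an XLM-R encoder-decoder has to be paired with the XLM-R tokenizer rather than the BERT tokenizer used in the referenced bert2bert notebook. A minimal sketch of that pairing (checkpoint names are illustrative, not taken from the issue):

```python
from transformers import EncoderDecoderModel, XLMRobertaTokenizer

# Tie two XLM-R checkpoints into an encoder-decoder and use XLM-R's own
# sentencepiece tokenizer, instead of reusing the BERT tokenizer from the
# bert2bert-cnn_dailymail-fp16 notebook.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base", "xlm-roberta-base"
)
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

inputs = tokenizer("Example source text.", return_tensors="pt")
generated = model.generate(
    inputs["input_ids"], decoder_start_token_id=tokenizer.cls_token_id
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

A sentencepiece index error like the one above is consistent with decoding token ids produced under a vocabulary the tokenizer does not know.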
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6944/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6943
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6943/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6943/comments
https://api.github.com/repos/huggingface/transformers/issues/6943/events
https://github.com/huggingface/transformers/issues/6943
692,939,341
MDU6SXNzdWU2OTI5MzkzNDE=
6,943
Transformer-XL: Remove unused/unnecessary Parameters
{ "login": "RafaelWO", "id": 38643099, "node_id": "MDQ6VXNlcjM4NjQzMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RafaelWO", "html_url": "https://github.com/RafaelWO", "followers_url": "https://api.github.com/users/RafaelWO/followers", "following_url": "https://api.github.com/users/RafaelWO/following{/other_user}", "gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}", "starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions", "organizations_url": "https://api.github.com/users/RafaelWO/orgs", "repos_url": "https://api.github.com/users/RafaelWO/repos", "events_url": "https://api.github.com/users/RafaelWO/events{/privacy}", "received_events_url": "https://api.github.com/users/RafaelWO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "ping @TevenLeScao " ]
1,599
1,600
1,600
CONTRIBUTOR
null
# 🚀 Feature request

The configuration parameters `tgt_len` and `ext_len` in the Transformer-XL implementation are not used anywhere in the source code. I would recommend removing them, since they only confuse users of the model(s).

## Motivation

As mentioned above, the unused parameters are confusing and can most likely be removed. The parameter `tgt_len` is determined from the input tensor anyway, and `ext_len` was only an experimental feature (see [issue from the original repo](https://github.com/kimiyoung/transformer-xl/issues/9)).

## Your contribution

I am happy to contribute and open a PR with the requested changes, if that is also in your interest.
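A quick illustration of the point (a hypothetical snippet, not taken from the issue; attribute names are as of the version this issue was filed against and may have been removed since):

```python
from transformers import TransfoXLConfig

# Both parameters are accepted and stored on the config, but -- per this
# issue -- they are never read again inside the model's forward pass.
config = TransfoXLConfig(tgt_len=128, ext_len=0)
print(config.tgt_len, config.ext_len)  # 128 0
```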
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6943/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6942
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6942/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6942/comments
https://api.github.com/repos/huggingface/transformers/issues/6942/events
https://github.com/huggingface/transformers/pull/6942
692,869,796
MDExOlB1bGxSZXF1ZXN0NDc5NDIwNzI0
6,942
Create Readme.MD for KanBERTo
{ "login": "Naveenkhasyap", "id": 2017156, "node_id": "MDQ6VXNlcjIwMTcxNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/2017156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Naveenkhasyap", "html_url": "https://github.com/Naveenkhasyap", "followers_url": "https://api.github.com/users/Naveenkhasyap/followers", "following_url": "https://api.github.com/users/Naveenkhasyap/following{/other_user}", "gists_url": "https://api.github.com/users/Naveenkhasyap/gists{/gist_id}", "starred_url": "https://api.github.com/users/Naveenkhasyap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Naveenkhasyap/subscriptions", "organizations_url": "https://api.github.com/users/Naveenkhasyap/orgs", "repos_url": "https://api.github.com/users/Naveenkhasyap/repos", "events_url": "https://api.github.com/users/Naveenkhasyap/events{/privacy}", "received_events_url": "https://api.github.com/users/Naveenkhasyap/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=h1) Report\n> Merging [#6942](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `2.32%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6942/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6942 +/- ##\n==========================================\n+ Coverage 77.70% 80.02% +2.32% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 23403 24103 +700 \n+ Misses 6716 6016 -700 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.95%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=footer). Last update [e95d262...f75d263](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "please let me know how to mention language code (kn) also so that it will be easy to filter on website.", "> please let me know how to mention language code (kn) also so that it will be easy to filter on website.\r\n\r\nThere you go!\r\n\r\nThanks for sharing", "https://huggingface.co/models?filter=kn", "> \r\n> \r\n> https://huggingface.co/models?filter=kn\r\n\r\nThanks a lot @julien-c 👍 ." ]
1,599
1,599
1,599
CONTRIBUTOR
null
KanBERTo language model readme for the Kannada language, which I trained by following your blog. <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6942/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6942", "html_url": "https://github.com/huggingface/transformers/pull/6942", "diff_url": "https://github.com/huggingface/transformers/pull/6942.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6942.patch", "merged_at": 1599258273000 }
https://api.github.com/repos/huggingface/transformers/issues/6941
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6941/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6941/comments
https://api.github.com/repos/huggingface/transformers/issues/6941/events
https://github.com/huggingface/transformers/pull/6941
692,792,640
MDExOlB1bGxSZXF1ZXN0NDc5MzU0MjU2
6,941
match CI's version of flake8
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=h1) Report\n> Merging [#6941](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `2.62%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6941/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6941 +/- ##\n==========================================\n+ Coverage 77.70% 80.33% +2.62% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 23403 24195 +792 \n+ Misses 6716 5924 -792 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.28%)` | :arrow_up: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=footer). Last update [e95d262...a0fcde2](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
My flake8 wasn't up-to-date enough, so my system's `make quality` wasn't reporting the same things CI did - this PR adds the actual required version. Thinking more about some of these minimal versions: CI will always install afresh and thus will always run the latest version. Is there a way to tell pip to always install the latest versions of certain dependencies on `pip install -e ".[dev]"`, rather than hardcoding the minimal numbers, which quickly become outdated?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6941/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6941", "html_url": "https://github.com/huggingface/transformers/pull/6941", "diff_url": "https://github.com/huggingface/transformers/pull/6941.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6941.patch", "merged_at": 1599480746000 }
https://api.github.com/repos/huggingface/transformers/issues/6940
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6940/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6940/comments
https://api.github.com/repos/huggingface/transformers/issues/6940/events
https://github.com/huggingface/transformers/pull/6940
692,782,512
MDExOlB1bGxSZXF1ZXN0NDc5MzQ1NzAy
6,940
[ported model] FSMT (FairSeq MachineTranslation)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=h1) Report\n> Merging [#6940](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `2.47%`.\n> The diff coverage is `94.60%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6940/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6940 +/- ##\n==========================================\n+ Coverage 79.62% 82.10% +2.47% \n==========================================\n Files 168 171 +3 \n Lines 32284 33044 +760 \n==========================================\n+ Hits 25706 27130 +1424 \n+ Misses 6578 5914 -664 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.58% <93.58%> (ø)` | |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `95.23% <95.23%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.35% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.15% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/configuration\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZzbXQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.38% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.93% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=footer). Last update [90cde2e...1be40e3](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Here is a little paraphrase script to amuse you:\r\n\r\n```python\r\nfrom transformers.tokenization_fsmt import FSMTTokenizer\r\nfrom transformers.modeling_fsmt import FSMTForConditionalGeneration\r\n\r\ntext = \"Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in rapture: What wonderful things is this Salvador Dalí going to accomplish today?\"\r\n\r\ndef translate(src_lang, tgt_lang, text):\r\n mname = f\"facebook/wmt19-{src_lang}-{tgt_lang}\"\r\n tokenizer = FSMTTokenizer.from_pretrained(mname)\r\n model = FSMTForConditionalGeneration.from_pretrained(mname)\r\n\r\n input_ids = tokenizer.encode(text, return_tensors='pt')\r\n outputs = model.generate(input_ids, num_beams=5, early_stopping=True)\r\n decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n return decoded\r\n\r\ndef paraphrase(src_lang, tgt_lang, text):\r\n return translate(tgt_lang, src_lang, translate(src_lang, tgt_lang, text))\r\n\r\nprint(f\"original:\\n{text}\")\r\nprint(f\"paraphrased en-ru-en:\\n{paraphrase('en', 'ru', text)}\")\r\nprint(f\"paraphrased en-de-en:\\n{paraphrase('en', 'de', text)}\")\r\n```\r\n\r\n* original: \r\n\r\n Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in rapture: What wonderful things is this Salvador Dalí going to accomplish today?\r\n\r\n* paraphrased en-ru-en: \r\n\r\n Every morning when I wake up, I have an amazing joy - the joy of being Salvador Dali - and I ask myself in awe: what wonderful things is this Salvador Dali going to do today?\r\n\r\n* paraphrased en-de-en:\r\n\r\n Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in ecstasy: what wonderful things will this Salvador Dalí do today?\r\n\r\nDali would have been proud! :)", "Hi, @stas00 \r\nCan the models be torchscripted or quantized?\r\nI understand they are from fairseq and are pre-trained. What about optimizations in training a seq2seq model in transfomers?", "Also the integration test fails in my torch 1.5.1 environment: https://gist.github.com/sshleifer/4ba0386e06d2b348c809f80c19f283fd\r\n", "Super excited about this!", "> Hi, @stas00\r\n> Can the models be torchscripted or quantized?\r\n> I understand they are from fairseq and are pre-trained. What about optimizations in training a seq2seq model in transfomers?\r\n\r\nThe first step is just to make things work and have a similar BLEU performance. At a later stage we can work on more goals. The plan is to polish this PR, have it merged and then I plan to post to the forums and then you guys can experiment, report problems, ask for things, etc. 
How does that sound?\r\n", "> Have not read modeling.py yet, but left some other nitpicks.\r\n\r\nThank you very much, @sshleifer - I will address those later today.\r\n\r\n> More importantly, I couldn't replicate `run_eval.py` results from this branch.\r\n\r\nI know why. I uploaded an experimental version of the models last night and thought I had forced the caching off, as the models were re-downloaded, but just now while re-running run_eval I suddenly got a re-download and BLEU is 0.1. So the experimental model didn't work. :(\r\n\r\nSo I still need to sort out the caching issue: https://github.com/huggingface/transformers/issues/6916\r\n\r\nI'm reverting the models - takes a while to upload 5GB. I will update once this is complete and then you can re-eval.\r\n\r\n----\r\n\r\nI'm also thinking we need an actual run_eval quality test, which can be run as a part of the test suite. Perhaps on a small sample, maybe 100 instead of 2000, and a smallish beam size? Then it can be slow, but not too slow? \r\n\r\n----\r\n\r\nAlso, as I mentioned earlier there is no way to override `num_beams` in run_eval, so one has to manually change it in configuration_fsmt.py. \r\n\r\nSo you were running it with `num_beams=8`.\r\n\r\nHere are the results that I get for `PAIR=en-ru`:\r\n```\r\n# 15:\r\n# {'bleu': 31.2512, 'n_obs': 1997, 'runtime': 521, 'seconds_per_sample': 0.2609}\r\n# 50:\r\n# {'bleu': 31.2695, 'n_obs': 1997, 'runtime': 1692, 'seconds_per_sample': 0.8473}\r\n```\r\n\r\nI will rebase once this is merged: https://github.com/huggingface/transformers/pull/6948 - thank you!\r\n", "**edit**: CDN has been updated so you're good to go to eval the model.\r\n\r\nSo models have been updated, but I can't figure out how to bypass caching, so still getting the old versions - might have to wait 24h :( See this issue: https://github.com/huggingface/transformers/issues/6916#issuecomment-687321087\r\n\r\nSo until this caching issue is sorted out (or 24h have passed) please don't waste your time on trying to eval this model. It won't work.", "I wrote a bash script that `run_eval.py`s each of 4 checkpoints separately for each pair. 
So let's see which is the winner and use that one for now (could be different for different models):\r\n\r\n```\r\n\r\nexport BS=8\r\n# set to 5 for a quick test run, set to 2000 to eval all available records\r\nexport OBJS=2000\r\n# at the end we want NUM_BEAMS=50 (as that's what fairseq used in their eval)\r\nexport NUM_BEAMS=50\r\n\r\npairs=(ru-en en-ru en-de de-en)\r\nfor pair in \"${pairs[@]}\"\r\ndo\r\n export PAIR=$pair\r\n export DATA_DIR=data/$PAIR\r\n export SAVE_DIR=data/$PAIR\r\n mkdir -p $DATA_DIR\r\n sacrebleu -t wmt19 -l $PAIR --echo src | head -$OBJS > $DATA_DIR/val.source\r\n sacrebleu -t wmt19 -l $PAIR --echo ref | head -$OBJS > $DATA_DIR/val.target\r\n\r\n if [[ $pair =~ \"ru\" ]]\r\n then\r\n subdir=ensemble # ru folders\r\n else\r\n subdir=joined-dict.ensemble # de data folders are different\r\n fi\r\n\r\n END=4;\r\n for i in $(seq 1 $END);\r\n do\r\n model=model$i.pt;\r\n CHKPT=$model PYTHONPATH=\"src\" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.$subdir --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR > log.$PAIR-$model 2>&1\r\n echo \"###\" $PAIR $model num_beams=$NUM_BEAMS objs=$OBJS\r\n PYTHONPATH=\"src:examples/seq2seq\" python examples/seq2seq/run_eval.py /code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation 2> /dev/null\r\n done\r\n\r\n echo\r\n echo\r\ndone\r\n```\r\nIf someone decides to run this, you have to modify `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` to get the checkpoint name from `os.getenv(\"CHKPT\")`", "Results:\r\n\r\n```\r\n### ru-en model1.pt num_beams=50 objs=2000\r\n{'bleu': 38.8222, 'n_obs': 2000, 'runtime': 233, 'seconds_per_sample': 0.1165}\r\n### ru-en model2.pt num_beams=50 objs=2000\r\n{'bleu': 38.4053, 'n_obs': 2000, 'runtime': 233, 'seconds_per_sample': 0.1165}\r\n### ru-en model3.pt num_beams=50 objs=2000\r\n{'bleu': 38.8222, 'n_obs': 2000, 'runtime': 234, 'seconds_per_sample': 0.117}\r\n### ru-en model4.pt num_beams=50 objs=2000\r\n{'bleu': 38.933, 'n_obs': 2000, 'runtime': 236, 'seconds_per_sample': 0.118}\r\n\r\n\r\n### en-ru model1.pt num_beams=50 objs=2000\r\n{'bleu': 31.2898, 'n_obs': 1997, 'runtime': 295, 'seconds_per_sample': 0.1477}\r\n### en-ru model2.pt num_beams=50 objs=2000\r\n{'bleu': 31.4669, 'n_obs': 1997, 'runtime': 293, 'seconds_per_sample': 0.1467}\r\n### en-ru model3.pt num_beams=50 objs=2000\r\n{'bleu': 33.4736, 'n_obs': 1997, 'runtime': 289, 'seconds_per_sample': 0.1447}\r\n### en-ru model4.pt num_beams=50 objs=2000\r\n{'bleu': 33.2873, 'n_obs': 1997, 'runtime': 296, 'seconds_per_sample': 0.1482}\r\n\r\n\r\n### en-de model1.pt num_beams=50 objs=2000\r\n{'bleu': 40.7906, 'n_obs': 1997, 'runtime': 304, 'seconds_per_sample': 0.1522}\r\n### en-de model2.pt num_beams=50 objs=2000\r\n{'bleu': 40.7677, 'n_obs': 1997, 'runtime': 307, 'seconds_per_sample': 0.1537}\r\n### en-de model3.pt num_beams=50 objs=2000\r\n{'bleu': 40.7677, 'n_obs': 1997, 'runtime': 306, 'seconds_per_sample': 0.1532}\r\n### en-de model4.pt num_beams=50 objs=2000\r\n{'bleu': 42.7892, 'n_obs': 1997, 'runtime': 305, 'seconds_per_sample': 0.1527}\r\n\r\n\r\n### de-en model1.pt num_beams=50 objs=2000\r\n{'bleu': 39.4096, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 0.119}\r\n### de-en model2.pt num_beams=50 objs=2000\r\n{'bleu': 39.3009, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 
0.119}\r\n### de-en model3.pt num_beams=50 objs=2000\r\n{'bleu': 38.9375, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 0.119}\r\n### de-en model4.pt num_beams=50 objs=2000\r\n{'bleu': 41.1808, 'n_obs': 2000, 'runtime': 237, 'seconds_per_sample': 0.1185}\r\n```\r\n\r\nSo the differences between checkpoints are quite significant, clearly the 4th checkpoint outperforms them all for each pair.", "Here is where we are at right now BLEU score-wise: (w/ `num_beams=50`) (after switching to using the 4th checkpoint file):\r\n\r\npair | fairseq | transformers\r\n-------|----|----------\r\n\"en-ru\"|36.4| 33.29\r\n\"ru-en\"|41.3| 38.93\r\n\"de-en\"|42.3| 41.18\r\n\"en-de\"|43.1| 42.79\r\n\r\nWe are very close with de/en/de, but 2-3 points below on ru/en/ru\r\n\r\nSo wrt/ model ensemble - as it was suggested transformers currently won't support that mechanism - so do we just stop here and release this ported version with a slight handicap? \r\n\r\nI'm going to work on the remaining divergence in beam search and may be score a little bit more. But I doubt we will get to the same level w/o ensemble.\r\n\r\np.s. I'm uploading new models, so in about 11 hours the CDN cache should update, if you want to validate these numbers.", "1) We can definitely merge this PR without ensemble, smart move checking each .pt file.\r\n2) Would be good to figure out how to eval 1 model.pt file with `fairseq-generate` so that we can figure out whether the discrepancy is from anything besides ensembling.\r\n\r\n", "> Would be good to figure out how to eval 1 model.pt file with `fairseq-generate` so that we can figure out whether the discrepancy is from anything besides ensembling.\r\n\r\nI agree. That would help a lot! But no word yet from @edunov: https://github.com/pytorch/fairseq/issues/2544\r\n\r\n", "wrt to shrinking it and reusing more from bart: probably it would take modifying bart to work with two vocabs and fall back on one, e.g. `if tgt_vocab is None: tgt_vocab_size = src_vocab_size`? That's the major reason for the \"fork\".\r\n\r\nThen we can definitely fold most of it back with a few extra flags.", "Here is an update on fairseq bleu scores validation. Got a [reply with great instructions](https://github.com/pytorch/fairseq/issues/2544#issuecomment-688054859) from @edunov and as a result I was able to get 35.7 with 4 models and 36.0 with model4 (en-ru pair). Sergei suggests that one more step is needed to re-rank the scores to reach the reported in the paper 36.4 score. Our best score at the moment is 33.29 for this pair. (other pairs are much closer to the goal than this one).\r\n\r\nSo now I know we are comparing apples to apples and I have more figuring out to do.", "@sshleifer, I added a new test `test_bleu_scores` that evals bleu score on a small batch, which I think is very useful for regression testing - as it's now built into the test suite. it's the same speed as other integration tests (model loading still takes much longer). 
Surely, it gives about 2/3rd of the best score, but it's enough to detect a regression in the model.\r\n\r\nI added caching so now it should be almost as fast to have many more integration tests.\r\n\r\nQuestion: currently I had to hack `sys.path` to get to the code in `examples/seq2seq`.\r\n\r\n```\r\n# XXX: make calculate_bleu accessible to integration tests?\r\nexamples_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"examples\"))\r\nsys.path.insert(0, examples_dir)\r\nfrom seq2seq.utils import calculate_bleu # noqa\r\n```\r\n\r\nIs it time we make this and similar functions available to our \"normal\" integration tests? Thoughts?\r\n\r\n**edit**: I ended up just copying the function as it's just 1 line.\r\n\r\nbut then of course CI doesn't have `sacrebleu` installed.\r\n\r\nWhere is the information on how/where slow tests are run - i.e. not on CI? I understand they are being run, just don't know where and how to see the status. Then need to add `sacrebleu` to the requirements file of that special CI.", "Tried to answer your CI Q: https://discuss.huggingface.co/t/circleci-github-actions-which-tests-run-where-and-when/1042\r\n\r\nIn terms of moving the scorers, you will encounter some resistance because of the need to add dependencies.\r\n\r\nYou could just put your test in `examples/seq2seq/test_fsmt_bleu_score.py`. \r\n\r\n`sacrebleu` is in `examples/requirements.txt` so the self-scheduled nightly CI will run it (hopefully).", "OK, the model has been decoupled from Bart and a lot of the unneeded code removed. \r\n\r\nPlease let me know if anything else is needed. Thank you.", "Finally, before merging this, let's discuss the model naming with thinking into the future.\r\n\r\nWhat we give user is not fairseq, but a wmtXX-based/trained model, so perhaps fairseq shouldn't be anywhere in the name. \r\n\r\nfairseq have done other wmt datasets in the past. And so very likely to do wmt20 and others. If such future versions of the model are not much different perhaps those too could be folded into this model, therefore it shouldn't be hardwired to wmt19.\r\n\r\nThey call their line of \"products\" related to wmt: `transformer.wmt\\d\\d.\\w\\w-\\w\\w` (`transformer.wmt19.en-ru`, `transformer.wmt20.de-en`, etc.\r\n\r\nPerhaps therefore `TWMT` is most fitting then as the base name? As in `TransformerWMT`, but shorter? So we end up with:\r\n\r\n* `TWMTTokenizer`\r\n* `TWMTForConditionalGeneration`\r\n\r\nThoughts?\r\n\r\np.s. for better context, the loading code for fairseq wmt is:\r\n\r\n```\r\nmodel = torch.hub.load('pytorch/fairseq', \"transformer.wmt19.en-ru\", \r\ncheckpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe')\r\n```", "- transformer is unhelpful -- all models in this lib are transformers.\r\n- `FairseqMTModel` works for me -- its analogous to `MarianMTModel`.\r\n\r\n@thomwolf: this is good from my side. You may have opinions on naming/nonstandard use of `DecoderConfig` \r\n", "> transformer is unhelpful -- all models in this lib are transformers.\r\n\r\nagreed!\r\n\r\n> FairseqMTModel works for me -- its analogous to MarianMTModel.\r\n\r\nToo big of a scope? this one is wmt-specific.\r\n\r\nPerhaps `FWMT`? 
as in FairseqWMT\r\n\r\nand lowercased `fairseqmt` where it's needed sucks readability-wise, full abbreviation `fmt` works much better - i guess that's why it's `modeling_marian.py` and not `modeling_marianmt.py`.\r\n\r\nOr just `WMT` from `wmtxx` series?", "> nonstandard use of `DecoderConfig`\r\n\r\nthis is a non-standard model with 2 vocabs of different sizes, that if I'm correct is the first one in the family, so it calls for a non-standard solution in lieu of changing the core functions to support such models. \r\n\r\nThere are at least 3 other hacks I had to add in the tokenizer and the model/config to fit into the current world of \"same size src/tgt vocab\". And there is at least one core function (resize) that will most likely break on this model, since it resizes to the same size, but we haven't had a need for it so far.\r\n\r\np.s. Oddly enough their en-de/de--en models are of the same size merged vocabs, but ru-en/en-ru are not.\r\n", "I wrote a script that translates with fairseq/model4 and this model based on model4 side by side and comparing the outputs.\r\nI fed it all of sacrebleu eval text, so out of 8000 sentences there were ~10 mismatches - the rest matches up perfectly on the top ranking beam choice (beams=5). Excellent!\r\n\r\nYet, we are still behind on the bleu scores. \r\n\r\nWe don't have (1) the model ensemble, and also (2) the re-ranking algorithm that they use, which is responsible for the extra points.", "Bringing some of the insights from porting allen nlp models at https://github.com/huggingface/transformers/issues/7049, I tried to re-run eval with `len_penalty=0.6` (until now we used the default `len_penalty=1.0`). \r\n\r\nAnd yes, for 3 out of 4 models we get a significant improvement.\r\n\r\n| pair | fairseq +rerank | fairseq -rerank | transformers |\r\n| ------- | --------- | --------- | ------------- |\r\n| \"ru-en\" | 41.3 | 38.55 | 38.14/39.05 |\r\n| \"en-ru\" | 36.4 | 31.26 | 32.76 |\r\n| \"en-de\" | 43.1 | 40.88 | 42.23 |\r\n| \"de-en\" | 42.3 | 39.38 | 40.71 |\r\n\r\nWe score higher on ru-en with `len_penalty=1.0` 38.8524, vs `38.13` with `len_penalty=0.6`. Rerunning with `len_penalty=1.1`, I get `39.0498` - almost a point higher!\r\n\r\nI'm not sure how to get the `len_penalty` used by fairseq - this data is not being shared, other than the paper alluding that they found the best fit by searching the space.\r\n\r\nI suppose we could search too, but how are we to know that finding the length penalty that leads to the highest bleu score on 2000 items is generic enough to lead to the best translation quality for any other input? \r\n\r\nWhat do you think?\r\n\r\nAnd we now beat fairseq's results on a single model with no re-ranking.", "Using the new `run_eval_search.py` script https://github.com/huggingface/transformers/pull/7109 I run an extensive search last night and got some extra score!\r\n\r\n```\r\nPAIR=en-de\r\n--search=\"num_beams=5:8:11:15 length_penalty=0.6:0.7:0.8:0.9:1.0:1.1 early_stopping=true:false\"\r\n```\r\nHere is just the top results.\r\n```\r\nbleu | num_beams | length_penalty | early_stopping\r\n----- | --------- | -------------- | --------------\r\n42.83 | 15 | 1.0 | 0\r\n42.79 | 8 | 1.0 | 0\r\n42.79 | 15 | 0.9 | 0\r\n42.79 | 15 | 1.1 | 1\r\n42.77 | 5 | 1.0 | 0\r\n42.76 | 8 | 0.8 | 0\r\n```\r\nI think I will run it for all others and use the best reasonable hparam set as the default. 
Here it'd be: `5 | 1 | False` - the user can of course override these during `generate`.", "> This is great, impressive work @stas00!\r\n\r\nThank you for the kind words, @thomwolf and doing the review!\r\n\r\n> Regarding naming I like both `FSMTModel` and `FairseqMTModel` with a preference for the later more explicit naming option.\r\n\r\nThe only issue I see with `FairseqMTModel` is that when we have to use the lowercased version of it in the code: `fairseqmt` it doesn't lend to readability `fsmt` on the other hand reads easily. \r\n\r\nAlso when typing it out often I couldn't remember whether to use FairSeq or Fairseq. This was my initial name, and it was quite painful working with it. Once I switched to FSMT I experienced much more flow.\r\n\r\nSo based on these 2 points my vote goes for `FSMTModel`", "Sounds like `FSMTModel` wins!", "> Sounds like `FSMTModel` wins!\r\n\r\nExcellent! So once @LysandreJik and @sgugger get a chance to review we can finally merge it!", "Hmm, since we removed the `fsmt-` prefix in model names, it is no longer possible to identify all models for this arch:\r\n\r\nhttps://huggingface.co/models?search=wmt\r\n\r\ngives models from other arch as well.\r\n\r\n@sshleifer - do you have any Ideas how to solve this? \r\n\r\nRestore the `fsmt-` prefix?", "Your yaml front matter allows filters!\r\nIs this page correct: https://huggingface.co/models?filter=fsmt ?" ]
1,599
1,600
1,600
CONTRIBUTOR
null
This PR implements the spec specified at https://github.com/huggingface/transformers/issues/5419 The new model is FSMT (aka FairSeqMachineTranslation): `FSMTForConditionalGeneration` which comes with 4 models: * "facebook/wmt19-ru-en" * "facebook/wmt19-en-ru" * "facebook/wmt19-de-en" * "facebook/wmt19-en-de" This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) which includes 3 languages and 4 pairs. For more details of the original, please see [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616). **Huge, huge thanks to @sshleifer, who has been incredibly supportive of this very difficult, yet, fun learning experience! Thank you, Sam!** **And many thanks to all those who wrote all the existing transformers code, so that I just needed to tweak a few things here and there, rather than write from scratch. And, last, but not least, to the fairseq developers, who have done the heavy lifting with the initial training and finetuning, and coding.** The tokenizer is a tweaked XLM tokenizer, the model is a tweaked Bart model. There were too many differences that I couldn't just subclass either of these 2, having 2 unmerged dictionaries of different sizes being the main cause. But there were quite a few other nuances, please see the porting notes in the code. There are a few more things to complete, in particular we currently don't have support for model ensemble, which is used by fairseq - they run eval on an ensemble of 4 model checkpoints. This implementation currently uses only the first checkpoint. And then more work on matching fairseq outputs is needed - no beam is perfect, and with beam search there are some small differences - I was encouraged to release the model and continue working on improving it. I'm still a few points behind on the BLEU score - most likely due to not having the ensemble, but since I am not able to reproduce fairseq reported scores, I'm not sure how to evaluate against a single model. See the [issue](https://github.com/pytorch/fairseq/issues/2544). I added the current and the expected scores in the model cards. If one of you has already started working on ensemble support please let me know. You will find 'Porting Notes' in `modeling_fsmt.py` and `tokenization_fsmt.py` with what has been done, nuances and what still needs to be done. The 4 models are up on s3 and can be used already. Usage: ```python from transformers.tokenization_fsmt import FSMTTokenizer from transformers.modeling_fsmt import FSMTForConditionalGeneration mname = "facebook/wmt19-en-ru" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) input = "Machine learning is great, isn't it?" input_ids = tokenizer.encode(input, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # Машинное обучение - это здорово, не так ли? ``` **edit**: we have 5 more wmt models en/de from https://github.com/jungokasai/deep-shallow/ ready to be added as well, once this is merged. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6940/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6940/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6940", "html_url": "https://github.com/huggingface/transformers/pull/6940", "diff_url": "https://github.com/huggingface/transformers/pull/6940.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6940.patch", "merged_at": 1600356690000 }
https://api.github.com/repos/huggingface/transformers/issues/6939
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6939/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6939/comments
https://api.github.com/repos/huggingface/transformers/issues/6939/events
https://github.com/huggingface/transformers/issues/6939
692,721,273
MDU6SXNzdWU2OTI3MjEyNzM=
6,939
PyTorch (with GPU) Trainer leaks CPU memory on Google Colab
{ "login": "rajaswa", "id": 34607601, "node_id": "MDQ6VXNlcjM0NjA3NjAx", "avatar_url": "https://avatars.githubusercontent.com/u/34607601?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajaswa", "html_url": "https://github.com/rajaswa", "followers_url": "https://api.github.com/users/rajaswa/followers", "following_url": "https://api.github.com/users/rajaswa/following{/other_user}", "gists_url": "https://api.github.com/users/rajaswa/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajaswa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajaswa/subscriptions", "organizations_url": "https://api.github.com/users/rajaswa/orgs", "repos_url": "https://api.github.com/users/rajaswa/repos", "events_url": "https://api.github.com/users/rajaswa/events{/privacy}", "received_events_url": "https://api.github.com/users/rajaswa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using `RobertaForMaskedLM`: ` config = RobertaConfig( vocab_size=32_000, max_position_embeddings=256+2, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) ` The problem arises when using: * [ ] [my own modified scripts](https://colab.research.google.com/drive/1bVv6V9IIhTNXvWbpUvQyPyRpK3LOgFfz?usp=sharing): I am trying to follow the official blog tutorial on training a language model from scratch. I have made a few changes from the official script (using the Marathi OSCAR corpus, changed model config and vocabulary size, and am fetching the sentences in the dataset on the fly). The tasks I am working on is: * [ ] my own task or dataset: Masked Language Modelling with RoBERTa on the Marathi OSCAR corpus ## To reproduce Steps to reproduce the behavior: 1. Run this [colab notebook](https://colab.research.google.com/drive/1bVv6V9IIhTNXvWbpUvQyPyRpK3LOgFfz?usp=sharing). ## Expected behavior The RAM consumption starts rising towards the end of the first epoch, ultimately crashing the entire session due to full memory consumption (12.72GB of Colab RAM).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6939/timeline
completed
null
null
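One way to confirm the CPU memory leak reported above is to sample the process RSS on a side thread while training runs; a sketch, assuming `psutil` is installed (the interval is arbitrary):

```python
import threading
import time

import psutil

def log_rss(interval_s: float = 30.0) -> None:
    # Print this process's resident set size periodically; a leak shows up
    # as monotonic growth across epochs.
    proc = psutil.Process()
    while True:
        rss_gb = proc.memory_info().rss / 1024 ** 3
        print(f"CPU RSS: {rss_gb:.2f} GB")
        time.sleep(interval_s)

threading.Thread(target=log_rss, daemon=True).start()
# ... then run trainer.train() as usual in the main thread.
```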
https://api.github.com/repos/huggingface/transformers/issues/6938
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6938/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6938/comments
https://api.github.com/repos/huggingface/transformers/issues/6938/events
https://github.com/huggingface/transformers/issues/6938
692,656,786
MDU6SXNzdWU2OTI2NTY3ODY=
6,938
The download URL of the GermEval 2014 dataset is outdated.
{ "login": "YuanEric88", "id": 32417149, "node_id": "MDQ6VXNlcjMyNDE3MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/32417149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YuanEric88", "html_url": "https://github.com/YuanEric88", "followers_url": "https://api.github.com/users/YuanEric88/followers", "following_url": "https://api.github.com/users/YuanEric88/following{/other_user}", "gists_url": "https://api.github.com/users/YuanEric88/gists{/gist_id}", "starred_url": "https://api.github.com/users/YuanEric88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YuanEric88/subscriptions", "organizations_url": "https://api.github.com/users/YuanEric88/orgs", "repos_url": "https://api.github.com/users/YuanEric88/repos", "events_url": "https://api.github.com/users/YuanEric88/events{/privacy}", "received_events_url": "https://api.github.com/users/YuanEric88/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information The download URL for the GermEval 2014 dataset is outdated in the README file https://github.com/huggingface/transformers/tree/master/examples/token-classification The URLs should be replaced by those in run.sh Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6938/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6937
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6937/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6937/comments
https://api.github.com/repos/huggingface/transformers/issues/6937/events
https://github.com/huggingface/transformers/issues/6937
692,650,879
MDU6SXNzdWU2OTI2NTA4Nzk=
6,937
Finetune other models for sentence-classification
{ "login": "cmdllx", "id": 50104519, "node_id": "MDQ6VXNlcjUwMTA0NTE5", "avatar_url": "https://avatars.githubusercontent.com/u/50104519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cmdllx", "html_url": "https://github.com/cmdllx", "followers_url": "https://api.github.com/users/cmdllx/followers", "following_url": "https://api.github.com/users/cmdllx/following{/other_user}", "gists_url": "https://api.github.com/users/cmdllx/gists{/gist_id}", "starred_url": "https://api.github.com/users/cmdllx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmdllx/subscriptions", "organizations_url": "https://api.github.com/users/cmdllx/orgs", "repos_url": "https://api.github.com/users/cmdllx/repos", "events_url": "https://api.github.com/users/cmdllx/events{/privacy}", "received_events_url": "https://api.github.com/users/cmdllx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, there is a `LongformerForSequenceClassification` so you should be able to use `run_glue.py` with that model.\r\n\r\nI've removed the misleading dosctring in https://github.com/huggingface/transformers/commit/1650130b0fd71eb80380c47c8ffed89d49ff3481.", "thank you.I can finetune all models for downstream tasks", "sorry to interrupt you, I have a question for new version of transformers.\r\nI once used the old version, when I run the python \"run_glue.py\", I found the time for loading different models to GPU is defferent(the code is model.to_device()&nbsp; &nbsp;),the more layer the model has, the more time it takes to load the model to GPU\r\nHowever, when I use the new version,when I run the python \"run_glue.py\", I found the time for loading different models to GPU is nearly the same . (I think the new script load the model to GPU when init the Trainer,&nbsp;)\r\ncan you explain the reason&nbsp;&nbsp;\r\n\r\n\r\ncczhou\r\[email protected]\r\n\r\n\r\n\r\n&nbsp;\r\n\r\n\r\n\r\n\r\n------------------&nbsp;原始邮件&nbsp;------------------\r\n发件人: \"huggingface/transformers\" <[email protected]&gt;;\r\n发送时间:&nbsp;2020年9月7日(星期一) 晚上8:17\r\n收件人:&nbsp;\"huggingface/transformers\"<[email protected]&gt;;\r\n抄送:&nbsp;\"沉默\"<[email protected]&gt;;\"Author\"<[email protected]&gt;;\r\n主题:&nbsp;Re: [huggingface/transformers] Finetune other models for sentence-classification (#6937)\r\n\r\n\r\n\r\n\r\n\r\n \r\nClosed #6937.\r\n \r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub, or unsubscribe." ]
1,599
1,599
1,599
NONE
null
# ❓ Questions & Help I want to use run_glue.py to finetune pre-trained models for classification,but I find that the script can only be used for BERT, XLM, XLNet and RoBERTa.I want to finetune other models like longformer ,what should I do?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6937/timeline
completed
null
null
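A sketch of the answer given above: `LongformerForSequenceClassification` can be loaded directly, the same class `run_glue.py` can now drive (the checkpoint name here is the public AllenAI one):

```python
from transformers import LongformerForSequenceClassification, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs)
logits = outputs[0]  # shape: (1, num_labels); tuple output on 3.x releases
```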
https://api.github.com/repos/huggingface/transformers/issues/6936
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6936/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6936/comments
https://api.github.com/repos/huggingface/transformers/issues/6936/events
https://github.com/huggingface/transformers/issues/6936
692,647,183
MDU6SXNzdWU2OTI2NDcxODM=
6,936
Load BERT+GPT2 in EncoderDecoder
{ "login": "AmbiTyga", "id": 39136064, "node_id": "MDQ6VXNlcjM5MTM2MDY0", "avatar_url": "https://avatars.githubusercontent.com/u/39136064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmbiTyga", "html_url": "https://github.com/AmbiTyga", "followers_url": "https://api.github.com/users/AmbiTyga/followers", "following_url": "https://api.github.com/users/AmbiTyga/following{/other_user}", "gists_url": "https://api.github.com/users/AmbiTyga/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmbiTyga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmbiTyga/subscriptions", "organizations_url": "https://api.github.com/users/AmbiTyga/orgs", "repos_url": "https://api.github.com/users/AmbiTyga/repos", "events_url": "https://api.github.com/users/AmbiTyga/events{/privacy}", "received_events_url": "https://api.github.com/users/AmbiTyga/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @AmbiTyga ,\r\nBert2GPT2 is available in the latest release, however `predict_from_generate` from generate is not yet added in `Trainer`. \r\nYou can set `predict_from_generate` to `False` and `comput_metrics` to `None` if you done't need generative metrics (ROUGE etc) at training time. \r\n\r\nIf you want to use `predict_from_generate` from generate then you'll need to install transformers from this fork.\r\nhttps://github.com/huggingface/transformers/tree/more_general_trainer_metric", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details I am working on modelling an EncoderDecoderModel using weights of BERT and GPT2, after going through lots of repo and issues I found that currently its not possible, but I found a model card that has used this model of BERT+GPT2 on dataset cnn-dailymail [here](https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16). I would like to know in which version of transformers is that possible and one thing more that there was an attribute passed in `TrainingArguments` module, that was `predict_from_generate`, I can't find that in `transformers`: 3.1.0, 3.0.2, 2.11.0, please clear me in which version does these parameters constitute. @patrickvonplaten Please answer my query <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6936/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6935
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6935/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6935/comments
https://api.github.com/repos/huggingface/transformers/issues/6935/events
https://github.com/huggingface/transformers/pull/6935
692,595,668
MDExOlB1bGxSZXF1ZXN0NDc5MTgwMTIz
6,935
Replaced torch.load for loading the pretrained vocab of TransformerXL tokenizer to pickle.load
{ "login": "w4nderlust", "id": 349256, "node_id": "MDQ6VXNlcjM0OTI1Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/349256?v=4", "gravatar_id": "", "url": "https://api.github.com/users/w4nderlust", "html_url": "https://github.com/w4nderlust", "followers_url": "https://api.github.com/users/w4nderlust/followers", "following_url": "https://api.github.com/users/w4nderlust/following{/other_user}", "gists_url": "https://api.github.com/users/w4nderlust/gists{/gist_id}", "starred_url": "https://api.github.com/users/w4nderlust/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/w4nderlust/subscriptions", "organizations_url": "https://api.github.com/users/w4nderlust/orgs", "repos_url": "https://api.github.com/users/w4nderlust/repos", "events_url": "https://api.github.com/users/w4nderlust/events{/privacy}", "received_events_url": "https://api.github.com/users/w4nderlust/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @w4nderlust that's a good idea!\r\n\r\nThe CI seems not happy with the change though and it seems related to your changes.\r\n\r\nDo you think you could take a look?", "@thomwolf trying to see the details of the failing tests, but circleci wat me to login with github and grant access to all my repos and orgs, I prefer to avoid it.\r\n\r\nIf you can point out the failing tests I'm happy to take a look at it.", "@thomwolf I inspected further and this is what i discovered:\r\n1. i was running the `modeling_transfo_xl.py` tests, but I should have been running the `tokenization_transfo_xl.py` test. I Imagine the errors in CI are coming from there.\r\n2. upon further inspection, I noticed that inside `tokenization_transfo_xl.py` torch is used everywhere. There probably was a design decision that led to that which I'm not aware of, but as a user i would ask you to reconsider, because if the tokenizer uses torch, the TF version of TransformerXL can neve be used without installing torch. The extent to which torch is used goes beyond my current familiarity with the library, so i will refrain to propose modifications to it, a part from what i propose in the next point.\r\n3. in my commit I replaced the loading of the vocab, but upon inspection i realized that, yes, torch uses pickle, but it does that in a way that is peculiar, including magic numbers and protocol versions and some custom logic that will take some time to reverse engineer ( https://github.com/pytorch/pytorch/blob/0c01f136f3c8d16f221d641befcb5a74142bbeb1/torch/serialization.py#L764-L774 ). It doesn't seem you can directly load the vocab dictionary without re-implementing quite some load code from torch, plus this doesn't sound like a sound approach because PyTorch can itself start using a new load mechanism in the future. So, what I tried to do is to replace ALSO `torch.save` usages within the context of vocabulary save with `pickle.dump`.( line 260-262) in my last commit). The effect is that now all `test_tokenize_transfo_cl.py` tests pass, the vocab can be saved and loaded, but, because the vocab that ships with the pretrained models was saved originally with torch, If it try to load from pretrained model, loading doesn't work (what is loaded is just the torch magic number). So here I guess you have to make a call about what you want to do: if you want to use pickle to load and save vocab, this PR does it for you, but you have to change the TransformerXL pretrained model that you ship by replacing the vocab file saved with PyTorch with one saved with pickle (the code to do it from the current vocab file is straightforward `pickle.dump(torch.load(vocab_file), vocab_file)`).\r\n\r\nAs I realized the issue is bigger than I originally thought, it would be great if someone could look at it in more detail from the HF side.", "Hi @w4nderlust ok, I'm reaching this PR now.\r\n\r\nSo the original tokenizer for Transformer-XL was copied from the original research work to be able to import the trained checkpoints. 
The reliance on PyTorch is thus not really a design decision of us but more of the original author.\r\n\r\nWe can definitely reconsider it and if you don't mind, I'll try to build upon your PR to relax this reliance on PyTorch while keeping backward compatibility if possible.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=h1) Report\n> Merging [#6935](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aba4e22944f0c985bebdcde51d47a565dd4f551d?el=desc) will **increase** coverage by `1.96%`.\n> The diff coverage is `79.16%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6935/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6935 +/- ##\n==========================================\n+ Coverage 74.71% 76.67% +1.96% \n==========================================\n Files 194 181 -13 \n Lines 39407 35738 -3669 \n==========================================\n- Hits 29441 27401 -2040 \n+ Misses 9966 8337 -1629 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.09% <60.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <84.21%> (+0.74%)` | :arrow_up: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.75% <0.00%> (-66.38%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.63% <0.00%> (-20.14%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <0.00%> (-15.11%)` | :arrow_down: |\n| [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `29.00% <0.00%> (-5.66%)` | 
:arrow_down: |\n| ... and [71 more](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=footer). Last update [aba4e22...cd57922](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thank you for the work on this! Much appreciated! :)" ]
1,599
1,602
1,602
CONTRIBUTOR
null
The TransformerXL tokenizer requires torch to work because it uses torch.load to load the vocabulary. This means that if I'm using the TF2 implementation, I have to add torch as a dependency just for that. So I replaced the call with a call to pickle.load (which is what torch.load uses internally) to solve the issue. Tested on all the TransformerXL-related tests (also the slow ones) and they all passed.
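As a rough illustration of the substitution this PR describes, the sketch below loads a vocabulary with plain pickle instead of torch.load; the file name is a placeholder, and as the comments discuss, a vocab originally serialized with torch.save would first need to be re-saved with pickle:

```python
import pickle

# Placeholder path for illustration; a real TransformerXL vocab file saved by
# torch.save would first need re-saving with pickle for this to work.
vocab_file = "vocab.bin"

with open(vocab_file, "rb") as f:
    vocab_dict = pickle.load(f)  # stands in for torch.load(vocab_file)
```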
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6935/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6935", "html_url": "https://github.com/huggingface/transformers/pull/6935", "diff_url": "https://github.com/huggingface/transformers/pull/6935.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6935.patch", "merged_at": 1602144971000 }
https://api.github.com/repos/huggingface/transformers/issues/6934
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6934/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6934/comments
https://api.github.com/repos/huggingface/transformers/issues/6934/events
https://github.com/huggingface/transformers/issues/6934
692,551,411
MDU6SXNzdWU2OTI1NTE0MTE=
6,934
non-interactive transformers-cli upload?
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "FWIW, I found a workaround (god bless Stackoverflow):\r\n\r\n```\r\ncd data\r\nyes Y | transformers-cli upload fsmt-wmt19-ru-en\r\nyes Y | transformers-cli upload fsmt-wmt19-en-ru\r\nyes Y | transformers-cli upload fsmt-wmt19-de-en\r\nyes Y | transformers-cli upload fsmt-wmt19-en-de\r\ncd -\r\n```", "Ah, nice find :)\r\n\r\nI think a `-y` flag would be reasonable if you want to open a PR", "Will do. Thank you.", "Done: https://github.com/huggingface/transformers/pull/7035" ]
1,599
1,599
1,599
CONTRIBUTOR
null
# 🚀 Feature request Currently, `transformers-cli upload` works only interactively due to its prompt: `Proceed? [Y/n]` After running the updated model conversion, I would like to be able to do: ``` cd data transformers-cli upload fsmt-wmt19-ru-en transformers-cli upload fsmt-wmt19-en-ru transformers-cli upload fsmt-wmt19-de-en transformers-cli upload fsmt-wmt19-en-de cd - ``` But this won't work: Would it be possible to add a `-y` override? Alternatively, would it be possible to give it all dirs to upload in one command? ``` transformers-cli upload fsmt-wmt19-ru-en fsmt-wmt19-en-ru fsmt-wmt19-de-en fsmt-wmt19-en-de ``` ## Motivation I have been re-uploading 4 x 1.1GB models on a relatively slow connection, and I have to be around to hit Y for each one of them, which is very counter-productive, as I have to go back and re-check whether each upload has been completed. I can probably code some shell expect script to feed it automatically, but this defeats the purpose. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6934/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6933
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6933/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6933/comments
https://api.github.com/repos/huggingface/transformers/issues/6933/events
https://github.com/huggingface/transformers/pull/6933
692,433,844
MDExOlB1bGxSZXF1ZXN0NDc5MDMxNzE2
6,933
[docstring] missing arg
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=h1) Report\n> Merging [#6933](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `1.66%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6933/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6933 +/- ##\n==========================================\n+ Coverage 77.70% 79.36% +1.66% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 23403 23905 +502 \n+ Misses 6716 6214 -502 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (+78.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=footer). Last update [e95d262...ca0f022](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Perhaps this is not the place to ask, but why do we use rst in some docs and md in others? I have yet to use rst, so I don't know what the pros/cons are. Perhaps it has to do with sphinx's preferred format for its linking features? ", "We need rst in the docstrings because that's the format sphinx uses. Then we need to use rst in the doc files that want to link to some functions/classes to be able to leverage sphinx autolinking features. Markdown is also supported, but you can't automatically link to a class/function in it, so I prefer using rst. \r\n\r\nIn the source docs most of the files are in rst apart from some symlinks to some READMEs (that need to be in Markdown to properly display on GitHub), the CONTRIBUTING and one file about migration (this one could be converted to rst if we really wanted to). For a new file, I'd strongly encourage rst unless there is a reason to use Markdown.", "Excellent. I didn't know any of this. Will be adding .rst for new files in the future (though can't help but notice that markdown seems way easier/more intuitive to write)." ]
1,599
1,599
1,599
CONTRIBUTOR
null
add the missing `tie_word_embeddings` entry
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6933/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6933", "html_url": "https://github.com/huggingface/transformers/pull/6933", "diff_url": "https://github.com/huggingface/transformers/pull/6933.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6933.patch", "merged_at": 1599471377000 }
https://api.github.com/repos/huggingface/transformers/issues/6932
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6932/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6932/comments
https://api.github.com/repos/huggingface/transformers/issues/6932/events
https://github.com/huggingface/transformers/pull/6932
692,427,029
MDExOlB1bGxSZXF1ZXN0NDc5MDI1MzU3
6,932
[docstring] misc arg doc corrections
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=h1) Report\n> Merging [#6932](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `1.83%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6932/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6932 +/- ##\n==========================================\n+ Coverage 77.70% 79.53% +1.83% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 23403 23956 +553 \n+ Misses 6716 6163 -553 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.55% <0.00%> (-20.48%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-6.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=footer). Last update [e95d262...1c08fdb](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks. While on these fixes, don't hesitate to replace the True or False with :obj:\`True\` and :obj:\`False\` for consistency (also don't hesitate to tag me on doc PRs for quicker reviews :-) )", "Understood!\r\n\r\nDo we have a model of perhaps one largish module that we can use as a reference for how the rest should be done? So that you polish the hell out of it, and then this will be the model to follow.\r\n", "`tokenization_utils_base` is a good example for instance, all other utils modules too. `config_utils` has an example of how to split parameters in several subgroups if you ever need a model of that. Rules are the usual sphinx ones, and some more personal nits are:\r\n- not writing \"defaults to :obj:\`None\`\" for optional things that have a default (it's implied)\r\n- using :obj:\`foo\` syntax for objects (like False, True, all strings) or mentions of other arguments,\r\n but not numbers (like 0, 1.0...)\r\n- using italics for optional", "Great tips on the model docs and the small specifics. I see some are already here:\r\nhttps://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification\r\nadd the others too?\r\n\r\nLoving the params subgroup docs - it's very helpful. I'd organize the params in the function in the same groups too.\r\n\r\nThank you for sharing all these, @sgugger!", "Yes we could add those general rules to that section of the docs README. (I am unsure people actually read that so did not take the time to properly update it :-) )", "I didn't know it was there, but now that I do, I'd definitely peruse it - so yes, please update it!", "> * not writing \"defaults to :obj:\`None\`\" for optional things that have a default (it's implied)\r\n```\r\ngrep -r \"defaults to :obj:.None.\" src | wc -l\r\n```\r\n```\r\n580\r\n```\r\nmight be easy to replace in one swoop.\r\n\r\n", "Feel free to do it in one PR :-)", "Done: https://github.com/huggingface/transformers/pull/6956" ]
1,599
1,599
1,599
CONTRIBUTOR
null
- fix docstring s/int/bool/ - correct arg description - fix num_labels to match reality
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6932/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6932", "html_url": "https://github.com/huggingface/transformers/pull/6932", "diff_url": "https://github.com/huggingface/transformers/pull/6932.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6932.patch", "merged_at": 1599228583000 }
https://api.github.com/repos/huggingface/transformers/issues/6931
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6931/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6931/comments
https://api.github.com/repos/huggingface/transformers/issues/6931/events
https://github.com/huggingface/transformers/pull/6931
692,415,359
MDExOlB1bGxSZXF1ZXN0NDc5MDE0NzMw
6,931
remove arg that is not being used
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=h1) Report\n> Merging [#6931](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6931/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6931 +/- ##\n==========================================\n+ Coverage 77.70% 78.12% +0.42% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 23403 23530 +127 \n+ Misses 6716 6589 -127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <0.00%> (+20.74%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=footer). Last update [e95d262...716864a](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think the optimal solution here is the deletion and then hardcode\r\n`self.extra_pos_embeddings = 2` lower in the file.\r\n\r\nOtherwise LGTM.", "But it's there already in a different form: https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_bart.py#L197\r\n```\r\n self.extra_pos_embeddings = self.pad_token_id + 1\r\n```\r\njust to validate, you're suggesting to replace ` = self.pad_token_id + 1` with `= 2`, yes?", "@stas00 yes! you can even delete the config attribute. and replace it with 2 in `modeling_bart.py` code. \r\n\r\ngithub wont let me suggest cause too low in file :)", "I didnt know blenderbot was active. We may need this.\r\nDon't merge yet pls.", "I think I will need this for blenderbot, otherwise I'll reopen." ]
1,599
1,603
1,600
CONTRIBUTOR
null
`extra_pos_embeddings` is passed but not being used anywhere, so deleting it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6931/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6931", "html_url": "https://github.com/huggingface/transformers/pull/6931", "diff_url": "https://github.com/huggingface/transformers/pull/6931.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6931.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6930
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6930/comments
https://api.github.com/repos/huggingface/transformers/issues/6930/events
https://github.com/huggingface/transformers/pull/6930
692,346,105
MDExOlB1bGxSZXF1ZXN0NDc4OTUwODg3
6,930
Trainer with grad accum
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=h1) Report\n> Merging [#6930](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `3.53%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6930/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6930 +/- ##\n==========================================\n- Coverage 80.60% 77.07% -3.54% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n- Hits 24278 23214 -1064 \n- Misses 5841 6905 +1064 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.66% <ø> (ø)` | |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <ø> (ø)` | |\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-67.27%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=footer). Last update [207ed8c...68c12f3](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
COLLABORATOR
null
As mentioned on the forum, the behavior of `Trainer` can be confusing when using gradient accumulation as the count of steps becomes the count of update steps, not the count of training examples seen. This PR adds a warning in the doc.
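A quick illustration of the behavior this warning documents; the numbers below are made up and only show how update steps and consumed batches diverge under gradient accumulation:

```python
from transformers import TrainingArguments

# Illustrative values only: with gradient_accumulation_steps=4, each counted
# "step" is one optimizer update, so 100 steps consume 400 training batches.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    max_steps=100,
)
```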
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6930/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6930", "html_url": "https://github.com/huggingface/transformers/pull/6930", "diff_url": "https://github.com/huggingface/transformers/pull/6930.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6930.patch", "merged_at": 1599468840000 }
https://api.github.com/repos/huggingface/transformers/issues/6929
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6929/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6929/comments
https://api.github.com/repos/huggingface/transformers/issues/6929/events
https://github.com/huggingface/transformers/pull/6929
692,340,186
MDExOlB1bGxSZXF1ZXN0NDc4OTQ1Mzk3
6,929
replace torch.triu with onnx compatible code
{ "login": "HenryDashwood", "id": 17177967, "node_id": "MDQ6VXNlcjE3MTc3OTY3", "avatar_url": "https://avatars.githubusercontent.com/u/17177967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HenryDashwood", "html_url": "https://github.com/HenryDashwood", "followers_url": "https://api.github.com/users/HenryDashwood/followers", "following_url": "https://api.github.com/users/HenryDashwood/following{/other_user}", "gists_url": "https://api.github.com/users/HenryDashwood/gists{/gist_id}", "starred_url": "https://api.github.com/users/HenryDashwood/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HenryDashwood/subscriptions", "organizations_url": "https://api.github.com/users/HenryDashwood/orgs", "repos_url": "https://api.github.com/users/HenryDashwood/repos", "events_url": "https://api.github.com/users/HenryDashwood/events{/privacy}", "received_events_url": "https://api.github.com/users/HenryDashwood/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=h1) Report\n> Merging [#6929](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `0.58%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6929/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6929 +/- ##\n==========================================\n- Coverage 80.60% 80.02% -0.59% \n==========================================\n Files 161 161 \n Lines 30119 30122 +3 \n==========================================\n- Hits 24278 24105 -173 \n- Misses 5841 6017 +176 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (+5.26%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+5.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `97.08% <0.00%> (+19.34%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=footer). Last update [207ed8c...82d9234](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is great, want to check in your tests?", "I'm not quite sure what the most appropriate way of including them would be, to be honest!\r\n\r\nCurrently I have a little script that looks like this \r\n```python\r\nimport torch\r\nfrom transformers.modeling_bart import fill_with_neg_inf\r\n\r\n\r\ndef test_upper_right_triangle(torch_device):\r\n    tgt_len = 512\r\n    causal_mask_dtype = torch.float32\r\n\r\n    causal_mask_short = torch.triu(\r\n        fill_with_neg_inf(torch.zeros(tgt_len, tgt_len)),\r\n        1).to(dtype=torch.float32, device=torch_device)\r\n\r\n    tmp = fill_with_neg_inf(torch.zeros(tgt_len, tgt_len))\r\n    mask = torch.arange(tmp.size(-1))\r\n    tmp.masked_fill_(mask < (mask + 1).view(tmp.size(-1), 1), 0)\r\n    causal_mask_long = tmp.to(dtype=causal_mask_dtype, device=torch_device)\r\n\r\n    assert torch.all(torch.eq(causal_mask_short, causal_mask_long))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    test_upper_right_triangle('cpu')\r\n```\r\nas well as the fact that when I run the ```convert``` function, the output gives the same predictions at the same speed.\r\n\r\nSince this would be testing one version of the code against another possible version, as opposed to some external ground truth or expected value, it feels a bit self-referential?", "you're right, LGTM @LysandreJik !" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #5075 There was a [draft pull request](https://github.com/huggingface/transformers/pull/6334) to this effect a few months ago, but the author withdrew it; I'm not sure why. This PR resolves the _torch.triu_ issue with ONNX export. It gives the same output in my tests and runs at the same speed. Entirely possible that I've missed something, though!
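For readers skimming the thread, here is a standalone sketch of the ONNX-friendly causal-mask construction discussed in this PR, adapted from the test script in the comments; the small size is chosen purely for illustration:

```python
import torch

# Build the causal attention mask without torch.triu, which ONNX export
# could not handle; tgt_len = 4 is arbitrary, chosen only for illustration.
tgt_len = 4
mask = torch.full((tgt_len, tgt_len), float("-inf"))
idx = torch.arange(tgt_len)
# Zero out positions where column <= row, leaving -inf strictly above the
# diagonal: the same result as torch.triu(all_neg_inf_matrix, 1).
mask.masked_fill_(idx < (idx + 1).view(tgt_len, 1), 0.0)
```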
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6929/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6929/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6929", "html_url": "https://github.com/huggingface/transformers/pull/6929", "diff_url": "https://github.com/huggingface/transformers/pull/6929.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6929.patch", "merged_at": 1599641800000 }
https://api.github.com/repos/huggingface/transformers/issues/6928
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6928/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6928/comments
https://api.github.com/repos/huggingface/transformers/issues/6928/events
https://github.com/huggingface/transformers/issues/6928
692,204,494
MDU6SXNzdWU2OTIyMDQ0OTQ=
6,928
onnx-export example notebook is failing for TF
{ "login": "Zhen-hao", "id": 10957195, "node_id": "MDQ6VXNlcjEwOTU3MTk1", "avatar_url": "https://avatars.githubusercontent.com/u/10957195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zhen-hao", "html_url": "https://github.com/Zhen-hao", "followers_url": "https://api.github.com/users/Zhen-hao/followers", "following_url": "https://api.github.com/users/Zhen-hao/following{/other_user}", "gists_url": "https://api.github.com/users/Zhen-hao/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zhen-hao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zhen-hao/subscriptions", "organizations_url": "https://api.github.com/users/Zhen-hao/orgs", "repos_url": "https://api.github.com/users/Zhen-hao/repos", "events_url": "https://api.github.com/users/Zhen-hao/events{/privacy}", "received_events_url": "https://api.github.com/users/Zhen-hao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The following should work:\r\n\r\n```\r\nfrom pathlib import Path\r\nfrom transformers.convert_graph_to_onnx import convert\r\n\r\n# Tensorflow \r\nconvert(framework=\"tf\", model=\"bert-base-cased\", output=Path(\"onnx/bert-base-cased.onnx\"), opset=11)\r\n```", "@subho406 thanks! I thought the error was about the model output since using string in output path had worked in the previous version." ]
1,599
1,599
1,599
NONE
null
hi, I'm using the latest 3.1.0 release. When I run ``` from transformers.convert_graph_to_onnx import convert # Tensorflow convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11) ``` as shown in https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb, the following error occurs ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-1-bc5982e91176> in <module> 7 8 # Tensorflow ----> 9 convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11) /nix/store/w8xw8jng4dfjcqijfjw1sps8pim669kj-python3.7-transformers-3.1.0/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name) 335 nlp = load_graph_from_args(pipeline_name, framework, model, tokenizer) 336 --> 337 if not output.parent.exists(): 338 print(f"Creating folder {output.parent}") 339 makedirs(output.parent.as_posix()) AttributeError: 'str' object has no attribute 'parent' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6928/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6927
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6927/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6927/comments
https://api.github.com/repos/huggingface/transformers/issues/6927/events
https://github.com/huggingface/transformers/pull/6927
692,136,417
MDExOlB1bGxSZXF1ZXN0NDc4NzY4NzIw
6,927
[s2s] support early stopping based on loss, rather than rouge
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=h1) Report\n> Merging [#6927](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `3.98%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6927/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6927 +/- ##\n==========================================\n- Coverage 80.60% 76.61% -3.99% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n- Hits 24278 23077 -1201 \n- Misses 5841 7042 +1201 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `26.47% <0.00%> (-70.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `23.49% <0.00%> (-65.97%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.65% <0.00%> (-2.18%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=footer). Last update [207ed8c...b1d4604](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6927/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6927", "html_url": "https://github.com/huggingface/transformers/pull/6927", "diff_url": "https://github.com/huggingface/transformers/pull/6927.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6927.patch", "merged_at": 1599168696000 }
https://api.github.com/repos/huggingface/transformers/issues/6926
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6926/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6926/comments
https://api.github.com/repos/huggingface/transformers/issues/6926/events
https://github.com/huggingface/transformers/pull/6926
692,107,949
MDExOlB1bGxSZXF1ZXN0NDc4NzQ1MTA2
6,926
[s2s] use --eval_beams command line arg
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=h1) Report\n> Merging [#6926](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.32%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6926/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6926 +/- ##\n==========================================\n+ Coverage 80.23% 80.56% +0.32% \n==========================================\n Files 161 161 \n Lines 30119 30119 \n==========================================\n+ Hits 24167 24265 +98 \n+ Misses 5952 5854 -98 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.24% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+0.67%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+1.00%)` | :arrow_up: |\n| ... 
and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=footer). Last update [0f360d3...90aec7a](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6926/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6926", "html_url": "https://github.com/huggingface/transformers/pull/6926", "diff_url": "https://github.com/huggingface/transformers/pull/6926.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6926.patch", "merged_at": 1599151330000 }
https://api.github.com/repos/huggingface/transformers/issues/6925
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6925/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6925/comments
https://api.github.com/repos/huggingface/transformers/issues/6925/events
https://github.com/huggingface/transformers/issues/6925
692,043,725
MDU6SXNzdWU2OTIwNDM3MjU=
6,925
Reopen: Unable to use run_squad with xla_spawn.py on TPU
{ "login": "christian-janiake-movile", "id": 1670865, "node_id": "MDQ6VXNlcjE2NzA4NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1670865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/christian-janiake-movile", "html_url": "https://github.com/christian-janiake-movile", "followers_url": "https://api.github.com/users/christian-janiake-movile/followers", "following_url": "https://api.github.com/users/christian-janiake-movile/following{/other_user}", "gists_url": "https://api.github.com/users/christian-janiake-movile/gists{/gist_id}", "starred_url": "https://api.github.com/users/christian-janiake-movile/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/christian-janiake-movile/subscriptions", "organizations_url": "https://api.github.com/users/christian-janiake-movile/orgs", "repos_url": "https://api.github.com/users/christian-janiake-movile/repos", "events_url": "https://api.github.com/users/christian-janiake-movile/events{/privacy}", "received_events_url": "https://api.github.com/users/christian-janiake-movile/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "hi @christian-janiake-movile ,\r\n`run_squad` won't work with `xla_spawn` since it doesn't use `Trainer`. You can use `run_squad_trainer.py` with `xla_spawn.py` if you want to fine-tune on TPU", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0a0+ab76067 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script? TPU - Using distributed or parallel set-up in script?: YES ### Who can help @LysandreJik ## Information I see there is an issue (#5470) closed in July because the SQuAD example didn't have trainer support yet, but it seems that it now does, according to the table (https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks) Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [X] the official example scripts: (give details below) the official example scripts: run_squad.py + xla_spawn.py * [ ] my own modified scripts: (give details below) The task I am working on is: * [X] an official GLUE/SQuAD task: (give the name) SQuAD v2.0 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Install pytorch-xla on Colab using: VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION Then try to run run_squad.py on Colab TPUs using xla_spawn.py: python examples/xla_spawn.py --num_cores 8 \ examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path google/electra-base-discriminator \ --do_train \ --do_eval \ --do_lower_case \ --train_file "/content/drive/My Drive/bert/train.json" \ --predict_file "/content/drive/My Drive/bert/val.json" \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir "/content/drive/My Drive/bert/newdir6" The following error is thrown: Traceback (most recent call last): File "examples/xla_spawn.py", line 72, in <module> main() File "examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) AttributeError: module 'run_squad' has no attribute '_mp_fn' ## Expected behavior Training should run properly using xla_spawn.py.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6925/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6924
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6924/comments
https://api.github.com/repos/huggingface/transformers/issues/6924/events
https://github.com/huggingface/transformers/issues/6924
692,041,151
MDU6SXNzdWU2OTIwNDExNTE=
6,924
AttributeError: 'list' object has no attribute 'clone' with BartTokenizer
{ "login": "spate141", "id": 10580847, "node_id": "MDQ6VXNlcjEwNTgwODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10580847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spate141", "html_url": "https://github.com/spate141", "followers_url": "https://api.github.com/users/spate141/followers", "following_url": "https://api.github.com/users/spate141/following{/other_user}", "gists_url": "https://api.github.com/users/spate141/gists{/gist_id}", "starred_url": "https://api.github.com/users/spate141/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spate141/subscriptions", "organizations_url": "https://api.github.com/users/spate141/orgs", "repos_url": "https://api.github.com/users/spate141/repos", "events_url": "https://api.github.com/users/spate141/events{/privacy}", "received_events_url": "https://api.github.com/users/spate141/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I think you flipped model and tokenizer at the beginning. It should be\r\n```python\r\n\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\n\r\ntokenizer = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\nmodel = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn')\r\n\r\n```", "Pls reopen if there is another issue!", "Damn, this was embarrassing bug on my end. Thank you! 🍻" ]
1,599
1,599
1,599
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: MacOS - Python version: 3.7.6 - PyTorch version (GPU?): 1.6.0 (No GPU) - Tensorflow version (GPU?): 2.3.0 (No GPU) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - Summarization & Bart: @sshleifer ## Information - Model I am using (Bert, XLNet ...): **`BartTokenizer, BartForConditionalGeneration`** - I'm loading the model from the directory. I saved the model which I initially loaded with `'facebook/bart-large-cnn'` and saved later after using `.save_pretrained(tmp_model_dir)` command. The problem arises when using: * [x] example scripts: (give details below) The tasks I am working on is: * [x] summarization task: (give the name) ## To reproduce Steps to reproduce the behavior: ```python from transformers import BartTokenizer, BartForConditionalGeneration model = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn') tokenizer = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn') raw_text = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York. A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband. Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other. In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage. Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the 2010 marriage license application, according to court documents. Prosecutors said the marriages were part of an immigration scam. On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further. After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say. Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages. Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted. The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali. Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force. If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18. 
""" inputs = tokenizer([raw_text], max_length=1024, return_tensors='pt', truncation=True) ``` - Error: ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-4-74e209dd3da0> in <module> ----> 1 inputs = tokenizer([raw_text], max_length=1024, return_tensors='pt', truncation=True) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict, **unused) 1074 output_attentions=output_attentions, 1075 output_hidden_states=output_hidden_states, -> 1076 return_dict=return_dict, 1077 ) 1078 lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict, **kwargs) 904 decoder_input_ids=decoder_input_ids, 905 decoder_padding_mask=decoder_attention_mask, --> 906 causal_mask_dtype=self.shared.weight.dtype, 907 ) 908 else: ~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in _prepare_bart_decoder_inputs(config, input_ids, decoder_input_ids, decoder_padding_mask, causal_mask_dtype) 146 pad_token_id = config.pad_token_id 147 if decoder_input_ids is None: --> 148 decoder_input_ids = shift_tokens_right(input_ids, pad_token_id) 149 bsz, tgt_len = decoder_input_ids.size() 150 if decoder_padding_mask is None: ~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in shift_tokens_right(input_ids, pad_token_id) 204 def shift_tokens_right(input_ids, pad_token_id): 205 """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>).""" --> 206 prev_output_tokens = input_ids.clone() 207 index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) 208 prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze() AttributeError: 'list' object has no attribute 'clone' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6924/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6923
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6923/comments
https://api.github.com/repos/huggingface/transformers/issues/6923/events
https://github.com/huggingface/transformers/pull/6923
692,007,153
MDExOlB1bGxSZXF1ZXN0NDc4NjYxMjMw
6,923
[s2s] allow task_specific_params=summarization_xsum
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=h1) Report\n> Merging [#6923](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.90%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6923/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6923 +/- ##\n==========================================\n+ Coverage 77.81% 79.72% +1.90% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23002 +550 \n+ Misses 6401 5851 -550 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-58.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.72% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| ... 
and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=footer). Last update [4ebb52a...eaef0cb](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6923/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6923", "html_url": "https://github.com/huggingface/transformers/pull/6923", "diff_url": "https://github.com/huggingface/transformers/pull/6923.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6923.patch", "merged_at": 1599145901000 }
https://api.github.com/repos/huggingface/transformers/issues/6922
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6922/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6922/comments
https://api.github.com/repos/huggingface/transformers/issues/6922/events
https://github.com/huggingface/transformers/issues/6922
691,997,147
MDU6SXNzdWU2OTE5OTcxNDc=
6,922
inference over onnx output
{ "login": "MohitTare", "id": 5728793, "node_id": "MDQ6VXNlcjU3Mjg3OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5728793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MohitTare", "html_url": "https://github.com/MohitTare", "followers_url": "https://api.github.com/users/MohitTare/followers", "following_url": "https://api.github.com/users/MohitTare/following{/other_user}", "gists_url": "https://api.github.com/users/MohitTare/gists{/gist_id}", "starred_url": "https://api.github.com/users/MohitTare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohitTare/subscriptions", "organizations_url": "https://api.github.com/users/MohitTare/orgs", "repos_url": "https://api.github.com/users/MohitTare/repos", "events_url": "https://api.github.com/users/MohitTare/events{/privacy}", "received_events_url": "https://api.github.com/users/MohitTare/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same issue too! Please some guidelines ?", "Same" ]
1,599
1,625
1,605
NONE
null
# ❓ Inference over ONNX output <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details How to decode the output obtained from ONNX inference: I am trying to use ONNX Runtime for inference on CPU by following https://github.com/huggingface/transformers/blob/d822ab636b6a14ed50f7bca0797c1de42c19de61/notebooks/04-onnx-export.ipynb I have a MarianMT Hindi-to-English fine-tuned [model](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) which I have managed to convert using the convert_graph_to_onnx.py script. On calling `sequence, pooled = cpu_model.run(None, inputs_onnx)` I guess `pooled` is the encoder output and `sequence` is the final decoder output; please correct me if I'm wrong. How can I use the API to get the final tokens (by greedy/beam search)? In the normal workflow, we call the `generate` function. Is there any helper function to get the final decoded output from the ONNX output? Any other guidelines? Thanks!
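There is no `generate`-style helper on the ONNX Runtime side in that notebook's workflow, so decoding has to be driven by an outer loop in user code. Purely as an illustration, and assuming a hypothetical seq2seq export whose graph accepts `input_ids` and `decoder_input_ids` and returns next-token logits (every name and shape below is such an assumption, not the actual interface of the converted Marian model), a naive greedy loop could look like this:

```python
import numpy as np
from onnxruntime import InferenceSession

# Assumption: "seq2seq.onnx" returns logits of shape (batch, tgt_len, vocab)
# for the inputs (input_ids, decoder_input_ids); real exports may differ.
session = InferenceSession("seq2seq.onnx")

def greedy_decode(input_ids, bos_id, eos_id, max_len=64):
    decoder_input_ids = np.array([[bos_id]], dtype=np.int64)
    for _ in range(max_len):
        logits = session.run(None, {"input_ids": input_ids,
                                    "decoder_input_ids": decoder_input_ids})[0]
        next_id = int(logits[0, -1].argmax())  # greedy step: most likely next token
        step = np.array([[next_id]], dtype=np.int64)
        decoder_input_ids = np.concatenate([decoder_input_ids, step], axis=1)
        if next_id == eos_id:
            break
    return decoder_input_ids[0].tolist()  # token ids; decode them with the HF tokenizer
```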
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6922/timeline
completed
null
null