url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/10835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10835/comments | https://api.github.com/repos/huggingface/transformers/issues/10835/events | https://github.com/huggingface/transformers/issues/10835 | 837,144,176 | MDU6SXNzdWU4MzcxNDQxNzY= | 10,835 | Issues finetuning MBART 50 many to many | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patil-suraj any help is appreciated",
"@patrickvonplaten",
"Can anyone look into this @patil-suraj @patrickvonplaten ",
"Hi @tuhinjubcse , sorry to reply only now, I've been a bit busy with the sprint and other projects so couldn't really allocate any time for this. I will get back to you by tomorrow.\r\n\r\nAlso please don't tag people who are not related to this model, it might disturb them unnecessarily.\r\n\r\nThank you for your patience.",
"Thank you, it would be good to know how to finetune a many to many models with more than one lang pairs in train and validation like fairseq multilingual\r\n\r\nhttps://github.com/pytorch/fairseq/tree/master/examples/multilingual",
"Okay, one issue at a time\r\n\r\nI'm taking a look at the error that you posted above.\r\n\r\nAlso, the many-to-one model was not released when we ported this model to `Transformers`, it seems to have been released recently. I will convert and push it by tomorrow.\r\n\r\nAnd regarding multi-lingual fine-tuning, I will try to write a notebook about it. What we need to do here is, say we are fine-tuning on two language pairs, in that case, we need to concatenate the two datasets or in case the two language pairs don't have the same number of examples then add some sort of sampler which will sample the example from the datasets depending on the number of examples in which one. And when processing each language pair, set the appropriate `src_lang` and `tgt_lang` tokens. The processing part is explained in the [docs](https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart-50).",
"That would be really helpful if you can have a notebook which documents how to do that , or even a read me , just so that its clear",
"Thanks so much for your response and looking forward to use it ",
"The many to one checkpoint is now available on the hub\r\nhttps://huggingface.co/facebook/mbart-large-50-many-to-one-mmt",
"Thanks for releasing this. Looking forward to the instructions to do many to one finetuning as that is what this model will be superuseful for",
"Any updates on how to run many to one, can we pass --source_lang ru_RU,es_XX as a ',' separated string. Sorry I am not sure if that support is available yet. Would be really helpful if you could help here. The EMNLP arxiv deadline is super close on 17th April :) I know you are busy but this would be a huge favor\r\n",
"Multilingual fine-tuning won't be included in the example script, the goal of examples is to keep them simple and let the user extend them for custom training. I'm working on the notebook, but can probably share that on Monday.\r\n\r\nAs I said in the above comment, for multilingual fine-tuning, in the simplest case you would just need to process the two datasets by setting correct `src_lang`, `tgt_lang` tokens, the rest of the training will be similar to traditional fine-tuning.\r\n\r\nFeel free to post the question on the [forum](https://discuss.huggingface.co/) as well, someone there might have better ideas for this.",
"Thank you so much, if you post the notebook here by Monday that would solve my problem. I am trying on my own to do it as well",
"Hi @tuhinjubcse \r\n\r\nWe just merged #11170 which now allows to fine-tune mBART-50 on **single language pair** using the `run_translation.py` script. This should resolve the issue that you posted in the first comment.",
"Thanks so much",
"Suraj I got multilingual to work, however, while decoding I get this error. My added token dictionary is\r\n\r\n\r\n`{\"uk_UA\": 250049, \"mk_MK\": 250036, \"mn_MN\": 250038, \"id_ID\": 250033, \"he_IL\": 250031, \"sl_SI\": 250053, \"pt_XX\": 250042, \"hr_HR\": 250032, \"th_TH\": 250047, \"tl_XX\": 250048, \"pl_PL\": 250040, \"ka_GE\": 250034, \"ta_IN\": 250045, \"km_KH\": 250035, \"te_IN\": 250046, \"xh_ZA\": 250051, \"sv_SE\": 250043, \"sw_KE\": 250044, \"ps_AF\": 250041, \"bn_IN\": 250029, \"ml_IN\": 250037, \"az_AZ\": 250027, \"af_ZA\": 250028, \"gl_ES\": 250052, \"ur_PK\": 250050, \"mr_IN\": 250039, \"fa_IR\": 250030}\r\n`\r\n\r\n File \"translate.py\", line 26, in <module>\r\n tokenizer = MBart50Tokenizer.from_pretrained(path)\r\n File \"/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1704, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1810, in _from_pretrained\r\n assert index == len(tokenizer), (\r\nAssertionError: Non-consecutive added token 'bn_IN' found. Should have index 250054 but has index 250029 in saved vocabulary.\r\n\r\n\r\nThe error comes from MBart50Tokenizer\r\nmodel = MBartForConditionalGeneration.from_pretrained(path)\r\nmodel.eval()\r\nmodel.to('cuda')\r\ntokenizer = MBart50Tokenizer.from_pretrained(path)\r\n\r\nIt works fine with MBartTokenizer\r\n\r\nI can use MBartTokenizer for common languages in mbart25 and mbart50 for my manytoone model but for languages like pt_XX i can't .",
"HI @tuhinjubcse \r\n\r\nGlad you got it working.\r\n\r\nAnd this seems like a bug, I will take a look. How many new tokens did you add?",
"I tried adding tokens using the `add_tokens` and `add_special_tokens` method, saved and loaded it again, I didn't observe this issue.\r\n\r\nHere's what I did\r\n```python\r\ntok = MBart50Tokenizer.from_pretrained(\"facebook/mbart-large-50\")\r\n\r\ntok.add_special_tokens({\"MY_XX\": \"MY_XX\"})\r\ntok.add_special_tokens({\"additional_special_tokens\": [\"MY2_XX\"]})\r\n\r\ntok.save_pretrained(\"./tmp\")\r\n\r\ntok = MBart50Tokenizer.from_pretrained(\"./tmp\")\r\ntok.convert_tokens_to_ids(\"MY_XX\") # 250054\r\ntok.convert_tokens_to_ids(\"MY2_XX\") # 250055\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,624 | 1,624 | NONE | null | - `transformers` version: Latest
- Platform:
- Python version: 1.8.0
- Using GPU in script?: Yes A100
- Using distributed or parallel set-up in script?: No
I am trying to finetune MBART50-many-to-many
```
python ./transformers/examples/seq2seq/run_translation.py \
--model_name_or_path facebook/mbart-large-50-many-to-many-mmt \
--do_train \
--do_eval \
--source_lang ru_RU \
--target_lang en_XX \
--train_file ./corpus_v2/train.json \
--validation_file ./corpus_v2/valid.json \
--output_dir /local/nlpswordfish/tuhin/mbart50/tst-translation \
--per_device_train_batch_size=32 \
--per_device_eval_batch_size=8 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 51373 \
--max_val_samples 6424 \
--gradient_accumulation_steps 1\
--num_train_epochs 8 \
--save_strategy epoch \
--evaluation_strategy epoch
```
Even though I explicitly pass the source language as ru_RU and the target as en_XX, I get an error; see my log below.
I tried printing the source and target languages:
```
Assigning ['ar_AR', 'cs_CZ', 'de_DE', 'en_XX', 'es_XX', 'et_EE', 'fi_FI', 'fr_XX', 'gu_IN', 'hi_IN', 'it_IT', 'ja_XX', 'kk_KZ', 'ko_KR', 'lt_LT', 'lv_LV', 'my_MM', 'ne_NP', 'nl_XX', 'ro_RO', 'ru_RU', 'si_LK', 'tr_TR', 'vi_VN', 'zh_CN', 'af_ZA', 'az_AZ', 'bn_IN', 'fa_IR', 'he_IL', 'hr_HR', 'id_ID', 'ka_GE', 'km_KH', 'mk_MK', 'ml_IN', 'mn_MN', 'mr_IN', 'pl_PL', 'ps_AF', 'pt_XX', 'sv_SE', 'sw_KE', 'ta_IN', 'te_IN', 'th_TH', 'tl_XX', 'uk_UA', 'ur_PK', 'xh_ZA', 'gl_ES', 'sl_SI'] to the additional_special_tokens key of the tokenizer
Src lang is en_XX
ids [250004]
ids [2]
loading weights file https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt/resolve/main/pytorch_model.bin from cache at /home/tuhin.chakr/.cache/huggingface/transformers/e33fcda1a71396b8475e16e2fe1458cfa62c6013f8cb3787d6aa4364ec5251c6.d802a5ca7720894045dd2c9dcee6069d27aa92fbbe33f52b44d479538dc3ccc3
All model checkpoint weights were used when initializing MBartForConditionalGeneration.
All the weights of MBartForConditionalGeneration were initialized from the model checkpoint at facebook/mbart-large-50-many-to-many-mmt.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MBartForConditionalGeneration for predictions without further training.
Tgt lang is None
self.prefix_tokens is [None]
ids [None]
Traceback (most recent call last):
File "./transformers/examples/seq2seq/run_translation.py", line 564, in <module
main()
File "./transformers/examples/seq2seq/run_translation.py", line 403, in main
train_dataset = train_dataset.map(
File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1289, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1260, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "./transformers/examples/seq2seq/run_translation.py", line 384, in preprocess_function
with tokenizer.as_target_tokenizer():
File "/home/tuhin.chakr/yes/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 242, in as_target_tokenizer
self.set_tgt_lang_special_tokens(self.tgt_lang)
File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/models/mbart/tokenization_mbart50_fast.py", line 269, in set_tgt_lang_special_tokens
prefix_tokens_str = self.convert_ids_to_tokens(self.prefix_tokens)
File "/home/tuhin.chakr/yes/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 287, in convert_ids_to_tokens
index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
Also, as far as I understand, many-to-many fine-tuning requires some separate processing based on the paper, which seems to be missing?

What should the data format be? Additionally, will you release a many-to-one model as well, even though many-to-one is a subset of many-to-many?
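For reference, here is a minimal sketch of how a single language pair might be tokenized with explicit language codes (the sentences and max_length below are made-up placeholders, not part of this report); for multilingual fine-tuning, each pair would be processed this way with its own codes and the processed datasets concatenated, as the maintainers suggest in the comments.
```python
from transformers import MBart50TokenizerFast

# Sketch only: placeholder sentences and an illustrative max_length.
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer.src_lang = "ru_RU"
tokenizer.tgt_lang = "en_XX"

src_texts = ["Привет, мир"]   # placeholder Russian source
tgt_texts = ["Hello, world"]  # placeholder English target

model_inputs = tokenizer(src_texts, truncation=True, max_length=128)
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_texts, truncation=True, max_length=128)
model_inputs["labels"] = labels["input_ids"]
```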
@patrickvonplaten, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10834/comments | https://api.github.com/repos/huggingface/transformers/issues/10834/events | https://github.com/huggingface/transformers/issues/10834 | 837,123,808 | MDU6SXNzdWU4MzcxMjM4MDg= | 10,834 | Local Attention for GPT2 | {
"login": "leogao2",
"id": 54557097,
"node_id": "MDQ6VXNlcjU0NTU3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/54557097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leogao2",
"html_url": "https://github.com/leogao2",
"followers_url": "https://api.github.com/users/leogao2/followers",
"following_url": "https://api.github.com/users/leogao2/following{/other_user}",
"gists_url": "https://api.github.com/users/leogao2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leogao2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leogao2/subscriptions",
"organizations_url": "https://api.github.com/users/leogao2/orgs",
"repos_url": "https://api.github.com/users/leogao2/repos",
"events_url": "https://api.github.com/users/leogao2/events{/privacy}",
"received_events_url": "https://api.github.com/users/leogao2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj is working on implementing GPT Neo over at https://github.com/huggingface/transformers/pull/10848! The 1.3B and 2.7B should be loadable in that architecture once finalized."
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # 🚀 Feature request
Our model uses local attention in some layers (i.e., in every other layer each position can only see the last k=256 tokens). We would like to be able to specify this in the config on the model hub.
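For illustration only, a minimal sketch of the masking pattern being described, i.e. a causal mask limited to a window of the last k positions; this shows the pattern itself, not the actual GPT-Neo implementation:
```python
import torch

def local_causal_mask(seq_len: int, window: int = 256) -> torch.Tensor:
    # True where attention is allowed: position i may attend to positions j
    # with i - window < j <= i (causal, limited to the last `window` tokens).
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

print(local_causal_mask(6, window=3).int())
```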
## Motivation
Right now we can't integrate the 1.3B and 2.7B EleutherAI GPT models because local attention is not supported in transformers.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10833/comments | https://api.github.com/repos/huggingface/transformers/issues/10833/events | https://github.com/huggingface/transformers/issues/10833 | 837,117,667 | MDU6SXNzdWU4MzcxMTc2Njc= | 10,833 | weird large memory usage of mbert model | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"One possible reason for that is bigger tokenizer cardinality and the fact that HF code predicts all tokens, not only one that are missing, so this operation might dominate overall runtime/memory while doing no useful output.\r\n\r\nIn my case it was the case and I did a small fix here https://github.com/yurymalkov/transformers/commit/0fe0725c0f7fcc13df698bba1bd01847c1494e43 that ended up in 6X larger batch at my memory usage (1060 GTX) and 3-4X times faster training of a small mbert model, but it causes some tests to fail (I think mainly due to pytorch jit failure which sees named tensor in slice, secondly to missing output of non-masked tokens - though who cares about them?)."
] | 1,616 | 1,621 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?):
- Using GPU in script?: -
- Using distributed or parallel set-up in script?:
### Who can help
albert, bert, xlm: @LysandreJik
## Information
I am using the mBERT model, which has 110M parameters. I am testing the pretraining code with mt5-small and comparing it with mBERT. mBERT weirdly uses a lot of memory: I need to halve the batch size on the same machine where I train my mt5-small model, which is 3x larger than mBERT. It is strange that mBERT, at 1/3 the size of mt5-small, requires more memory.
* I am using the run_mlm command
* I run the code on a V100 GPU
## To reproduce
Steps to reproduce the behavior:
python run_mlm.py --model_name_or_path bert-base-multilingual-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir output --per_device_train_batch_size 88 --fp16 --max_seq_length 128
## Expected behavior
A larger batch size should be possible with mBERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10832/comments | https://api.github.com/repos/huggingface/transformers/issues/10832/events | https://github.com/huggingface/transformers/issues/10832 | 837,111,662 | MDU6SXNzdWU4MzcxMTE2NjI= | 10,832 | run_mlm.py: CUDA error: device-side assert triggered, THCTensorIndex | {
"login": "matteomedioli",
"id": 31959430,
"node_id": "MDQ6VXNlcjMxOTU5NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/31959430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matteomedioli",
"html_url": "https://github.com/matteomedioli",
"followers_url": "https://api.github.com/users/matteomedioli/followers",
"following_url": "https://api.github.com/users/matteomedioli/following{/other_user}",
"gists_url": "https://api.github.com/users/matteomedioli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matteomedioli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matteomedioli/subscriptions",
"organizations_url": "https://api.github.com/users/matteomedioli/orgs",
"repos_url": "https://api.github.com/users/matteomedioli/repos",
"events_url": "https://api.github.com/users/matteomedioli/events{/privacy}",
"received_events_url": "https://api.github.com/users/matteomedioli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there! So the problem is a bit complex and linked to the way RoBERTa is implemented in Transformers with a small hack: its toknizer has 512 + 2 position embeddings, not 512. When you run your command, the model is randomly initialized with 512 position embeddings (the default in the config) but you still use it with that `robert-base` tokenizer which returns up to 514. This results in an index error that throws the \"device-side assert triggered\".\r\n\r\nTo fix this, you need to either use another tokenizer, or prepare your random model like this:\r\n```\r\nfrom transformers import RobertaForMaskedLM, RobertaConfig\r\n\r\nmodel = RobertaForMaskedLM(RobertaConfig(max_position_embeddings=514))\r\nmodel.save_pretrained(\"model_dir\")\r\n```\r\nthen use `model_dir` for `--model_name_or_path` when launching your script.\r\n\r\nYou can also tweak the script directly to add `max_position_embeddings=514` in [this line](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L282).",
"Thank you! Now it works! :)"
] | 1,616 | 1,616 | 1,616 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux
- Python version: Python 3.4.9
- PyTorch version (GPU?): 1.6.0+cu101
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
- GPU details: 4 GPUs V100 16GB
## Information
I am using BERT and RoBERTa. I'm trying to train from scratch on the Wikipedia dataset using your run_mlm example and your wikipedia dataset (20200501.en).
Before using the distributed setup, I was stuck on the first optimization step. Without the distributed setup I was either stuck on the first optimization step or received the reported error. With the distributed setup I always receive the reported error.
The problem arises when using:
* [x] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MLM train from scratch Bert and Roberta
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
export CUDA_LAUNCH_BLOCKING=1
export TOKENIZERS_PARALLELISM=true
export OMP_NUM_THREADS=32
source /data/medioli/env/bin/activate
python3 -m torch.distributed.launch \
--nproc_per_node 4 run_mlm.py \
--dataset_name wikipedia \
--tokenizer_name roberta-base \
--model_type roberta \
--dataset_config_name 20200501.en \
--do_train \
--do_eval \
--learning_rate 1e-5 \
--num_train_epochs 5 \
--save_steps 5000 \
--output_dir /data/medioli/models/mlm/wikipedia_roberta_5ep_1e5_lbl \
--line_by_line \
--use_fast_tokenizer \
--logging_dir /data/medioli/models/mlm/wikipedia_roberta_5ep_1e5_lbl/runs \
--cache_dir /data/medioli/datasets/wikipedia/ \
--overwrite_output_dir \
```
## Errors and Output
Many errors like this:
```
/pytorch/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [372,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
Then:
```
Traceback (most recent call last):
File "/data/medioli/transformers/examples/language-modeling/run_mlm.py", line 491, in <module>
main()
File "/data/medioli/transformers/examples/language-modeling/run_mlm.py", line 457, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1053, in train
tr_loss += self.training_step(model, inputs)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1443, in training_step
loss = self.compute_loss(model, inputs)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/trainer.py", line 1475, in compute_loss
outputs = model(**inputs)
File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 1057, in forward
return_dict=return_dict,
File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 810, in forward
past_key_values_length=past_key_values_length,
File "/data/medioli/env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/medioli/env/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 123, in forward
embeddings += position_embeddings
RuntimeError: CUDA error: device-side assert triggered
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at /pytorch/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fa4517ed1e2 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7fa451a3bf92 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fa4517db9cd in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libc10.so)
frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x25a (0x7fa427f8489a in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: c10d::Reducer::~Reducer() + 0x28a (0x7fa427f79b1a in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7fa427f593c2 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7fa4277577a6 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0xa6b08b (0x7fa427f5a08b in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x273c00 (0x7fa427762c00 in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x274e4e (0x7fa427763e4e in /data/medioli/env/lib64/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #22: main + 0x16e (0x400a3e in /data/medioli/env/bin/python3)
frame #23: __libc_start_main + 0xf5 (0x7fa48f4903d5 in /lib64/libc.so.6)
frame #24: /data/medioli/env/bin/python3() [0x400b02]
```
Discussion in pytorch: https://discuss.pytorch.org/t/solved-assertion-srcindex-srcselectdimsize-failed-on-gpu-for-torch-cat/1804/22
Who can help me
Models:
@LysandreJik
Library:
- tokenizers: @LysandreJik
- trainer: @sgugger
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10831/comments | https://api.github.com/repos/huggingface/transformers/issues/10831/events | https://github.com/huggingface/transformers/issues/10831 | 837,022,217 | MDU6SXNzdWU4MzcwMjIyMTc= | 10,831 | Encoder Decoder Model didn't return a reasonable result | {
"login": "C7ABT",
"id": 41655808,
"node_id": "MDQ6VXNlcjQxNjU1ODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41655808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/C7ABT",
"html_url": "https://github.com/C7ABT",
"followers_url": "https://api.github.com/users/C7ABT/followers",
"following_url": "https://api.github.com/users/C7ABT/following{/other_user}",
"gists_url": "https://api.github.com/users/C7ABT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/C7ABT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/C7ABT/subscriptions",
"organizations_url": "https://api.github.com/users/C7ABT/orgs",
"repos_url": "https://api.github.com/users/C7ABT/repos",
"events_url": "https://api.github.com/users/C7ABT/events{/privacy}",
"received_events_url": "https://api.github.com/users/C7ABT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You're using two `bert-base-uncased` as encoder/decoders. This is possible, but you'll need to train your resulting encoder-decoder model on a downstream task in order to obtain coherent results.\r\n\r\nThe `bert-base-uncased` checkpoint is originally from an encoder-only setup.\r\n\r\nIf I may recommend some notebooks/documentation:\r\n- [Documentation of encoder/decoder framework](https://huggingface.co/transformers/model_doc/encoderdecoder.html)\r\n- [Training a Bert2Bert model for summarization](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)\r\n- [Training a shared Roberta2Roberta for summarization](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | Hello,
I tried the example code from the official website, shown below.
# code
`from transformers import EncoderDecoderModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
loss, logits = outputs.loss, outputs.logits
model.save_pretrained("bert2bert")
model = EncoderDecoderModel.from_pretrained("bert2bert")
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
for i, sample_output in enumerate(generated):
print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))`
# output
However, it returned the following result.
`Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertLMHeadModel: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias', 'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.0.crossattention.self.key.bias', 'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.3.crossattention.self.query.weight', 'bert.encoder.layer.3.crossattention.self.query.bias', 'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 
'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.6.crossattention.self.value.bias', 'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.7.crossattention.self.query.bias', 'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.9.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.10.crossattention.self.value.bias', 'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2021-03-21 16:47:27.243389: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-03-21 16:47:27.243603: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
0: . as as as as as as as as as as as as as as as as as as
Process finished with exit code 0
`
# issue
Would you kindly help me find out why it returned the word 'as' repeatedly? Many thanks!
Besides, as a newbie, would it be possible to use BERT as the encoder and a Transformer decoder in this EncoderDecoderModel? I would be very grateful if you could help me out!
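A minimal sketch of one way such a pairing could be set up, assuming a BERT-style decoder stands in for the "Transformer" decoder (the decoder size here is an illustrative choice); as noted in the comments, the resulting model still needs fine-tuning on a downstream task before `generate` produces coherent text:
```python
from transformers import BertConfig, BertLMHeadModel, BertModel, EncoderDecoderModel

# Sketch only: pretrained BERT encoder + small randomly initialized BERT-style decoder.
encoder = BertModel.from_pretrained("bert-base-uncased")
decoder_config = BertConfig(is_decoder=True, add_cross_attention=True, num_hidden_layers=6)
decoder = BertLMHeadModel(decoder_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```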
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10831/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10830/comments | https://api.github.com/repos/huggingface/transformers/issues/10830/events | https://github.com/huggingface/transformers/issues/10830 | 837,020,669 | MDU6SXNzdWU4MzcwMjA2Njk= | 10,830 | getting nans with t5-large + fix | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi\r\nI also observe the similar issue with mt5 models, https://github.com/huggingface/transformers/issues/10819 , deepspeed is still not working for me due to this issue with mt5 models.\r\nI greatly appreciate having a look @patil-suraj @patrickvonplaten ",
"We didn't really manage to resolve the problems with t5/mt5 + mixed precision fp16 (cc @patil-suraj). I'm not sure whether anybody has tried internally to fine-tune t5/mt5 with deepspeed (@stas00 maybe?)",
"the issue arises without deepspeed, just vanilla mt5-small model. Also, I see similar nans with deepspeed with a model based on mt5-small slightly modified, please see the issue here https://github.com/huggingface/transformers/issues/10821#issuecomment-803453998, I think if the issue with fp16 option could get resolved, hopefully this will be also more stable with model changes in deepspeed as well. Thanks a lot.",
"Indeed, this has nothing to do with deepspeed, other than that deepspeed trains in mixed precision and evals in full fp16 at the moment.\r\n\r\nI've started studying the bfloat16 vs. float16 numerical properties and their correlation to each other. And once I understand it well I will try to see if there some sort of magical remapping that perhaps could be done - this is my fantasy of course. I just need to finish a few other more urgent things with deepspeed stage3 integration first.\r\n\r\nBut please don't let my comment prevent you from merging the proposed fix if it already solves the problem.\r\n",
"I got similar issue with mt5 model, @patrickvonplaten thanks a lot in advance for your help",
"@dorost1234 + @yuvalkirstain, please kindly try this branch:\r\nhttps://github.com/huggingface/transformers/tree/t5-fp16-no-nans\r\nand let me know if it solves the problem - It seems that the problem is due to `autocast` in `T5LayerFF` so this branch tries to turn off `autocast` just for that layer. It also disables the previously added clamping.\r\n\r\nThere is also a lot of debug statements in the branch but they will be silent unless nan/inf is detected. \r\n\r\nI tested it work on a small sample with t5-small/t5-base/t5-large/google/mt5-small.\r\n\r\nThe main part of the fix is just:\r\n```\r\nclass T5LayerFF(nn.Module):\r\n def forward(self, hidden_states):\r\n with torch.cuda.amp.autocast(enabled=False):\r\n forwarded_states = self.layer_norm(hidden_states)\r\n forwarded_states = self.DenseReluDense(forwarded_states)\r\n hidden_states = hidden_states + self.dropout(forwarded_states)\r\n return hidden_states\r\n```\r\nand removing some code. So use the branch first.\r\n\r\nIf it works I guess we could just monkey patch this version for AMP or come up with some cleaner solution. Probably with `torch.is_autocast_enabled()` check",
"Dear @stas00 \r\nThank you very much for taking time looking into this issue, this would be really awesome if this could fix the issue, I tried to test it, for this I got the branch, and then I install it locally with \"python setup.py develop\", then I run this command:\r\n\r\n`python run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /temp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10 --fp16`\r\n\r\nI got this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_translation.py\", line 562, in <module>\r\n main()\r\n File \"run_translation.py\", line 448, in main\r\n pad_to_multiple_of=8 if training_args.fp16 else None,\r\nTypeError: __init__() got an unexpected keyword argument 'model'\r\n```\r\nI think there is some version mismatch. I removed the model from input to the collator, as below\r\n\r\n```\r\n data_collator = DataCollatorForSeq2Seq(\r\n tokenizer,\r\n #model=model,\r\n label_pad_token_id=label_pad_token_id,\r\n pad_to_multiple_of=8 if training_args.fp16 else None,\r\n )\r\n\r\n```\r\n\r\nand then here is what I got with fp16 option:\r\n\r\n```\r\n{'loss': 23.3523, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} \r\n{'loss': 22.5557, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} \r\n{'loss': 25.9471, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} \r\n{'loss': 23.0994, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} \r\n{'loss': 24.9974, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} \r\n{'loss': 23.3743, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} \r\n{'loss': 24.2147, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} \r\n{'loss': 26.7845, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} \r\n{'loss': 25.2277, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} \r\n{'loss': 23.3156, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} \r\n{'loss': 21.275, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} \r\n{'loss': 23.7031, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} \r\n{'loss': 23.8086, 'learning_rate': 4.9985799799012544e-05, 'epoch': 0.0} \r\n{'loss': 25.8143, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} \r\n{'loss': 24.4319, 'learning_rate': 4.998361515270678e-05, 'epoch': 0.0} \r\n{'loss': 26.8277, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0} \r\n```\r\n\r\nhere is loss without fp16:\r\n```\r\n{'loss': 27.0258, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} \r\n{'loss': 23.141, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} \r\n{'loss': 21.2312, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} \r\n{'loss': 19.3567, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} \r\n{'loss': 18.7998, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} \r\n{'loss': 17.9632, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} \r\n{'loss': 17.2105, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} \r\n{'loss': 17.5506, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} \r\n{'loss': 15.2566, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} \r\n{'loss': 14.8667, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} \r\n{'loss': 13.7132, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} \r\n{'loss': 13.4058, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0\r\n```\r\n\r\nSo I think this is not optimizing the loss well. 
I greatly appreciate having a look. Thanks a lot.\r\n",
"re errors - this is all on master - the source code and `run_translation.py`. When you install `pip install -e .` sometimes conda/pip don't clean up an old install, so it helps to do `pip uninstall transformers -y` at least 2 times!\r\n\r\nI solve such problems by running locally and not relying on the installed `transformers`, i.e.:\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\nPYTHONPATH=src python examples/seq2seq/run_translation.py ...\r\n```\r\n\r\nnow you never need to worry about what `transformers` version is installed in the environment.\r\n\r\nwrt not getting the loss going down - this is odd, I just run your code:\r\n```\r\nPYTHONPATH=src python examples/seq2seq/run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10 --fp16\r\n\r\n{'loss': 29.7519, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} \r\n{'loss': 26.3593, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} \r\n{'loss': 23.4431, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} \r\n{'loss': 21.431, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} \r\n{'loss': 19.2445, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} \r\n{'loss': 17.8293, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} \r\n{'loss': 16.9441, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} \r\n{'loss': 15.7572, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0}\r\n{'loss': 15.2937, 'learning_rate': 4.9980338183248135e-05, 'epoch': 0.0}\r\n{'loss': 14.4368, 'learning_rate': 4.997815353694237e-05, 'epoch': 0.0}\r\n{'loss': 14.6709, 'learning_rate': 4.997596889063661e-05, 'epoch': 0.0}\r\n{'loss': 13.2806, 'learning_rate': 4.9973784244330843e-05, 'epoch': 0.0}\r\n{'loss': 12.9245, 'learning_rate': 4.997159959802508e-05, 'epoch': 0.0}\r\n{'loss': 12.4647, 'learning_rate': 4.9969414951719316e-05, 'epoch': 0.0}\r\n{'loss': 11.4738, 'learning_rate': 4.996723030541355e-05, 'epoch': 0.0}\r\n```\r\n\r\nMust be your hardware? Try to lower the learning rate?\r\n\r\nI tried with 1 or 2 gpus and it worked in both cases.\r\n\r\n",
"Hi @stas00 \r\nthank you very much for the pointers, I did it as you mentioned and now I see this is going down nicely\r\n\r\n```\r\n{'loss': 28.1802, 'learning_rate': 4.999890767684712e-05, 'epoch': 0.0} \r\n{'loss': 27.4353, 'learning_rate': 4.999781535369424e-05, 'epoch': 0.0} \r\n{'loss': 21.3904, 'learning_rate': 4.999672303054136e-05, 'epoch': 0.0} \r\n{'loss': 22.8854, 'learning_rate': 4.9995630707388475e-05, 'epoch': 0.0} \r\n{'loss': 19.6943, 'learning_rate': 4.999453838423559e-05, 'epoch': 0.0} \r\n{'loss': 21.253, 'learning_rate': 4.999344606108271e-05, 'epoch': 0.0} \r\n{'loss': 20.1937, 'learning_rate': 4.999235373792983e-05, 'epoch': 0.0} \r\n{'loss': 18.6606, 'learning_rate': 4.9991261414776954e-05, 'epoch': 0.0} \r\n{'loss': 18.0337, 'learning_rate': 4.9990169091624065e-05, 'epoch': 0.0} \r\n{'loss': 16.1259, 'learning_rate': 4.998907676847119e-05, 'epoch': 0.0} \r\n{'loss': 15.4007, 'learning_rate': 4.99879844453183e-05, 'epoch': 0.0} \r\n{'loss': 15.6753, 'learning_rate': 4.9986892122165426e-05, 'epoch': 0.0} \r\n{'loss': 15.0481, 'learning_rate': 4.9985799799012544e-05, 'epoch': 0.0} \r\n{'loss': 14.5833, 'learning_rate': 4.998470747585966e-05, 'epoch': 0.0} \r\n{'loss': 14.0758, 'learning_rate': 4.998361515270678e-05, 'epoch': 0.0} \r\n{'loss': 13.7096, 'learning_rate': 4.99825228295539e-05, 'epoch': 0.0} \r\n{'loss': 13.3216, 'learning_rate': 4.998143050640102e-05, 'epoch': 0.0} \r\n{'loss': 13.2331, 'learning_rate': 4.9980338183248135e-05, 'epoch': 0.0} \r\n{'loss': 12.1556, 'learning_rate': 4.997924586009525e-05, 'epoch': 0.0} \r\n```\r\n\r\nThis is such a great, wonderful, amazing fix. Looking forward to using it when this is pushed to the repository.\r\nFor all the hard problems, you are our only hope @stas00 \r\nThank you very much for this great fix.\r\n",
"Thank you for your kind words, I'm so happy to hear that it worked, @dorost1234.\r\n\r\nI will make a proper PR after I clean this branch up.",
"@yuvalkirstain, please kindly test if this PR fixes the problem: https://github.com/huggingface/transformers/pull/10956",
"Thank you @stas00 !\r\nIt seems to work were my proposed fix failed with T5-Small. I will now run some additional experiments with T5-Large and update.",
"Thank you for validating that, @yuvalkirstain!\r\n\r\nIndeed, I tried first local fixes but the problem would just pop-up elsewhere. \r\n\r\nI'm just thinking that perhaps we could find if it's all calls to FF that lead to the problem or only some of them, and then we could optimize the solution I proposed by only disabling `autocast` in some cases and not all. I haven't tested that yet.\r\n\r\nIf you experiment I recommend for you to try my branch, since I left the \"detector\" on and it'll immediately tell you when the first `inf` is encountered.\r\n\r\nWhat I'm most interested in is some longer runs to ensure it doesn't start overflowing at a later point. \r\n\r\nThank you for your contribution.",
"Finetuned T5-Base using this branch with the standard T5 finetuning HPs on NQ (except from batch_size - used only ~26k tokens) and didn't get nans (it has been running for over 3 hours and training converged). Thanks again, I guess the issue can be closed for time being.",
"Thank you for this validation, @yuvalkirstain. I still would like to see if we can find a more efficient solution before merging it, but this is great that we have one that works.\r\n\r\nThis unfortunately doesn't help with deepspeed since it doesn't use pytorch AMP and has its own version, but which doesn't use context manager so can't be turned off locally like `autocast`. So we hope to find a different solution.\r\n\r\nI linked this issue to the PR so it'll get closed automatically when it's merged.",
"Well, the nans are back.\r\n\r\n`T5LayerFF: 1 has inf\r\nT5LayerNorm has inf\r\nT5LayerNorm variance has inf\r\nT5LayerNorm hidden_states has nans\r\nT5LayerNorm hidden_states before return has nans\r\nT5LayerFF: 2 has nans\r\nT5LayerFF: 3 has nans\r\nT5LayerFF: 5 has nans\r\nT5Block after T5LayerFF has nans\r\nT5Stack loop end has nans\r\nT5LayerNorm has nans\r\nT5LayerNorm variance has nans\r\nT5LayerNorm hidden_states has nans\r\nT5LayerNorm hidden_states before return has nans`\r\n\r\nThe model I used here was T5-large-ssm-nqo.\r\n@stas00 If you'd like to replicate I can send the relevant training file + command.",
"Yes, please, I'm working in parallel on gpt-neo that has the same issues, so the more reproducible cases we have the higher are the chances we can find a solid fix. \r\n\r\nAlso those would be good candidates for tests (hoping that we can find a quick way to get to overflow).",
"Let's continue the discussion in the PR that is trying to solve this issue: https://github.com/huggingface/transformers/pull/10956",
"@dorost1234 hI, Could you please tell me how you solved this loss optimization problem. I am facing same issue",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"So is this fix now in the main version of transformers?",
"I found that results are different when you load like this: (first is better) \r\n\r\nmodel1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16).to(\"cuda\")\r\n\r\nthan when you load via:\r\n\r\nmodel1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True)\r\nmodel1a_CPU.half()\r\nmodel1a_CPU.eval() \r\nmodel1a_CPU.to(\"cuda\")\r\n\r\nSo this could be a solution, I will compare result on /CPU versus /This versus /Half\r\n\r\n\r\n\r\n\r\n",
"@seems like the solution is already implemented in this call: (model1a_CPU = T5ForConditionalGeneration.from_pretrained(best_model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16).to(\"cuda\"))\r\n\r\nProbably it is trigered by torch_dtype=torch.float16. So a part of model is (likely) moved to fp32 from fp16, so it works properly, exactly the same as with FP32, and exactly the same as on CPU. \r\n\r\nOf course it does use a little bit more of memory. When you call it second way, the memory usage is around 2.5 GB for T5-large, while with first it is around 2.9GB. It is slower around 10-15 percent.\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
] | 1,616 | 1,680 | 1,627 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): t5-large
The problem arises when using:
* [ ] my own modified scripts: run_seq2seq with minor modifications (attached)
The tasks I am working on is:
* [ ] my own task or dataset: Closed-Book Open Domain QA
## To reproduce
Steps to reproduce the behavior (the fix I'm suggesting is very simple, so perhaps there is no reason to reproduce):
1. unzip the attached zip (below).
2. run
```bash
python run_seq2seq.py --model_name_or_path=t5-large
--do_train
--do_eval
--task=qa
--train_file=data/PAQ.filtered.regular.16000.json
--validation_file=data/PAQ.filtered.regular.16000.json
--output_dir=results/5e-5-t5-large-4096000-128-140-1792000-0.1-regular-true-4
--overwrite_output_dir
--per_device_train_batch_size=1
--per_device_eval_batch_size=128
--predict_with_generate
--fp16
--max_steps=1000
--evaluation_strategy=steps
--text_column=question
--summary_column=answer
--save_total_limit=5
--cache_dir=../.cache
--save_steps=500000
--learning_rate=5e-5
--eval_steps=96000
--warmup_steps=100
--run_name=5e-5-t5-large-4096000-128-140-1792000-0.1-regular-true-4
--dropout_rate=0.1
--gradient_accumulation_steps=1
--logging_steps=1
```
## Expected behavior
Training without nans.
## Possible fix
I debugged and saw that we get nans in the `modeling_t5.py` script, at line 241:
```python
hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
```
By modifying this line to:
```python
clamp_value = torch.finfo(hidden_states.dtype).max - 1000
hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) * torch.rsqrt(variance + self.variance_epsilon)
```
It seems to be solved.
BTW it happens in the last layers (this might explain why it wasn't caught in [this fix](https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139))
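For completeness, here is a hedged variant of the same idea: gating the clamp on the dtype so that full-precision runs are left untouched. This is only a sketch of the proposal above (the helper name is made up), not necessarily what will land in the library:
```python
import torch

def clamp_if_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # Only intervene for half-precision activations that actually overflowed;
    # full-precision tensors are returned unchanged.
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```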
[seq2seq.zip](https://github.com/huggingface/transformers/files/6177063/seq2seq.zip)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10830/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10829/comments | https://api.github.com/repos/huggingface/transformers/issues/10829/events | https://github.com/huggingface/transformers/pull/10829 | 837,018,549 | MDExOlB1bGxSZXF1ZXN0NTk3NDQzMTQy | 10,829 | [Wav2Vec2] Small improvements for wav2vec2 info script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10829/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10829",
"html_url": "https://github.com/huggingface/transformers/pull/10829",
"diff_url": "https://github.com/huggingface/transformers/pull/10829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10829.patch",
"merged_at": 1616316104000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10828/comments | https://api.github.com/repos/huggingface/transformers/issues/10828/events | https://github.com/huggingface/transformers/pull/10828 | 837,008,188 | MDExOlB1bGxSZXF1ZXN0NTk3NDM1NjM4 | 10,828 | [wav2vec sprint doc] add doc for Local machine | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
Add instructions for how to do Wav2Vec2 fine-tuning on a local machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10828",
"html_url": "https://github.com/huggingface/transformers/pull/10828",
"diff_url": "https://github.com/huggingface/transformers/pull/10828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10828.patch",
"merged_at": 1616313334000
} |
https://api.github.com/repos/huggingface/transformers/issues/10827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10827/comments | https://api.github.com/repos/huggingface/transformers/issues/10827/events | https://github.com/huggingface/transformers/issues/10827 | 836,952,708 | MDU6SXNzdWU4MzY5NTI3MDg= | 10,827 | Log continuously models with wandb | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm realizing now it would be so helpful with xlsr trainings (lost models due to crash after very long training).\r\n\r\nWould you have any input or suggestions @sgugger on how I could try to implement it?",
"Hi @borisdayma, I hadn't seen this issue. On our side we're more focused on how to continuously push the checkpoints to our own hub ;-).\r\n\r\nThat being said, we can definitely leverage the `on_save` event and just look for the last `checkpoint-xxx` folder, then push its content as artifact. If you have a tracking with `metric_for_best_model`, then you won't even have to look for the checkpoint, it will be in the state with `state.best_model_checkpoint`.\r\nAs for having one or separate checkpoints, I guess it really depends on what you think is best for WandB, you have more expertise than me here.",
"Thanks, it makes sense!\r\nActually I imagined you may probably have some interest in a similar logic for the model hub so I wanted to work on something that would be useful for everyone.\r\n\r\nAs a side note, models could also be stored on the model hub AND tracked by W&B (some people do it with ASW S3 for example). In this way, only checksums are actually stored and the files point back to the storage space so it could be complementary.",
"To do the same on the hub, my idea was to leverage the versioning system and just push the saved checkpoint every save with a commit message like \"checkpoint step xxx\". Ideally inside a Callback to avoid adding more stuff to the main training loop.\r\nI'll try to focus on this next week and see what we can easily do!",
"Nice, if possible it would be cool to allow a logger method to be called right after you push your checkpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Still interested in working on it! Let me know when you have a hook for the model hub!",
"We have started working on it (the Trainer gained a push_to_hub API!) the last step missing is the continuous checkpointing, will try to work on that last part soon!",
"Awesome, if you can somehow have a hook after pushing to the hub (with access to the url maybe) then we could link link them to the runs.",
"The call to `Trainer.push_to_hub` returns the url of the commit to the model hub (it's not part of the train for now).",
"I really like where this is going!\r\n\r\nIs the goal for `TrainingArguments.push_to_hub` to be eventually directly used by the `Trainer` or will it always be handled by the scripts?\r\nAlso would it be possible to save the return of `_push_to_hub` somewhere in the `Trainer` (that way that url could be used by wandb).",
"Yes, we can definitely save the URL somewhere! Would you like to make a PR with that?\r\n\r\nI'm on another project that we will release soon right now but also plan to go back to the continuous integration after (should be in two weeks!)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,624 | 1,624 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
The wandb integration currently logs only the last model (which can be the best one when using `TrainingArguments.load_best_model_at_end`).
It would be great to allow continuous upload of the model, with appropriate aliases attached to each artifact version.
Options would be:
* `WANDB_LOG_MODEL = True`, which just logs at the end as currently (not sure if we want to add the scheduler and optimizer)
* `WANDB_LOG_MODEL = 'all'`, which logs the model continuously
* `WANDB_LOG_MODEL = False`, which does not log the model
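For what it's worth, here is a minimal sketch of what the `'all'` mode could look like as a `TrainerCallback` hooked on `on_save`. The callback name and the artifact naming scheme are made-up assumptions for illustration; only the `transformers` callback hook and the `wandb` artifact calls are real APIs, and this is one possible shape of the feature rather than the actual integration:
```python
import os
import wandb
from transformers import TrainerCallback

class CheckpointArtifactCallback(TrainerCallback):
    """Illustrative only: log every saved checkpoint as a new W&B artifact version."""

    def on_save(self, args, state, control, **kwargs):
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        if wandb.run is None or not os.path.isdir(ckpt_dir):
            return
        artifact = wandb.Artifact(f"model-{wandb.run.id}", type="model")
        # model + config; optimizer/scheduler could go into a separate artifact
        artifact.add_dir(ckpt_dir)
        wandb.run.log_artifact(artifact, aliases=["latest", f"step-{state.global_step}"])
```
It would be registered with `Trainer(..., callbacks=[CheckpointArtifactCallback()])`.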
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Training can be very long and it would be so sad to lose a model :sob:
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can probably propose a PR but would love brainstorming on the ideal logic:
1. should we leverage `Trainer.save_model` (as currently) or `Trainer._save_checkpoint`
2. should we consider an artifact version as containing only the model & config or also containing optimizer and scheduler? Or should it actually be 2 separate artifacts?
3. if we leverage `on_save`, can we avoid the same current logic (fake trainer saving to a temporary directory that is then uploaded async) and just use an actual copy of what has been saved. We would just need the path or list of files that have been saved (should be straightforward)
4. If we log continuously the model, should we upload it only if it's improved (when `metric_for_best_model` is defined)? If that's the case, we'll need to be able to detect when that is the case. If that's not the case we'll still need to be able to know which one is the best. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10826/comments | https://api.github.com/repos/huggingface/transformers/issues/10826/events | https://github.com/huggingface/transformers/pull/10826 | 836,911,864 | MDExOlB1bGxSZXF1ZXN0NTk3MzYwMzQx | 10,826 | feat(wandb): logging and configuration improvements | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Following improvements to `wandb` integration:
* ensure unique artifact id → previously it was based on run name which could create duplicates and mismatches (as runs can be updated manually in the UI)
* allow manual calls to `wandb.init()` → previously it would have closed the run and started a new one
* when a wandb run already exists (manually created), automatically add the model config parameters to it
* simplify reinit logic → now explicitly closes a run for hp search and avoids the use of reinit, which can have complex side effects (different behavior in notebooks vs scripts)
* ensure we have no dropped values (when the step is below a previously logged value) by logging the step as an independent `train/global_step` metric and setting it as the default x-axis in the UI (can be edited manually; see the sketch after this list). Note: this auto-setting of the x-axis will be activated in an upcoming release of wandb
* get values committed immediately so they appear in the UI with no delay
* fixes compatibility with sagemaker
Fixes https://github.com/wandb/client/issues/1499, #8754, #10486
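To make the "no dropped values" bullet concrete, the logging side boils down to something like the following (illustrative values; the real change lives in the `wandb` integration code):
```python
import wandb

wandb.init(project="demo")  # assuming a run is active

# Log the trainer's global step as an ordinary metric and commit right away;
# the UI can then use train/global_step as the x-axis instead of wandb's
# internal monotonic step, so out-of-order points are not silently dropped.
logs = {"train/loss": 0.42, "train/learning_rate": 3e-5}
wandb.log({**logs, "train/global_step": 1200}, commit=True)
```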
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Documentation: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10826/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10826",
"html_url": "https://github.com/huggingface/transformers/pull/10826",
"diff_url": "https://github.com/huggingface/transformers/pull/10826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10826.patch",
"merged_at": 1616424317000
} |
https://api.github.com/repos/huggingface/transformers/issues/10825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10825/comments | https://api.github.com/repos/huggingface/transformers/issues/10825/events | https://github.com/huggingface/transformers/issues/10825 | 836,885,863 | MDU6SXNzdWU4MzY4ODU4NjM= | 10,825 | ReformerEmbedding unclear behavior | {
"login": "fostiropoulos",
"id": 4337024,
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fostiropoulos",
"html_url": "https://github.com/fostiropoulos",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think I don't fully agree here...`max_position_embeddings` is important even if `axial_pos_embedding` is used. If a user makes use of `axial_pos_embeddings` then `max_position_embeddings` is clearly defined by the tuple of `axial_pos_embeddings`. IMO, it's important that the user fully understands how `axial_pos_embeddigs` work and should therefore also set `max_position_embeddigns`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-4.15.0-136-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: MNIST
## To reproduce
Steps to reproduce the behavior:
1. Leave max_position_embeddings undefined, or set it to a value different from the axial position embedding shape
2. Observe that the assert is triggered for large sequence lengths (still within the limit of the axial position embeddings): https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/modeling_reformer.py#L255
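For illustration, a hypothetical configuration along these lines should show the mismatch described above (the values are made up; the point is only that the axial shape covers far more positions than `max_position_embeddings`):
```python
from transformers import ReformerConfig

# Axial position embeddings cover 128 * 128 = 16384 positions, while
# max_position_embeddings stays much smaller; per the report above, inputs
# longer than max_position_embeddings then trip the assert even though the
# axial embeddings themselves could handle them.
config = ReformerConfig(
    axial_pos_embds=True,
    axial_pos_shape=(128, 128),
    axial_pos_embds_dim=(64, 192),   # must sum to hidden_size
    hidden_size=256,
    max_position_embeddings=4096,
)
```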
## Expected behavior
* No assert
## Additional Details:
`max_position_embeddings` is only used by `PositionEmbeddings`, thus if we provide `axial_pos_embds` in the configuration, `max_position_embeddings` will not be used for the positional embedding at all. It is, however, still considered for factorizing `num_buckets` in the `LSHSelfAttention` layer.
Thus, in a scenario that uses axial positional embeddings, both the assert check and the [bucket factorization](https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/modeling_reformer.py#L711) are effectively useless. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10825/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10824/comments | https://api.github.com/repos/huggingface/transformers/issues/10824/events | https://github.com/huggingface/transformers/issues/10824 | 836,872,456 | MDU6SXNzdWU4MzY4NzI0NTY= | 10,824 | Running "convert_graph_to_onnx.py" doesn't work. | {
"login": "Jorghi12",
"id": 8586039,
"node_id": "MDQ6VXNlcjg1ODYwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8586039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jorghi12",
"html_url": "https://github.com/Jorghi12",
"followers_url": "https://api.github.com/users/Jorghi12/followers",
"following_url": "https://api.github.com/users/Jorghi12/following{/other_user}",
"gists_url": "https://api.github.com/users/Jorghi12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jorghi12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jorghi12/subscriptions",
"organizations_url": "https://api.github.com/users/Jorghi12/orgs",
"repos_url": "https://api.github.com/users/Jorghi12/repos",
"events_url": "https://api.github.com/users/Jorghi12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jorghi12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This appears to be the proper way to run it.\r\n\r\n`python3 -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased bert-base-cased.onnx`"
] | 1,616 | 1,616 | 1,616 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
Models:
Bart (https://huggingface.co/transformers/model_doc/marian.html)
-->
## Information
A problem arises when I run "python3 convert_graph_to_onnx.py", where I receive the following error message:
```
Traceback (most recent call last):
File "convert_graph_to_onnx.py", line 22, in <module>
from .file_utils import ModelOutput, is_tf_available, is_torch_available
ModuleNotFoundError: No module named '__main__.file_utils'; '__main__' is not a package
```
## To reproduce
Run "python3 convert_graph_to_onnx.py" inside the following directory transformers/src/transformers.
Steps to reproduce the behavior:
1. cd "transformers/src/transformers/"
2. python3 convert_graph_to_onnx.py
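For reference, the workaround from the comment above is to invoke the script as a module, which resolves the package-relative imports that break when the file is executed directly:
```bash
python3 -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased bert-base-cased.onnx
```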
## Expected behavior
I expect convert_graph_to_onnx.py to begin running. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10824/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10823/comments | https://api.github.com/repos/huggingface/transformers/issues/10823/events | https://github.com/huggingface/transformers/pull/10823 | 836,867,993 | MDExOlB1bGxSZXF1ZXN0NTk3MzI3MTUz | 10,823 | Modify the Trainer class to handle simultaneous execution of Ray Tune and Weights & Biases | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
The proper way to integrate Ray Tune and Weights & Biases is to pass a `wandb` parameter to `tune.run`.
However, this parameter is handled as a dictionary inside the `config` argument, and there is no distinction between `wandb` parameters and standard model optimization parameters. The following code comes from [their docs](https://docs.wandb.ai/integrations/ray-tune):
```python
from ray.tune.logger import DEFAULT_LOGGERS
from ray.tune.integration.wandb import WandbLogger
tune.run(
train_fn,
config={
# define search space here
"parameter_1": tune.choice([1, 2, 3]),
"parameter_2": tune.choice([4, 5, 6]),
# wandb configuration
"wandb": {
"project": "Optimization_Project",
"api_key_file": "/path/to/file",
"log_config": True
}
},
loggers=DEFAULT_LOGGERS + (WandbLogger, ))
```
This is not a problem for Ray Tune. However, it is a problem for the `transformers` integration because it treats wandb as a model parameter, and therefore configuring wandb in this way will raise an error message claiming that `wandb is not a training argument`.
The following code will raise such an error:
```python
# Initialize our Trainer
trainer = Trainer(
model_init=model_init,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# Hyperparameter Search
def hp_space_fn(empty_arg):
config = {
"warmup_steps": tune.choice([50, 100, 500, 1000]),
"learning_rate": tune.choice([1.5e-5, 2e-5, 3e-5, 4e-5]),
"num_train_epochs": tune.quniform(0.0, 10.0, 0.5),
}
wandb_config = {
"wandb": {
"project": os.environ.get(
'WANDB_PROJECT',
'wandb_project'),
"api_key": os.environ.get('API_KEY'),
"log_config": True
}
}
config.update(wandb_config)
return config
best_run = trainer.hyperparameter_search(
direction="maximize",
backend="ray",
scheduler=PopulationBasedTraining(
time_attr='time_total_s',
metric='eval_f1_thr_0',
mode='max',
perturbation_interval=600.0
),
hp_space=hp_space_fn,
loggers=DEFAULT_LOGGERS + (WandbLogger, ),
)
```
One way to work around this is to instantiate a subclass based on the Trainer:
```python
class CustomTrainer(Trainer):
def __init__(self, *args, **kwargs):
super(CustomTrainer, self).__init__(*args, **kwargs)
def _hp_search_setup(self, trial: Any):
try:
trial.pop('wandb', None)
except AttributeError:
pass
super(CustomTrainer, self)._hp_search_setup(trial)
```
However, this looks like a hack, because dropping the `wandb` arguments from the trial config in `_hp_search_setup` should be standard Trainer behavior.
That's why I'm submitting a PR that directly modifies the `_hp_search_setup` of the Trainer class to ignore `wandb` arguments if Ray is chosen as a backend.
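In other words, the proposed behavior amounts to something along these lines (a sketch of the idea only, not the exact diff in this PR):
```python
from transformers.trainer_utils import HPSearchBackend

def strip_ray_wandb_config(trial, hp_search_backend):
    """Sketch: drop the Ray/W&B logger section from a Ray Tune trial dict
    before its remaining keys are copied onto TrainingArguments."""
    if hp_search_backend == HPSearchBackend.RAY and isinstance(trial, dict):
        trial.pop("wandb", None)
    return trial
```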
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I'm tagging @richardliaw and @amogkam as they're directly involved in Ray Tune. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10823/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10823",
"html_url": "https://github.com/huggingface/transformers/pull/10823",
"diff_url": "https://github.com/huggingface/transformers/pull/10823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10823.patch",
"merged_at": 1616436292000
} |
https://api.github.com/repos/huggingface/transformers/issues/10822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10822/comments | https://api.github.com/repos/huggingface/transformers/issues/10822/events | https://github.com/huggingface/transformers/pull/10822 | 836,858,577 | MDExOlB1bGxSZXF1ZXN0NTk3MzE5NjAw | 10,822 | Correct AutoConfig call docstrings | {
"login": "Sebelino",
"id": 837775,
"node_id": "MDQ6VXNlcjgzNzc3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/837775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebelino",
"html_url": "https://github.com/Sebelino",
"followers_url": "https://api.github.com/users/Sebelino/followers",
"following_url": "https://api.github.com/users/Sebelino/following{/other_user}",
"gists_url": "https://api.github.com/users/Sebelino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sebelino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sebelino/subscriptions",
"organizations_url": "https://api.github.com/users/Sebelino/orgs",
"repos_url": "https://api.github.com/users/Sebelino/repos",
"events_url": "https://api.github.com/users/Sebelino/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sebelino/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `AutoConfig` class has no method named `from_json_file`, so the [examples in the documentation](https://huggingface.co/transformers/model_doc/auto.html#transformers.TFAutoModelForSequenceClassification.from_pretrained) are incorrect. Most likely the intention is to call `from_pretrained`.
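For example (the local path is just a placeholder):
```python
from transformers import AutoConfig

# Not available; AutoConfig has no such method:
# config = AutoConfig.from_json_file("./my_model_directory/config.json")

# What the examples presumably intend:
config = AutoConfig.from_pretrained("./my_model_directory/")  # or a hub id such as "bert-base-cased"
```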
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10822/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10822",
"html_url": "https://github.com/huggingface/transformers/pull/10822",
"diff_url": "https://github.com/huggingface/transformers/pull/10822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10822.patch",
"merged_at": 1616418765000
} |
https://api.github.com/repos/huggingface/transformers/issues/10821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10821/comments | https://api.github.com/repos/huggingface/transformers/issues/10821/events | https://github.com/huggingface/transformers/issues/10821 | 836,829,186 | MDU6SXNzdWU4MzY4MjkxODY= | 10,821 | checkpoint breaks with deepspeed | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm able to reproduce your issue, @dorost1234 \r\n\r\nSo this particular error happens because you change something in the model structure after the checkpoint is saved and before it's resumed. For example at the resume point your model is different than what it was just before you saved the checkpoint.\r\n\r\nThe reason you encountered this problem with deepspeed and did not without it, is because deepspeed by default saves the optimizer state and resumes from it. So here is a quick hack to overcome this in short-term:\r\n\r\n```\r\n--- a/seq2seq/third_party/trainers/trainer.py\r\n+++ b/seq2seq/third_party/trainers/trainer.py\r\n@@ -1166,7 +1166,7 @@ class Trainer:\r\n\r\n if self.deepspeed:\r\n # Not sure how to check if there is a saved deepspeed checkpoint, but since it just return None if it fails to find a deepspeed checkpoint this is sort of a check-n-load function\r\n- self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=True, load_lr_scheduler_states=True)\r\n+ self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=False, load_lr_scheduler_states=False)\r\n\r\n def hyperparameter_search(\r\n self,\r\n\r\ndiff --git a/seq2seq/ds_config.json b/seq2seq/ds_config.json\r\nindex 18ce5a3..44b5cc0 100644\r\n--- a/seq2seq/ds_config.json\r\n+++ b/seq2seq/ds_config.json\r\n@@ -15,7 +15,8 @@\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": true,\r\n- \"cpu_offload\": false\r\n+ \"cpu_offload\": false,\r\n+ \"load_from_fp32_weights\": false\r\n },\r\n\r\n \"zero_allow_untested_optimizer\": true,\r\n\r\n```\r\n\r\nBasically, we are telling deepspeed not to resume the optimizer/scheduler states and ignore the fp32 weights as well. The last one is not great and may not do what you want, as it's likely to impact the precision.\r\n\r\nI added some debug code to deepspeed and the key it fails on in your traceback is \"exp_avg\" . If it helps here is what I did:\r\n```\r\ndiff --git a/deepspeed/runtime/zero/stage2.py b/deepspeed/runtime/zero/stage2.py\r\nindex e0ca4f0..02d904b 100755\r\n--- a/deepspeed/runtime/zero/stage2.py\r\n+++ b/deepspeed/runtime/zero/stage2.py\r\n@@ -1802,7 +1802,11 @@ class FP16_DeepSpeedZeroOptimizer(object):\r\n p = group['params'][0]\r\n for key, saved in base_optimizer_group_states[i].items():\r\n if torch.is_tensor(self.optimizer.state[p][key]):\r\n- self.optimizer.state[p][key].data.copy_(saved.data)\r\n+ try:\r\n+ self.optimizer.state[p][key].data.copy_(saved.data)\r\n+ except:\r\n+ print(f\"failed with key={key}\")\r\n+ raise\r\n else:\r\n self.optimizer.state[p][key] = saved\r\n```\r\n\r\n------------\r\n\r\nAlso: unrelated to this particular issue, the code base your forked from is quite outdated and many deepspeed-related bug fixes were applied since then, so I highly recommend that you sync your fork with the current trainer. If possible try to subclass the Trainer rather than hacking it directly, so you always get the most up-to-date code base you inherit from. Deepspeed is very fresh and expect many more changes happening in the next few weeks/months.\r\n\r\n-----------\r\n\r\nNow to actually solving the problem.\r\n\r\nStudy your code and see where your model's structure gets modified between its creation and the checkpoint saving. Trainer tries to resume from the checkpoint as soon as `train()` starts and it appears that at that point the model is different structurally (dimensions are different) then it is later when it gets saved. 
So you need your model to be in an identical shape during resume and saving points.\r\n\r\nPlease let me know if this helps.\r\n\r\nI'd attack it as simple as dumping the model's param dimensions just before the checkpoint is loaded and just before it's saved, comparing the two - most likely finding the mismatch and then going forward from loading or backward from saving in the code and finding a spot where you modify the model. Then move the modification to before you resume from the checkpoint. i.e. before you call `train()`.\r\n\r\n----------\r\n\r\na minor comment - your repro instructions are 95% working, I had to do a few fixes to make it work (e.g. your `setup.py` is slightly broken), so it's always a good idea to re-validate that it's actually reproducible ;) But it's all good, I figured it out. It was very helpful to have what you provided to reproduce the problem.",
"Dear @stas00 \r\nThank you very much for taking your precious time to look into this issue to assist me, I am indebted to you, and for all the incredible job you do. About reproducibility, I very honestly created a new condo environment and tested it before sending it out, and was working on my side, please accept my sincere apologies for any shortcomings and if I missed something without realizing it. \r\n\r\nI will investigate the issue with the great pointer you shared with me and I will keep this issue updated.\r\n\r\nThank you very much again for the great help. ",
"Dear @stas00 \r\nI will be indebted to also ask also about the nans I get with deepspeed just with the same code you run the loss is nan. I very much appreciate any suggestion you might have for me to try to resolve the issue I face with deepspeed, I am using mt5-small. It would be a great help to me if I could use the great work you have done in deepspeed in huggingface repo and overcome the nan issue. Thank you.",
"can we close this one now? we are dealing with nans at https://github.com/huggingface/transformers/pull/10956/files",
"Dear Stas\nI unfortunately could not still figure this out. I am a bit confused by\nwhere fp16 casting is applied in MT5, specially with the new PR, where this\nis disabled in forward path. To me, in the middle there is some fp16\ncasting, which causes this, but I could not figure this out so far, I was\nwondering if you could give me more time. I appreciate any hints. thanks\n\nOn Tue, Mar 30, 2021 at 12:52 AM Stas Bekman ***@***.***>\nwrote:\n\n> can we close this one now? we are dealing with nans at\n> https://github.com/huggingface/transformers/pull/10956/files\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10821#issuecomment-809771691>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMTDDH2FJGNOVF6ELVDTGEAELANCNFSM4ZQQFMIA>\n> .\n>\n",
"I had a closer look and - you have a very dated trainer code with multiple bugs that have been fixed since then - you will need to update your code base to `transformers` master. I'm pretty sure that once you had it synced this problem will no longer be there.\r\n\r\nPlease do update me if this is still not the case and then we will fix it then. Thank you!\r\n\r\n",
"Dear @stas00 \r\nThank you very much for the help. Much appreciated.\r\nI upgraded the codes to the last version of the codes in huggingface repository and I am still having the same issue. \r\nI will make an updated the repository asap and keep you updated on this.\r\nThank you very much.",
"Yes, let me know when you have a repo I can reproduce the issue with. Thank you.",
"Hi @stas00 \r\nI finally found this bug, this is the issue reported also here https://github.com/huggingface/transformers/issues/11294 \r\nI was freezing some parameters, and during checkpoiting, since huggingface codes does not handle the freezed parameters properly and has a bug currently regarding this, those parameters were not freezed when its loads from the checkpoints, and caused the difference in the number of parameters for the deepspeed.\r\nthanks a lot for all the hints and helps on this."
] | 1,616 | 1,618 | 1,618 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
deepspeed: @stas00
## Information
Dear @stas00
With your permission, I opened this bug; you are really my only hope with this issue and I truly appreciate your help. Thank you very much.
I am using the mT5 model, which I modified by adding adapter layers. The problem arises when:
* loading checkpoints from the model trained with deepspeed
The tasks I am working on is:
* paraphrase detection using paws-x dataset on mt5 model
## To reproduce
Steps to reproduce the behavior:
```
git clone [email protected]:dorost1234/codes.git
conda create --name deepspeed python=3.7
conda activate deepspeed
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
python setup.py develop
pip install deepspeed
```
running the codes:
```
deepspeed run_seq2seq.py configs/test.json
```
I save a checkpoint every 10 steps, the output would look like the below:
```
After the first checkpoint is saved, I kill the run; here is the output:
Configuration saved in outputs/checkpoint-10/config.json
Model weights saved in outputs/checkpoint-10/pytorch_model.bin
[2021-03-20 15:18:45,897] [INFO] [logging.py:60:log_dist] [Rank 0] Saving model checkpoint: outputs/checkpoint-10/global_step10/mp_rank_00_model_states.pt
[2021-03-20 15:18:51,783] [INFO] [engine.py:1680:_save_zero_checkpoint] zero checkpoint saved outputs/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00optim_states.pt
Configuration saved in outputs/config.json
Model weights saved in outputs/pytorch_model.bin
```
Then, I continue training by running the command again:
```
deepspeed run_seq2seq.py configs/test.json
```
Once it tries to load the checkpoint, deepspeed fails to restore it:
```
successfully loaded 1 ZeRO state_dicts for rank 0
Traceback (most recent call last):
File "run_seq2seq.py", line 512, in <module>
main()
File "run_seq2seq.py", line 476, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/users/dara/dev/debug_codes/seq2seq/third_party/trainers/trainer.py", line 780, in train
self._load_optimizer_and_scheduler(resume_from_checkpoint)
File "/users/dara/dev/debug_codes/seq2seq/third_party/trainers/trainer.py", line 1169, in _load_optimizer_and_scheduler
self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=True, load_lr_scheduler_states=True)
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1416, in load_checkpoint
load_optimizer_states=load_optimizer_states)
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1488, in _load_zero_checkpoint
load_from_fp32_weights=self.zero_load_from_fp32_weights())
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1844, in load_state_dict
self._restore_base_optimizer_state(state_dict_list)
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1805, in _restore_base_optimizer_state
self.optimizer.state[p][key].data.copy_(saved.data)
RuntimeError: The size of tensor a (302612288) must match the size of tensor b (129296512) at non-singleton dimension 0
Killing subprocess 23829
Traceback (most recent call last):
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 171, in <module>
main()
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 161, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/users/dara/libs/anaconda3/envs/deepspeed/lib/python3.7/site-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/users/dara/anaconda3/envs/deepspeed/bin/python', '-u', 'run_seq2seq.py', '--local_rank=0', 'configs/test.json']' returned non-zero exit status 1.
```
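For reference, a small diagnostic sketch (a hypothetical helper, not part of this repository) that compares the number of trainable parameters before saving and after resuming - the two tensor sizes in the traceback above suggest this count changes between the two runs:
```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    # Total number of elements in the parameters that will be handed to the optimizer.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Example usage (names are illustrative): call once on the freshly built,
# partially frozen model and once right after the checkpoint is loaded;
# the two numbers should match.
#   print(count_trainable_parameters(trainer.model))
```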
## Expected behavior
being able to continue training from the saved checkpoints | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10821/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10820/comments | https://api.github.com/repos/huggingface/transformers/issues/10820/events | https://github.com/huggingface/transformers/issues/10820 | 836,760,545 | MDU6SXNzdWU4MzY3NjA1NDU= | 10,820 | JSONLINES support on examples/seq2seq/run_translation.py | {
"login": "HeroadZ",
"id": 17962682,
"node_id": "MDQ6VXNlcjE3OTYyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/17962682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HeroadZ",
"html_url": "https://github.com/HeroadZ",
"followers_url": "https://api.github.com/users/HeroadZ/followers",
"following_url": "https://api.github.com/users/HeroadZ/following{/other_user}",
"gists_url": "https://api.github.com/users/HeroadZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HeroadZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HeroadZ/subscriptions",
"organizations_url": "https://api.github.com/users/HeroadZ/orgs",
"repos_url": "https://api.github.com/users/HeroadZ/repos",
"events_url": "https://api.github.com/users/HeroadZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/HeroadZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you just have to name your file with a \".json\" extension for the script to work.\r\n\r\nThere is no support for other formats that will be added to this script as it's not easy to add the csv format while continue to support all the translation datasets in Datasets. You should just tweak the data processing of your example (for instance by doing the same as in `run_summarization`) to your needs if you need to use a csv file.",
"Thank you very much! It works. \r\nBecause the extension for JSONLINES format is `jsonl`, it's better to explain it in the readme.",
"This actually might be a good idea to change the extension to `.jsonl` to make it less ambiguous and it would require no special documentation then."
] | 1,616 | 1,657 | 1,616 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: windows 10
- Python version: 3.8
- PyTorch version (GPU?): 1.8.0
- Using GPU in script?: yes, tesla v100
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @stas00 @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
It is said in the seq2seq README that
> The task of translation supports only custom JSONLINES files
However, at line 202 of the script, the file extension is required to be `.json`:
```py
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension == "json", "`train_file` should be a json file."
```
Even if I changed it to
```py
assert extension in ("json", "jsonl")
```
it throws another error, which says that there is no `jsonl` processing script in the `datasets` library:
```py
Traceback (most recent call last):
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 323, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 335, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/seq2seq/run_translation.py", line 562, in <module>
main()
File "examples/seq2seq/run_translation.py", line 295, in main
datasets = load_dataset(extension, data_files=data_files)
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 707, in load_dataset
module_path, hash, resolved_file_path = prepare_module(
File "/home/zchelllo/anaconda3/envs/ex/lib/python3.8/site-packages/datasets/load.py", line 343, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at jsonl/jsonl.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/jsonl/jsonl.py.
The file is also not present on the master branch on github.
```
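For what it's worth, here is a minimal workaround sketch (file names and the example record are illustrative): the generic `json` loader in `datasets` already accepts JSON Lines input, so renaming `train.jsonl` to `train.json` is enough to satisfy the extension check in the script.
```python
from datasets import load_dataset

# The "json" builder handles one-JSON-object-per-line files (JSON Lines),
# so only the ".json" extension check in run_translation.py needs to pass.
raw_datasets = load_dataset("json", data_files={"train": "train.json", "validation": "val.json"})

# Each line of the file is expected to look like:
# {"translation": {"en": "Others have dismissed him.", "ro": "Altii l-au respins."}}
print(raw_datasets["train"][0])
```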
Is this part still under development? I think it is related to [#1943](https://github.com/huggingface/datasets/pull/1943)
Could you add back the original CSV support before this implementation is finished?
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10820/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10819/comments | https://api.github.com/repos/huggingface/transformers/issues/10819/events | https://github.com/huggingface/transformers/issues/10819 | 836,737,560 | MDU6SXNzdWU4MzY3Mzc1NjA= | 10,819 | mt5 getting nans with fp16 | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of https://github.com/huggingface/transformers/issues/10830",
"Hi @patrickvonplaten this is not exact duplicate, I am using mt5-small and the other user in #10830 is using t5-large, I appreciate considering both thank you ",
"@dorost1234, please kindly test if this PR fixes the problem: https://github.com/huggingface/transformers/pull/10956",
"@stas00 thank you very much for the contributions, it now works for me for the mt5-small, I am running some more experiments with it and update.",
"Dear @stas00 \r\nI tested more codes, without deepspeed, it works fine with setting the feedforward layer to float32, as suggested in the PR, but the moment I switch to deepspeed I still get nan issue in my codes. I greatly appreciate if you can spare some moments from your precious time and provide me with a suggestion for the case of deepspeed for the same problem. Thank you very much\r\n\r\nI also used your debug codes:\r\n```\r\n^M 0%| | 0/38600 [00:00<?, ?it/s]WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 5 has inf\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has inf\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop end has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop start has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Block has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerSelfAttention has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5Block before T5LayerFF has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:gelu 1 has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:gelu 2 has nans\r\nWARNING:seq2seq.third_party.models.t5.debug_utils:gelu 3 has nans\r\n\r\n```\r\n",
"I was just thinking about it, so thank you for confirming that. \r\n\r\nDeepspeed is not using `autocast` so in essence the proposed fixed makes no difference under Deepspeed as we aren't running under `autocast` in the first place. Let's ask the DeepSpeed developers https://github.com/microsoft/DeepSpeed/issues/908\r\n\r\nThough let's continue the discussion on the deepspeed in the other issue you opened, since these are related but different problems. That's we may fix one but not the other, or the fixes may come at different times, so it's easier to track separate issues. \r\n\r\nOr if there is not one specific issue to t5/mt5+deepspeed please open one. Thank you.\r\n",
"Dear @stas00 \r\nSure, thank you very much for coming back to me. Having your permission I will open up an issue on this. \r\nThank you very much.",
"I already did - please see the link in my last comment. Please do not worry, we will surely find one way or another to resolve this.",
"oh, great, thank you very much ",
"Dear @stas00 \r\nI tested the code more (without deepspeed) on larger scale and when I train on opus100 (I train on 20 languages of it), after 2000 iterations with mt5-small, after applying the fix, this gets nan still. I will share with you a reproducible code soon. thanks a lot for all the great work. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,623 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
t5: @patrickvonplaten, @patil-suraj
## Information
I am using the mt5-small model:
* the problem arises when using fp16 with mt5
The tasks I am working on is:
* translation
## To reproduce
Steps to reproduce the behavior:
`python run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir test/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_train_samples 100 --fp16`
outputs:
```
***** eval metrics *****
epoch = 3.0
eval_bleu = 0.0039
eval_gen_len = 2.95
eval_loss = nan
eval_mem_cpu_alloc_delta = 4MB
eval_mem_cpu_peaked_delta = 5MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 1080MB
eval_runtime = 72.1865
eval_samples = 1999
eval_samples_per_second = 27.692
```
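As a debugging aid, here is a small sketch (a hypothetical helper, not part of the example script) that hooks every module and reports where non-finite values first show up when running in fp16:
```python
import torch

def add_nonfinite_checks(model):
    # Register forward hooks that report any module whose output contains
    # non-finite values - a common way to localize fp16 overflow/underflow.
    def make_hook(name):
        def hook(module, inputs, output):
            outputs = output if isinstance(output, (tuple, list)) else (output,)
            for tensor in outputs:
                if torch.is_tensor(tensor) and not torch.isfinite(tensor).all():
                    print(f"non-finite values in the output of: {name}")
                    break
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))
```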
## Expected behavior
Being able to use fp16 with mT5 models. Thank you very much for your help; this is really crucial for me, as running these models in fp16 would let me fit more data onto the older GPUs I have access to. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10819/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10818/comments | https://api.github.com/repos/huggingface/transformers/issues/10818/events | https://github.com/huggingface/transformers/pull/10818 | 836,722,499 | MDExOlB1bGxSZXF1ZXN0NTk3MjE2NjEz | 10,818 | Bump jinja2 from 2.11.2 to 2.11.3 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.2 to 2.11.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p>
<blockquote>
<h2>2.11.3</h2>
<p>This contains a fix for a speed issue with the <code>urlize</code> filter. <code>urlize</code> is likely to be called on untrusted user input. For certain inputs some of the regular expressions used to parse the text could take a very long time due to backtracking. As part of the fix, the email matching became slightly stricter. The various speedups apply to <code>urlize</code> in general, not just the specific input cases.</p>
<ul>
<li>PyPI: <a href="https://pypi.org/project/Jinja2/2.11.3/">https://pypi.org/project/Jinja2/2.11.3/</a></li>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/2.11.x/changelog/#version-2-11-3">https://jinja.palletsprojects.com/en/2.11.x/changelog/#version-2-11-3</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/master/CHANGES.rst">jinja2's changelog</a>.</em></p>
<blockquote>
<h2>Version 2.11.3</h2>
<p>Released 2021-01-31</p>
<ul>
<li>Improve the speed of the <code>urlize</code> filter by reducing regex
backtracking. Email matching requires a word character at the start
of the domain part, and only word characters in the TLD. :pr:<code>1343</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/jinja/commit/cf215390d4a4d6f0a4de27e2687eed176878f13d"><code>cf21539</code></a> release version 2.11.3</li>
<li><a href="https://github.com/pallets/jinja/commit/15ef8f09b659f9100610583938005a7a10472d4d"><code>15ef8f0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1343">#1343</a> from pallets/urlize-speedup</li>
<li><a href="https://github.com/pallets/jinja/commit/ef658dc3b6389b091d608e710a810ce8b87995b3"><code>ef658dc</code></a> speed up urlize matching</li>
<li><a href="https://github.com/pallets/jinja/commit/eeca0fecc3318d43f61bc340ad61db641b861ade"><code>eeca0fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1207">#1207</a> from mhansen/patch-1</li>
<li><a href="https://github.com/pallets/jinja/commit/2dd769111cbb1a2637f805b3b4c652ec8096d371"><code>2dd7691</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/pallets/jinja/issues/1209">#1209</a> from mhansen/patch-3</li>
<li><a href="https://github.com/pallets/jinja/commit/48929401db7228db04dfd8e88115dd5c30dc2d86"><code>4892940</code></a> do_dictsort: update example ready to copy/paste</li>
<li><a href="https://github.com/pallets/jinja/commit/7db7d336ba12574e6205fdd929386fd529e3fad4"><code>7db7d33</code></a> api.rst: bugfix in docs, import PackageLoader</li>
<li><a href="https://github.com/pallets/jinja/commit/9ec465baefe32e305bd4e61da49e6c39360c194e"><code>9ec465b</code></a> fix changelog header</li>
<li>See full diff in <a href="https://github.com/pallets/jinja/compare/2.11.2...2.11.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10818",
"html_url": "https://github.com/huggingface/transformers/pull/10818",
"diff_url": "https://github.com/huggingface/transformers/pull/10818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10818.patch",
"merged_at": 1616417690000
} |
https://api.github.com/repos/huggingface/transformers/issues/10817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10817/comments | https://api.github.com/repos/huggingface/transformers/issues/10817/events | https://github.com/huggingface/transformers/pull/10817 | 836,694,411 | MDExOlB1bGxSZXF1ZXN0NTk3MTkwMDg1 | 10,817 | [vulnerability] in example deps fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah actually the dependabot PR was earlier in my notifications so I merged it without seeing you had opened a PR here. Sorry about that, closing as already taken care of in https://github.com/huggingface/transformers/commit/dbfe3795147e1360b3afac53a9ee0e14374d2ea6",
"Actually, the proposed `>=` is probably better, so fixing conflicts and merging this. Thanks @stas00!"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Takes care of:
https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/jinja2/open
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10817/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10817",
"html_url": "https://github.com/huggingface/transformers/pull/10817",
"diff_url": "https://github.com/huggingface/transformers/pull/10817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10817.patch",
"merged_at": 1616418324000
} |
https://api.github.com/repos/huggingface/transformers/issues/10816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10816/comments | https://api.github.com/repos/huggingface/transformers/issues/10816/events | https://github.com/huggingface/transformers/issues/10816 | 836,684,366 | MDU6SXNzdWU4MzY2ODQzNjY= | 10,816 | [trainer] figuring out why eval with `--fp16_full_eval` is 25% slower | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Hi @stas00,\r\nPlease let me know if this is still open and I can contribute.",
"Yes, please.",
"I reproduced this in colab and got 28% slowness but still figuring out the cause, \r\nEarlier my assumption was this bit reduction/quantization was a device-specific thing.",
"Usually in such situations I try to either go from the bottom up or in reverse. That is just take the `model(**inputs)` and measure the speed w/ `model` vs `model.half()` - if it's the same go one level up into `generate`, etc. Or starting from the top (`generate`) and then removing big chunks of code until you find the part that contributes to the slow down.\r\n\r\nYou can use this tracker to bracket the operation you measure.\r\n\r\nhttps://github.com/huggingface/transformers/blob/335c0ca35c159f88d73198bdac928e61a4d480c7/src/transformers/trainer_utils.py#L258\r\n\r\nBut a totally different approach which might get to the core of the issue much faster is to use a python profiler, .e.g. `cProfile` - that way you get the full analytics on each function call and if you compare these side by side w/ and w/o `half()` you might get an instant answer. Actually now that I wrote this I'd say start with this approach.\r\n",
"I have done a few measures on 2 different cards (a 3090 and a 2080 Ti) using various evaluation batch sizes, and I haven't observed a single culprit for this problem. Instead, I'm seeing that all the operations in the `forward` pass are somewhat slower with `fp16`, and consistently so.\r\n\r\nSetup\r\n* Evaluation batch size in {4, 8, 16, 32, 64, 128}\r\n* 128 evaluation samples. Since I'm using powers of 2 for the batch sizes, this allows us to test from 1 batch to many batches of the same size.\r\n* `max_length` = `min_length` = 128. Setting `min_length` to 128 increases processing time.\r\n\r\nThese are the results for the main operations inside the `forward` method of `T5Block` (total seconds spent in the corresponding areas; figures from the 3090 and the 3 first batch sizes for brevity):\r\n\r\n<img width=\"558\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1177582/115751446-7bf66080-a399-11eb-828a-097ea8cb1308.png\">\r\n\r\nThe time difference depends on the batch size, but `fp16` is always between 15% (for bs=64) and 26% (bs=16) slower.\r\n\r\n---\r\n\r\nToday I discovered [this thread](https://github.com/pytorch/pytorch/issues/50153) in the PyTorch forums, and repeated the test using a version of **PyTorch compiled from source**. Amazingly, processing is now **almost twice as fast**, but the difference is still there:\r\n\r\n<img width=\"558\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1177582/115752012-fde68980-a399-11eb-868d-fa37b3effd54.png\">\r\n\r\nIn this case, using a batch size of 128 (1 batch) is about 13% slower, while a batch size of 16 is 27% slower.\r\n\r\nI'm not sure how to proceed. Does this ring a bell for anyone?",
"Thank you for researching and profiling, @pcuenca!\r\n\r\nI think the next step is the new pytorch profiler:\r\nhttps://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/\r\n\r\nUnfortunately, at the moment I have no time to dig into it, so I hope someone will beat me to it.\r\n\r\n-------------\r\n\r\nre: building from source:\r\n\r\nIndeed, I recently built pytorch from source and I don't know if it's that or something else since 1 month passed since OP was made, but I'm getting 2x speed improvement (rtx-3090) on training this task. eval is only slightly faster, but is still 25% slower @ fp16.\r\n\r\nAlso adapted the cmd line to the recently changed examples:\r\n\r\n```\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \\\r\n./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \\\r\n--overwrite_output_dir --max_train_samples 10 --max_eval_samples 100 --max_source_length 12 \\\r\n--max_target_length 128 --do_train --num_train_epochs 1 \\\r\n--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \\\r\n--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \\\r\n--dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix \"translate English to Romanian: \" --do_eval \r\n\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 1254MB\r\n init_mem_cpu_peaked_delta = 155MB\r\n init_mem_gpu_alloc_delta = 230MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_mem_cpu_alloc_delta = 1382MB\r\n train_mem_cpu_peaked_delta = 125MB\r\n train_mem_gpu_alloc_delta = 231MB\r\n train_mem_gpu_peaked_delta = 194MB\r\n train_runtime = 0:00:04.19\r\n train_samples = 10\r\n train_samples_per_second = 1.191\r\n\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_bleu = 2.2434\r\n eval_gen_len = 15.69\r\n eval_loss = 3.7374\r\n eval_mem_cpu_alloc_delta = 1MB\r\n eval_mem_cpu_peaked_delta = 0MB\r\n eval_mem_gpu_alloc_delta = 0MB\r\n eval_mem_gpu_peaked_delta = 171MB\r\n eval_runtime = 0:00:04.33\r\n eval_samples = 100\r\n eval_samples_per_second = 23.051\r\n```\r\n\r\nadd `--fp16_full_eval`\r\n\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \\\r\n./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \\\r\n--overwrite_output_dir --max_train_samples 10 --max_eval_samples 100 --max_source_length 12 \\\r\n--max_target_length 128 --do_train --num_train_epochs 1 \\\r\n--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \\\r\n--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \\\r\n--dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix \"translate English to Romanian: \" --do_eval --fp16_full_eval\r\n\r\n***** train metrics *****\r\n epoch = 1.0\r\n init_mem_cpu_alloc_delta = 1259MB\r\n init_mem_cpu_peaked_delta = 155MB\r\n init_mem_gpu_alloc_delta = 230MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n train_mem_cpu_alloc_delta = 1380MB\r\n train_mem_cpu_peaked_delta = 125MB\r\n train_mem_gpu_alloc_delta = 231MB\r\n train_mem_gpu_peaked_delta = 194MB\r\n train_runtime = 0:00:03.76\r\n train_samples = 10\r\n train_samples_per_second = 1.326\r\n\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_bleu = 2.2434\r\n eval_gen_len = 15.69\r\n eval_loss = 3.7383\r\n eval_mem_cpu_alloc_delta = 
4MB\r\n eval_mem_cpu_peaked_delta = 0MB\r\n eval_mem_gpu_alloc_delta = -231MB\r\n eval_mem_gpu_peaked_delta = 262MB\r\n eval_runtime = 0:00:05.32\r\n eval_samples = 100\r\n eval_samples_per_second = 18.778\r\n\r\n```",
"By running everything with `CUDA_LAUNCH_BLOCKING=1` under the line profiler, I found that [this](https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L677) and [this](https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L692) check for infinite values take up more time than I expected.\r\n\r\nAfter removing those checks, this is what I end up with: \r\n```\r\n$ export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \\\r\npython -m cProfile -o profile.prof ./examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \\\r\n--overwrite_output_dir --max_train_samples 10 --max_eval_samples 1600 --max_source_length 12 \\\r\n--max_target_length 128 --do_train --num_train_epochs 1 \\\r\n--per_device_train_batch_size 4 --per_device_eval_batch_size $BS --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \\\r\n--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \\\r\n--dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix \"translate English to Romanian: \" --do_eval\r\n...\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_bleu = 0.3251\r\n eval_gen_len = 10.2375\r\n eval_loss = 3.6796\r\n eval_runtime = 0:01:03.89\r\n eval_samples = 1600\r\n eval_samples_per_second = 25.04\r\n eval_steps_per_second = 1.565\r\n```\r\n\r\nThe same with `--fp16_full_eval`:\r\n```\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_bleu = 0.3258\r\n eval_gen_len = 10.2406\r\n eval_loss = 3.6797\r\n eval_runtime = 0:01:01.43\r\n eval_samples = 1600\r\n eval_samples_per_second = 26.043\r\n eval_steps_per_second = 1.628\r\n```\r\n\r\nNote that I had to dial up the number of eval examples since this measurement was quite noisy on the shared system I used. However, the FP16 was faster most of the time. If someone could double check these observations under more reliable circumstances, that'll be great. ",
"Thank you for looking into it, @dsuess! \r\n\r\nI'm trying to figure out torch.profiler to get a better understanding using native tools.\r\n\r\nGreat to hear you found those checks to be slowdowns. Need to investigate these closer with torch.profiler.\r\n\r\nAnd I also found https://github.com/huggingface/transformers/blob/ff5cdc086be1e0c3e2bbad8e3469b34cffb55a85/src/transformers/models/t5/modeling_t5.py#L504 to be another point of slowdown. It's possible that the upcast can be removed completely, which should speed things up. But definitely a slightly faster version is to:\r\n```\r\n attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores)\r\n attn_weights = nn.functional.softmax(scores.float(), dim=-1, dtype=scores.dtype)\r\n```\r\nfor fp16 (it makes no difference for fp32)\r\n\r\nI will look closer into the 2 points you suggested.\r\n\r\nbut also we should run under a more realistic configuration of at least seqlen 512 and not 12 like I had it originally, with large seqlen things change quite a lot. That is `--max_source_length 512 --max_target_length 512` (or even better 1024). ",
"Thanks for your feedback @stas00. I finally got the time to have a closer look with the pytorch profiler. I'd summarize what I found with:\r\n- the speedup we're getting for matmuls in fp16 aren't that great. This might be due to fewer kernels being executed on Tensor cores when using FP16 (31% of kernels) compared to FP32 (74% of kernels).\r\n- this is made worse by additional copy/conversion operations as can be seen in the device self time for FP16 (left) vs FP32 (right): \r\n<img width=\"937\" alt=\"image\" src=\"https://user-images.githubusercontent.com/5870291/150110417-2e8baf04-6904-45b9-960c-7cc12a16ee03.png\">\r\n\r\nThese conversions happen in the [layer norm](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L246) and before the [softmax](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L513), which matches with your observation. I also double checked the layer norm with this [micro benchmark](https://github.com/dsuess/transformers/blob/10816-fp16_eval_performance/tests/benchmark_modeling_t5.py), which runs ~30% slower in FP16.\r\nThere's a [tiny improvement](https://github.com/dsuess/transformers/commit/63f039329434e5b57051111be9b8466c87689159), which makes the eval-example run ~1% faster, but it doesn't even register in the micro benchmark. \r\n\r\nJudging from [the issue](https://github.com/pytorch/pytorch/issues/66707) you raised, we can't run layer norm in FP16. I'd expect the same to be true for softmax, so I am unsure if we can get rid of those conversions. We may have a chance to get more out of the matmuls, so I'll try to figure out why those kernels don't run on Tensor cores despite being eligible.\r\n\r\n---\r\nI've done all these experiments on a 3080Ti with `--max_source_length 512 --max_target_length 512`",
"This is fantastic work, @dsuess!\r\n\r\nHere is an additional profiling report of the same issue but under tf32: https://github.com/huggingface/transformers/issues/14608#issuecomment-1001257392\r\n\r\nThis appears to be specific to t5 and derived models. And yes the problem is that it uses RMSNorm which pytorch doesn't provide and that's why it's slow.\r\n\r\nI made a request to make an RMSNorm fused kernel here: https://github.com/NVIDIA/apex/issues/1271 and once this is done to ask to upstream it into pytorch. I hope this should solve this issue.\r\n\r\nI also tried to avoid re-casting using some tricks here by trying to deploy the existing fused functions: https://github.com/huggingface/transformers/pull/14656 but I couldn't find a faster way using the existing pytorch python API.\r\n\r\nHave you by chance tried any other architectures using the same benchmarks? e.g. gpt2 and bert as they are very distinct from t5.\r\n",
"> Here is an additional profiling report of the same issue but under tf32: [#14608 (comment)](https://github.com/huggingface/transformers/issues/14608#issuecomment-1001257392)\r\n\r\nGreat benchmark of the different data types, thanks for sharing.\r\n\r\n> Have you by chance tried any other architectures using the same benchmarks? e.g. gpt2 and bert as they are very distinct from t5.\r\n\r\nI've just tested the same script with some of the mbart variants and as expected, fp16 is faster for those.\r\n"
] | 1,616 | 1,642 | null | CONTRIBUTOR | null | Recently HF trainer was extended to support full fp16 eval via `--fp16_full_eval`. I'd have expected it to be either equal or faster than eval with fp32 model, but surprisingly I have noticed a 25% slowdown when using it.
This may or may not impact deepspeed as well, which also runs eval in fp16, but we can't compare it to a baseline, since it only runs fp16.
I wonder if someone would like to research where the slowdown comes from.
I'd probably isolate the `model.half()` call, which should be a constant cost, and focus on the rest of the eval. I'm thinking that some component doesn't take well to fp16 variables; e.g. label smoothing was problematic and should now be fixed in https://github.com/huggingface/transformers/pull/10815, but I tested with and without label smoothing and it's not adding to the slowdown.
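Concretely, a micro-benchmark sketch along those lines (requires a GPU; the model name, batch contents, and repeat count are placeholders):
```python
import time

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").cuda().eval()
batch = tokenizer(
    ["translate English to Romanian: hello world"] * 16,
    return_tensors="pt",
    padding=True,
).to("cuda")

def timed_forward(m, steps=20):
    # Time `steps` forward passes, synchronizing so GPU work is included.
    torch.cuda.synchronize()
    start = time.time()
    with torch.no_grad():
        for _ in range(steps):
            m(**batch, decoder_input_ids=batch["input_ids"])
    torch.cuda.synchronize()
    return time.time() - start

fp32_time = timed_forward(model)
fp16_time = timed_forward(model.half())
print(f"fp32: {fp32_time:.3f}s  fp16: {fp16_time:.3f}s")
```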
Here are the script and the corresponding metrics.
First w/o `--fp16_full_eval`,
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 2MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 60MB
train_mem_cpu_peaked_delta = 63MB
train_mem_gpu_alloc_delta = 231MB
train_mem_gpu_peaked_delta = 194MB
train_runtime = 7.7162
train_samples = 10
train_samples_per_second = 0.648
***** eval metrics *****
epoch = 1.0
eval_bleu = 2.4612
eval_gen_len = 18.53
eval_loss = 5.017
eval_mem_cpu_alloc_delta = 0MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 244MB
eval_runtime = 4.6481
eval_samples = 100
eval_samples_per_second = 21.514
```
now let's add `--fp16_full_eval`:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval \
--fp16_full_eval
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 2MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 60MB
train_mem_cpu_peaked_delta = 63MB
train_mem_gpu_alloc_delta = 231MB
train_mem_gpu_peaked_delta = 194MB
train_runtime = 7.1477
train_samples = 10
train_samples_per_second = 0.7
***** eval metrics *****
epoch = 1.0
eval_bleu = 2.4612
eval_gen_len = 18.53
eval_loss = 5.0168
eval_mem_cpu_alloc_delta = 0MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = -231MB
eval_mem_gpu_peaked_delta = 262MB
eval_runtime = 6.0125
eval_samples = 100
eval_samples_per_second = 16.632
```
As you can see, without `--fp16_full_eval` we get ~22 samples per second and with it only ~17 - that's a huge difference.
I also tested with a larger sample and the gap remains constant.
The halving (i.e. the `model.half()` call) happens here:
https://github.com/huggingface/transformers/blob/21e86f99e6b91af2e4df3790ba6c781e85fa0eb5/src/transformers/trainer.py#L1800
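To see which kernels account for the gap once the model has been halved, a profiling sketch like the following could help (requires a GPU and a PyTorch version that ships `torch.profiler`; the model and input shapes are placeholders):
```python
import torch
from torch.profiler import ProfilerActivity, profile
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small").cuda().eval().half()
input_ids = torch.randint(0, 32000, (16, 12), device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(input_ids=input_ids, decoder_input_ids=input_ids)

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```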
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10816/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10815/comments | https://api.github.com/repos/huggingface/transformers/issues/10815/events | https://github.com/huggingface/transformers/pull/10815 | 836,674,534 | MDExOlB1bGxSZXF1ZXN0NTk3MTcxMDgz | 10,815 | [trainer] fix nan in full-fp16 label_smoothing eval | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @stas00 \r\nI tested this PR and for me this becomes really slower with mt5-small model after adding the modifications in this PR, here is the command I run, I am using transformer=4.4.2. I will be grateful to your expert knowledge to have the speed issue also fixed. Thank you very much for the incredible jobs you do. \r\n\r\n`deepspeed run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir test/tst-t1ranslatieeeon --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --predict_with_generate --max_train_samples 100 --fp16 --deepspeed ds_config.json --max_val_samples 100 --logging_step 10\r\n`",
"I can't possibly see how this PR could impact the speed since it changes the label_smoother and your command line doesn't have `--label_smoothing 0.1` so the modified code in this PR doesn't get to run.\r\n\r\nThat said when you use this PR you're in a way using `master`, so perhaps you were testing with some other `transformers` version before and noticed a regression in `master`. Try to test with whatever version you were using before and then retest with the `master` branch and see whether you can reproduce your issue. \r\n\r\nIf it is, do you know how to use `git bisect`? You can then in a matter of a few runs find out the commit that impacted the performance. If you can't figure it out just give me the last good transformers version and I will help you from there.\r\n\r\n-----\r\n\r\nAlso you're not telling me what's inside `ds_config.json` - I assume it's zero2 configuration. zero3 isn't quite ready yet."
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | This PR fixes the issue of getting NaN eval loss with any inference that uses full fp16 model - which is the case with deepspeed or when `--fp16_full_eval` is passed.
The problem is that `log_probs.sum` runs over 30-50K of numbers overflows easily in fp16, so this PR switches it to fp32 internally. Which surprisingly requires almost no extra memory. As the conversion happens on the hardware level and we only need an extra `2 bytes * batch_size` of additional memory.
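A tiny illustration of the overflow (the numbers are illustrative; in the label smoother the values are per-token log-probabilities, but the mechanism is the same):
```python
import torch

# The true sum (75000) exceeds the fp16 maximum (~65504), so a pure fp16
# reduction comes back as inf, while accumulating/storing in fp32 is fine.
x = torch.full((50_000,), 1.5, dtype=torch.float16)
print(x.sum())                     # tensor(inf, dtype=torch.float16)
print(x.sum(dtype=torch.float32))  # tensor(75000.)
```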
Here is some data showing that the metrics remain the same after this fix:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval --label_smoothing 0.1
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 2MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 60MB
train_mem_cpu_peaked_delta = 63MB
train_mem_gpu_alloc_delta = 231MB
train_mem_gpu_peaked_delta = 194MB
train_runtime = 7.7162
train_samples = 10
train_samples_per_second = 0.648
***** eval metrics *****
epoch = 1.0
eval_bleu = 2.4612
eval_gen_len = 18.53
eval_loss = 5.017
eval_mem_cpu_alloc_delta = 0MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 244MB
eval_runtime = 4.6481
eval_samples = 100
eval_samples_per_second = 21.514
```
now let's add `--fp16_full_eval`, which before this PR leads to ` eval_loss = nan`
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval --label_smoothing 0.1 \
--fp16_full_eval
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 2MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 60MB
train_mem_cpu_peaked_delta = 63MB
train_mem_gpu_alloc_delta = 231MB
train_mem_gpu_peaked_delta = 194MB
train_runtime = 7.1477
train_samples = 10
train_samples_per_second = 0.7
***** eval metrics *****
epoch = 1.0
eval_bleu = 2.4612
eval_gen_len = 18.53
eval_loss = 5.0168
eval_mem_cpu_alloc_delta = 0MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = -231MB
eval_mem_gpu_peaked_delta = 262MB
eval_runtime = 6.0125
eval_samples = 100
eval_samples_per_second = 16.632
```
`eval_loss` is off by 0.0002.
I spent quite some time trying to find where to add a test, but it's a tricky situation where the input has to be pretty huge. I remember seeing it in some deepspeed tests, but I can't find it at the moment; currently all tests return a normal number.
One interesting thing I noticed is that `--fp16_full_eval` makes eval 20-25% slower, which is strange, but I verified that this PR has no impact on the speed. I filed a separate issue about it: https://github.com/huggingface/transformers/issues/10816
Fixes: https://github.com/huggingface/transformers/issues/10674
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10815/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10815",
"html_url": "https://github.com/huggingface/transformers/pull/10815",
"diff_url": "https://github.com/huggingface/transformers/pull/10815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10815.patch",
"merged_at": 1616466204000
} |
https://api.github.com/repos/huggingface/transformers/issues/10814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10814/comments | https://api.github.com/repos/huggingface/transformers/issues/10814/events | https://github.com/huggingface/transformers/pull/10814 | 836,328,831 | MDExOlB1bGxSZXF1ZXN0NTk2ODUxMzk3 | 10,814 | [makefile] autogenerate target | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | As a follow up to https://github.com/huggingface/transformers/pull/10801 this PR proposes to group autogeneration code in a separate target. I think as the number of little things the makefile does this helps with clarity.
There is no functional change.
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10814/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10814",
"html_url": "https://github.com/huggingface/transformers/pull/10814",
"diff_url": "https://github.com/huggingface/transformers/pull/10814.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10814.patch",
"merged_at": 1616418862000
} |
https://api.github.com/repos/huggingface/transformers/issues/10813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10813/comments | https://api.github.com/repos/huggingface/transformers/issues/10813/events | https://github.com/huggingface/transformers/issues/10813 | 836,294,700 | MDU6SXNzdWU4MzYyOTQ3MDA= | 10,813 | Example code for ReformerForMaskedLM | {
"login": "ksrinivs64",
"id": 10170178,
"node_id": "MDQ6VXNlcjEwMTcwMTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/10170178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksrinivs64",
"html_url": "https://github.com/ksrinivs64",
"followers_url": "https://api.github.com/users/ksrinivs64/followers",
"following_url": "https://api.github.com/users/ksrinivs64/following{/other_user}",
"gists_url": "https://api.github.com/users/ksrinivs64/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksrinivs64/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksrinivs64/subscriptions",
"organizations_url": "https://api.github.com/users/ksrinivs64/orgs",
"repos_url": "https://api.github.com/users/ksrinivs64/repos",
"events_url": "https://api.github.com/users/ksrinivs64/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksrinivs64/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That is not a bug:\r\nhttps://stackoverflow.com/questions/66625945/huggingfaces-reformerformaskedlm-configuration-issue/66636363#66636363",
"Unfortunately that did not help. Adding:\r\n```from transformers import ReformerTokenizer, ReformerForMaskedLM, ReformerConfig\r\nimport torch\r\n\r\ntokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')\r\nconfig = ReformerConfig.from_pretrained('google/reformer-crime-and-punishment')\r\nconfig.is_decoder=False\r\nmodel = ReformerForMaskedLM.from_pretrained('google/reformer-crime-and-punishment', config=config)\r\n\r\ninputs = tokenizer(\"The capital of France is [MASK].\", return_tensors=\"pt\")\r\nlabels = tokenizer(\"The capital of France is Paris.\", return_tensors=\"pt\")[\"input_ids\"]\r\n\r\noutputs = model(**inputs, labels=labels)\r\nloss = outputs.loss\r\nlogits = outputs.logits```\r\n\r\ncaused this exception:\r\n```Traceback (most recent call last):\r\n File \"testReformers.py\", line 13, in <module>\r\n outputs = model(**inputs, labels=labels)\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py\", line 2367, in forward\r\n masked_lm_loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/modules/loss.py\", line 961, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/functional.py\", line 2468, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File \"/mnt/nfs/d4nvme0/userhomes/ksrinivs/anaconda3/envs/reformers/lib/python3.8/site-packages/torch/nn/functional.py\", line 2261, in nll_loss\r\n raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'\r\nValueError: Expected input batch_size (21) to match target batch_size (17).```.\r\n\r\nSomething seems amiss here given that there is a single sentence being passed in and it seems to think we have batch sizes of 21 and 17?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1[11.0]
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@patrickvonplaten
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): ReformerForMaskedLM
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Running the example code for ReformerForMaskedLM:
```
from transformers import ReformerTokenizer, ReformerForMaskedLM
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForMaskedLM.from_pretrained('google/reformer-crime-and-punishment')
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```
causes:
```AssertionError: If you want to use `ReformerForMaskedLM` make sure `config.is_decoder=False` for bi-directional self-attention.```
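As the assertion itself suggests, one way past it is to override the flag when loading the model (a minimal sketch only; the labels/input length mismatch discussed in the replies is a separate problem):

```python
from transformers import ReformerConfig, ReformerForMaskedLM

# override is_decoder so the model uses bi-directional self-attention for MLM
config = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", is_decoder=False)
model = ReformerForMaskedLM.from_pretrained("google/reformer-crime-and-punishment", config=config)
```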
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10813/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10812/comments | https://api.github.com/repos/huggingface/transformers/issues/10812/events | https://github.com/huggingface/transformers/issues/10812 | 836,207,708 | MDU6SXNzdWU4MzYyMDc3MDg= | 10,812 | Domain adaptation | {
"login": "lematmat",
"id": 19993147,
"node_id": "MDQ6VXNlcjE5OTkzMTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/19993147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lematmat",
"html_url": "https://github.com/lematmat",
"followers_url": "https://api.github.com/users/lematmat/followers",
"following_url": "https://api.github.com/users/lematmat/following{/other_user}",
"gists_url": "https://api.github.com/users/lematmat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lematmat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lematmat/subscriptions",
"organizations_url": "https://api.github.com/users/lematmat/orgs",
"repos_url": "https://api.github.com/users/lematmat/repos",
"events_url": "https://api.github.com/users/lematmat/events{/privacy}",
"received_events_url": "https://api.github.com/users/lematmat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"Thank you very much.\r\nI will contact the forum.\r\nRegards,\r\nlematmat\r\n"
] | 1,616 | 1,616 | 1,616 | NONE | null | Hi all,
I'm just wondering how to do domain adaptation of a pre-trained Camembert model on my custom dataset?
I haven't found any information in Transformers documentation.
Best regards,
Lematmat
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.4.0
- Platform:Jupyter Notebook
- Python version:3.7
- PyTorch version (GPU?):1.8, no GPU
- Tensorflow version (GPU?):
- Using GPU in script?:no
- Using distributed or parallel set-up in script?:no
-->
## Information
Model I am using (Bert, XLNet ...): Camembert
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10812/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10811/comments | https://api.github.com/repos/huggingface/transformers/issues/10811/events | https://github.com/huggingface/transformers/pull/10811 | 836,081,076 | MDExOlB1bGxSZXF1ZXN0NTk2NjQxNDM5 | 10,811 | Add transformers id to hub requests | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
This PR adds a `TRANSFORMERS_ID` const, which helps us group the several file requests made against the hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10811",
"html_url": "https://github.com/huggingface/transformers/pull/10811",
"diff_url": "https://github.com/huggingface/transformers/pull/10811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10811.patch",
"merged_at": 1616167592000
} |
https://api.github.com/repos/huggingface/transformers/issues/10810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10810/comments | https://api.github.com/repos/huggingface/transformers/issues/10810/events | https://github.com/huggingface/transformers/issues/10810 | 836,074,038 | MDU6SXNzdWU4MzYwNzQwMzg= | 10,810 | handle_impossible_answer not working in the question answering pipeline for ROBERTa model | {
"login": "mmaslankowska-neurosys",
"id": 77386734,
"node_id": "MDQ6VXNlcjc3Mzg2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/77386734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmaslankowska-neurosys",
"html_url": "https://github.com/mmaslankowska-neurosys",
"followers_url": "https://api.github.com/users/mmaslankowska-neurosys/followers",
"following_url": "https://api.github.com/users/mmaslankowska-neurosys/following{/other_user}",
"gists_url": "https://api.github.com/users/mmaslankowska-neurosys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmaslankowska-neurosys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmaslankowska-neurosys/subscriptions",
"organizations_url": "https://api.github.com/users/mmaslankowska-neurosys/orgs",
"repos_url": "https://api.github.com/users/mmaslankowska-neurosys/repos",
"events_url": "https://api.github.com/users/mmaslankowska-neurosys/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmaslankowska-neurosys/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I believe this is was overlook on our part. Your change looks reasonable to me, do you want to open a PR with your proposed fix?\r\n\r\nAnd thank you for opening such a detailed and well-structured issue!"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | ### Environment info
- Platform: Linux 20.04
- Python version 3.8.5
- `transformers` version `3.5.0` and `4.3.2`
### The issue
I'm using the `pipeline("question-answering")` with QA Models downloaded [from community](https://huggingface.co/models?pipeline_tag=question-answering). I'm evaluating models on the SQUAD 2.0 dataset which doesn't always have an answer to the given question - that's what the `handle_impossible_answer` flag in the pipeline is for.
I noticed that a RoBERTa model (any RoBERTa, not just one specific checkpoint) in version 4 of `transformers` always produces an answer despite the `handle_impossible_answer` flag - even when the same model on the same example produced no answer (returned "" as the answer) under version 3 of the library.
```python
bert_model_name = 'deepset/bert-base-cased-squad2'
roberta_model_name = 'deepset/roberta-base-squad2'
bert_tokenizer = AutoTokenizer.from_pretrained(bert_model_name)
bert_model = AutoModelForQuestionAnswering.from_pretrained(bert_model_name, return_dict=True)
roberta_tokenizer = AutoTokenizer.from_pretrained(roberta_model_name)
roberta_model = AutoModelForQuestionAnswering.from_pretrained(roberta_model_name, return_dict=True)
bert_qa = pipeline('question-answering', tokenizer=bert_tokenizer, model=bert_model)
roberta_qa = pipeline('question-answering', tokenizer=roberta_tokenizer, model=roberta_model)
# Random SQUAD 2.0 example which doesn't have an answer to the question
question = 'What was the name of the only ship operating in the Indian Ocean?'
context = 'In September 1695, Captain Henry Every, an English pirate on board the Fancy, reached the Straits of Bab-el-Mandeb, where he teamed up with five other pirate captains to make an attack on the Indian fleet making the annual voyage to Mocha. The Mughal convoy included the treasure-laden Ganj-i-Sawai, reported to be the greatest in the Mughal fleet and the largest ship operational in the Indian Ocean, and its escort, the Fateh Muhammed. They were spotted passing the straits en route to Surat. The pirates gave chase and caught up with Fateh Muhammed some days later, and meeting little resistance, took some £50,000 to £60,000 worth of treasure.'
print(bert_qa(question=question, context=context, handle_impossible_answer=True))
# transformers 3.5.0: {'score': 0.999398410320282, 'start': 0, 'end': 0, 'answer': ''}
# transformers 4.3.2: {'score': 0.999398410320282, 'start': 0, 'end': 0, 'answer': ''}
print(roberta_qa(question=question, context=context, handle_impossible_answer=True))
# transformers 3.5.0: {'score': 0.979897797107697, 'start': 0, 'end': 0, 'answer': ''}
# transformers 4.3.2: {'score': 0.222181886434555, 'start': 422, 'end': 436, 'answer': 'Fateh Muhammed'}
```
### Probable issue reason
I've found out that in the `question_answering.py` file in the `pipelines` directory, in version 4 of `transformers`, there is a condition that prevents RoBERTa models from adjusting the `p_mask` for this task. It looks simply like this: `if self.tokenizer.cls_token_id`. And since RoBERTa's `cls_token_id = 0`, the condition isn't met and the `p_mask` isn't changed for the `cls_token`. This results in omitting the token while answering the question (it behaves as if, e.g., the token were part of the question). BERT's `cls_token_id = 101`, for example, so there the condition is met.
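A quick way to see the truthiness pitfall (a small sketch, reusing the checkpoints from above):

```python
from transformers import AutoTokenizer

roberta_tok = AutoTokenizer.from_pretrained('deepset/roberta-base-squad2')
bert_tok = AutoTokenizer.from_pretrained('deepset/bert-base-cased-squad2')

print(roberta_tok.cls_token_id)  # 0   -> `if 0:` is falsy, so the p_mask update is skipped
print(bert_tok.cls_token_id)     # 101 -> truthy, so the CLS position is unmasked as intended
```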
### Plausible solution
Possibly the easy solution is to expand the condition to `if self.tokenizer.cls_token_id is not None`. However, there wasn't such a condition in version 3 at all so maybe it performs some crucial function in its current form that I'm not aware of...
```python
# originally the condition here was more general and looked like this
# if self.tokenizer.cls_token_id:
if self.tokenizer.cls_token_id is not None:
cls_index = np.nonzero(encoded_inputs["input_ids"] == self.tokenizer.cls_token_id)
p_mask[cls_index] = 0
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10810/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10810/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10809/comments | https://api.github.com/repos/huggingface/transformers/issues/10809/events | https://github.com/huggingface/transformers/pull/10809 | 835,990,238 | MDExOlB1bGxSZXF1ZXN0NTk2NTY1Njgx | 10,809 | [Flax] Add general conversion script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"> Great work! My only concern is to make sure we don't lose any performance by not using `nn.linen.SelfAttention`. If we are just using the same code as its implementation, there is no reason for that but it's good to double-check.\r\n> Otherwise, I agree it's better to re-implement it than to have custom weight loading logic..\r\n\r\nGreat! Yeah, I'll talk with @avital about this next week (hopefully) :-) "
] | 1,616 | 1,619 | 1,617 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR changes the weight architecture of `FlaxBertModel` so that it corresponds 1-to-1 to PyTorch's version of `BertModel`. This means that some weights had to be renamed (*e.g.* "layer_norm" -> "LayerNorm" since PyTorch uses "LayerNorm") and also some new `flax.linen.Modules`, such as `FlaxBertSelfOutput` had to be created.
As can be seen, the PT=>Flax conversion function is now kept very general and can be applied to all models so that we can fully delete any model-specific conversion logic.
The PR has one drawback however:
Flax official [SelfAttention Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.SelfAttention.html#flax-linen-selfattention) cannot be used anymore since it doesn't give us enough flexibility to convert PyTorch weights to flax weights without having a model-specific conversion function. FlaxBERT's new attention modules fully correspond to PyTorchBERT's attention modules and are IMO still kept quite short by relying on Flax's [`dot_product_attention` function](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.dot_product_attention.html). Another drawback is that for auto-regressive Transformers models we will have to manually add all the code corresponding to cached / auto-regressive attention to the attention module (which we do for PyTorch anyways) instead of being able to use already existing code of `nn.linen.SelfAttention` -> see here: https://github.com/google/flax/blob/e31063da71bd7a4df137b000df6a48b0cea35a2b/flax/linen/attention.py#L202.
All in all, rewriting parts of `flax.linen.SelfAttention` is the right choice here though because it allows us to have a much cleaner conversion function with very little downside IMO (slightly higher maintenance because we need to copy-paste a bit more code).
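To make the trade-off concrete, here is a rough sketch (illustrative only; the module and parameter names are mine, not the actual `FlaxBertSelfAttention` in this PR) of what re-implementing the attention on top of `flax.linen.dot_product_attention` looks like, so that the q/k/v projections map 1-to-1 onto PyTorch's `nn.Linear` weights:

```python
import flax.linen as nn

class FlaxSelfAttentionSketch(nn.Module):
    num_heads: int
    head_dim: int

    @nn.compact
    def __call__(self, hidden_states):
        # separate query/key/value Dense layers so each maps 1-to-1 onto a PyTorch nn.Linear
        query = nn.Dense(self.num_heads * self.head_dim, name="query")(hidden_states)
        key = nn.Dense(self.num_heads * self.head_dim, name="key")(hidden_states)
        value = nn.Dense(self.num_heads * self.head_dim, name="value")(hidden_states)

        # reshape to (batch, seq_len, num_heads, head_dim) as expected by dot_product_attention
        split = lambda x: x.reshape(x.shape[:2] + (self.num_heads, self.head_dim))

        # reuse Flax's functional attention kernel instead of nn.SelfAttention
        attn = nn.dot_product_attention(split(query), split(key), split(value))

        # merge heads back to (batch, seq_len, hidden_size)
        return attn.reshape(attn.shape[:2] + (-1,))
```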
@LysandreJik @sgugger - could you check if you agree more or less with my solution here (below I left some comments to showcase the trade-offs a bit better). I'll clean the code & upload the new weight structure then :-)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10809/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10809",
"html_url": "https://github.com/huggingface/transformers/pull/10809",
"diff_url": "https://github.com/huggingface/transformers/pull/10809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10809.patch",
"merged_at": 1617095639000
} |
https://api.github.com/repos/huggingface/transformers/issues/10808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10808/comments | https://api.github.com/repos/huggingface/transformers/issues/10808/events | https://github.com/huggingface/transformers/pull/10808 | 835,934,066 | MDExOlB1bGxSZXF1ZXN0NTk2NTE3NTk5 | 10,808 | wav2vec doc tweaks | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok actually this is ready to merge"
] | 1,616 | 1,616 | 1,616 | MEMBER | null | tiny tweaks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10808",
"html_url": "https://github.com/huggingface/transformers/pull/10808",
"diff_url": "https://github.com/huggingface/transformers/pull/10808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10808.patch",
"merged_at": 1616172534000
} |
https://api.github.com/repos/huggingface/transformers/issues/10807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10807/comments | https://api.github.com/repos/huggingface/transformers/issues/10807/events | https://github.com/huggingface/transformers/issues/10807 | 835,780,687 | MDU6SXNzdWU4MzU3ODA2ODc= | 10,807 | I am finetuning mBART for summarization using finetune_trainer.py on custom dataset, but I keep getting this error. | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | This is the traceback:
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 453 but the index is 453', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
thread '<unnamed>' panicked at 'range end index 140732665363856 out of range for slice of length 0', /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/alloc/src/vec.rs:1317:42
stack backtrace:
0: 0x7f7340048b40 - std::backtrace_rs::backtrace::libunwind::trace::h04d12fdcddff82aa
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/../../backtrace/src/backtrace/libunwind.rs:100:5
1: 0x7f7340048b40 - std::backtrace_rs::backtrace::trace_unsynchronized::h1459b974b6fbe5e1
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x7f7340048b40 - std::sys_common::backtrace::_print_fmt::h9b8396a669123d95
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:67:5
3: 0x7f7340048b40 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::he009dcaaa75eed60
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:46:22
4: 0x7f734006806c - core::fmt::write::h77b4746b0dea1dd3
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/fmt/mod.rs:1078:17
5: 0x7f7340046362 - std::io::Write::write_fmt::heb7e50902e98831c
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/io/mod.rs:1518:15
6: 0x7f734004afb5 - std::sys_common::backtrace::_print::h2d880c9e69a21be9
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:49:5
7: 0x7f734004afb5 - std::sys_common::backtrace::print::h5f02b1bb49f36879
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:36:9
8: 0x7f734004afb5 - std::panicking::default_hook::{{closure}}::h658e288a7a809b29
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:208:50
9: 0x7f734004ac58 - std::panicking::default_hook::hb52d73f0da9a4bb8
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:227:9
10: 0x7f734004b751 - std::panicking::rust_panic_with_hook::hfe7e1c684e3e6462
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:593:17
11: 0x7f734004b297 - std::panicking::begin_panic_handler::{{closure}}::h42939e004b32765c
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:499:13
12: 0x7f7340048ffc - std::sys_common::backtrace::__rust_end_short_backtrace::h9d2070f7bf9fd56c
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/sys_common/backtrace.rs:141:18
13: 0x7f734004b1f9 - rust_begin_unwind
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/std/src/panicking.rs:495:5
14: 0x7f7340065fd1 - core::panicking::panic_fmt::ha0bb065d9a260792
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/panicking.rs:92:14
15: 0x7f7340069d32 - core::slice::index::slice_end_index_len_fail::hcd7c711938bf4c03
at /rustc/e1884a8e3c3e813aada8254edfa120e85bf5ffca/library/core/src/slice/index.rs:41:5
16: 0x7f733fd95d63 - core::ptr::drop_in_place::h2923a820a2e4a8d4
17: 0x7f733fd9b01c - <rayon::vec::IntoIter<T> as rayon::iter::IndexedParallelIterator>::with_producer::hd6f8d390195a749b
18: 0x7ffee086ff90 - <unknown>
thread panicked while panicking. aborting.
```
I am using Colab for finetuning mBART. Any help will be appreciated. Thank you:) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10806/comments | https://api.github.com/repos/huggingface/transformers/issues/10806/events | https://github.com/huggingface/transformers/pull/10806 | 835,768,627 | MDExOlB1bGxSZXF1ZXN0NTk2Mzc1ODI4 | 10,806 | [XLSR-Wav2Vec2 Info doc] Add a couple of lines | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10806/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10806",
"html_url": "https://github.com/huggingface/transformers/pull/10806",
"diff_url": "https://github.com/huggingface/transformers/pull/10806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10806.patch",
"merged_at": 1616147574000
} |
https://api.github.com/repos/huggingface/transformers/issues/10805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10805/comments | https://api.github.com/repos/huggingface/transformers/issues/10805/events | https://github.com/huggingface/transformers/issues/10805 | 835,749,745 | MDU6SXNzdWU4MzU3NDk3NDU= | 10,805 | ONNX export outputs many warnings | {
"login": "NebelAI",
"id": 7240417,
"node_id": "MDQ6VXNlcjcyNDA0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NebelAI",
"html_url": "https://github.com/NebelAI",
"followers_url": "https://api.github.com/users/NebelAI/followers",
"following_url": "https://api.github.com/users/NebelAI/following{/other_user}",
"gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions",
"organizations_url": "https://api.github.com/users/NebelAI/orgs",
"repos_url": "https://api.github.com/users/NebelAI/repos",
"events_url": "https://api.github.com/users/NebelAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/NebelAI/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null |
I was testing ONNX export via your **04-onnx-export.ipynb** notebook and when calling `!python -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased --opset 11 --quantize onnx/bert-base-cased2.onnx` I get many Warnings like:
```
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Attention. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
...
```
They only appear when using the `--quantize` flag.
I know these are just warnings, but do they affect the export process in any way? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10805/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10804/comments | https://api.github.com/repos/huggingface/transformers/issues/10804/events | https://github.com/huggingface/transformers/issues/10804 | 835,608,676 | MDU6SXNzdWU4MzU2MDg2NzY= | 10,804 | Initializing ddp is extremely slow when finetuning RAG | {
"login": "tangxiangru",
"id": 22478336,
"node_id": "MDQ6VXNlcjIyNDc4MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/22478336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tangxiangru",
"html_url": "https://github.com/tangxiangru",
"followers_url": "https://api.github.com/users/tangxiangru/followers",
"following_url": "https://api.github.com/users/tangxiangru/following{/other_user}",
"gists_url": "https://api.github.com/users/tangxiangru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tangxiangru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tangxiangru/subscriptions",
"organizations_url": "https://api.github.com/users/tangxiangru/orgs",
"repos_url": "https://api.github.com/users/tangxiangru/repos",
"events_url": "https://api.github.com/users/tangxiangru/events{/privacy}",
"received_events_url": "https://api.github.com/users/tangxiangru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | Hi, when I am finetuning the RAG model, it seems that the DDP process is extremely slow. I waited 1 day but still did not see the training process.
loading file None
loading file None
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/special_tokens_map.json
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer_config.json
Global seed set to 42
Global seed set to 42
LOCAL_RANK: 2 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Using native 16bit precision.
Global seed set to 42
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Using native 16bit precision.
Global seed set to 42
INFO:__main__:Custom init_ddp_connection.
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/4
INFO:__main__:Custom init_ddp_connection.
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10803/comments | https://api.github.com/repos/huggingface/transformers/issues/10803/events | https://github.com/huggingface/transformers/issues/10803 | 835,554,382 | MDU6SXNzdWU4MzU1NTQzODI= | 10,803 | How much vRAM should I have for fine tuning DeBERTa v2 xxlarge? | {
"login": "ngoquanghuy99",
"id": 36761076,
"node_id": "MDQ6VXNlcjM2NzYxMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/36761076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngoquanghuy99",
"html_url": "https://github.com/ngoquanghuy99",
"followers_url": "https://api.github.com/users/ngoquanghuy99/followers",
"following_url": "https://api.github.com/users/ngoquanghuy99/following{/other_user}",
"gists_url": "https://api.github.com/users/ngoquanghuy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngoquanghuy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngoquanghuy99/subscriptions",
"organizations_url": "https://api.github.com/users/ngoquanghuy99/orgs",
"repos_url": "https://api.github.com/users/ngoquanghuy99/repos",
"events_url": "https://api.github.com/users/ngoquanghuy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngoquanghuy99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't know the answer, but hoping that it works after this PR got merged https://github.com/huggingface/transformers/pull/10753\r\n\r\nDo you already use deepspeed ?",
"No, i did not"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | I'm fine tuning DeBERTa v2 xxlarge with 1.5B parameters on Nvidia Tesla T4 (16GB vRAM) and it returns "CUDA out of memory".
How much vRAM is enough?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10803/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10802/comments | https://api.github.com/repos/huggingface/transformers/issues/10802/events | https://github.com/huggingface/transformers/pull/10802 | 835,420,665 | MDExOlB1bGxSZXF1ZXN0NTk2MDgxODYz | 10,802 | addressing vulnerability report in research project deps | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | This PR addresses this security alert:
https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/Pillow/open
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10802",
"html_url": "https://github.com/huggingface/transformers/pull/10802",
"diff_url": "https://github.com/huggingface/transformers/pull/10802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10802.patch",
"merged_at": 1616119330000
} |
https://api.github.com/repos/huggingface/transformers/issues/10801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10801/comments | https://api.github.com/repos/huggingface/transformers/issues/10801/events | https://github.com/huggingface/transformers/pull/10801 | 835,393,175 | MDExOlB1bGxSZXF1ZXN0NTk2MDU4MTg3 | 10,801 | Sort init import | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To address your comments on the `MakeFile`, I have removed some checks from `extra_quality_checks` because they are checks that modify content and `make quality` is only supposed to check, not change.\r\n\r\nTo have `make fixup` still work as intended, I put the checks that change content in `extra_style_checks` that is called both by `make fixup` and `make style`. Could you double-check it looks okay @LysandreJik and @stas00 ? Thanks!"
] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
Not a high-priority item but I get bored at nights and I like writing those kinds of scripts 😅
So this PR adds a script to properly sort the imports inside `_import_structure`, because people have been absolutely ruthless, putting their objects in any kind of random order. That's not very feng-shui, so I'm bringing back harmony by applying the same sort as isort to all `__init__` files that contain an `_import_structure`.
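As a toy illustration only (not the actual script added in this PR), the intended effect on a tiny `_import_structure`:

```python
# toy example: order modules alphabetically and object names case-insensitively, roughly like isort
_import_structure = {
    "models.bert": ["BertModel", "BertConfig", "BertTokenizer"],
    "models.albert": ["AlbertConfig", "AlbertModel"],
}
_import_structure = {
    module: sorted(names, key=str.lower)
    for module, names in sorted(_import_structure.items())
}
```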
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10801/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10801",
"html_url": "https://github.com/huggingface/transformers/pull/10801",
"diff_url": "https://github.com/huggingface/transformers/pull/10801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10801.patch",
"merged_at": 1616185033000
} |
https://api.github.com/repos/huggingface/transformers/issues/10800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10800/comments | https://api.github.com/repos/huggingface/transformers/issues/10800/events | https://github.com/huggingface/transformers/issues/10800 | 835,347,217 | MDU6SXNzdWU4MzUzNDcyMTc= | 10,800 | How to get a probability for the result of t5_tokenizer.decode(output,...)? | {
"login": "LianaN",
"id": 11301976,
"node_id": "MDQ6VXNlcjExMzAxOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/11301976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LianaN",
"html_url": "https://github.com/LianaN",
"followers_url": "https://api.github.com/users/LianaN/followers",
"following_url": "https://api.github.com/users/LianaN/following{/other_user}",
"gists_url": "https://api.github.com/users/LianaN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LianaN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LianaN/subscriptions",
"organizations_url": "https://api.github.com/users/LianaN/orgs",
"repos_url": "https://api.github.com/users/LianaN/repos",
"events_url": "https://api.github.com/users/LianaN/events{/privacy}",
"received_events_url": "https://api.github.com/users/LianaN/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,616 | 1,616 | 1,616 | NONE | null | Hello,
I am using `t5-base` to map phrases into categories, for example: "I want to eat" -> "hunger".
Is there any way to get the probability for `result` values?
For example, if the input is "He is hungry", the model returns 5 labels. These results seem to be ordered by some relevance rank, so that the most relevant label is always first in `outputs`. So, my question is how can I retrieve these probabilities?
My final goal is to set a threshold on the probability, so that `outputs` would only include results that pass this threshold, or be empty if nothing relevant is found.
```
t5_tokenizer = T5Tokenizer.from_pretrained('t5-base')
t5_model = T5ForConditionalGeneration.from_pretrained('t5-base')
...
model.model.eval()
outputs = model.model.generate(
input_ids=test_input_ids,attention_mask=test_attention_mask,
max_length=64,
early_stopping=True,
num_beams=10,
num_return_sequences=5,
no_repeat_ngram_size=2
)
for output in outputs:
result = t5_tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(result)
```
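For reference, here is a minimal sketch of one way to retrieve a score per returned sequence, assuming a transformers version where `generate` accepts `return_dict_in_generate` and `output_scores` (the `encoded` variable and the threshold below are illustrative):
```
import torch

# Sketch: ask generate() for a structured output that keeps the beam-search scores.
encoded = t5_tokenizer("He is hungry", return_tensors="pt")

generation = t5_model.generate(
    input_ids=encoded.input_ids,
    attention_mask=encoded.attention_mask,
    max_length=64,
    num_beams=10,
    num_return_sequences=5,
    no_repeat_ngram_size=2,
    return_dict_in_generate=True,  # return an object instead of a plain tensor of ids
    output_scores=True,            # keep the scores computed during beam search
)

# sequences_scores holds the (length-normalized) log-probability of each returned beam.
probabilities = torch.exp(generation.sequences_scores)

for sequence, probability in zip(generation.sequences, probabilities):
    text = t5_tokenizer.decode(sequence, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    if probability >= 0.5:  # illustrative threshold
        print(text, float(probability))
```
Exponentiating the length-normalized log-probability only gives an approximate per-sequence probability, but it is usually good enough to threshold on.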
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10800/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10799/comments | https://api.github.com/repos/huggingface/transformers/issues/10799/events | https://github.com/huggingface/transformers/pull/10799 | 835,259,226 | MDExOlB1bGxSZXF1ZXN0NTk1OTQyODg5 | 10,799 | Expand a bit the presentation of examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'd be super-handy to link directly to suitable datasets and models for each example as in \r\n- https://huggingface.co/datasets?search=squad\r\n- https://huggingface.co/models?filter=squad\r\n\r\nmay be this could be an easy first good issue.\r\n\r\nSome of the keywords and whether to use `?filter=` or `?search=` will require some investigation since the former is hidden and packs some power missing from the latter.",
"The first may be helpful, but the second is not necessarily: it shows the models that have been fine-tuned on a squad dataset, not the models that can be fine-tuned on it. There is no way to filter all the models that have an architecture containing a question-answering head as far as I know, which is what we would want to show.",
"Would this be at least in the right direction? https://huggingface.co/models?pipeline_tag=question-answering \r\n",
"Mmm, those seem to be models fine-tuned on a question-answering task, not all models with a QuestionAnswering arch available (for instance, you should see all BERT checkpoints, all distilBERT checkpoints etc).",
"OK, then it won't work.\r\n\r\nIt'd be really awesome if in the future we had a filter to filter models by architecture - and sub-architecture in this case - that is without the model-specific part of the class name."
] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
This PR adds a bit more information to the examples README (main and specific per example), copying some information from the main philosophy and expanding a bit, to make sure all users know what we want for the examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10799",
"html_url": "https://github.com/huggingface/transformers/pull/10799",
"diff_url": "https://github.com/huggingface/transformers/pull/10799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10799.patch",
"merged_at": 1616162769000
} |
https://api.github.com/repos/huggingface/transformers/issues/10798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10798/comments | https://api.github.com/repos/huggingface/transformers/issues/10798/events | https://github.com/huggingface/transformers/issues/10798 | 835,173,316 | MDU6SXNzdWU4MzUxNzMzMTY= | 10,798 | Truncated words on GPT-2 output | {
"login": "Vova-B",
"id": 35993213,
"node_id": "MDQ6VXNlcjM1OTkzMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/35993213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vova-B",
"html_url": "https://github.com/Vova-B",
"followers_url": "https://api.github.com/users/Vova-B/followers",
"following_url": "https://api.github.com/users/Vova-B/following{/other_user}",
"gists_url": "https://api.github.com/users/Vova-B/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vova-B/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vova-B/subscriptions",
"organizations_url": "https://api.github.com/users/Vova-B/orgs",
"repos_url": "https://api.github.com/users/Vova-B/repos",
"events_url": "https://api.github.com/users/Vova-B/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vova-B/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,616 | 1,616 | 1,616 | NONE | null | Hi! I use the GPT-2 model for a seq2seq task, but unfortunately words get cut off in the model's output and sentences are left unfinished. How can I make the model finish its sentences and not cut off words? (Increasing the maximum length does not correct the situation.)
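One thing that might help (a minimal sketch, not necessarily the right fix for your setup; the model name and prompt below are illustrative): let generation stop at the end-of-text token and, if the output still gets truncated at `max_length`, trim the decoded text back to the last sentence-ending punctuation.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=128,
    eos_token_id=tokenizer.eos_token_id,  # stop when the model emits end-of-text
    pad_token_id=tokenizer.eos_token_id,
    no_repeat_ngram_size=2,
)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# If generation hit max_length before finishing, cut back to the last complete sentence.
last_stop = max(text.rfind(ch) for ch in ".!?")
if last_stop != -1:
    text = text[: last_stop + 1]
print(text)
```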
P.S.
I'm sorry, this question is probably very stupid, but I just can't figure it out. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10797/comments | https://api.github.com/repos/huggingface/transformers/issues/10797/events | https://github.com/huggingface/transformers/issues/10797 | 835,097,282 | MDU6SXNzdWU4MzUwOTcyODI= | 10,797 | Pretrained XLNetTokenizer not returning tokenizer | {
"login": "gauravsharma-97",
"id": 28568869,
"node_id": "MDQ6VXNlcjI4NTY4ODY5",
"avatar_url": "https://avatars.githubusercontent.com/u/28568869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gauravsharma-97",
"html_url": "https://github.com/gauravsharma-97",
"followers_url": "https://api.github.com/users/gauravsharma-97/followers",
"following_url": "https://api.github.com/users/gauravsharma-97/following{/other_user}",
"gists_url": "https://api.github.com/users/gauravsharma-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gauravsharma-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gauravsharma-97/subscriptions",
"organizations_url": "https://api.github.com/users/gauravsharma-97/orgs",
"repos_url": "https://api.github.com/users/gauravsharma-97/repos",
"events_url": "https://api.github.com/users/gauravsharma-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/gauravsharma-97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! This is weird, you should be gotten an error before even being able to instantiate the tokenizer with `from_pretrained`. Such an error:\r\n```\r\nImportError: \r\nXLNetTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the\r\ninstallation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones\r\nthat match your environment.\r\n```\r\n\r\nCould you install SentencePiece `pip install sentencepiece` and let me know if it fixes your issue?",
"Hi! I had actually did the `pip install sentencepiece`. I was getting `None` after it.\r\n\r\nI saw the source code and the embedding size used was `None` there. You can check it <a href=\"https://github.com/gauravsharma-97/transformers/blob/master/src/transformers/models/xlnet/tokenization_xlnet.py#L41-L44\">here</a>. I think that is the issue and it should be some integer like BertTokenizer uses 512 as embedding size.",
"I may be wrong, but I think this can happen if you're in a colab environment and you install SentencePiece, but don't reload the kernel before re-running your cell. \r\n\r\nYou say you're on Ubuntu, I managed to obtain a similar result by re-running the code you mentioned twice in the same Python runtime, by installing `sentencepiece` between the two code statements. Since sentencepiece is loaded on the fly, this can be the result.\r\n\r\nI stand by what I say that this is due to `sentencepiece` not being installed. If it's correctly installed in your environment, running your statement results in:\r\n```\r\nPreTrainedTokenizer(name_or_path='xlnet-base-cased', vocab_size=32000, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='left', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '<sep>', 'pad_token': '<pad>', 'cls_token': '<cls>', 'mask_token': AddedToken(\"<mask>\", rstrip=False, lstrip=True, single_word=False, normalized=True), 'additional_special_tokens': ['<eop>', '<eod>']})\r\n``` \r\n\r\nYou're mentioning the embedding size which is `None`, this is on purpose. The XLNet model uses relative positional embeddings, and has therefore no limitations on the size of the input (note the `model_max_len` in the above code statement); which isn't the case for BERT, that uses absolute positional embeddings which are limited to 512.",
"Yes you are correct. I was running this on colab and it might have required reloading the kernel. But funnily enough, its working today without reloading it.\r\n\r\nYesterday might have been an isolated incident, although I did try to get it to run for very long before posting the issue.\r\n\r\nAnyway, thanks for the help @LysandreJik and for the explanation on embeddings. "
] | 1,616 | 1,616 | 1,616 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@patrickvonplaten
@LysandreJik
## Information
I am using the XLNet tokenizer. When trying to use `XLNetTokenizer.from_pretrained()`, a `None` object is returned.
I last worked with it in December and it was working fine until then.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import XLNetTokenizer
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
print(tokenizer)
```
Output is `None`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A tokenizer should be returned instead of `None`.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10797/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10796/comments | https://api.github.com/repos/huggingface/transformers/issues/10796/events | https://github.com/huggingface/transformers/pull/10796 | 835,076,309 | MDExOlB1bGxSZXF1ZXN0NTk1Nzg4MDU1 | 10,796 | [Example] Fix a NaN bug in the flax mlm example | {
"login": "merrymercy",
"id": 15100009,
"node_id": "MDQ6VXNlcjE1MTAwMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/15100009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merrymercy",
"html_url": "https://github.com/merrymercy",
"followers_url": "https://api.github.com/users/merrymercy/followers",
"following_url": "https://api.github.com/users/merrymercy/following{/other_user}",
"gists_url": "https://api.github.com/users/merrymercy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merrymercy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merrymercy/subscriptions",
"organizations_url": "https://api.github.com/users/merrymercy/orgs",
"repos_url": "https://api.github.com/users/merrymercy/repos",
"events_url": "https://api.github.com/users/merrymercy/events{/privacy}",
"received_events_url": "https://api.github.com/users/merrymercy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"cc @patrickvonplaten ",
"Hey @merrymercy - super sorry, I saw the PR too late and it was actually already fixed.",
"Thanks for your effort on Jax integration! @patrickvonplaten \r\nCould you also add some doc for these examples https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling?"
] | 1,616 | 1,619 | 1,619 | CONTRIBUTOR | null | ## What does this PR do?
Fix a NaN bug in the flax masked language model example. This is a bug introduced in #9133
The min should be max. Otherwise, we will get a NaN.
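For illustration only (this is not the actual example code, just the general pattern such a fix addresses): using min instead of max in a denominator guard divides by zero whenever a batch happens to contain no masked tokens, which produces a NaN.
```
import jax.numpy as jnp

loss_sum = jnp.array(0.0)    # summed loss over masked tokens in the batch
num_masked = jnp.array(0.0)  # a batch with zero masked tokens

safe = loss_sum / jnp.maximum(num_masked, 1e-8)   # 0.0
buggy = loss_sum / jnp.minimum(num_masked, 1e-8)  # 0 / 0 -> nan
print(safe, buggy)
```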
## Who can review?
@TevenLeScao @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10796/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10796",
"html_url": "https://github.com/huggingface/transformers/pull/10796",
"diff_url": "https://github.com/huggingface/transformers/pull/10796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10796.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10795/comments | https://api.github.com/repos/huggingface/transformers/issues/10795/events | https://github.com/huggingface/transformers/pull/10795 | 835,061,149 | MDExOlB1bGxSZXF1ZXN0NTk1Nzc1ODc0 | 10,795 | Fix distributed evaluation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
#10778 introduced a bug in the distributed evaluation, this PR fixes it.
cc @philschmid | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10795/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10795",
"html_url": "https://github.com/huggingface/transformers/pull/10795",
"diff_url": "https://github.com/huggingface/transformers/pull/10795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10795.patch",
"merged_at": 1616087524000
} |
https://api.github.com/repos/huggingface/transformers/issues/10794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10794/comments | https://api.github.com/repos/huggingface/transformers/issues/10794/events | https://github.com/huggingface/transformers/pull/10794 | 835,058,322 | MDExOlB1bGxSZXF1ZXN0NTk1NzczNTgx | 10,794 | Add new community notebook - wav2vec2 with GPT | {
"login": "voidful",
"id": 10904842,
"node_id": "MDQ6VXNlcjEwOTA0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voidful",
"html_url": "https://github.com/voidful",
"followers_url": "https://api.github.com/users/voidful/followers",
"following_url": "https://api.github.com/users/voidful/following{/other_user}",
"gists_url": "https://api.github.com/users/voidful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/voidful/subscriptions",
"organizations_url": "https://api.github.com/users/voidful/orgs",
"repos_url": "https://api.github.com/users/voidful/repos",
"events_url": "https://api.github.com/users/voidful/events{/privacy}",
"received_events_url": "https://api.github.com/users/voidful/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you want to take a look @patrickvonplaten?",
"thanks a lot!!!"
] | 1,616 | 1,618 | 1,616 | CONTRIBUTOR | null | * Update:community.md, new nb add
* feat: notebook of wav2vec xlsr ctc decoding with gpt logit adjustment
* Update: Wav2vec2 CTC decoding with gpt2 adjustment
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10794/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10794",
"html_url": "https://github.com/huggingface/transformers/pull/10794",
"diff_url": "https://github.com/huggingface/transformers/pull/10794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10794.patch",
"merged_at": 1616313593000
} |
https://api.github.com/repos/huggingface/transformers/issues/10793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10793/comments | https://api.github.com/repos/huggingface/transformers/issues/10793/events | https://github.com/huggingface/transformers/pull/10793 | 835,029,330 | MDExOlB1bGxSZXF1ZXN0NTk1NzQ4ODg2 | 10,793 | [doc] no more bucket | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,617 | 1,617 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10793",
"html_url": "https://github.com/huggingface/transformers/pull/10793",
"diff_url": "https://github.com/huggingface/transformers/pull/10793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10793.patch",
"merged_at": 1617301547000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10792/comments | https://api.github.com/repos/huggingface/transformers/issues/10792/events | https://github.com/huggingface/transformers/pull/10792 | 834,938,151 | MDExOlB1bGxSZXF1ZXN0NTk1NjcwMzQw | 10,792 | [Example] Updating Question Answering examples for Predict Stage | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Fixes #10482
1. It fixes the error that occurs when using SQuAD v2 on the question-answering task with `max_val_sample_***`
2. Adds a predict method for the question-answering examples (see the sketch below)
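For context, a rough sketch of what a predict stage looks like in the question-answering scripts (names follow the example scripts; the exact merged code may differ):
```
# Illustrative only: run prediction on the test split and report its metrics.
if training_args.do_predict:
    logger.info("*** Predict ***")
    predictions = trainer.predict(test_dataset, test_examples)
    trainer.log_metrics("test", predictions.metrics)
    trainer.save_metrics("test", predictions.metrics)
```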
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10792/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10792",
"html_url": "https://github.com/huggingface/transformers/pull/10792",
"diff_url": "https://github.com/huggingface/transformers/pull/10792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10792.patch",
"merged_at": 1616161337000
} |
https://api.github.com/repos/huggingface/transformers/issues/10791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10791/comments | https://api.github.com/repos/huggingface/transformers/issues/10791/events | https://github.com/huggingface/transformers/issues/10791 | 834,825,917 | MDU6SXNzdWU4MzQ4MjU5MTc= | 10,791 | run_summarization script breaks with label_smoothing_factor and pad_to_max_length true | {
"login": "elsanns",
"id": 3648991,
"node_id": "MDQ6VXNlcjM2NDg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3648991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsanns",
"html_url": "https://github.com/elsanns",
"followers_url": "https://api.github.com/users/elsanns/followers",
"following_url": "https://api.github.com/users/elsanns/following{/other_user}",
"gists_url": "https://api.github.com/users/elsanns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsanns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsanns/subscriptions",
"organizations_url": "https://api.github.com/users/elsanns/orgs",
"repos_url": "https://api.github.com/users/elsanns/repos",
"events_url": "https://api.github.com/users/elsanns/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsanns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the `DataCollatorForSeq2Seq` should be used in all cases as it does more than just padding. If you want to suggest a PR with the fix, that would be more than welcome!",
"Assuming the goal is:\r\n- using DataCollatorForSeq2Seq in Seq2SeqTrainer as default when no data_collator is provided, while keeping the remaining functionality unchanged, \r\n\r\nthe first approach could be:\r\n- providing Seq2SeqTrainer with an `__init__` method: \r\n - instantiating a DataCollatorForSeq2Seq if no collator provided, and\r\n - calling Trainer's `__init__` and passing the instance along with other parameters. \r\n \r\nSomething like:\r\n\r\n```\r\nclass Seq2SeqTrainer(Trainer):\r\n \r\n def __init__(\r\n self,\r\n model: Union[PreTrainedModel, torch.nn.Module] = None,\r\n args: TrainingArguments = None,\r\n data_collator: Optional[DataCollator] = None,\r\n train_dataset: Optional[Dataset] = None,\r\n eval_dataset: Optional[Dataset] = None,\r\n tokenizer: Optional[\"PreTrainedTokenizerBase\"] = None,\r\n model_init: Callable[[], PreTrainedModel] = None,\r\n compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,\r\n callbacks: Optional[List[TrainerCallback]] = None,\r\n optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),\r\n ): \r\n \"\"\"\r\n Setting DataCollatorForSeq2Seq as default if no data_collator is provided.\r\n \"\"\"\r\n\r\n if data_collator is None:\r\n # Perform validation and overwrite model with model_init before passing to collator,\r\n # as done in Trainer\r\n if tokenizer is None:\r\n raise RuntimeError(\r\n \"`tokenizer` parameter is required by the default `DataCollatorForSeq2Seq`\"\r\n )\r\n if model is None and model_init is None:\r\n raise RuntimeError(\r\n \"`Trainer` requires either a `model` or `model_init` argument\"\r\n )\r\n model_collator = model\r\n if model_init is not None:\r\n # No parameter handling for hyper-parameter search (trial)\r\n # Only passing the prepare_decoder_input_ids_from_labels function\r\n model_collator = model_init()\r\n\r\n data_collator = DataCollatorForSeq2Seq(tokenizer, model=model_collator)\r\n\r\n super().__init__(\r\n model,\r\n args,\r\n data_collator,\r\n train_dataset,\r\n eval_dataset,\r\n tokenizer,\r\n model_init,\r\n compute_metrics,\r\n callbacks,\r\n optimizers,\r\n ) \r\n \r\n```\r\n\r\nOf course, I would need to look further into the code and the handling of other DataCollatorForSeq2Seq \r\nparameters like: `pad_to_multiple_of=8 if training_args.fp16 else None`\r\n\r\n@sgugger, Thanks for the suggestion, it is very interesting;)",
"Mmm, I was thinking of an easier fix to just use that in the example script without necessary changing the default in `Seq2SeqTrainer`."
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: '4.5.0.dev0' (from source)
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): '1.8.0' (yes)
## Information
I am running the `examples/seq2seq/run_summarization.py` script with BartForConditionalGeneration.
The script breaks whenever these two parameters are passed together:
- label_smoothing_factor
- pad_to_max_length
It seems that the source of this behaviour is setting collator to `default_data_collator` if `pad_to_max_length` is defined:
https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/examples/seq2seq/run_summarization.py#L469-L477
while `prepare_decoder_input_ids_from_labels` is only handled by DataCollatorForSeq2Seq:
https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/src/transformers/data/data_collator.py#L292-L294
It seems to be related to [10452](https://github.com/huggingface/transformers/issues/10452), where passing a model argument to DataCollatorForSeq2Seq solves the problem:
`data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)`
This is more of a question than an issue, as it is work in progress. A more general question would be:
Is the `default_data_collator` intended for use with seq2seq models (e.g. Bart), with special cases (like label smoothing) to be handled by `DataCollatorForSeq2Seq`?
Or should `DataCollatorForSeq2Seq` always be used with Seq2SeqTrainer in the future? (A sketch of the latter option follows below.)
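For reference, a minimal sketch of what always using the seq2seq collator in the script could look like (variable names follow run_summarization.py; this is an illustration, not necessarily the final fix):
```
# Sketch: use DataCollatorForSeq2Seq unconditionally so that
# prepare_decoder_input_ids_from_labels is applied even when padding to max length.
label_pad_token_id = -100 if data_args.ignore_pad_token_for_loss else tokenizer.pad_token_id
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,  # lets the collator build decoder_input_ids from the labels
    label_pad_token_id=label_pad_token_id,
    pad_to_multiple_of=8 if training_args.fp16 else None,
)
```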
The problem arises when using:
* [x ] the official example scripts: (give details below)
examples/seq2seq/run_summarization.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] an official GLUE/SQUaD task: (give the name) (xsum)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
python examples/seq2seq/run_summarization.py \
--model_name_or_path sshleifer/distilbart-xsum-12-3 \
--do_train \
--do_eval \
--dataset_name xsum \
--output_dir /tmp/output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 500 \
--max_source_length 128 \
--max_target_length 64 \
--label_smoothing_factor 0.1 \
--pad_to_max_length true
```
Output:
```
Traceback (most recent call last):
File "examples/seq2seq/run_summarization.py", line 595, in <module>
main()
File "examples/seq2seq/run_summarization.py", line 533, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1082, in train
tr_loss += self.training_step(model, inputs)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1472, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer.py", line 1511, in compute_loss
loss = self.label_smoother(outputs, labels)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 439, in __call__
smoothed_loss.masked_fill_(padding_mask, 0.0)
RuntimeError: The expanded size of the tensor (128) must match the existing size (64) at non-singleton dimension 1. Target sizes: [4, 128, 1]. Tensor sizes: [4, 64, 1]
0%|
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Script works for a parameter set including:
- label_smoothing_factor
- pad_to_max_length
Or information on which collator class should be used in the future.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10791/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10790/comments | https://api.github.com/repos/huggingface/transformers/issues/10790/events | https://github.com/huggingface/transformers/issues/10790 | 834,686,073 | MDU6SXNzdWU4MzQ2ODYwNzM= | 10,790 | HerbertTokenizer doesn't work on version 3.5.1 | {
"login": "Zhylkaaa",
"id": 18054828,
"node_id": "MDQ6VXNlcjE4MDU0ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18054828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhylkaaa",
"html_url": "https://github.com/Zhylkaaa",
"followers_url": "https://api.github.com/users/Zhylkaaa/followers",
"following_url": "https://api.github.com/users/Zhylkaaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhylkaaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhylkaaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhylkaaa/subscriptions",
"organizations_url": "https://api.github.com/users/Zhylkaaa/orgs",
"repos_url": "https://api.github.com/users/Zhylkaaa/repos",
"events_url": "https://api.github.com/users/Zhylkaaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhylkaaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I guess this is related to URL issue #10744 ? and one should change model URL",
"I resolved this by updating URL's to models, this is my current code:\r\n```PRETRAINED_VOCAB_FILES_MAP = {\r\n \"vocab_file\": {\"allegro/herbert-base-cased\": \"https://huggingface.co/allegro/herbert-base-cased/resolve/main/vocab.json\"},\r\n \"merges_file\": {\"allegro/herbert-base-cased\": \"https://huggingface.co/allegro/herbert-base-cased/resolve/main/merges.txt\"},\r\n}``` \r\nIs there a way to fix this to maintain backward compatibility? @LysandreJik",
"Cross-posting the Forum thread: https://discuss.huggingface.co/t/delete-organizations-models-from-the-hub/954/40"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: MacOS X, Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): allegro/herbert-base-cased
I tried to use the official script from the model hub page with transformers version 3.5.1. A week ago it worked just fine, but now I am getting the error listed below.
@rmroczkowski maybe you have some information on this topic; I saw some new commits on the model hub, but they shouldn't change anything.
With the latest version it works fine with AutoTokenizer (EDIT: only version 4.4 works; I tested versions 3.5.1, 4.0.0 and 4.3 and got the same error).
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
I tried importing AutoTokenizer and HerbertTokenizer, but got the same error:
`OSError: Can't load tokenizer for 'allegro/herbert-base-cased'. Make sure that:
- 'allegro/herbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'allegro/herbert-base-cased' is the correct path to a directory containing relevant tokenizer files`
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. install transformers 3.5.1
2. try to use official script from https://huggingface.co/allegro/herbert-base-case
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
tokenizer loads and works
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10789/comments | https://api.github.com/repos/huggingface/transformers/issues/10789/events | https://github.com/huggingface/transformers/issues/10789 | 834,684,138 | MDU6SXNzdWU4MzQ2ODQxMzg= | 10,789 | [Deepspeed ZeRO-3] Broken model save on fresh Transformers branch | {
"login": "exelents",
"id": 12846582,
"node_id": "MDQ6VXNlcjEyODQ2NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12846582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exelents",
"html_url": "https://github.com/exelents",
"followers_url": "https://api.github.com/users/exelents/followers",
"following_url": "https://api.github.com/users/exelents/following{/other_user}",
"gists_url": "https://api.github.com/users/exelents/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exelents/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exelents/subscriptions",
"organizations_url": "https://api.github.com/users/exelents/orgs",
"repos_url": "https://api.github.com/users/exelents/repos",
"events_url": "https://api.github.com/users/exelents/events{/privacy}",
"received_events_url": "https://api.github.com/users/exelents/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm getting a similar problem after training BERT with MLM using DeepSpeed where all the saved weights are of size 1. The same `run_mlm` script worked as expected if I didn't use DeepSpeed.\r\n\r\n`RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([119547, 768]).\r\n size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([512, 768]).\r\n size mismatch for bert.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([768, 768]).`",
"Since this is using DeepSpeed, maybe @stas00 has an idea?",
"Just tried loading a model trained with `sharded_ddp` and got a different error:\r\n\r\n```[INFO|modeling_utils.py:1044] 2021-03-18 12:56:04,792 >> loading weights file fs-test-mlm-mbert/checkpoint-1000/pytorch_model.bin\r\nTraceback (most recent call last):\r\n File \"/export/proj/code/transformers/src/transformers/modeling_utils.py\", line 1057, i\r\nn from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"/export/proj/env_cuda11_1/lib/python3.7/site-packages/torch/serialization.py\", line 593, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"/export/proj/env_cuda11_1/lib/python3.7/site-packages/torch/serialization.py\", line 762, in _legacy_load\r\n magic_number = pickle_module.load(f, **pickle_load_args)\r\nEOFError: Ran out of input\r\n```\r\nIt seems the model saving might not be happening properly for these two integrations? I also noticed that only the config and weights were being saved when using `--sharded_ddp`.\r\n\r\n\r\nUPDATE: It's actually the checkpoint saving getting stuck that's causing this issue. Started another run to confirm and it got stuck while saving as well. \r\n\r\nUPDATE 2: This only happens with `zero_dp_2` and `zero_dp_3`. `simple` appears to work fine. For DeepSpeed, using stage 2 appears to fix the problem (I was previously using stage 3).",
"@samsontmr I have changed DeepSpeed stage to 2 and it seems works well - checkpoints are saved properly. I also used DeepSpeed stage 3 before.\r\n\r\nIt seems problems are in Stage 3 integration. Maybe @stas00 could help, he did previous integration of DeepSpeed into trainer.",
"DeepSpeed Stage 3 integration is not finished yet, a wip PR is here if you'd like to try it - though it has a ton of debug statements still and a few more features are still missing.\r\nhttps://github.com/huggingface/transformers/pull/10753\r\n\r\nMake sure you are using the latest deepspeed since zero3 had problems with saving checkpoint but the 0.3.13 release should be good.\r\n\r\nBut I am pretty sure the issue is different, as I literally merged the code that generated the error you quoted 2 days ago:\r\nIf it worked before please roll back to any sha before https://github.com/huggingface/transformers/pull/10760 and let me know if it works.\r\n\r\nThe problem with DeepSpeed is that it doesn't currently have a way to save a fp32 checkpoint that can be loaded normally and not via DeepSpeed, https://github.com/microsoft/DeepSpeed/issues/800 so when you save a model you only get an fp16 version. However its special checkpoint (see e.g. `global-step10` folder in the checkpoint folder) contains all the right data and thus if you want to load deepspeed model you need to `train(resume_from_checkpoint)` instead. \r\n\r\nSo if you want to resume training you can't use `from_pretrained()` at the moment, unless fp16 weights are sufficient for your work. And it sounds that it's broken at the moment.\r\n\r\nLet me know if any of this makes sense and let's see how we can make your code work with what we have.\r\n\r\nI'd be happy to adapt my recent changes to meet your needs.\r\n\r\n\r\n",
"Thanks for the detailed reply @stas00! Is the issue with the fp32 checkpoint saving only happening with zero3 or also with stage 2? My fine-tuning step started with no issues when I used the checkpoint from a stage 2 training run (hasn't completed yet so I'm not sure how it'll end up).",
"> Is the issue with the fp32 checkpoint saving only happening with zero3 or also with stage 2? \r\n\r\nIt's an issue with any zero stage under deepspeed. \r\n\r\nAre you saying that the problem emerged once switching to zero3 config? I'm not at all sure it can resume from zero2 checkpoint to zero3 config - those are quite different setups. So we really need to get the fp32 saving sorted out\r\n\r\nLet's see if we can ask to make this a higher priority at https://github.com/huggingface/transformers/issues/10789\r\n\r\n",
"> Are you saying that the problem emerged once switching to zero3 config? I'm not at all sure it can resume from zero2 checkpoint to zero3 config - those are quite different setups. So we really need to get the fp32 saving sorted out\r\n\r\nYup, I didn't try going from zero2 to zero3; I just restarted my training using zero2, then fine-tuned the model without deepspeed... which somehow managed to load just by using `.from_pretrained`",
"As I tried to explain you were getting only fp16 weights when using from `from_pretrained` which may or may not be good enough for your needs. It mostly should be OK. Except some metrics or feature may break under fp16 if they weren't coded for it.\r\ne.g. https://github.com/huggingface/transformers/issues/10674\r\n\r\nSo let's lay out a test that I need to work on to reproduce your issues. Could you please lay out a sequence of events - ideally in code but pseudo-code will work too and then I will try to see where the breakage is.\r\n\r\nThe PR I referred to includes several save/resume tests, so the saving is normal, and resume uses `train(resume_from_checkpoint)` and it works too. Though I need to add zero3 test as well. Only tested zero2 so far. The resume test is here:\r\nhttps://github.com/huggingface/transformers/blob/008672e6e5fb0f2d2fc6fbd367ab6e135eea3f2d/examples/tests/deepspeed/test_deepspeed.py#L279\r\n\r\nYou shouldn't get:\r\n```\r\nValueError: [deepspeed] failed to resume from checkpoint ./templates/siamese-t5-small-v1_1-template\r\n```\r\n\r\nif you're not trying to do `train(resume_from_checkpoint)`, you can see where it gets triggered:\r\nhttps://github.com/huggingface/transformers/blob/008672e6e5fb0f2d2fc6fbd367ab6e135eea3f2d/src/transformers/integrations.py#L452\r\n",
"As for me: I fixed my problem with unnessesary checkpoint load, where I get load error, but it still has an save error on DeepSpeed stage 3 mode. If you @stas00 could help me, I would appreciate.\r\n\r\n\r\nHere is steps to reproduce my error with model save:\r\n\r\n- Clone this repo:\r\nhttps://github.com/exelents/try_t5_siamese\r\n- Extract folder \"qasc\" from this archive:\r\nhttps://drive.google.com/file/d/1gwvFiPzWW0JLr0XLS25PuG2Br5S4fPbR/view?usp=sharing\r\n- Go to clonned repo folder and run ./create-siamese-template.sh - it will create siamese NN from two t5-small encoders in folder ./templates/siamese-t5-small-template\r\n- then you can run ./run-siamese-small.sh - you will see normal behaviour, in folder ./siamese_train_deepspeed/output_dir/ you will find there will be stored checkpoints every 3 steps? and you will can see a sight that weights are stored:\r\n weights files like ./siamese_train_deepspeed/output_dir/checkpoint-6/left/pytorch_model.bin will have size around hundred megabytes.\r\n\r\n- Then to see a problem open ./run-siamese-small.sh and change \"ds_config.json\" to \"ds_config_stage3.json\" and rerun training. You will see that weights files, like ./siamese_train_deepspeed/output_dir/checkpoint-6/left/pytorch_model.bin will have size for a few kilobytes, and you couldn't load model from that checkpoint. There is a probleb, and it appears only if I turn on \"stage 3\" mode in config.",
"Thank you for the detailed instructions, @exelents.\r\n\r\nLet me adapt the existing test first to zero3 so I am sure it's working and then will try your sequence. I will keep you posted.",
"I can reproduce the saved model size problem. `pytorch_model.bin` with:\r\n- zero2 135M \r\n- zero3 38K \r\n\r\nbut as I mentioned currently Deepspeed doesn't provide a proper way to save a model on its own.\r\n\r\nIt saves the model state in its own sub-folder, e.g., in your case:\r\n```\r\nls -l output_dir/checkpoint-6/global_step6/\r\ntotal 809M\r\n-rw-rw-r-- 1 stas stas 53K Mar 18 14:03 zero_pp_rank_0_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 stas stas 809M Mar 18 14:03 zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n```\r\nas you can see the optimizer states dict has everything in it. So you should be able to resume from it.\r\n\r\nYour script is a bit old and based on an old example - so it doesn't support the current mechanism of doing resume from command line using https://github.com/huggingface/transformers/blob/master/examples/README.md#resuming-training\r\n\r\nSo for resume to currently work, you either need to bring your script up-to-date, by probably checking the latest version of the example you used as a base for your work.\r\n\r\nThe key is `train(resume_from_checkpoint)` if you passed this as `output_dir/checkpoint-6` deepspeed reloads where it left on and continues on its merry way.\r\n\r\nTo help you think the new script in your case is this and I pointed to where the critical part is:\r\n\r\nhttps://github.com/huggingface/transformers/blob/dcebe254fadfe142b6f0d6301cc8a875dca7d603/examples/seq2seq/run_translation.py#L500\r\n\r\n(this is on master)\r\n\r\nSo if you could bring your script up-to-date with the current way it'd automatically work, or you can adapt it manually as I suggested above.\r\n\r\nIf any of my comments are unclear please don't hesitate to ask for clarifications.\r\n\r\nMeanwhile I will investigate why the model state_dict is almost empty under zero3 - this looks like a bug - making it work might help you move on w/o needing you to change your code.\r\n\r\nI will get back to you.\r\n\r\n",
"I investigated and `model.state_dict()` returns some sort of placeholder with `tensor([1.],` for each weights and no real data, that's why `pytorch_model.bin` is tiny. Filed a request: https://github.com/microsoft/DeepSpeed/issues/872\r\n\r\nSo until we find a way to reconstruct it, I suggest to stick to zero2 otherwise you will remain locked in into DeepSpeed data files, that is you should be able to continue training but not being able to use it w/o deepspeed.\r\n",
"While the Deepspeed team is sorting the addition of a method to extract model weights from its checkpoint, here is an update for you.\r\n\r\nDeepspeed stores the model weights in its checkpoint file (a file per gpu) which at the moment can only be loaded via its `deepspeed.load_checkpoint`. Therefore please adapt your code to rely on that to save and resume your custom models. Do not rely on `save_pretrained` and then expect `from_pretrained` to work, since the model weights won't be there.\r\n\r\nThe new method we are discussing will be able to convert the deepspeed checkpoint into consolidated from multiple gpus model weights. This is quite expensive so it shouldn't happen on each checkpoint saving and definitely shouldn't be the default because there might not be enough memory to do the consolidation (e.g. a model spread out over dozens of gpus).\r\n\r\nBottom line, should you choose to use deepspeed zero-3 things aren't as straightforward. And we will work out a solution in this case.\r\n\r\nI suppose it's a similar story with fairscale Sharded DDP, but I am working on DeepSpeed only at the moment and can't comment on the former. Unless @sgugger who did the initial integration of fairscale beats me to it I will be able to look at it once I complete the integration of DeepSpeed ZeRO-3, which is coming along nicely but requires changes on the DeepSpeed side - so it'll take some time.\r\n",
"@exelents, here is how to solve your specific problem of:\r\n```\r\nclass T5Siamese(T5PreTrainedModel):\r\n[....]\r\n def init_from_base_t5_model(model_name_or_path='t5-base', output_root='./'):\r\n [...]\r\n model_left = T5EncoderModel.from_pretrained(MODEL)\r\n model_right = T5EncoderModel.from_pretrained(MODEL)\r\n```\r\nwith DeepSpeed zero-3.\r\n\r\nIf you don't mind continuing training and not being to retrieve the final weights until https://github.com/microsoft/DeepSpeed/issues/872 is addressed, here is what you can do immediately to be able to move forward:\r\n\r\nDo the above only when starting \"cold\", but when resuming from a checkpoint don't do that and let instead `T5Siamese` be restored from the deepspeed checkpoint at once. \r\n\r\nOnce we get the method to extract the model weights out of the DeepSpeed checkpoint, you can then recover both sub-model weights if you want to upload them to the hub or to take them elsewhere.\r\n\r\nPlease let me know if this solution resonates with you. Or if you run into any hiccups I haven't considered.\r\n\r\nNote that currently under zero-2 you're only recovering fp16 weights, so it is also not ideal either. So you want to use this solution for both cases.\r\n",
"@samsontmr, would you kindly open a separate issue since while this is related the use-case is quite different. Please tag me and we will work on solving your use case there. Thank you!\r\n\r\np.s. also when you test please make sure you are using the `transformers` and `deepspeeed` master since there are constant fixes merged into it. ",
"@stas00 Thank you for the explanation. So, to load stage-3 checkpoint I should make \"cold load\" from original T5 weights, and then load actual weights via `deepspeed.load_checkpoint` . The question is: is it possible to use this model in usual jupyter notebook, or usual python script, if I load model weights using deepspeed function? Or if I trained model via deepspeed once, I will be bound to it's runner forever?",
"> So, to load stage-3 checkpoint I should make \"cold load\" from original T5 weights, and then load actual weights via deepspeed.load_checkpoint . \r\n\r\nI haven't tested it, but I can't think of any reason why it won't work. If you run into problems that I haven't considered please let me know.\r\n\r\n> The question is: is it possible to use this model in usual jupyter notebook, or usual python script, if I load model weights using deepspeed function? \r\n\r\nYes, of course.\r\n\r\nJust note that if you use the notebook directly and don't launch an external process which launches the distributed environment, you will be limited to 1 gpu and you will have to emulate the distributed environment like so:\r\n```\r\nimport os\r\ndist_env_1_gpu = dict(MASTER_ADDR=\"localhost\", MASTER_PORT=\"10999\", RANK=\"0\", LOCAL_RANK=\"0\", WORLD_SIZE=\"1\")\r\nfor k,v in dist_env_1_gpu.items():\r\n os.environ[k] = v\r\n```\r\nand please make sure you're on the master or very recent `transformers` version for this to work.\r\n\r\nBut if you just use the notebook to open a shell with the `deepspeed` launcher then you have no limitation of one gpu, e.g. see: https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb\r\n\r\n> Or if I trained model via deepspeed once, I will be bound to it's runner forever?\r\n\r\nI'm not sure what you ask here, as I don't know whether you refer to the `deepspeed` launcher, or something else. \r\n\r\n1. The `deepspeed` launcher is a more elaborate equivalent of `python -m torch.distributed.launch`. In simple cases of a single node you can use the latter. Here all DeepSpeed needs is to have a dedicated process per gpu and the distributed env set up (even in the case of one gpu).\r\n\r\n2. If you're asking whether your data will be locked into the deepspeed checkpoints, then at the moment the answer is yes.\r\nOnce https://github.com/microsoft/DeepSpeed/issues/872 is resolved you will be able to recover the consolidated weights and use them in any way you want.",
"Ok, thank you for the explanation. I'm not sure if I could test these changes on my code soon, but I'll do it sooner or later.",
"I just proposed yet another API in https://github.com/microsoft/DeepSpeed/issues/872:\r\n\r\n> being able to call `deepspeed.consolidate_weights()` in the rank0 process which would give users full weights back (perhaps with a bool arg of whether they want the fp16 or fp32 version). So now they can just save the model as they do with any other pytorch tools. This would only be practical for small-ish models. The key here is that while this would be somewhat costly they will be able to use their code almost w/o any change if they train in various ways and not just with deepspeed.\r\n\r\nSo if that was added then your current code would also work with just adding this newly proposed API. Let's see.",
"@stas00 thanks! My problem is solved for now since I'm also using fp16 during fine-tuning so the current stage2 saves are good enough for me.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hello, @stas00. I have created an issue due to problems with converting model to fp32. Can you say something about it?\r\nhttps://github.com/microsoft/DeepSpeed/issues/1009"
] | 1,616 | 1,619 | 1,619 | NONE | null | I have my own model, which utilizes two T5 encoders, and I train it via DeepSpeed. It has its own save_pretrained() and from_pretrained() methods, which implement custom load/save logic:
https://github.com/exelents/try_t5_siamese/blob/4140194978ac113c45e7370f40b3d9b932d0b35b/siamese_model.py#L80
When I run training and the trainer starts to save a checkpoint, something strange happens: the weights file for every saved encoder ends up being a few kilobytes - the weights are not actually saved.
At the start of training the trainer tries to load a checkpoint using model.load_checkpoint(), but it seems this function has its own loading logic, because it cannot execute my model-loading logic and throws an error:
`ValueError: [deepspeed] failed to resume from checkpoint ./templates/siamese-t5-small-v1_1-template`
I can comment out this code, which loads the checkpoint, but then I get the previously described problem with saving the checkpoint...
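For context, here is the resume invocation I am trying to make work (a minimal sketch with placeholder names - my real script builds the model and datasets differently):
```python
# Minimal sketch (placeholder names): hand the checkpoint folder to train(),
# so that under DeepSpeed the engine itself restores the weights and optimizer state.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./siamese_train_deepspeed/output_dir",
    deepspeed="ds_config_stage3.json",
)
# `model` and `train_dataset` stand in for my custom siamese model and data pipeline.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train(resume_from_checkpoint="./siamese_train_deepspeed/output_dir/checkpoint-6")
```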
What should I do to save my own custom model properly? It worked a month ago, but today I refreshed my Transformers repo and everything broke. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10788/comments | https://api.github.com/repos/huggingface/transformers/issues/10788/events | https://github.com/huggingface/transformers/issues/10788 | 834,514,520 | MDU6SXNzdWU4MzQ1MTQ1MjA= | 10,788 | TypeError: __init__() got an unexpected keyword argument 'filepath' when using RAG model | {
"login": "tangxiangru",
"id": 22478336,
"node_id": "MDQ6VXNlcjIyNDc4MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/22478336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tangxiangru",
"html_url": "https://github.com/tangxiangru",
"followers_url": "https://api.github.com/users/tangxiangru/followers",
"following_url": "https://api.github.com/users/tangxiangru/following{/other_user}",
"gists_url": "https://api.github.com/users/tangxiangru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tangxiangru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tangxiangru/subscriptions",
"organizations_url": "https://api.github.com/users/tangxiangru/orgs",
"repos_url": "https://api.github.com/users/tangxiangru/repos",
"events_url": "https://api.github.com/users/tangxiangru/events{/privacy}",
"received_events_url": "https://api.github.com/users/tangxiangru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,619 | 1,619 | NONE | null | I was finetuning the RAG model with this cmd:
python finetune_rag.py \
--data_dir ../../../../data/ms-marco/ \
--output_dir ../../../../data/ms-marco/ \
--model_name_or_path ~/model/rag/rag/rag-sequence-nq \
--model_type rag_sequence \
--fp16 \
--gpus 8 \
--do_train --do_predict
where ~/model/rag/rag/rag-sequence-nq was completely downloaded from https://huggingface.co/facebook/rag-sequence-nq.
Here is the log:
Model name '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' not found in model shortcut name list (facebook/dpr-question_encoder-single-nq-base, facebook/dpr-question_encoder-multiset-base). Assuming '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' is a path, a model identifier, or url to a directory containing tokenizer files.
Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/tokenizer.json. We won't load it.
Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/added_tokens.json. We won't load it.
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/vocab.txt
loading file None
loading file None
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/special_tokens_map.json
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/question_encoder_tokenizer/tokenizer_config.json
Model name '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming '/nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq' is a path, a model identifier, or url to a directory containing tokenizer files.
Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer.json. We won't load it.
Didn't find file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/added_tokens.json. We won't load it.
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/vocab.json
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/merges.txt
loading file None
loading file None
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/special_tokens_map.json
loading file /nfs/users/s_xiangru/model/rag/rag/rag-sequence-nq/generator_tokenizer/tokenizer_config.json
Traceback (most recent call last):
File "finetune_rag.py", line 629, in <module>
main(args)
File "finetune_rag.py", line 597, in main
checkpoint_callback=get_checkpoint_callback(args.output_dir, model.val_metric),
File "/nfs/users/s_xiangru/transformers/examples/research_projects/rag/callbacks_rag.py", line 41, in get_checkpoint_callback
period=1, # maybe save a checkpoint every time val is run, not just end of epoch.
TypeError: __init__() got an unexpected keyword argument 'filepath'
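For what it's worth, the error seems to come from the installed pytorch-lightning version: in recent 1.x releases `ModelCheckpoint` no longer accepts a `filepath` argument (it was split into `dirpath` and `filename`). Below is a rough sketch of a possible local adaptation of `get_checkpoint_callback` - the argument values are assumptions on my side, not the example's actual configuration:
```python
# Hypothetical adaptation for a pytorch-lightning release where `filepath`
# was replaced by `dirpath`/`filename`; monitor/mode/save_top_k are guesses
# mirroring what the original callback appears to track.
from pytorch_lightning.callbacks import ModelCheckpoint

def get_checkpoint_callback(output_dir, metric):
    return ModelCheckpoint(
        dirpath=output_dir,
        filename="{epoch}-{val_" + metric + ":.2f}",
        monitor=f"val_{metric}",
        mode="max",
        save_top_k=1,
    )
```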
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10787/comments | https://api.github.com/repos/huggingface/transformers/issues/10787/events | https://github.com/huggingface/transformers/issues/10787 | 834,513,340 | MDU6SXNzdWU4MzQ1MTMzNDA= | 10,787 | Can DeepSpeed ZeRO-3 be applied for training? | {
"login": "avionkmh",
"id": 20922702,
"node_id": "MDQ6VXNlcjIwOTIyNzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20922702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avionkmh",
"html_url": "https://github.com/avionkmh",
"followers_url": "https://api.github.com/users/avionkmh/followers",
"following_url": "https://api.github.com/users/avionkmh/following{/other_user}",
"gists_url": "https://api.github.com/users/avionkmh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avionkmh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avionkmh/subscriptions",
"organizations_url": "https://api.github.com/users/avionkmh/orgs",
"repos_url": "https://api.github.com/users/avionkmh/repos",
"events_url": "https://api.github.com/users/avionkmh/events{/privacy}",
"received_events_url": "https://api.github.com/users/avionkmh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You might find this reply https://github.com/huggingface/transformers/issues/10789#issuecomment-802100991 by @stas00 of interest.",
"@avionkmh, very soon it'll be supported, you may want to track: https://github.com/huggingface/transformers/pull/10753",
"@stas00, @LysandreJik, thank you for your kind information. I'm going to track the issues you recommended. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,634 | 1,634 | NONE | null | # 🌟 New model addition
We have applied DeepSpeed v0.3.10 (ZeRO-2) to T5 training.
I heard the DeepSpeed ZeRO-3 library was released 10 days ago (8/MAR).
I'd like to adopt ZeRO-3 for our training.
Can this library be applied, especially for T5 training?
Do you have any experience applying this library?
If so, could you share your experience?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10786/comments | https://api.github.com/repos/huggingface/transformers/issues/10786/events | https://github.com/huggingface/transformers/pull/10786 | 834,504,819 | MDExOlB1bGxSZXF1ZXN0NTk1MzA2NjIy | 10,786 | Add XLSR-Wav2Vec2 Fine-Tuning README.md | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10786/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10786",
"html_url": "https://github.com/huggingface/transformers/pull/10786",
"diff_url": "https://github.com/huggingface/transformers/pull/10786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10786.patch",
"merged_at": 1616102563000
} |
https://api.github.com/repos/huggingface/transformers/issues/10785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10785/comments | https://api.github.com/repos/huggingface/transformers/issues/10785/events | https://github.com/huggingface/transformers/issues/10785 | 834,366,170 | MDU6SXNzdWU4MzQzNjYxNzA= | 10,785 | Typo in M2M100 model page | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed! Pinging @patil-suraj ",
"urgh, my bad. Thanks for pointing it out. fixed!"
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | Seems like there's a typo in the [m2m 100 page](https://huggingface.co/facebook/m2m100_418M):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("faGreekook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
```
Pretty sure it should be "facebook" instead of "faGreekook" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10784/comments | https://api.github.com/repos/huggingface/transformers/issues/10784/events | https://github.com/huggingface/transformers/issues/10784 | 834,338,147 | MDU6SXNzdWU4MzQzMzgxNDc= | 10,784 | How to interpret fine-tuned model results and use model | {
"login": "zakerytclarke",
"id": 30702789,
"node_id": "MDQ6VXNlcjMwNzAyNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/30702789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zakerytclarke",
"html_url": "https://github.com/zakerytclarke",
"followers_url": "https://api.github.com/users/zakerytclarke/followers",
"following_url": "https://api.github.com/users/zakerytclarke/following{/other_user}",
"gists_url": "https://api.github.com/users/zakerytclarke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zakerytclarke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zakerytclarke/subscriptions",
"organizations_url": "https://api.github.com/users/zakerytclarke/orgs",
"repos_url": "https://api.github.com/users/zakerytclarke/repos",
"events_url": "https://api.github.com/users/zakerytclarke/events{/privacy}",
"received_events_url": "https://api.github.com/users/zakerytclarke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\n@patrickvonplaten @stas00 \r\n\r\nThanks!",
"@LysandreJik Thanks for pointing me in the right direction, I've moved the post over to the forum."
] | 1,616 | 1,616 | 1,616 | NONE | null | Hello,
Apologies if this is the wrong forum to ask these kinds of questions, but I was unable to find this in the documentation.
I fine-tuned a seq2seq model on my custom dataset using the tutorial found here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq
I am trying to find out the F1 and EM accuracy for the fine-tuned model, but am not sure how to interpret the output. I've attached a link to the training's output below:
https://github.com/zakerytclarke/transformers/tree/master/modelResults
```
{
"epoch": 3.0,
"eval_gen_len": 55.7429,
"eval_loss": 2.063843250274658,
"eval_mem_cpu_alloc_delta": 1998448,
"eval_mem_cpu_peaked_delta": 638828,
"eval_rouge1": 33.8505,
"eval_rouge2": 13.1365,
"eval_rougeL": 27.8332,
"eval_rougeLsum": 31.5921,
"eval_runtime": 119.8097,
"eval_samples": 35,
"eval_samples_per_second": 0.292
}
```
Can you point me to documentation about how to interpret these results and how I can load my fine-tuned model in order to evaluate it on a new piece of text?
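For reference, this is roughly how I imagine loading the fine-tuned model back to try it on new text (a guess on my part - the path and auto classes below are assumptions based on the example's docs, not something I have verified):
```python
# Rough sketch (assumed paths/classes): load the model saved in the example
# script's --output_dir and generate output for a new piece of text.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

output_dir = "path/to/output_dir"  # folder the fine-tuning run wrote to
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir)

inputs = tokenizer("some new input text", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```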
Thanks for your help,
--Zak | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10783/comments | https://api.github.com/repos/huggingface/transformers/issues/10783/events | https://github.com/huggingface/transformers/pull/10783 | 834,326,129 | MDExOlB1bGxSZXF1ZXN0NTk1MTY0MzYy | 10,783 | Fix bug in input check for LengthGroupSampler | {
"login": "thominj",
"id": 3819908,
"node_id": "MDQ6VXNlcjM4MTk5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3819908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thominj",
"html_url": "https://github.com/thominj",
"followers_url": "https://api.github.com/users/thominj/followers",
"following_url": "https://api.github.com/users/thominj/following{/other_user}",
"gists_url": "https://api.github.com/users/thominj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thominj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thominj/subscriptions",
"organizations_url": "https://api.github.com/users/thominj/orgs",
"repos_url": "https://api.github.com/users/thominj/repos",
"events_url": "https://api.github.com/users/thominj/events{/privacy}",
"received_events_url": "https://api.github.com/users/thominj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
This commit fixes a bug in the LengthGroupSampler where, if model_input_name is not set, the default value is None instead of "input_ids".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I did not write a test for this, but if necessary I can. Neither this sampler nor the distributed version currently has test coverage for the ValueError this bug raises, but it might not be bad to have.
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10783/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10783",
"html_url": "https://github.com/huggingface/transformers/pull/10783",
"diff_url": "https://github.com/huggingface/transformers/pull/10783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10783.patch",
"merged_at": 1616077557000
} |
https://api.github.com/repos/huggingface/transformers/issues/10782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10782/comments | https://api.github.com/repos/huggingface/transformers/issues/10782/events | https://github.com/huggingface/transformers/pull/10782 | 834,304,095 | MDExOlB1bGxSZXF1ZXN0NTk1MTQ4OTc5 | 10,782 | add dockerfile for zero optimzier | {
"login": "micmelesse",
"id": 16394078,
"node_id": "MDQ6VXNlcjE2Mzk0MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/16394078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micmelesse",
"html_url": "https://github.com/micmelesse",
"followers_url": "https://api.github.com/users/micmelesse/followers",
"following_url": "https://api.github.com/users/micmelesse/following{/other_user}",
"gists_url": "https://api.github.com/users/micmelesse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micmelesse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micmelesse/subscriptions",
"organizations_url": "https://api.github.com/users/micmelesse/orgs",
"repos_url": "https://api.github.com/users/micmelesse/repos",
"events_url": "https://api.github.com/users/micmelesse/events{/privacy}",
"received_events_url": "https://api.github.com/users/micmelesse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | NONE | null | This PR adds a dockerfile for the zero optimizer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10782",
"html_url": "https://github.com/huggingface/transformers/pull/10782",
"diff_url": "https://github.com/huggingface/transformers/pull/10782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10782.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10781/comments | https://api.github.com/repos/huggingface/transformers/issues/10781/events | https://github.com/huggingface/transformers/pull/10781 | 834,167,309 | MDExOlB1bGxSZXF1ZXN0NTk1MDQyMDMw | 10,781 | Add support for detecting intel-tensorflow version | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | MEMBER | null | The `intel-tensorflow` pypi package is not currently detected by transformers.
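A rough sketch of the kind of lookup involved (illustrative only - the actual detection lives in the library's import utilities, and the candidate list here is an assumption):
```python
# Illustrative: treat "intel-tensorflow" as one more distribution name to query
# when figuring out which TensorFlow build is installed.
import importlib_metadata

def detect_tf_version():
    candidates = ("tensorflow", "tensorflow-cpu", "tensorflow-gpu", "intel-tensorflow")
    for pkg in candidates:
        try:
            return importlib_metadata.version(pkg)
        except importlib_metadata.PackageNotFoundError:
            continue
    return None
```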
This PR adds support for detecting the Intel TF version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10781",
"html_url": "https://github.com/huggingface/transformers/pull/10781",
"diff_url": "https://github.com/huggingface/transformers/pull/10781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10781.patch",
"merged_at": 1616027147000
} |
https://api.github.com/repos/huggingface/transformers/issues/10780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10780/comments | https://api.github.com/repos/huggingface/transformers/issues/10780/events | https://github.com/huggingface/transformers/pull/10780 | 834,150,080 | MDExOlB1bGxSZXF1ZXN0NTk1MDI4MTE1 | 10,780 | Improve the speed of adding tokens from added_tokens.json | {
"login": "cchen-dialpad",
"id": 47165889,
"node_id": "MDQ6VXNlcjQ3MTY1ODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/47165889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cchen-dialpad",
"html_url": "https://github.com/cchen-dialpad",
"followers_url": "https://api.github.com/users/cchen-dialpad/followers",
"following_url": "https://api.github.com/users/cchen-dialpad/following{/other_user}",
"gists_url": "https://api.github.com/users/cchen-dialpad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cchen-dialpad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cchen-dialpad/subscriptions",
"organizations_url": "https://api.github.com/users/cchen-dialpad/orgs",
"repos_url": "https://api.github.com/users/cchen-dialpad/repos",
"events_url": "https://api.github.com/users/cchen-dialpad/events{/privacy}",
"received_events_url": "https://api.github.com/users/cchen-dialpad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you provide a way to benchmark the change so that we can see in which situations is the speedup visible? Thank you!",
"> Hi! Could you provide a way to benchmark the change so that we can see in which situations is the speedup visible? Thank you!\r\n\r\nHi @LysandreJik , I just updated the benchmark code snippets and sample output in the description. Hopefully it can validate the change.",
"Hi @LysandreJik , just wanted to follow up on this, is there anything else you would like to see on this PR? I also tried to run the slow test. It looked ok, but with a few connection errors: `ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on`, and re-run didn't fix it. I was wondering how the slow tests look like on your end. Thanks!",
"Awesome, thanks to both of you!"
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
This PR significantly improves the speed of adding tokens from `added_tokens.json`, when it contains a large number of tokens (e.g., 20,000+). When adding one token at a time, it uses `bisect` to insert the token into `PreTrainedTokenizer.unique_no_split_tokens`. Please see a detailed description and motivation in this issue: https://github.com/huggingface/transformers/issues/10676
This change relies on the requirement that `unique_no_split_tokens` is sorted. (I'm not sure if this is a fair assumption, otherwise I can check if it's already sorted before the insertion.)
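To illustrate the idea, here is a stripped-down sketch of the insertion step (an illustrative helper, not the exact patch):
```python
# Stripped-down sketch: find the insertion point with an O(log n) lookup
# instead of re-sorting the whole list for every added token.
import bisect

def insert_no_split_token(unique_no_split_tokens, token):
    # assumes unique_no_split_tokens is kept sorted, as noted above
    idx = bisect.bisect_left(unique_no_split_tokens, token)
    if idx == len(unique_no_split_tokens) or unique_no_split_tokens[idx] != token:
        unique_no_split_tokens.insert(idx, token)
```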
Fixes #10676
## Benchmark
MacOS Mojave (CPU: 2.6 GHz Intel Core i7)
python==3.7.9
torch==1.7.1+cpu
transformers==3.5.1 or transformers==4.4.2 (similar results)
```python
import json
import random
from timeit import default_timer as timer
from transformers import DistilBertTokenizer
model_dir = '/home/username/saved_model'
# Load a pretrained model's tokenizer, save it to {model_dir}
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
tokenizer.save_pretrained(model_dir)
print('len(tokenizer) of a pretrained model distilbert-base-uncased:', len(tokenizer))
# Generate n values as token suffix, to randomize the insertion position of added tokens
low = 30522
high = 82181 # exclusive
random_suffix = list(range(low, high))
random.shuffle(random_suffix)
# Save the n new tokens with correct indices to added_tokens.json
added_tokens = {f'addedtoken{val}': low + idx for idx, val in enumerate(random_suffix)}
with open(f'{model_dir}/added_tokens.json', 'w') as f:
json.dump(added_tokens, f)
print(f'saved {len(added_tokens)} tokens to added_tokens.json')
# Load the tokenizer from {model_dir}, and print the elapsed time
start = timer()
tokenizer = DistilBertTokenizer.from_pretrained(model_dir)
print('len(tokenizer) after loading from saved model:', len(tokenizer))
end = timer()
print('Elapsed (seconds):', round(end - start, 3))
# Make sure tokenizer.unique_no_split_tokens remains sorted
all_values = tokenizer.unique_no_split_tokens
assert all(all_values[i+1] > all_values[i] for i in range(len(all_values) - 1))
```
**If we save 21659 tokens in added_tokens.json**, output before the change:
```bash
len(tokenizer) of a pretrained model distilbert-base-uncased: 30522
saved 21659 tokens to added_tokens.json
len(tokenizer) after loading from saved model: 52181
*** Elapsed (seconds): 76.95
```
output after the change:
```bash
len(tokenizer) of a pretrained model distilbert-base-uncased: 30522
saved 21659 tokens to added_tokens.json
len(tokenizer) after loading from saved model: 52181
*** Elapsed (seconds): 0.308
```
**If we save 51659 tokens in added_tokens.json**, output before the change:
```bash
len(tokenizer) of a pretrained model distilbert-base-uncased: 30522
saved 51659 tokens to added_tokens.json
len(tokenizer) after loading from saved model: 82181
*** Elapsed (seconds): 527.795
```
output after the change:
```bash
len(tokenizer) of a pretrained model distilbert-base-uncased: 30522
saved 51659 tokens to added_tokens.json
len(tokenizer) after loading from saved model: 82181
*** Elapsed (seconds): 0.83
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Who can review?
@lhoestq @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10780/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10780",
"html_url": "https://github.com/huggingface/transformers/pull/10780",
"diff_url": "https://github.com/huggingface/transformers/pull/10780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10780.patch",
"merged_at": 1617281772000
} |
https://api.github.com/repos/huggingface/transformers/issues/10779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10779/comments | https://api.github.com/repos/huggingface/transformers/issues/10779/events | https://github.com/huggingface/transformers/issues/10779 | 834,074,242 | MDU6SXNzdWU4MzQwNzQyNDI= | 10,779 | EncoderDecoderModel with different model dimensions | {
"login": "LarsHill",
"id": 37187985,
"node_id": "MDQ6VXNlcjM3MTg3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37187985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LarsHill",
"html_url": "https://github.com/LarsHill",
"followers_url": "https://api.github.com/users/LarsHill/followers",
"following_url": "https://api.github.com/users/LarsHill/following{/other_user}",
"gists_url": "https://api.github.com/users/LarsHill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LarsHill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LarsHill/subscriptions",
"organizations_url": "https://api.github.com/users/LarsHill/orgs",
"repos_url": "https://api.github.com/users/LarsHill/repos",
"events_url": "https://api.github.com/users/LarsHill/events{/privacy}",
"received_events_url": "https://api.github.com/users/LarsHill/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Hey @LarsHill, \r\n\r\nYes, we should fix this indeed :-) I'll try to open a PR for this this week!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | null | NONE | null | ## Who can help
@patrickvonplaten, @patil-suraj
## Information
When instantiating an `EncoderDecoderModel` from two pretrained models whose model dimensions are different, a `RuntimeError` occurs at the `CrossAttention` calculation step.
The reason is that, regardless of a potentially different encoder model dimension, the projection layers for key and value are initialized with the decoder model dimension.
This leads to a dimensionality mismatch when performing the matrix multiplication of encoder outputs (encoder model dimension) in the key and value projection layers (decoder model dimension).
Looking a little bit deeper in the API I would suspect it should be easy to provide the correct encoder model dimension to the `Attention` module in most Model implementations and their key/value projection layers, if the `add_cross_attention=True` argument is set. Also, I think the encoder model dimension should be easily accessible via `self.encoder.config.d_model` or something along these lines.
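To make the suggestion concrete, here is a rough sketch of what I have in mind for the cross-attention projections (illustrative PyTorch only, not the actual modeling code; `encoder_dim` stands for whatever the encoder config exposes as its hidden size):
```python
import torch.nn as nn

# Illustrative sizing: keys/values are computed from the encoder states, so their
# input dimension should be the *encoder* width, while queries come from the decoder.
class CrossAttentionProjections(nn.Module):
    def __init__(self, decoder_dim: int, encoder_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(decoder_dim, decoder_dim)
        self.k_proj = nn.Linear(encoder_dim, decoder_dim)  # currently sized decoder_dim -> decoder_dim
        self.v_proj = nn.Linear(encoder_dim, decoder_dim)
```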
Generally, I think there is no reason against using `EncoderDecoderModel` with `encoder='bert-large-cased'` (`d_model=1024`) and `decoder='gpt2'` (`d_model=768`), but currently this setup doesn't work.
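A minimal way to trigger it (a sketch; it downloads the pretrained weights and is expected to fail with the size-mismatch error described above):
```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

# bert-large-cased has hidden size 1024, gpt2 has 768
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-large-cased", "gpt2")
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")

enc = tokenizer("a short test sentence", return_tensors="pt")
decoder_input_ids = torch.tensor([[0, 1, 2]])
out = model(input_ids=enc["input_ids"], decoder_input_ids=decoder_input_ids)  # raises the mismatch error
```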
Thanks a lot for looking into it :)
Best regards
Lars | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10779/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10778/comments | https://api.github.com/repos/huggingface/transformers/issues/10778/events | https://github.com/huggingface/transformers/pull/10778 | 834,062,442 | MDExOlB1bGxSZXF1ZXN0NTk0OTU1OTYx | 10,778 | Smmp batch not divisible by microbatches fix | {
"login": "mansimane",
"id": 23171195,
"node_id": "MDQ6VXNlcjIzMTcxMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23171195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimane",
"html_url": "https://github.com/mansimane",
"followers_url": "https://api.github.com/users/mansimane/followers",
"following_url": "https://api.github.com/users/mansimane/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimane/subscriptions",
"organizations_url": "https://api.github.com/users/mansimane/orgs",
"repos_url": "https://api.github.com/users/mansimane/repos",
"events_url": "https://api.github.com/users/mansimane/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @mansimane !\r\nI added the test and realized the implementation was not working as expected, so fixed it (I had forgotten this does not behave like the `DistributedSampler` that takes one sample every other num_replicas but sliced the indices at the beginning). If you want to have a last look to check I didn't do anything bad that would be great.\r\n\r\nThe rest of the changes are just the result of our styling scripts",
"Changes look good to me too. I tested with microbatch size 2."
] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes:
Batch size not divisible by microbatches issue in sagemaker model parallel. Following is a summary of the changes (a rough sketch of the sampler padding idea follows this list):
1. Updated SequentialDistributedSampler to generate samples of multiples of batchsize.
2. Updated preds_gatherer and labels_gatherer calls to be multiple of batch size.
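Rough sketch of the sampler padding idea (illustrative only; the real `SequentialDistributedSampler` handles more details such as how indices are split across processes):
```python
import math

# Illustrative: pad the index list so that every process receives a number of
# samples that is a multiple of its batch size, repeating the first indices.
def padded_indices(num_samples, num_replicas, batch_size):
    per_replica = math.ceil(num_samples / (batch_size * num_replicas)) * batch_size
    total_size = per_replica * num_replicas
    indices = list(range(num_samples))
    indices += indices[: total_size - len(indices)]
    return indices, per_replica
```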
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @philschmid @anirudh2290
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10778/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10778",
"html_url": "https://github.com/huggingface/transformers/pull/10778",
"diff_url": "https://github.com/huggingface/transformers/pull/10778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10778.patch",
"merged_at": 1616023091000
} |
https://api.github.com/repos/huggingface/transformers/issues/10777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10777/comments | https://api.github.com/repos/huggingface/transformers/issues/10777/events | https://github.com/huggingface/transformers/pull/10777 | 834,034,924 | MDExOlB1bGxSZXF1ZXN0NTk0OTMzNTI0 | 10,777 | [trainer] make failure to find a resume checkpoint fatal + tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | As a follow up to https://github.com/huggingface/transformers/pull/10760 this PR:
- makes a failure to find a valid checkpoint to resume from fatal - when an explicit `resume_from_checkpoint` was passed (a rough sketch of the intended check follows this list)
- extends `test_can_resume_training` to validate this change and also the boolean `resume_from_checkpoint` case.
- adds a small `test_can_resume_training` refactoring - so it's easy to see they are the same args on each invocation.
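Rough sketch of the intended check (illustrative; the real implementation uses the trainer's internal constants and wording):
```python
import os

# Illustrative: when an explicit checkpoint path is given but no model weights
# are found in it, fail loudly instead of silently starting from scratch.
def check_resume_checkpoint(resume_from_checkpoint):
    if resume_from_checkpoint is not None and not os.path.isfile(
        os.path.join(resume_from_checkpoint, "pytorch_model.bin")
    ):
        raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
```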
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10777",
"html_url": "https://github.com/huggingface/transformers/pull/10777",
"diff_url": "https://github.com/huggingface/transformers/pull/10777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10777.patch",
"merged_at": 1616004997000
} |
https://api.github.com/repos/huggingface/transformers/issues/10776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10776/comments | https://api.github.com/repos/huggingface/transformers/issues/10776/events | https://github.com/huggingface/transformers/pull/10776 | 833,996,820 | MDExOlB1bGxSZXF1ZXN0NTk0OTAxNzcw | 10,776 | [examples] document resuming | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | CONTRIBUTOR | null | This PR documents how one can resume training in examples. Thanks to @sgugger for the notes.
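A rough usage sketch (hedged): `trainer` below is assumed to be an already-configured `transformers.Trainer`, and resuming goes through the `resume_from_checkpoint` argument of `Trainer.train`.

```python
# Resume from the last checkpoint saved in the trainer's output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific checkpoint directory:
trainer.train(resume_from_checkpoint="path/to/output_dir/checkpoint-500")
```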
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10776",
"html_url": "https://github.com/huggingface/transformers/pull/10776",
"diff_url": "https://github.com/huggingface/transformers/pull/10776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10776.patch",
"merged_at": 1616010516000
} |
https://api.github.com/repos/huggingface/transformers/issues/10775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10775/comments | https://api.github.com/repos/huggingface/transformers/issues/10775/events | https://github.com/huggingface/transformers/pull/10775 | 833,995,315 | MDExOlB1bGxSZXF1ZXN0NTk0OTAwNjAz | 10,775 | Check copies blackify | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,616 | 1,616 | COLLABORATOR | null | # What does this PR do?
This PR updates the check_copies utils to apply black when checking whether a copy has diverged from the original when replacements happen. An example of the problem is given with the diff in `modeling_mobilebert.py` here, where the copy check could not be applied to the whole class because of styling divergences.
It also fixes a bug where the check was not applied to functions after the end of the definition (it wasn't checking the function body but was stopping at the first unindent, which was where the closing parenthesis was). As a consequence, three files are changed because they diverged from the original function:
- modeling_m2m_100.py
- modeling_roberta.py
- modeling_speech_to_text.py
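Roughly, the idea is to normalize both code blocks with black before diffing them. A hedged sketch follows; it is not the exact `utils/check_copies.py` implementation, and the black mode settings here are assumptions.

```python
import black


def blackify(code: str) -> str:
    """Run black over a snippet so that pure formatting differences do not
    make the copy check report a false divergence."""
    mode = black.Mode(line_length=119)  # transformers formats code to 119 characters
    return black.format_str(code, mode=mode)


def copies_match(original: str, copy: str) -> bool:
    return blackify(original) == blackify(copy)
```

With both sides normalized this way, a diff only reports real code divergences.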
I'm not sure if the check should be removed on those or not (cc @patil-suraj ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10775/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10775",
"html_url": "https://github.com/huggingface/transformers/pull/10775",
"diff_url": "https://github.com/huggingface/transformers/pull/10775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10775.patch",
"merged_at": 1616019081000
} |
https://api.github.com/repos/huggingface/transformers/issues/10774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10774/comments | https://api.github.com/repos/huggingface/transformers/issues/10774/events | https://github.com/huggingface/transformers/issues/10774 | 833,944,829 | MDU6SXNzdWU4MzM5NDQ4Mjk= | 10,774 | torch.nn.modules.module.ModuleAttributeError: 'AlbertEmbeddings' object has no attribute 'bias' | {
"login": "dhs29",
"id": 44876246,
"node_id": "MDQ6VXNlcjQ0ODc2MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44876246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhs29",
"html_url": "https://github.com/dhs29",
"followers_url": "https://api.github.com/users/dhs29/followers",
"following_url": "https://api.github.com/users/dhs29/following{/other_user}",
"gists_url": "https://api.github.com/users/dhs29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhs29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhs29/subscriptions",
"organizations_url": "https://api.github.com/users/dhs29/orgs",
"repos_url": "https://api.github.com/users/dhs29/repos",
"events_url": "https://api.github.com/users/dhs29/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhs29/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
transformers-cli convert --model_type albert \
    --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-64000 \
    --config $ALBERT_BASE_DIR/albert_config.json \
    --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin
I am running this script and getting the output below:
AlbertConfig {
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"down_scale_factor": 1,
"embedding_size": 128,
"eos_token_id": 3,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 768,
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "albert",
"net_structure_type": 0,
"num_attention_heads": 12,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 31990
}
Converting TensorFlow checkpoint from /data/NLP/ALBERT_Inspird_Train/albert_base/model.ckpt-64000
Loading TF weight bert/embeddings/layer_normalization/beta with shape [128]
Loading TF weight bert/embeddings/layer_normalization/beta/adam_m with shape [128]
Loading TF weight bert/embeddings/layer_normalization/beta/adam_v with shape [128]
Loading TF weight bert/embeddings/layer_normalization/gamma with shape [128]
Loading TF weight bert/embeddings/layer_normalization/gamma/adam_m with shape [128]
Loading TF weight bert/embeddings/layer_normalization/gamma/adam_v with shape [128]
Loading TF weight bert/embeddings/position_embeddings with shape [512, 128]
Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128]
Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128]
Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128]
Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128]
Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128]
Loading TF weight bert/embeddings/word_embeddings with shape [31990, 128]
Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [31990, 128]
Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [31990, 128]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [768]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [768]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [768]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 768]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 768]
Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_v with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_v with shape [768]
Loading TF weight bert/pooler/dense/bias with shape [768]
Loading TF weight bert/pooler/dense/bias/adam_m with shape [768]
Loading TF weight bert/pooler/dense/bias/adam_v with shape [768]
Loading TF weight bert/pooler/dense/kernel with shape [768, 768]
Loading TF weight bert/pooler/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/pooler/dense/kernel/adam_v with shape [768, 768]
Loading TF weight cls/predictions/output_bias with shape [31990]
Loading TF weight cls/predictions/output_bias/adam_m with shape [31990]
Loading TF weight cls/predictions/output_bias/adam_v with shape [31990]
Loading TF weight cls/predictions/transform/dense/bias with shape [128]
Loading TF weight cls/predictions/transform/dense/bias/adam_m with shape [128]
Loading TF weight cls/predictions/transform/dense/bias/adam_v with shape [128]
Loading TF weight cls/predictions/transform/dense/kernel with shape [768, 128]
Loading TF weight cls/predictions/transform/dense/kernel/adam_m with shape [768, 128]
Loading TF weight cls/predictions/transform/dense/kernel/adam_v with shape [768, 128]
Loading TF weight cls/predictions/transform/layer_normalization_25/beta with shape [128]
Loading TF weight cls/predictions/transform/layer_normalization_25/beta/adam_m with shape [128]
Loading TF weight cls/predictions/transform/layer_normalization_25/beta/adam_v with shape [128]
Loading TF weight cls/predictions/transform/layer_normalization_25/gamma with shape [128]
Loading TF weight cls/predictions/transform/layer_normalization_25/gamma/adam_m with shape [128]
Loading TF weight cls/predictions/transform/layer_normalization_25/gamma/adam_v with shape [128]
Loading TF weight cls/seq_relationship/output_bias with shape [2]
Loading TF weight cls/seq_relationship/output_bias/adam_m with shape [2]
Loading TF weight cls/seq_relationship/output_bias/adam_v with shape [2]
Loading TF weight cls/seq_relationship/output_weights with shape [2, 768]
Loading TF weight cls/seq_relationship/output_weights/adam_m with shape [2, 768]
Loading TF weight cls/seq_relationship/output_weights/adam_v with shape [2, 768]
Loading TF weight global_step with shape []
bert/embeddings/layer_normalization/beta
bert/embeddings/layer_normalization/beta/adam_m
bert/embeddings/layer_normalization/beta/adam_v
bert/embeddings/layer_normalization/gamma
bert/embeddings/layer_normalization/gamma/adam_m
bert/embeddings/layer_normalization/gamma/adam_v
bert/embeddings/position_embeddings
bert/embeddings/position_embeddings/adam_m
bert/embeddings/position_embeddings/adam_v
bert/embeddings/token_type_embeddings
bert/embeddings/token_type_embeddings/adam_m
bert/embeddings/token_type_embeddings/adam_v
bert/embeddings/word_embeddings
bert/embeddings/word_embeddings/adam_m
bert/embeddings/word_embeddings/adam_v
bert/encoder/embedding_hidden_mapping_in/bias
bert/encoder/embedding_hidden_mapping_in/bias/adam_m
bert/encoder/embedding_hidden_mapping_in/bias/adam_v
bert/encoder/embedding_hidden_mapping_in/kernel
bert/encoder/embedding_hidden_mapping_in/kernel/adam_m
bert/encoder/embedding_hidden_mapping_in/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_m
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/beta/adam_v
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_m
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_1/gamma/adam_v
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_m
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/beta/adam_v
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_m
bert/encoder/transformer/group_0/layer_0/inner_group_0/layer_normalization_2/gamma/adam_v
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_m
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/beta/adam_v
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_m
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_3/gamma/adam_v
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_m
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/beta/adam_v
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_m
bert/encoder/transformer/group_0_1/layer_1/inner_group_0/layer_normalization_4/gamma/adam_v
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_m
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/beta/adam_v
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_m
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_21/gamma/adam_v
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_m
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/beta/adam_v
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_m
bert/encoder/transformer/group_0_10/layer_10/inner_group_0/layer_normalization_22/gamma/adam_v
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_m
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/beta/adam_v
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_m
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_23/gamma/adam_v
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_m
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/beta/adam_v
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_m
bert/encoder/transformer/group_0_11/layer_11/inner_group_0/layer_normalization_24/gamma/adam_v
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_m
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/beta/adam_v
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_m
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_5/gamma/adam_v
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_m
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/beta/adam_v
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_m
bert/encoder/transformer/group_0_2/layer_2/inner_group_0/layer_normalization_6/gamma/adam_v
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_m
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/beta/adam_v
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_m
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_7/gamma/adam_v
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_m
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/beta/adam_v
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_m
bert/encoder/transformer/group_0_3/layer_3/inner_group_0/layer_normalization_8/gamma/adam_v
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_m
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/beta/adam_v
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_m
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_10/gamma/adam_v
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_m
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/beta/adam_v
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_m
bert/encoder/transformer/group_0_4/layer_4/inner_group_0/layer_normalization_9/gamma/adam_v
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_m
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/beta/adam_v
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_m
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_11/gamma/adam_v
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_m
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/beta/adam_v
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_m
bert/encoder/transformer/group_0_5/layer_5/inner_group_0/layer_normalization_12/gamma/adam_v
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_m
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/beta/adam_v
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_m
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_13/gamma/adam_v
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_m
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/beta/adam_v
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_m
bert/encoder/transformer/group_0_6/layer_6/inner_group_0/layer_normalization_14/gamma/adam_v
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_m
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/beta/adam_v
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_m
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_15/gamma/adam_v
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_m
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/beta/adam_v
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_m
bert/encoder/transformer/group_0_7/layer_7/inner_group_0/layer_normalization_16/gamma/adam_v
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_m
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/beta/adam_v
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_m
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_17/gamma/adam_v
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_m
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/beta/adam_v
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_m
bert/encoder/transformer/group_0_8/layer_8/inner_group_0/layer_normalization_18/gamma/adam_v
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_m
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/beta/adam_v
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_m
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_19/gamma/adam_v
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_m
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/beta/adam_v
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_m
bert/encoder/transformer/group_0_9/layer_9/inner_group_0/layer_normalization_20/gamma/adam_v
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
cls/predictions/output_bias
cls/predictions/output_bias/adam_m
cls/predictions/output_bias/adam_v
cls/predictions/transform/dense/bias
cls/predictions/transform/dense/bias/adam_m
cls/predictions/transform/dense/bias/adam_v
cls/predictions/transform/dense/kernel
cls/predictions/transform/dense/kernel/adam_m
cls/predictions/transform/dense/kernel/adam_v
cls/predictions/transform/layer_normalization_25/beta
cls/predictions/transform/layer_normalization_25/beta/adam_m
cls/predictions/transform/layer_normalization_25/beta/adam_v
cls/predictions/transform/layer_normalization_25/gamma
cls/predictions/transform/layer_normalization_25/gamma/adam_m
cls/predictions/transform/layer_normalization_25/gamma/adam_v
cls/seq_relationship/output_bias
cls/seq_relationship/output_bias/adam_m
cls/seq_relationship/output_bias/adam_v
cls/seq_relationship/output_weights
cls/seq_relationship/output_weights/adam_m
cls/seq_relationship/output_weights/adam_v
global_step
Skipping albert/embeddings/layer_normalization/beta
Traceback (most recent call last):
File "/home/dshah/venv/bin/transformers-cli", line 8, in
sys.exit(main())
File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/commands/transformers_cli.py", line 33, in main
service.run()
File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/commands/convert.py", line 80, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/home/dshah/venv/lib64/python3.8/site-packages/transformers/modeling_albert.py", line 163, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "/home/dshah/venv/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 771, in getattr
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'AlbertEmbeddings' object has no attribute 'bias'
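For context, a hypothetical, simplified sketch (not the actual `load_tf_weights_in_albert` code) of the kind of attribute walk the traceback points at: each TF scope name is mapped onto a submodule or parameter via `getattr`, so a scope name the mapping does not expect can leave the pointer on a module (here `AlbertEmbeddings`) that has no `bias` attribute.

```python
def follow_tf_name(model, tf_name):
    # Hypothetical mapping loop, for illustration only.
    pointer = model
    for scope in tf_name.split("/"):
        if scope in ("beta", "bias"):        # LayerNorm beta / dense bias map to "bias"
            pointer = getattr(pointer, "bias")
        elif scope in ("gamma", "kernel"):   # LayerNorm gamma / dense kernel map to "weight"
            pointer = getattr(pointer, "weight")
        else:                                # e.g. "embeddings", "encoder", ...
            pointer = getattr(pointer, scope)
    return pointer
```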
Could you please look into this, @LysandreJik?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10773/comments | https://api.github.com/repos/huggingface/transformers/issues/10773/events | https://github.com/huggingface/transformers/pull/10773 | 833,791,822 | MDExOlB1bGxSZXF1ZXN0NTk0NzI5NDQz | 10,773 | Wav2Vec2 - fix flaky test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | MEMBER | null | # What does this PR do?
The test: `tests/test_modeling_wav2vec2.py::Wav2Vec2RobustModelTest::test_ctc_loss_inference` is a bit flaky. Locally, these bug fixes seem to solve the problem. I ran the test 200 times locally.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10773/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10773",
"html_url": "https://github.com/huggingface/transformers/pull/10773",
"diff_url": "https://github.com/huggingface/transformers/pull/10773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10773.patch",
"merged_at": 1615993817000
} |
https://api.github.com/repos/huggingface/transformers/issues/10772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10772/comments | https://api.github.com/repos/huggingface/transformers/issues/10772/events | https://github.com/huggingface/transformers/issues/10772 | 833,741,207 | MDU6SXNzdWU4MzM3NDEyMDc= | 10,772 | Differences between S2T and Wav2Vec2 | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten \r\nWho should I tag for S2T ?",
"@patil-suraj for s2t",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
There seem to be differences between S2T and Wav2Vec2 that are hard to reason about and that may be fixable.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Adding something like an AutomaticSpeechRecognitionPipeline might be desirable and would be hard to do in the current state. If/when new multimodal models are added, this is going to add more and more complexity. Aiming for a consistent API is desirable IMO.
## Description
- There is no `AutoProcessor.from_pretrained`.
- Wav2Vec2 cannot use `skip_special_tokens` for the decode variant (because it skips `<pad>` tokens early, which removes all duplicate letters from the output). IMO it's a "bug", as `<pad>` in the context of CTC is not a special token (at least until the letters are resolved).
- Wav2Vec2 uses 1 forward pass, whereas S2T uses the generate function. It would be nice if there could be 1 interface only (maybe just overload `Wav2Vec2ForCTC.generate`?).
- S2T overloads `input_ids` with float tensors when `generating`, which works in practice but does seem like a piggy-back of the generate function and is definitely confusing to use. If `generate` is generic enough, maybe `input_ids` should be renamed to reflect that (`input_ids` are IDs everywhere in the rest of transformers). It could be a simple internal variable rename; I don't imply we should change any function signature anywhere, just that the variable is not necessarily IDs. Isn't it a bit like `inputs_embeds`?
- Wav2Vec2Processor returns 'input_values' whereas S2TProcessor returns 'input_features'. They seem (at least in appearance) to be the same. Would it be better to use only 1 name if they do? (A rough sketch of both call patterns follows this list.)
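For concreteness, here is a rough sketch of the two call patterns side by side (the checkpoint names and the exact way the S2T features reach `generate` are assumptions, so treat it as illustrative only):
```python
# Hedged sketch — illustrates the naming/interface split, not a reference usage.
import torch
from transformers import (
    Wav2Vec2Processor, Wav2Vec2ForCTC,
    Speech2TextProcessor, Speech2TextForConditionalGeneration,
)

speech = torch.randn(16000).numpy()  # dummy 1 second of 16 kHz audio

# Wav2Vec2: one forward pass + CTC argmax decode, features are called `input_values`
w2v_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
w2v_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
w2v_inputs = w2v_processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = w2v_model(w2v_inputs.input_values).logits
w2v_text = w2v_processor.batch_decode(torch.argmax(logits, dim=-1))

# S2T: autoregressive `generate`, features are called `input_features`
s2t_processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
s2t_model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
s2t_inputs = s2t_processor(speech, sampling_rate=16000, return_tensors="pt")
generated_ids = s2t_model.generate(s2t_inputs.input_features, attention_mask=s2t_inputs.attention_mask)
s2t_text = s2t_processor.batch_decode(generated_ids, skip_special_tokens=True)
```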
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
Happy to contribute with PRs but I lack the more general view to be sure about what direction to take, and where are the "better" fixes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10771/comments | https://api.github.com/repos/huggingface/transformers/issues/10771/events | https://github.com/huggingface/transformers/pull/10771 | 833,710,983 | MDExOlB1bGxSZXF1ZXN0NTk0NjYxMDc0 | 10,771 | Fix ProphetNet Flaky Test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I may be mistaken, but this test has appeared since the ProphetNet refactor, right? Is it due to that refactor, or is it a newly added test? "
] | 1,615 | 1,615 | 1,615 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR aims at solving the flaky ProphetNet test: https://app.circleci.com/pipelines/github/huggingface/transformers/21170/workflows/749ec532-0847-4d1b-8078-ca27bfdbe318/jobs/182387 . I double-checked the code and everything looks correct. Also, I've run the test 100 times locally with the increased tolerance to somewhat make sure that it fixes the flaky CI.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10771/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10771",
"html_url": "https://github.com/huggingface/transformers/pull/10771",
"diff_url": "https://github.com/huggingface/transformers/pull/10771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10771.patch",
"merged_at": 1615986914000
} |
https://api.github.com/repos/huggingface/transformers/issues/10770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10770/comments | https://api.github.com/repos/huggingface/transformers/issues/10770/events | https://github.com/huggingface/transformers/issues/10770 | 833,630,809 | MDU6SXNzdWU4MzM2MzA4MDk= | 10,770 | TAPAS for Question Generation | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes you can. TAPAS is an encoder-model, and can be used in an encoder-decoder set-up, like so:\r\n\r\n```\r\nfrom transformers import EncoderDecoderModel\r\n\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\"google/tapas-base\", \"bert-base-cased\")\r\n```\r\nYou can specify any decoder you want, here I'm using BERT as a decoder, but you can also use GPT-2, etc (any model that supports the `is_decoder` logic).\r\n\r\nFor more information, see the [docs](https://huggingface.co/transformers/model_doc/encoderdecoder.html) of `EncoderDecoderModel`. ",
"Thanks @NielsRogge ,\r\n\r\nI went through the concept of `EncoderDecoderModel` and I have a doubt in implementing it for TAPAS - \r\n\r\nUnlike normal BERT models, TAPAS tokenizer takes `table`, `queries` and `answers` for fine-tuning. So if I want to generate questions, should I skip questions for TAPAS (currently using `google/tapas-large`) encoder and give them to decoder (currently using `GPT-2-medium`) instead?",
"Yes, if you want to generate questions given a table, then you should only encode the table (you can set `queries=None` when providing a table to `TapasTokenizer`). ",
"Thanks @NielsRogge ,\r\n\r\nI'll implement and let you know",
"Great, I'm curious to see the results. Another use case could be to generate answers given a question + table with an EncoderDecoder set-up. ",
"Hi @NielsRogge ,\r\n\r\nI tried with the above approach by passing `table` to encoder and `queries` to decoder. But while encoding, it's giving warning as - **TAPAS is a question answering model but you have not passed a query. Please be aware that the model will probably not behave correctly** which is because of [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L1014). I thought passing 'queries' to TAPAS is mandatory.\r\n\r\nAnyhow I trained the model but it's not performing as expected. While inferencing, it is giving same question (not fully formed) for any input that I pass. Below is the sample snippet\r\n\r\n\r\n",
"Yeah that warning is shown because TAPAS has been pre-trained on text-table pairs. You can ignore that warning, because we can still encode just the tables.\r\n\r\nWhat kind of generation method are you using? Greedy decoding, beam search? (See [this post](https://huggingface.co/blog/how-to-generate) for the different arguments you can pass to `.generate()`). ",
"@NielsRogge ,\r\n\r\nI'm not specifying any generation method so it should be Greedy itself.",
"Can you provide a notebook?",
"Hi @NielsRogge ,\r\n\r\n[Here](https://colab.research.google.com/drive/1d8m_hmipL-1ZzU15LfA2XHmypKipvnJR?usp=sharing) is the colab link for the replica of my work. As I cannot share or upload any files from my Office' VPN, I created this notebook which is same as the one I'm working with in our VM's. Only change is I used `google/tapas-base` and `gpt2` in colab whereas I'm using `google/tapas-large` and `gpt2-medium` for my official work.",
"Hi @NielsRogge \r\n\r\nI'm able to generate decent questions but only one generic question per table. How can I extend this to generate a question based on a particular cell value? \r\n\r\nBecause currently it's giving very basic ones like - \r\n`what are all of the countries?`\r\n`what are the names of all the drivers?`",
"Ok great :) sorry I didn't have the time yet to look at your notebook. Looking at it now, it looks really clean!\r\n\r\nI think the questions that it will generate highly depend on the training data you provide. I see you're currently training on SQA questions, and only those for which `position==0`. These questions are almost always very generic, because SQA is a dataset involving conversational questions, which means that the first question (with position 0) is most of the time a very generic question regarding a table, and the ones that come after (with position 1, 2, 3) are then more specific follow-up questions (regarding particular cell values).\r\n\r\nSo either you can also add those follow-up questions to your training dataset, or consider train on questions of the [WTQ](https://nlp.stanford.edu/blog/wikitablequestions-a-complex-real-world-question-understanding-dataset/) dataset? (Note that there is an overlap between WTQ and SQA questions - SQA was created based on WTQ). Or maybe questions from the WikiSQL dataset (which is available in HuggingFace datasets)?\r\n\r\nVery nice use case!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @NielsRogge, \r\nThanks for the TAPAS implementation!\r\n\r\nI'm trying to follow this use-case in order train the model to perform conditional generation from tables.\r\nSince TAPAS can encode the semi-structured meaning in tables, I guessed it was a good choice to use it as an encoder and say GPT2 as decoder.\r\n\r\nI however encountered a problem when trying to generate from that EncoderDecoder model:\r\nHere is the relevant pieces of code, this: \r\n\r\nresults in this error:\r\n\r\n\r\nI guess this is since model.generate() for EncoderDecoder does not expect to have the extra `token_type_ids` that TAPAS has. Can you think of a way I can make this work?\r\n\r\nThanks!"
] | 1,615 | 1,642 | 1,620 | NONE | null | Hi,
Is there a way to generate questions for a table with TAPAS, or is it only for Question Answering? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10769/comments | https://api.github.com/repos/huggingface/transformers/issues/10769/events | https://github.com/huggingface/transformers/pull/10769 | 833,619,948 | MDExOlB1bGxSZXF1ZXN0NTk0NTg1NzI3 | 10,769 | [Generate] Add save mode logits processor to remove nans and infs if necessary | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
It can happen that the output logits of models contain `inf` or even `nan` values. Those values will necessarily lead to errors when using the `sample(...)` or `beam_sample(...)` method.
This PR adds an optional `InfNanRemoveLogitsProcessor` that, when enabled, should remove those values. It should help to fix flaky CI failures like this one: https://app.circleci.com/pipelines/github/huggingface/transformers/21081/workflows/36711d05-4282-4167-88df-59fbda03fe33/jobs/181274
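As a rough illustration of the idea (a sketch, not the actual code added in this PR — only the class name is taken from the description above), such a processor just has to sanitize the scores before sampling:
```python
# Minimal sketch assuming the semantics described above; not the PR's implementation.
import torch
from transformers import LogitsProcessor

class InfNanRemoveLogitsProcessor(LogitsProcessor):
    """Replace nan/inf logits so that softmax + multinomial sampling cannot fail."""

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[scores != scores] = 0.0  # nan != nan, so this selects exactly the nan entries
        scores[scores == float("inf")] = torch.finfo(scores.dtype).max
        return scores
```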
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10769/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10769",
"html_url": "https://github.com/huggingface/transformers/pull/10769",
"diff_url": "https://github.com/huggingface/transformers/pull/10769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10769.patch",
"merged_at": 1616450405000
} |
https://api.github.com/repos/huggingface/transformers/issues/10768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10768/comments | https://api.github.com/repos/huggingface/transformers/issues/10768/events | https://github.com/huggingface/transformers/issues/10768 | 833,572,974 | MDU6SXNzdWU4MzM1NzI5NzQ= | 10,768 | Bug in multi-gpu training setting max_iters | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There is no `--max_iters` argument in the `run_summarization` script, so I'm not sure what you're referring.",
"Hi\r\nI apologize for the typo, this is max_steps, if you set it and run a code\r\nin a distributed way and compare it with non-distributed way, the number of\r\nsteps would not differ, but if you try with setting max_train_epochs, you\r\nwould see less number of iterations when training on multiple GPUs, meaning that the code is\r\ncorrectly setting the parameters in that case. thanks\r\n\r\nOn Wed, Mar 17, 2021 at 2:08 PM Sylvain Gugger ***@***.***>\r\nwrote:\r\n\r\n> There is no --max_iters argument in the run_summarization script, so I'm\r\n> not sure what you're referring.\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/10768#issuecomment-801066552>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS45N4YLS7NRPPJ32GUWPW3TECSTBANCNFSM4ZKGUFPA>\r\n> .\r\n>\r\n",
"Yes, `max_steps` is the number of training steps, so whether you run on one or several GPUs, you will do that number of training steps. That is the intended behavior and it is not a bug.\r\n \r\n`num_epochs` is the number of training epochs. Depending on your number of GPUs you will not have the same number of training steps per epoch (as long as you keep `per_device_train_batch_size` the same) so you will not train for the same number of total steps.",
"Hi\nthanks for the response, still to me if a user needs max_steps on multiple\ngpus, it needs to become a smaller number as this divides per number of\ngpus, similar to number of epochs.\n\nOn Thu, Mar 18, 2021 at 3:36 PM Sylvain Gugger ***@***.***>\nwrote:\n\n> Yes, max_steps is the number of training steps, so whether you run on one\n> or several GPUs, you will do that number of training steps. That is the\n> intended behavior and it is not a bug.\n>\n> num_epochs is the number of training epochs. Depending on your number of\n> GPUs you will not have the same number of training steps per epoch (as long\n> as you keep per_device_train_batch_size the same) so you will not train\n> for the same number of total steps.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10768#issuecomment-801980651>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS45N4ZF2CRND434WMFIIUDTEIFYBANCNFSM4ZKGUFPA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger, @patrickvonplaten, @patil-suraj
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
trainer: @sgugger
## Information
I am training a T5 model using the command in the repo on 4 GPUs in a distributed way. The issue is that if one sets max_iters, the number of iterations with 4 GPUs is not divided by 4 anymore; one only gets a speed-up if max_iters is not set, and this looks like a bug.
## To reproduce
Steps to reproduce the behavior:
Please run
python -m torch.distributed.launch \
--nproc_per_node 8 \
examples/seq2seq/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name xsum \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
    --max_val_samples 500 \
--max_iters 100
Compare the results with the case where you run on 1 GPU: both would take the same number of iterations to complete, which is not correct.
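For reference, a rough back-of-the-envelope of what the Trainer does with these flags (hedged illustration only, assuming no gradient accumulation — not output from an actual run):
```python
# Hedged illustration: steps per epoch for 500 samples and per_device_train_batch_size=4.
import math

train_samples = 500
per_device_bs = 4

for n_gpus in (1, 4):
    steps_per_epoch = math.ceil(train_samples / (per_device_bs * n_gpus))
    print(f"{n_gpus} GPU(s): {steps_per_epoch} optimizer steps per epoch")

# --num_train_epochs therefore finishes in fewer optimizer steps on more GPUs,
# while --max_steps fixes the total number of optimizer steps, which stays the
# same regardless of the number of GPUs.
```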
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10767/comments | https://api.github.com/repos/huggingface/transformers/issues/10767/events | https://github.com/huggingface/transformers/pull/10767 | 833,497,142 | MDExOlB1bGxSZXF1ZXN0NTk0NDg0NTk4 | 10,767 | add run_common_voice script | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,616 | 1,616 | MEMBER | null | # What does this PR do?
This PR adds the `run_common_voice.py` script to fine-tune XLSR-Wav2Vec2 models on the `common_voice` dataset.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10767/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10767",
"html_url": "https://github.com/huggingface/transformers/pull/10767",
"diff_url": "https://github.com/huggingface/transformers/pull/10767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10767.patch",
"merged_at": 1616068276000
} |
https://api.github.com/repos/huggingface/transformers/issues/10766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10766/comments | https://api.github.com/repos/huggingface/transformers/issues/10766/events | https://github.com/huggingface/transformers/issues/10766 | 833,489,936 | MDU6SXNzdWU4MzM0ODk5MzY= | 10,766 | auto model encodings for a text snippet returns different floating values across different batch sizes | {
"login": "murali1996",
"id": 30381152,
"node_id": "MDQ6VXNlcjMwMzgxMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/30381152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/murali1996",
"html_url": "https://github.com/murali1996",
"followers_url": "https://api.github.com/users/murali1996/followers",
"following_url": "https://api.github.com/users/murali1996/following{/other_user}",
"gists_url": "https://api.github.com/users/murali1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/murali1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/murali1996/subscriptions",
"organizations_url": "https://api.github.com/users/murali1996/orgs",
"repos_url": "https://api.github.com/users/murali1996/repos",
"events_url": "https://api.github.com/users/murali1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/murali1996/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Thank you for your report. One question:\r\n- Do you still observe this when you're not using padding? Padding can influence values because of the padding tokens, even with attention masks.\r\n\r\nAlso, you're using a `_batch_to_device` method, but you should just be able to cast the batch to the device :)\r\n```py\r\n f = tokenizer(s, padding=True, truncation='longest_first', return_tensors=\"pt\", max_length=128)\r\n f = f.to(device)\r\n```",
"Hi @LysandreJik , thanks for `.to(device)` thingy. Regarding the bug, no i am not using padding tokens. For example, a batch two in above experimental setup looks like the following:\r\n\r\n```\r\n{'input_ids': tensor([[ 101, 2023, 7705, 19421, 7861, 8270, 4667, 2015, 2005, 2169, 7953, 6251, 102],\r\n\t\t\t\t\t [ 101, 2023, 7705, 19421, 7861, 8270, 4667, 2015, 2005, 2169, 7953, 6251, 102]], device='cuda:0'), \r\n 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n\t\t\t\t\t\t [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}\r\n```\r\n\r\nI get this issue irrespective of whether I use `padding=True` or `padding=False` ",
"Okay, I see, thank you! Second question: do you obtain the same if you're running on CPU? I'm currently on a CPU setup and tried running your code, I have exactly the same values for each.\r\n\r\nGPUs are known for numerical instabilities, so I wouldn't be surprised if this was the source of the issue!",
"@LysandreJik I tried on a CPU. I see different values for batch size 1 vs. greater than 1. For latter, all are exactly same. But still i see the following differences:\r\n\r\n\r\n```\r\ncpu\r\n0001 [-0.43458425998687744, 0.19430384039878845, -0.008721470832824707, 0.16533654928207397, -0.2130793333053589]\r\n0002 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365]\r\n0003 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365]\r\n0004 [-0.4345836639404297, 0.1943041831254959, -0.008721746504306793, 0.16533654928207397, -0.2130793035030365]\r\n0005 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0006 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0007 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0008 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0009 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0010 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0011 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0012 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0013 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0014 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0015 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0016 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0017 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0018 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n0019 [-0.4345828890800476, 0.19430366158485413, -0.008721619844436646, 0.16533659398555756, -0.21308015286922455]\r\n```\r\nThis is obtained on following system:\r\n- `transformers` version: 4.3.2\r\n- Platform: Darwin-20.3.0-x86_64-i386-64bit\r\n- Python version: 3.7.9\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n\r\n\r\n```\r\ncpu\r\n0001 [-0.43458291888237, 0.19430391490459442, -0.00872180424630642, 0.1653362363576889, -0.21307975053787231]\r\n0002 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0003 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0004 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0005 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0006 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0007 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0008 [-0.4345830976963043, 0.19430406391620636, 
-0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0009 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0010 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0011 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0012 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0013 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0014 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0015 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0016 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0017 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0018 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0019 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0020 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0021 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0022 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n0023 [-0.4345830976963043, 0.19430406391620636, -0.008721794933080673, 0.16533644497394562, -0.21307967603206635]\r\n```\r\nThis is obtained on following system:\r\n- `transformers` version: 4.4.1\r\n- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.10\r\n- PyTorch version (GPU?): 1.8.0+cu101 (False)\r\n- Tensorflow version (GPU?): 2.4.1 (False)\r\n\r\n\r\n\r\nAre you getting similar results or are you ending up getting exact same values irrespective of batch size equal or greater than 1 ?",
"Yes, you're right, the difference is between batch size == 1 and batch size > 1! Talking about it with team members, we guess it's because the kernels used to compute the results differ according to the dimensions, as they're optimized differently. \r\n\r\nFor batch size = 1, the model input would essentially be in one dimension (the vector of tokens), while for batch size > 1, the model input would essentially be in two dimension (an array of tokens).\r\n\r\nImo this is more of a PyTorch issue (if it's an issue in the first place) than a `transformers` issue!",
"Thanks for the information! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes (but the bug issue is irrespective of it)
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik, @patrickvonplaten
## Information
Model I am using: `bert-base-cased` and `sentence-transformers/distilbert-base-nli-stsb-mean-tokens`
Consider the following code:
```python
# pip install transformers
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
import transformers
from transformers import AutoModel, AutoTokenizer
name = "sentence-transformers/distilbert-base-nli-stsb-mean-tokens"
model = AutoModel.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
model.to(device)
model.eval()
from tqdm.autonotebook import trange
for ntimes in trange(1, 200, 1, desc="ntimes", disable=False):
s = ['This framework generates embeddings for each input sentence' for _ in range(ntimes)]
f = tokenizer(s, padding=True, truncation='longest_first', return_tensors="pt", max_length=128)
f = f.to(device)
with torch.no_grad():
out = model(**f, return_dict=False)
t = out[0] # token_embedding
print(str(ntimes).zfill(4), t[0][0][:5].tolist())
```
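(Hedged aside, not part of the original script: drift of this size is usually handled by comparing with a tolerance instead of exact equality — the two vectors below are copied from rows 0001 and 0003 of the output further down.)
```python
# Tolerance-based comparison of two of the reported rows (first two values only).
import torch

emb_bs1 = torch.tensor([-0.4345831274986267, 0.19430403411388397])
emb_bs3 = torch.tensor([-0.4345839023590088, 0.19430400431156158])
print(torch.allclose(emb_bs1, emb_bs3, atol=1e-5))  # True — the differences are below 1e-5
```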
The testing setup is as follows: for every batch size from 1 to 200, the model output (last layer's output) for the first sentence is taken and compared. Ideally, it is expected to be the same, but depending on the batch size, the output varies. Although the differences only appear after several decimal places, they still create an issue when rounding off or when the outputs are used for exact-text-match tasks. An example comparing the first 5 values of the CLS token's representation is printed below:
```
batch_size first_5_values
0001 [-0.4345831274986267, 0.19430403411388397, -0.008721709251403809, 0.16533663868904114, -0.21307958662509918]
0002 [-0.4345831274986267, 0.19430403411388397, -0.008721709251403809, 0.16533663868904114, -0.21307958662509918]
0003 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366]
0004 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366]
0005 [-0.4345839023590088, 0.19430400431156158, -0.008721785619854927, 0.16533628106117249, -0.21307939291000366]
0006 [-0.4345828890800476, 0.19430409371852875, -0.0087218526750803, 0.1653369963169098, -0.2130797803401947]
0007 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873]
0008 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873]
0009 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873]
0010 [-0.43458378314971924, 0.19430388510227203, -0.008721890859305859, 0.16533657908439636, -0.21307970583438873]
0011 [-0.43458303809165955, 0.19430424273014069, -0.0087218526750803, 0.16533637046813965, -0.21307967603206635]
0012 [-0.43458303809165955, 0.19430424273014069, -0.0087218526750803, 0.16533637046813965, -0.21307967603206635]
0013 [-0.4345836043357849, 0.19430403411388397, -0.008721555583178997, 0.16533656418323517, -0.21307919919490814]
0014 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0015 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0016 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0017 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0018 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0019 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0020 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0021 [-0.4345839321613312, 0.19430312514305115, -0.008722005411982536, 0.16533663868904114, -0.21307897567749023]
0022 [-0.43458428978919983, 0.1943034529685974, -0.008722092024981976, 0.16533707082271576, -0.21307975053787231]
0023 [-0.43458428978919983, 0.1943034529685974, -0.008722092024981976, 0.16533707082271576, -0.21307975053787231]
0024 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0025 [-0.43458348512649536, 0.19430328905582428, -0.008722656406462193, 0.16533656418323517, -0.21307960152626038]
0026 [-0.43458348512649536, 0.19430328905582428, -0.008722656406462193, 0.16533656418323517, -0.21307960152626038]
0027 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0028 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0029 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0030 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0031 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0032 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0033 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0034 [-0.43458327651023865, 0.19430407881736755, -0.008721843361854553, 0.16533705592155457, -0.2130795419216156]
0035 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0036 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0037 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0038 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0039 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0040 [-0.4345836639404297, 0.19430400431156158, -0.008721727877855301, 0.16533680260181427, -0.2130795419216156]
0041 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544]
0042 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544]
0043 [-0.43458354473114014, 0.1943034529685974, -0.008721929043531418, 0.1653369963169098, -0.2130795270204544]
0044 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, -0.21307994425296783]
0045 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, -0.21307994425296783]
0046 [-0.43458375334739685, 0.1943032294511795, -0.008721861988306046, 0.1653372347354889, -0.21307994425296783]
0047 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0048 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0049 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0050 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365]
0051 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365]
0052 [-0.43458375334739685, 0.19430378079414368, -0.008721306920051575, 0.1653364896774292, -0.21308039128780365]
0053 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545]
0054 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545]
0055 [-0.4345836043357849, 0.1943032443523407, -0.008722072467207909, 0.16533635556697845, -0.21307991445064545]
0056 [-0.43458402156829834, 0.19430415332317352, -0.008722043596208096, 0.16533638536930084, -0.2130793184041977]
0057 [-0.43458402156829834, 0.19430415332317352, -0.008722043596208096, 0.16533638536930084, -0.2130793184041977]
0058 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141]
0059 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141]
0060 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141]
0061 [-0.43458348512649536, 0.19430400431156158, -0.008722235448658466, 0.16533590853214264, -0.2130792737007141]
0062 [-0.4345838725566864, 0.19430425763130188, -0.008721861988306046, 0.16533733904361725, -0.21307975053787231]
0063 [-0.4345838725566864, 0.19430425763130188, -0.008721861988306046, 0.16533733904361725, -0.21307975053787231]
0064 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0065 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0066 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0067 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0068 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0069 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0070 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0071 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0072 [-0.4345833957195282, 0.19430355727672577, -0.008721498772501945, 0.16533666849136353, -0.21307994425296783]
0073 [-0.4345824718475342, 0.1943037360906601, -0.00872176606208086, 0.16533659398555756, -0.2130804806947708]
0074 [-0.4345824718475342, 0.1943037360906601, -0.00872176606208086, 0.16533659398555756, -0.2130804806947708]
0075 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0076 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0077 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0078 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0079 [-0.43458399176597595, 0.19430390000343323, -0.008721986785531044, 0.16533610224723816, -0.2130793035030365]
0080 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022]
0081 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022]
0082 [-0.43458354473114014, 0.19430452585220337, -0.008721709251403809, 0.16533659398555756, -0.21307997405529022]
0083 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0084 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0085 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0086 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0087 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0088 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0089 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0090 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0091 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0092 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0093 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0094 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0095 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0096 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0097 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0098 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0099 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0100 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0101 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0102 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0103 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0104 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0105 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0106 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0107 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0108 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0109 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0110 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0111 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0112 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0113 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0114 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0115 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0116 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0117 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0118 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0119 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0120 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0121 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0122 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0123 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0124 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0125 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0126 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0127 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0128 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0129 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0130 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0131 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0132 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0133 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0134 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0135 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0136 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0137 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0138 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0139 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0140 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0141 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0142 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0143 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0144 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0145 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0146 [-0.43458354473114014, 0.19430243968963623, -0.008721593767404556, 0.16533765196800232, -0.21307995915412903]
0147 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0148 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0149 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0150 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0151 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0152 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0153 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0154 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0155 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0156 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0157 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0158 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0159 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0160 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0161 [-0.4345836937427521, 0.1943046748638153, -0.008721747435629368, 0.1653372347354889, -0.2130795568227768]
0162 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0163 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0164 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0165 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0166 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0167 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0168 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0169 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0170 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0171 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0172 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0173 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0174 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0175 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0176 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0177 [-0.4345826506614685, 0.19430312514305115, -0.008722330443561077, 0.16533666849136353, -0.2130790799856186]
0178 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0179 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0180 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0181 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0182 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0183 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0184 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0185 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0186 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0187 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0188 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0189 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0190 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0191 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0192 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0193 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0194 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0195 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0196 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0197 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0198 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
0199 [-0.4345839321613312, 0.19430294632911682, -0.008722168393433094, 0.16533693671226501, -0.21308031678199768]
```
## To reproduce
Run the snippet of code provided above
Detailed snippets are available at this [colab notebook](https://colab.research.google.com/drive/19yXek9nx4E2pZTqk8JsS-tAhYjgZ5yGG?usp=sharing)
## Expected behavior
It is expected that an input has the same representation irrespective of the batch size used to obtain it.
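A minimal way to check this expectation (the model name and tolerance below are only illustrative) is:
```python
# Compare the hidden states of one sentence encoded alone vs. inside a padded batch.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

single = tokenizer(["a short test sentence"], return_tensors="pt")
batch = tokenizer(
    ["a short test sentence", "a much longer second sentence that forces padding"],
    padding=True,
    return_tensors="pt",
)

with torch.no_grad():
    h_single = model(**single).last_hidden_state[0]
    h_batch = model(**batch).last_hidden_state[0][: h_single.shape[0]]

print(torch.allclose(h_single, h_batch, atol=1e-5))  # expected: True
```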
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10765/comments | https://api.github.com/repos/huggingface/transformers/issues/10765/events | https://github.com/huggingface/transformers/issues/10765 | 833,444,270 | MDU6SXNzdWU4MzM0NDQyNzA= | 10,765 | Cannot import name swish from transformers.activations | {
"login": "ardiantovn",
"id": 16162415,
"node_id": "MDQ6VXNlcjE2MTYyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/16162415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ardiantovn",
"html_url": "https://github.com/ardiantovn",
"followers_url": "https://api.github.com/users/ardiantovn/followers",
"following_url": "https://api.github.com/users/ardiantovn/following{/other_user}",
"gists_url": "https://api.github.com/users/ardiantovn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ardiantovn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ardiantovn/subscriptions",
"organizations_url": "https://api.github.com/users/ardiantovn/orgs",
"repos_url": "https://api.github.com/users/ardiantovn/repos",
"events_url": "https://api.github.com/users/ardiantovn/events{/privacy}",
"received_events_url": "https://api.github.com/users/ardiantovn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! `swish` is not importable because it isn't available. `swish` is another name for `silu`, but arrived after it so the name you can use is `silu`:\r\n\r\n```py\r\n>>> from transformers.activations import silu\r\n```\r\n\r\nHowever, in our `ACT2FN` dict we have support for both `swish` and `silu`, so that you can do:\r\n```py\r\n>>> from transformers.activations import ACT2FN\r\n>>> swish = ACT2FN[\"swish\"]\r\n>>> silu = ACT2FN[\"silu\"]\r\n```",
"Thank you"
] | 1,615 | 1,616 | 1,616 | NONE | null | I have installed `transformers v4.4.1` and `tensorflow v2.4.1`.
I tried to run
`from transformers.activations import gelu, gelu_new, swish`.
I get an error like this:
`ImportError: cannot import name 'swish' from 'transformers.activations' (/Users/array/opt/miniconda3/lib/python3.7/site-packages/transformers/activations.py)`
Is there any solution to this error?
Thank you🙏
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10764/comments | https://api.github.com/repos/huggingface/transformers/issues/10764/events | https://github.com/huggingface/transformers/issues/10764 | 833,441,728 | MDU6SXNzdWU4MzM0NDE3Mjg= | 10,764 | TokenClassificationPipeline: top-k predictions | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
Optional argument for TokenClassificationPipeline to output top-k predictions instead of limiting output to argmax.
## Motivation
Having access to the top-k prediction distribution is useful in a number of scenarios, such as confidence calibration (https://arxiv.org/abs/1706.04599) or generating pseudo-labels (https://arxiv.org/abs/1911.04252).
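For illustration, the kind of per-token top-k distribution this feature would expose can already be computed manually from the model logits (the checkpoint and `k` below are placeholders):
```python
# Top-k label probabilities per token, computed outside the pipeline.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "dbmdz/bert-large-cased-finetuned-conll03-english"  # any token-classification checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name).eval()

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]  # shape: (seq_len, num_labels)

topk = probs.topk(k=3, dim=-1)  # top-3 scores and label ids for every token
labels = [[model.config.id2label[i.item()] for i in row] for row in topk.indices]
```
The proposed pipeline argument would simply surface this information in the usual output format.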
## Your contribution
I'm happy to submit a PR with the proposed changes, if this contribution is deemed useful. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10764/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10763/comments | https://api.github.com/repos/huggingface/transformers/issues/10763/events | https://github.com/huggingface/transformers/issues/10763 | 833,426,345 | MDU6SXNzdWU4MzM0MjYzNDU= | 10,763 | TokenClassificationPipeline: ignoring subwords | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you take a look at https://github.com/huggingface/transformers/pull/10568 and let me know if it's interesting for you? It proposes a refactor of the two keywords you mentioned.",
"> Hello! Could you take a look at #10568 and let me know if it's interesting for you? It proposes a refactor of the two keywords you mentioned.\r\n\r\nYes! That would solve this issue. Thanks for the pointer. I'll post comments there.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.4.1
- Platform: Linux-4.15.0-136-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- pipelines: @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
Any NER model, e.g. elastic/distilbert-base-cased-finetuned-conll03-english
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Ignoring subwords using the TokenClassificationPipeline.
## To reproduce
Steps to reproduce the behavior:
```
import transformers
pl = transformers.pipeline('ner', model="elastic/distilbert-base-cased-finetuned-conll03-english", tokenizer="elastic/distilbert-base-cased-finetuned-conll03-english", ignore_labels=[], ignore_subwords=True)
output = pl("Sir Testy McTest is testiful")
```
This outputs:
```
[{'word': 'Sir', 'score': 0.997665524482727, 'entity': 'O', 'index': 1, 'start': 0, 'end': 3}, {'word': 'Test', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 8}, {'word': '##y', 'score': 0.9581826329231262, 'entity': 'B-PER', 'index': 3, 'start': 8, 'end': 9}, {'word': 'M', 'score': 0.9105736613273621, 'entity': 'I-PER', 'index': 4, 'start': 10, 'end': 11}, {'word': '##c', 'score': 0.9090507626533508, 'entity': 'I-PER', 'index': 5, 'start': 11, 'end': 12}, {'word': '##T', 'score': 0.9545289874076843, 'entity': 'I-PER', 'index': 6, 'start': 12, 'end': 13}, {'word': '##est', 'score': 0.9441993832588196, 'entity': 'I-PER', 'index': 7, 'start': 13, 'end': 16}, {'word': 'is', 'score': 0.9999386072158813, 'entity': 'O', 'index': 8, 'start': 17, 'end': 19}, {'word': 'test', 'score': 0.9998794198036194, 'entity': 'O', 'index': 9, 'start': 20, 'end': 24}, {'word': '##iful', 'score': 0.9999022483825684, 'entity': 'O', 'index': 10, 'start': 24, 'end': 28}]
```
## Expected behavior
The expected behavior would be the subword tokens being merged with the preceding token and their predictions ignored, e.g.
```
{'word': 'Testy', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 9}
```
instead of
```
{'word': 'Test', 'score': 0.7986497282981873, 'entity': 'B-PER', 'index': 2, 'start': 4, 'end': 8}, {'word': '##y', 'score': 0.9581826329231262, 'entity': 'B-PER', 'index': 3, 'start': 8, 'end': 9}
```
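A rough sketch of the kind of post-processing described above (purely illustrative, and assuming WordPiece-style `##` continuation markers) is:
```python
# Merge '##' subword entries into the preceding entry, keeping the first subword's prediction.
def merge_subwords(entities):
    merged = []
    for ent in entities:
        if ent["word"].startswith("##") and merged:
            merged[-1]["word"] += ent["word"][2:]
            merged[-1]["end"] = ent["end"]
        else:
            merged.append(dict(ent))
    return merged
```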
In the current logic the flag `ignore_subwords` seems to be used only in combination with `grouped_entities` (https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L216). The output obtained from the example input above, with both flags set to True, is:
```
[{'entity_group': 'O', 'score': 0.997665524482727, 'word': 'Sir', 'start': 0, 'end': 3}, {'entity_group': 'PER', 'score': 0.8546116948127747, 'word': 'Testy McTest', 'start': 4, 'end': 16}, {'entity_group': 'O', 'score': 0.9999090135097504, 'word': 'is testiful', 'start': 17, 'end': 28}]
```
while setting `grouped_entities=True` and `ignore_subwords=False` outputs
```
[{'entity_group': 'O', 'score': 0.997665524482727, 'word': 'Sir', 'start': 0, 'end': 3}, {'entity_group': 'PER', 'score': 0.7986497282981873, 'word': 'Test', 'start': 4, 'end': 8}, {'entity_group': 'PER', 'score': 0.9353070855140686, 'word': '##y McTest', 'start': 8, 'end': 16}, {'entity_group': 'O', 'score': 0.9999067584673563, 'word': 'is testiful', 'start': 17, 'end': 28}]
```
This seems counterintuitive, as grouped entities shouldn't be fragmented by subwords, and ignoring subwords shouldn't be conditioned on grouping entities. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10763/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10762/comments | https://api.github.com/repos/huggingface/transformers/issues/10762/events | https://github.com/huggingface/transformers/pull/10762 | 833,415,569 | MDExOlB1bGxSZXF1ZXN0NTk0NDE0ODU2 | 10,762 | [DeepSpeed] simplify init | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,615 | 1,616 | 1,616 | CONTRIBUTOR | null | This PR simplifies `deepspeed.initialize` setup thanks to this PR https://github.com/microsoft/DeepSpeed/pull/825
The required DeepSpeed version that includes that change is already in place.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10762/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10762",
"html_url": "https://github.com/huggingface/transformers/pull/10762",
"diff_url": "https://github.com/huggingface/transformers/pull/10762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10762.patch",
"merged_at": 1616001663000
} |
https://api.github.com/repos/huggingface/transformers/issues/10761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10761/comments | https://api.github.com/repos/huggingface/transformers/issues/10761/events | https://github.com/huggingface/transformers/pull/10761 | 833,406,897 | MDExOlB1bGxSZXF1ZXN0NTk0NDA3MzI2 | 10,761 | [doc] [testing] extend the pytest -k section with more examples | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | CONTRIBUTOR | null | This PR adds more examples on using `pytest -k` - I always forget that I want to use `-k A OR B` when I want several tests - I keep trying AND and it doesn't match any.
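For example, keyword expressions along these lines (the test names and paths are placeholders; note that `-k` expects lowercase `or` / `and` / `not`) select several tests at once:
```bash
# run tests whose names match either pattern
pytest -k "test_save or test_load" tests/test_trainer.py

# run tests matching both substrings
pytest -k "deepspeed and checkpoint" tests/

# exclude tests matching a pattern
pytest -k "not slow" tests/
```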
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10761",
"html_url": "https://github.com/huggingface/transformers/pull/10761",
"diff_url": "https://github.com/huggingface/transformers/pull/10761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10761.patch",
"merged_at": 1615987418000
} |
https://api.github.com/repos/huggingface/transformers/issues/10760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10760/comments | https://api.github.com/repos/huggingface/transformers/issues/10760/events | https://github.com/huggingface/transformers/pull/10760 | 833,388,462 | MDExOlB1bGxSZXF1ZXN0NTk0MzkxOTIy | 10,760 | [DeepSpeed] improve checkpoint loading code plus tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Great, thank you for the feedback, @sgugger - I will add it separately https://github.com/huggingface/transformers/pull/10777"
] | 1,615 | 1,616 | 1,616 | CONTRIBUTOR | null | This PR further improves the DeepSpeed integration
* checkpoint resuming code has been cleaned up
* detailed checkpoint saving and resuming from checkpoint tests added
* a small reshuffle made in `test_trainer.py` to enable re-using helper functions in other test modules
* switched `test_trainer.py` to `TestCasePlus` so it's easier to deal with temp dirs during debug
* adjusted `init_deepspeed` to make a deepcopy of the config dict passed to it, so that the user's copy isn't affected - needed at least for tests (a small illustration of why a deep copy is needed is included below)
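A tiny illustration of why a deep copy rather than a shallow one is needed for a nested config dict (the keys and values are only examples):
```python
import copy

user_config = {"zero_optimization": {"stage": 2}}

shallow = dict(user_config)                       # shallow copy shares the nested dict
shallow["zero_optimization"]["stage"] = 3
print(user_config["zero_optimization"]["stage"])  # 3 -> the user's config was changed

user_config = {"zero_optimization": {"stage": 2}}
deep = copy.deepcopy(user_config)                 # deep copy duplicates the nested dict
deep["zero_optimization"]["stage"] = 3
print(user_config["zero_optimization"]["stage"])  # 2 -> the user's config is untouched
```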
Note that under DeepSpeed I made a failed attempt to load from the resume point fatal. I'm not sure why the normal code just warns if a wrong path is passed. Unless I'm missing something, if a user expects to resume and it is not possible, it should be fatal IMHO, so that they can correct their launching code.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10760",
"html_url": "https://github.com/huggingface/transformers/pull/10760",
"diff_url": "https://github.com/huggingface/transformers/pull/10760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10760.patch",
"merged_at": 1616001778000
} |
https://api.github.com/repos/huggingface/transformers/issues/10759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10759/comments | https://api.github.com/repos/huggingface/transformers/issues/10759/events | https://github.com/huggingface/transformers/issues/10759 | 833,377,713 | MDU6SXNzdWU4MzMzNzc3MTM= | 10,759 | AlbertForMaskedLM always has bad results | {
"login": "Twinparadox",
"id": 18140719,
"node_id": "MDQ6VXNlcjE4MTQwNzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/18140719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Twinparadox",
"html_url": "https://github.com/Twinparadox",
"followers_url": "https://api.github.com/users/Twinparadox/followers",
"following_url": "https://api.github.com/users/Twinparadox/following{/other_user}",
"gists_url": "https://api.github.com/users/Twinparadox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Twinparadox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Twinparadox/subscriptions",
"organizations_url": "https://api.github.com/users/Twinparadox/orgs",
"repos_url": "https://api.github.com/users/Twinparadox/repos",
"events_url": "https://api.github.com/users/Twinparadox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Twinparadox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | I am the one who is using your great project.
I'm trying to build my own ALBERT language model from scratch, following [this guide](https://mlcom.github.io/Create-Language-Model/).
This article seems similar to [your tutorial notebook](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=YZ9HSQxAAbme).
I have already built a BERT model for my language by following it, and it has shown satisfactory results.
```
BertForMaskedLM : about 0.6 loss
BertForSequenceClassification : accuracy 0.88 in my dataset(binary classification)
```
In order to use ALBERT, I trained a tokenizer with SentencePiece and then ran the pretraining.
My tokenizer gave good results, but my language model had a higher loss than my BERT model (about 2.7~2.8).
Since the loss suggested the result was bad, I checked the model with the fill-mask pipeline, and all the predictions came out the same.
Sentence A
```json
[{'score': 0.7783917188644409, 'token': 32002, 'token_str': '<pad>'}, {'score': 0.008062483742833138, 'token': 3, 'token_str': '.'}, {'score': 0.0054806191474199295, 'token': 4, 'token_str': ','}, ...
```
Sentence B
```json
[{'score': 0.7783915400505066, 'token': 32002, 'token_str': '<pad>'}, {'score': 0.008062485605478287, 'token': 3, 'token_str': '.'}, {'score': 0.005480623338371515, 'token': 4, 'token_str': ','}, ...
```
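The check itself was done with the fill-mask pipeline; a minimal sketch of it (the model/tokenizer path and the masked sentence are placeholders for my local files and data) is:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="./my-albert-pretrained",      # placeholder for my local ALBERT checkpoint
    tokenizer="./my-albert-pretrained",  # placeholder for my SentencePiece tokenizer
)
print(fill_mask("The weather today is [MASK]."))
```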
I want to solve this problem, but I couldn't find an answer even after reading a lot of articles.
I look forward to the opinions of the contributors and users of this project. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10758/comments | https://api.github.com/repos/huggingface/transformers/issues/10758/events | https://github.com/huggingface/transformers/issues/10758 | 833,348,275 | MDU6SXNzdWU4MzMzNDgyNzU= | 10,758 | Even slower when using multiple gpus with sharded_ddp | {
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`--sharded_ddp` is not there to accelerate your training, it's there to save GPU memory (for very large models) at some cost on the training time. So if you can finetune on one GPU, you should definitely use this option.",
"@sgugger as long as I know sharded_ddp is only for distributed training, not sure why you have suggested to use sharded_ddp on one GPU? ",
"I have not suggested that. I have said to just fine-tune your model on one GPU without any kind of DDP (so no `--sharded_ddp`). It does not make any sense to use this option if your model and its training can fit on one GPU as it is there to reduce GPU memory, not speed up training.",
"thanks a lot, now I understood what you meant.\n\nOn Mon, Mar 22, 2021 at 3:08 AM Sylvain Gugger ***@***.***>\nwrote:\n\n> I have not suggested that. I have said to just fine-tune your model on one\n> GPU without any kind of DDP (so no --sharded_ddp). It does not make any\n> sense to use this option if your model and its training can fit on one GPU\n> as it is there to reduce GPU memory, not speed up training.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10758#issuecomment-803714561>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMRT24IR4P3H5QAT753TE2RCVANCNFSM4ZJW56GA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help
Library:
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-cnn
## To reproduce
Steps to reproduce the behavior:
When I was trying to use sharded_ddp and multiple GPUs to accelerate the training process, the command I used was as follows:
CUDA_VISIBLE_DEVICES=6,7 python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \
--sharded_ddp --[other_args]
However, the experiment took even longer than the one using a single GPU. May I ask which part I did wrong?
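For reference, the single-GPU baseline I am comparing against was launched roughly like this (a sketch only; the real arguments are elided):
```bash
CUDA_VISIBLE_DEVICES=6 python finetune_trainer.py [other_args]
```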
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10758/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10757/comments | https://api.github.com/repos/huggingface/transformers/issues/10757/events | https://github.com/huggingface/transformers/issues/10757 | 833,329,189 | MDU6SXNzdWU4MzMzMjkxODk= | 10,757 | BERT for Regression predicts constant | {
"login": "victormaricato",
"id": 11489228,
"node_id": "MDQ6VXNlcjExNDg5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/11489228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victormaricato",
"html_url": "https://github.com/victormaricato",
"followers_url": "https://api.github.com/users/victormaricato/followers",
"following_url": "https://api.github.com/users/victormaricato/following{/other_user}",
"gists_url": "https://api.github.com/users/victormaricato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/victormaricato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/victormaricato/subscriptions",
"organizations_url": "https://api.github.com/users/victormaricato/orgs",
"repos_url": "https://api.github.com/users/victormaricato/repos",
"events_url": "https://api.github.com/users/victormaricato/events{/privacy}",
"received_events_url": "https://api.github.com/users/victormaricato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nI think this more a question to address on the forum https://discuss.huggingface.co/ as it doesn't looks like to be related to a bug in the library."
] | 1,615 | 1,616 | 1,616 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Ubuntu?
- Python version: 3.8
- Tensorflow version (GPU?): 2.4.1 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- albert, bert, xlm: @LysandreJik
- tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): BERT.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts:
I am using BERT for a regression task (targets in `]0,1]`) on deep (sequential) genomic data, similar to [DNABERT](https://github.com/jerryji1993/DNABERT) and [this medium post](https://towardsdatascience.com/bringing-bert-to-the-field-how-to-predict-gene-expression-from-corn-dna-9287af91fcf8).
My code is quite simple: basically I am using BERT from the library, averaging its outputs, and passing them to a head model.
```python
from typing import List

from tensorflow import nn
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling1D, Input
from transformers import BertConfig, TFBertModel

# KmerTokenizer is my own k-mer tokenizer class (import not shown here).


class Predictor(Model):
    def __init__(
        self, batch_size: int, sequence_size: int, hidden_layers: List[int], bert_params: dict
    ):
        super().__init__()
        self._embedder = Embedder(sequence_size, bert_params)
        self._head_model = _create_head_model(batch_size, hidden_layers, bert_params["hidden_size"])

    def call(self, inputs):
        # Pool the BERT token states into one vector, then regress with the MLP head.
        embedding = self._embedder(inputs)
        return self._head_model(embedding)


class Embedder(Model):
    def __init__(self, sequence_size: int, bert_params: dict):
        super().__init__()
        self._bert = _create_bert_model(sequence_size, bert_params)
        self.avg_pooling = GlobalAveragePooling1D()

    def call(self, sequence):
        x = self._bert(sequence)
        return self.avg_pooling(x.last_hidden_state)


def _create_bert_model(sequence_size: int, bert_params: dict) -> TFBertModel:
    tokenizer = KmerTokenizer.load()
    sequence_length = sequence_size - tokenizer.k
    config = BertConfig(
        vocab_size=tokenizer.vocab_size,
        max_position_embeddings=sequence_length,
        **bert_params,
    )
    return TFBertModel(config)


def _create_head_model(batch_size: int, hidden_layers: List[int], input_size: int):
    # Simple MLP head on top of the pooled embedding, ending in a single regression output.
    embedding = Input(shape=(input_size,), batch_size=batch_size)
    x = embedding
    for n_neurons in hidden_layers:
        x = Dense(n_neurons, activation=nn.gelu)(x)
    output = Dense(1)(x)
    return Model(embedding, output)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
```
X -> Tokenized sequence of integers ([1,5,10010, 2,200, 304,1001,535,341])
y -> Float, ]0,1]
```
My Y variable is distributed like this:

## Problem
The problem I am having is that my model predicts a constant value.
Scatter (y_true x y_pred)

Predictions histogram:

Parameters:
```yaml
batch_size: 16
training_steps: 20
sequence_size: 600
bert:
hidden_size: 32
num_attention_heads: 8
num_hidden_layers: 2
hidden_layers: [32]
early_stopping:
patience: 15
optimizer:
type: "RectifiedAdam"
lr: 0.0004
epsilon: 0.000001
beta_2: 0.98
total_steps: 100
weight_decay: 0.01
loss: "mean_squared_error"
```
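For completeness, a minimal sketch of how these settings are wired into compile/fit (assuming `tensorflow_addons` for RectifiedAdam; the fit call is only indicative) is:
```python
import tensorflow_addons as tfa

model = Predictor(
    batch_size=16,
    sequence_size=600,
    hidden_layers=[32],
    bert_params={"hidden_size": 32, "num_attention_heads": 8, "num_hidden_layers": 2},
)
optimizer = tfa.optimizers.RectifiedAdam(
    learning_rate=4e-4, epsilon=1e-6, beta_2=0.98, weight_decay=0.01, total_steps=100
)
model.compile(optimizer=optimizer, loss="mean_squared_error")
# model.fit(X_train, y_train, batch_size=16, validation_data=(X_val, y_val))
```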
## What I have tried
* Smaller/Larger learning rate
* Smaller/Larger batch size
* Shallower/Deeper network
* Changing Y distribution (Std Scaling)
* Mixture Density Networks
I simply cannot get past this constant prediction.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10756/comments | https://api.github.com/repos/huggingface/transformers/issues/10756/events | https://github.com/huggingface/transformers/issues/10756 | 833,237,157 | MDU6SXNzdWU4MzMyMzcxNTc= | 10,756 | Google Colab TypeError: expected str, bytes or os.PathLike object, not NoneType | {
"login": "lenyabloko",
"id": 55606,
"node_id": "MDQ6VXNlcjU1NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/55606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lenyabloko",
"html_url": "https://github.com/lenyabloko",
"followers_url": "https://api.github.com/users/lenyabloko/followers",
"following_url": "https://api.github.com/users/lenyabloko/following{/other_user}",
"gists_url": "https://api.github.com/users/lenyabloko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lenyabloko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lenyabloko/subscriptions",
"organizations_url": "https://api.github.com/users/lenyabloko/orgs",
"repos_url": "https://api.github.com/users/lenyabloko/repos",
"events_url": "https://api.github.com/users/lenyabloko/events{/privacy}",
"received_events_url": "https://api.github.com/users/lenyabloko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @lenyabloko, indeed, there is an issue with the online repository of `xlm-clm-ende-1024`. @sgugger is currently fixing it right now. \r\n\r\nThanks for letting us know, we'll let you know when it is fixed.",
"It should be fixed now, thanks to @sgugger: [`huggingface#e824d7b`](https://huggingface.co/xlm-clm-ende-1024/commit/e824d7bf481ebf027a50407dd378ad3de4031d90)"
] | 1,615 | 1,617 | 1,616 | NONE | null | ## Environment info
- `transformers` version: 4.4.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
The problem arises when using:
I started getting this error this morning without any changes on my side, just by loading my old Colab notebook (which worked a few hours ago without any problem!).
The code that breaks is:
```
tokenizer = AutoTokenizer.from_pretrained('xlm-clm-ende-1024')
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
417 else:
418 if tokenizer_class_py is not None:
--> 419 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
420 else:
421 raise ValueError(
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1703
1704 return cls._from_pretrained(
-> 1705 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1706 )
1707
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1774 # Instantiate tokenizer.
1775 try:
-> 1776 tokenizer = cls(*init_inputs, **init_kwargs)
1777 except OSError:
1778 raise OSError(
/usr/local/lib/python3.7/dist-packages/transformers/models/xlm/tokenization_xlm.py in __init__(self, vocab_file, merges_file, unk_token, bos_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens, lang2id, id2lang, do_lowercase_and_remove_accent, **kwargs)
645 self.encoder = json.load(vocab_handle)
646 self.decoder = {v: k for k, v in self.encoder.items()}
--> 647 with open(merges_file, encoding="utf-8") as merges_handle:
648 merges = merges_handle.read().split("\n")[:-1]
649 merges = [tuple(merge.split()[:2]) for merge in merges]
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## To reproduce
Steps to reproduce the behavior:
```
!pip install transformers
!pip install pytorch-transformers
!pip install tensorboardX
```
## Expected behavior
It all worked this morning without problems, and for many months before that; I have not touched the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10755/comments | https://api.github.com/repos/huggingface/transformers/issues/10755/events | https://github.com/huggingface/transformers/issues/10755 | 833,218,500 | MDU6SXNzdWU4MzMyMTg1MDA= | 10,755 | Online decoding for ASR | {
"login": "arkadyark",
"id": 4860115,
"node_id": "MDQ6VXNlcjQ4NjAxMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4860115?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arkadyark",
"html_url": "https://github.com/arkadyark",
"followers_url": "https://api.github.com/users/arkadyark/followers",
"following_url": "https://api.github.com/users/arkadyark/following{/other_user}",
"gists_url": "https://api.github.com/users/arkadyark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arkadyark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arkadyark/subscriptions",
"organizations_url": "https://api.github.com/users/arkadyark/orgs",
"repos_url": "https://api.github.com/users/arkadyark/repos",
"events_url": "https://api.github.com/users/arkadyark/events{/privacy}",
"received_events_url": "https://api.github.com/users/arkadyark/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\ncc @patrickvonplaten \r\n\r\nThanks!",
"Sure, no problem, sorry about that. There seems to be a typo in the forum link - for anybody reading this in the future here are the [forums:](https://discuss.huggingface.co/)"
] | 1,615 | 1,615 | 1,615 | NONE | null | # 🚀 Feature request
Are there plans to implement online decoding for the speech recognition models such as wav2vec2 and XLSR? More specifically, to be able to receive audio in short chunks, and output partial transcripts as they become available.
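For context, the closest thing possible today seems to be naive chunked decoding, where each chunk is transcribed independently (the checkpoint and chunking below are illustrative, and no state is carried across chunk boundaries):
```python
# Decode short audio chunks independently with a CTC model; this is not true streaming.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").eval()

def transcribe_chunk(audio_chunk, sampling_rate=16000):
    # audio_chunk: 1-D float array holding a few seconds of audio
    inputs = processor(audio_chunk, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs["input_values"]).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```
A real streaming API would additionally need to keep context across chunks and emit partial hypotheses.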
## Motivation
Many use cases are covered by the current wav2vec2 model in the library, involving batch recognition of pre-recorded text. However for an online application that wanted to continuously recognize speech on a live input stream, this may not be sufficient. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10754/comments | https://api.github.com/repos/huggingface/transformers/issues/10754/events | https://github.com/huggingface/transformers/issues/10754 | 833,190,985 | MDU6SXNzdWU4MzMxOTA5ODU= | 10,754 | run_clm.py gpt-2 training example in documentation runs out of memory on a 32gb v100, should be verified and/or modified | {
"login": "HodorTheCoder",
"id": 15326703,
"node_id": "MDQ6VXNlcjE1MzI2NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/15326703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HodorTheCoder",
"html_url": "https://github.com/HodorTheCoder",
"followers_url": "https://api.github.com/users/HodorTheCoder/followers",
"following_url": "https://api.github.com/users/HodorTheCoder/following{/other_user}",
"gists_url": "https://api.github.com/users/HodorTheCoder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HodorTheCoder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HodorTheCoder/subscriptions",
"organizations_url": "https://api.github.com/users/HodorTheCoder/orgs",
"repos_url": "https://api.github.com/users/HodorTheCoder/repos",
"events_url": "https://api.github.com/users/HodorTheCoder/events{/privacy}",
"received_events_url": "https://api.github.com/users/HodorTheCoder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The doc has not been updated since a while ago so it's probably not up to date yes. I think the corresponding script probably had different defaults in earlier versions (either a shorter sequence length or a shorter batch size).\r\n\r\nAs for your second question, the script does work with torch.distributed.launch without changes. See the [main examples README](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision) for more information.",
"OK. Thanks for validating, that's kind of what I figured.\r\n\r\nSecondly, thank you-- I think I was expecting it to split the batch for me. So in the example on the page you sent:\r\n\r\n```\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node 8 text-classification/run_glue.py \\\r\n --model_name_or_path bert-large-uncased-whole-word-masking \\\r\n --task_name mnli \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size 8 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir /tmp/mnli_output/\r\n```\r\n\r\nthat would be the equivalent of running a total batch size of 64 on a single GPU?\r\n\r\n(`per_device_train_batch_size=8`) * (`nproc_per_node=8`) = `64`\r\n\r\nMuch appreciated.",
"Yes, that's correct!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
- Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: aye!
- Using distributed or parallel set-up in script?: no
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- benchmarks: @patrickvonplaten
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): running the run_clm.py fine-tuning script on gpt-2
The problem arises when using:
* [x] the official example scripts: tranformers/example/language-modeling/run_clm.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) Using the official huggingface wikitext-2-raw-v1 dataset
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Using the example here:
https://github.com/huggingface/transformers/tree/master/examples/language-modeling
When fine-tuning gpt-2 with run_clm.py, this should run on a k80 (24gb of RAM) in about an hour according to the example. However, I'm running out of memory with default settings... and I'm using a v100 inside a DXG-9, with 32gb of memory:
```
nvidia-smi
Tue Mar 16 15:19:23 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 |
| N/A 34C P0 44W / 300W | 0MiB / 32480MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
Pasting the command here for clarity.
```
python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
and the resulting error:
`RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 31.72 GiB total capacity; 30.32 GiB already allocated; 187.88 MiB free; 30.38 GiB reserved in total by PyTorch)`
That's a ton of memory... is this right? Or is there some type of memory leak?
Now, this can be fixed by setting `--per_device_train_batch_size 4`, but I highly doubt the example as currently written will work out of the box on a k80 without changing anything (which I can't test because I don't have access to one) using the default batch size of `8`, so this should be reflected in the example and/or verified with the current `run_clm.py`.
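(Side note: if the goal is just to keep the effective batch size at the default 8 while using less memory, gradient accumulation should do it, e.g. `--per_device_train_batch_size 4 --gradient_accumulation_steps 2`. A rough sketch of the equivalent `TrainingArguments`, with placeholder values:)

```python
from transformers import TrainingArguments

# Sketch only: 4 samples per step * 2 accumulation steps = effective batch of 8
# per device, with roughly half the activation memory of batch size 8.
training_args = TrainingArguments(
    output_dir="/tmp/test-clm",          # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    do_train=True,
    do_eval=True,
)
```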
Now, on a v100, it did finish in a little under 14 minutes, which is incredibly fast-- so I'm not complaining-- but I know batch size should be as high as possible on these things to get the best results, and I was really hoping it would work with 8 (in fact I was hoping I could jack it up to 16 by wrapping it in nn.torchDataParallel, but that's for another day.)
This leads me to another question-- I know you can do torch.distributed.launch with these scripts, but is there one that wraps the model in `nn.parallel.DistributedDataParallel` so that you can chunk a larger batch size across multiple GPUs and utilize the extra memory, or should this be done by hand? If so, maybe I will create a PR and add an option for this inside the three example scripts, as it would be quite beneficial. Example:
```
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.local_rank],
output_device=args.local_rank)
```
Results:
```
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 820.2706, 'train_samples_per_second': 2.121, 'epoch': 3.0}
100%|##########################################################################################################################| 1740/1740 [13:40<00:00, 2.12it/s]
[INFO|trainer.py:1408] 2021-03-16 15:39:31,084 >> Saving model checkpoint to /tmp/test-clm
[INFO|configuration_utils.py:304] 2021-03-16 15:39:31,085 >> Configuration saved in /tmp/test-clm/config.json
[INFO|modeling_utils.py:817] 2021-03-16 15:39:32,049 >> Model weights saved in /tmp/test-clm/pytorch_model.bin
03/16/2021 15:39:32 - INFO - __main__ - ***** Train results *****
03/16/2021 15:39:32 - INFO - __main__ - epoch = 3.0
03/16/2021 15:39:32 - INFO - __main__ - train_runtime = 820.2706
03/16/2021 15:39:32 - INFO - __main__ - train_samples_per_second = 2.121
03/16/2021 15:39:32 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:1600] 2021-03-16 15:39:32,118 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-03-16 15:39:32,118 >> Num examples = 240
[INFO|trainer.py:1602] 2021-03-16 15:39:32,119 >> Batch size = 8
100%|##############################################################################################################################| 30/30 [00:08<00:00, 3.52it/s]
03/16/2021 15:39:40 - INFO - __main__ - ***** Eval results *****
03/16/2021 15:39:40 - INFO - __main__ - perplexity = 20.967772820757663
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10753/comments | https://api.github.com/repos/huggingface/transformers/issues/10753/events | https://github.com/huggingface/transformers/pull/10753 | 833,186,903 | MDExOlB1bGxSZXF1ZXN0NTk0MjI4ODcy | 10,753 | [DeepSpeed] ZeRO Stage 3 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"OK, the code-base is ready for review.\r\n\r\nI want to add a few more performance notes to the docs tomorrow.\r\n\r\nI will work on the wasteful weights init/preloading/ovewriting/resuming in a separate PR next, as it's all intertwined and will also look at how to make `from_pretrained` support all those different ways (i.e. it's not just deepspeed-specific). This is what we started discussing at https://github.com/huggingface/transformers/issues/10893 and much earlier https://github.com/huggingface/transformers/issues/9205\r\n \r\nThank you!",
"@sgugger, the docs are ready for your review when you have a bit of time.\r\n\r\nI added some unrelated to ZeRO3 installation notes to both fairscale and deepspeed while at it.\r\n\r\nThank you!"
] | 1,615 | 1,617 | 1,617 | CONTRIBUTOR | null | This PR implements DeepSpeed ZeRO stage 3 integration:
* [x] removes the "wind-down" of the deepspeed setup at the end of train, since zero3 can't do inference w/o this setup - we will have some other ways to reclaim memory for the no longer needed optimizer in the future.
* [x] adds initial support for eval w/o train - more work will be done in the future
* [x] to support `predict_with_generate`, extends `generate` and its 5 beam search variants to support a new `synced_gpus` flag which is needed by ZeRO stage3 - under ZeRO3 parallelization, if this gpu finished before max_length was reached, it must continue running forward so that the other gpus, which may not have finished their generate yet, can complete, as they rely on this gpu for its slice of params. Currently deployed for DeepSpeed - but may need to do the same for fairscale elsewhere.
* [x] because now we are forced to run all gpus in sync, the `generate` logic is also now equipped with a stopping early mechanism that is synchronized across all participating gpus
* [x] reworks how pretrained model is loaded - `from_pretrained` is now zero3 aware and does a whole lot to efficiently preload massive models
* [x] since `state_dict` is fake under `zero3` it can't be saved and used - so care is taken to either not save the bogus model, or the weights get reconsolidated before saving if `stage3_gather_fp16_weights_on_model_save` is enabled
* [x] adds new DeepSpeed configuration docs and basic tuning recommendation
* [x] adds lots of new tests, now testing zero2 and zero3 separately
* [x] fixes a disappearing std stream problem in an older test using a workaround
* [x] a new DS feature: `deepspeed.zero.register_external_parameter(self, self.layer1.weight)` - haven't needed it so far - need to find which models may need this feature. This is needed for when a layer accesses weights of another layer, but most of our models don't do that, so it is just documented for now.
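For illustration only, the `stage3_gather_fp16_weights_on_model_save` flag mentioned above lives under `zero_optimization` in the config; a deliberately minimal sketch of a stage-3 config (real configs also need the optimizer/scheduler/fp16 sections tuned for the job) could look like:

```python
import json

# Minimal, incomplete ZeRO stage-3 config sketch, written out as the JSON
# file that gets passed to the trainer via --deepspeed
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "stage3_gather_fp16_weights_on_model_save": True,
    },
}

with open("ds_config_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```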
DeepSpeed PRs that need to be merged and a new release is made 0.3.14:
* [x] https://github.com/microsoft/DeepSpeed/pull/881 (memory issue)
* [x] https://github.com/microsoft/DeepSpeed/pull/884 support lists of params in `GatheredParameters` (needed for pretrained model load)
* [x] https://github.com/microsoft/DeepSpeed/pull/892 script to extract consolidated fp32 weights for zero2 and zero3
* [x] https://github.com/microsoft/DeepSpeed/pull/893 save the consolidated fp16 weights under zero3
* [x] https://github.com/microsoft/DeepSpeed/pull/896 leak memory fix needed for tests
* [x] `deepspeed==0.3.14` released
May be:
* [x] https://github.com/microsoft/DeepSpeed/pull/882 - this needs to be changed to be optional - won't be efficient by default
Future PR TODO:
* [ ] make loading and resuming more efficient - gotta find a way to not preload the model from weights when we are resuming from a checkpoint. and of course not init the weights. Currently we do it 3 times! A huge overhead for big models.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10753/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/10753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10753",
"html_url": "https://github.com/huggingface/transformers/pull/10753",
"diff_url": "https://github.com/huggingface/transformers/pull/10753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10753.patch",
"merged_at": 1617900781000
} |
https://api.github.com/repos/huggingface/transformers/issues/10752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10752/comments | https://api.github.com/repos/huggingface/transformers/issues/10752/events | https://github.com/huggingface/transformers/pull/10752 | 833,146,295 | MDExOlB1bGxSZXF1ZXN0NTk0MTk1MDUz | 10,752 | Patches full import failure when sentencepiece is not installed | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | MEMBER | null | The `M2M100Tokenizer` and `DebertaV2Tokenizer` should be under the `if is_sentencepiece_available()` check.
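For context, this is the kind of availability check involved (an illustrative sketch of how a caller can guard the same imports; the actual fix moves the tokenizers under the equivalent check in the package init, which falls back to dummy objects rather than `None`):

```python
from transformers.file_utils import is_sentencepiece_available

if is_sentencepiece_available():
    # only import these when the sentencepiece backend is installed
    from transformers import DebertaV2Tokenizer, M2M100Tokenizer
else:
    DebertaV2Tokenizer = M2M100Tokenizer = None  # stand-in for the dummy objects
```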
cc @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10752/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10752",
"html_url": "https://github.com/huggingface/transformers/pull/10752",
"diff_url": "https://github.com/huggingface/transformers/pull/10752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10752.patch",
"merged_at": 1615924700000
} |
https://api.github.com/repos/huggingface/transformers/issues/10751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10751/comments | https://api.github.com/repos/huggingface/transformers/issues/10751/events | https://github.com/huggingface/transformers/issues/10751 | 833,143,581 | MDU6SXNzdWU4MzMxNDM1ODE= | 10,751 | Tensorflow Keras model.loads_weights() breaks on TFElectraModel trained with v4.3.0 | {
"login": "buoi",
"id": 38630200,
"node_id": "MDQ6VXNlcjM4NjMwMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/38630200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buoi",
"html_url": "https://github.com/buoi",
"followers_url": "https://api.github.com/users/buoi/followers",
"following_url": "https://api.github.com/users/buoi/following{/other_user}",
"gists_url": "https://api.github.com/users/buoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buoi/subscriptions",
"organizations_url": "https://api.github.com/users/buoi/orgs",
"repos_url": "https://api.github.com/users/buoi/repos",
"events_url": "https://api.github.com/users/buoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/buoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for opening this issue!\r\n\r\nI have not access to a computer to extract and read you archive, can you, please, share a Colab or copy/paste the code here in this issue. Thanks!",
"> Thanks for opening this issue!\r\n> \r\n> I have not access to a computer to extract and read you archive, can you, please, share a Colab or copy/paste the code here in this issue. Thanks!\r\n\r\nyes sure: https://colab.research.google.com/drive/1tHzUMhveYwYkPOCxZwMA80IkVkRCWGRW?usp=sharing\r\n\r\n#### To Reproduce\r\n\r\n1. \"run all\" on attached notebook, wait for weights save after (very short) training\r\n\r\n2. change pip tranasformers version to 4.4.0 in the first cell\r\n3. comment model.fit to avoid override of weights\r\n4. restart an run all, wait for model.load_weights to fail\r\n",
"Sorry, I cannot open your Colab :( There is a restricted access",
"Sorry, access at the same link is now allowed. :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@jplu
## Information
Model I am using TFElectraModel:
The problem arises when using:
* my own modified scripts: see attachment
The tasks I am working on is:
* Information Retrieval on SQuAD dataset
## To reproduce
Steps to reproduce the behavior:
1. train a Keras model with a TFElectraModel as a layer on transformers 4.3.0.
2. switch to transformers 4.4.0
3. call model.load_weights on the keras model
OR:
1. "run all" on attached notebook, wait for weights save after (very short) training
2. change the pip transformers version to 4.4.0 in the first cell
3. comment out model.fit to avoid overriding the weights
4. restart and run all, wait for model.load_weights to fail
[IR_TFElectraModel.ipynb.zip](https://github.com/huggingface/transformers/files/6151831/IR_TFElectraModel.ipynb.zip)
ValueError: Cannot assign to variable tf_electra_model_1/electra/embeddings/token_type_embeddings/embeddings:0 due to variable shape (2, 128) and value shape (512, 128) are incompatible
## Expected behavior
I would expect the weights to load as they correctly do when both training and loading are performed on transformers 4.3.0.
Anyway, keep up the great work! 🤗
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10750/comments | https://api.github.com/repos/huggingface/transformers/issues/10750/events | https://github.com/huggingface/transformers/pull/10750 | 833,119,183 | MDExOlB1bGxSZXF1ZXN0NTk0MTcyMjQ4 | 10,750 | Patches the full import failure and adds a test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | MEMBER | null | The full import currently fails because some layers are imported when they do not exist.
This adds a test in `test_file_utils.py` by trying to import the entire transformers. This failed before the proposed fix.
Fixes https://github.com/huggingface/transformers/issues/10749 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10750/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10750",
"html_url": "https://github.com/huggingface/transformers/pull/10750",
"diff_url": "https://github.com/huggingface/transformers/pull/10750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10750.patch",
"merged_at": 1615923472000
} |
https://api.github.com/repos/huggingface/transformers/issues/10749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10749/comments | https://api.github.com/repos/huggingface/transformers/issues/10749/events | https://github.com/huggingface/transformers/issues/10749 | 833,014,813 | MDU6SXNzdWU4MzMwMTQ4MTM= | 10,749 | bug in new version 4.4.0 sentencepiece is not available | {
"login": "ashaheedq",
"id": 22966206,
"node_id": "MDQ6VXNlcjIyOTY2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/22966206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashaheedq",
"html_url": "https://github.com/ashaheedq",
"followers_url": "https://api.github.com/users/ashaheedq/followers",
"following_url": "https://api.github.com/users/ashaheedq/following{/other_user}",
"gists_url": "https://api.github.com/users/ashaheedq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashaheedq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashaheedq/subscriptions",
"organizations_url": "https://api.github.com/users/ashaheedq/orgs",
"repos_url": "https://api.github.com/users/ashaheedq/repos",
"events_url": "https://api.github.com/users/ashaheedq/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashaheedq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for opening an issue. Could you respect the issue template? What code led to this error? What's your environment?\r\n\r\nI can run the following without any issues:\r\n```py\r\n>>> from transformers import IBertModel\r\n>>> model = IBertModel.from_pretrained(\"kssteven/ibert-roberta-base\")\r\n```\r\n\r\nThank you for your understanding.",
"> Hi, thank you for opening an issue. Could you respect the issue template? What code led to this error? What's your environment?\r\n> \r\n> I can run the following without any issues:\r\n> \r\n> ```python\r\n> >>> from transformers import IBertModel\r\n> >>> model = IBertModel.from_pretrained(\"kssteven/ibert-roberta-base\")\r\n> ```\r\n> \r\n> Thank you for your understanding.\r\n\r\nThanks for replying, \r\n\r\nI am running this Colab notebook \r\nhttps://colab.research.google.com/drive/1M0ls7EPUi1dwqIDh6HNfJ5y826XvcgGX?usp=sharing\r\n\r\nYou can reproduce by running all cells and the error will appear on the import cell. \r\n``` python\r\n# (1)load libraries \r\nimport json, sys, regex\r\nimport torch\r\nimport GPUtil\r\nimport torch.nn as nn\r\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler\r\nfrom keras.preprocessing.sequence import pad_sequences\r\nfrom sklearn.model_selection import train_test_split\r\nfrom pytorch_pretrained_bert import BertTokenizer, BertConfig, BertAdam, BertForSequenceClassification\r\nfrom tqdm import tqdm, trange\r\nimport pandas as pd\r\nimport os\r\nimport numpy as np\r\nfrom sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score, classification_report, confusion_matrix\r\n##----------------------------------------------------\r\nfrom transformers import *\r\nfrom transformers import XLMRobertaConfig\r\nfrom transformers import XLMRobertaModel\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\nfrom transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaModel\r\nfrom tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors\r\nfrom transformers import AdamW, get_linear_schedule_with_warmup\r\nfrom transformers import AutoTokenizer, AutoModel\r\n```",
"Thank you, I can reproduce. We'll release a patch for this in the coming days.\r\n\r\nBy the way, is there a reason you're importing everything from `transformers`, before importing specific layers?\r\n```py\r\nfrom transformers import *\r\nfrom transformers import XLMRobertaConfig\r\nfrom transformers import XLMRobertaModel\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\nfrom transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaModel\r\nfrom tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors\r\nfrom transformers import AdamW, get_linear_schedule_with_warmup\r\nfrom transformers import AutoTokenizer, AutoModel\r\n```",
"We just released version v4.4.1 with a patch for this. Thank you for letting us know!"
] | 1,615 | 1,615 | 1,615 | NONE | null | Hi, I am using Colab for a sentiment analysis model.
The code suddenly stopped working after a fresh run from yesterday.
I noticed that a new version of transformers was released which caused this issue to appear.
When trying to import, I get this error message:
`ModuleNotFoundError: No module named 'sentencepiece'`
and after installing sentencepiece using pip I get this error message:
`AttributeError: module transformers.models.ibert has no attribute IBertLayer` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10749/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10748/comments | https://api.github.com/repos/huggingface/transformers/issues/10748/events | https://github.com/huggingface/transformers/pull/10748 | 832,922,044 | MDExOlB1bGxSZXF1ZXN0NTk0MDA1MzMw | 10,748 | Fix URLs from #10744 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | COLLABORATOR | null | # What does this PR do?
Forgot the `resolve/main` in the URLs in #10744 (cc @julien-c)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10748",
"html_url": "https://github.com/huggingface/transformers/pull/10748",
"diff_url": "https://github.com/huggingface/transformers/pull/10748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10748.patch",
"merged_at": 1615908689000
} |
https://api.github.com/repos/huggingface/transformers/issues/10747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10747/comments | https://api.github.com/repos/huggingface/transformers/issues/10747/events | https://github.com/huggingface/transformers/issues/10747 | 832,913,545 | MDU6SXNzdWU4MzI5MTM1NDU= | 10,747 | Issues with MODEL_FOR_MASKED_LM_MAPPING.keys(), and transformer.utils.check_min_version() | {
"login": "RasmusEdvardsen",
"id": 15195622,
"node_id": "MDQ6VXNlcjE1MTk1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/15195622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RasmusEdvardsen",
"html_url": "https://github.com/RasmusEdvardsen",
"followers_url": "https://api.github.com/users/RasmusEdvardsen/followers",
"following_url": "https://api.github.com/users/RasmusEdvardsen/following{/other_user}",
"gists_url": "https://api.github.com/users/RasmusEdvardsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RasmusEdvardsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RasmusEdvardsen/subscriptions",
"organizations_url": "https://api.github.com/users/RasmusEdvardsen/orgs",
"repos_url": "https://api.github.com/users/RasmusEdvardsen/repos",
"events_url": "https://api.github.com/users/RasmusEdvardsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/RasmusEdvardsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Have you checked the [\"Important note\" at the top of the examples README](https://github.com/huggingface/transformers/tree/master/examples#important-note)? \r\n\r\nDid you get this error with a source install?",
"@LysandreJik that does the trick - I'll be more thorough in the future with my reading :)"
] | 1,615 | 1,615 | 1,615 | NONE | null | Hey,
I just recently wanted to pre-train on top of a BERT model, and ran into some issues. When I run `python run_mlm.py`,
I get the following error:
```
Traceback (most recent call last):
File "run_mlm.py", line 46, in <module>
from transformers.utils import check_min_version
ImportError: cannot import name 'check_min_version' from 'transformers.utils' (/PATH/TO/site-packages/transformers/utils/__init__.py)
```
https://github.com/huggingface/transformers/blob/d3d388b934ef515e96246ba643c924d675f6515d/examples/language-modeling/run_mlm.py#L46
After commenting out that line and its import (I know, shame on me), I get the following error:
```
Traceback (most recent call last):
File "run_mlm.py", line 53, in <module>
MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
https://github.com/huggingface/transformers/blob/d3d388b934ef515e96246ba643c924d675f6515d/examples/language-modeling/run_mlm.py#L54
Tried with python 3.7.10 and 3.8.3
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10746/comments | https://api.github.com/repos/huggingface/transformers/issues/10746/events | https://github.com/huggingface/transformers/pull/10746 | 832,875,116 | MDExOlB1bGxSZXF1ZXN0NTkzOTY1Mzcy | 10,746 | Add DistributedSamplerWithLoop | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | COLLABORATOR | null | # What does this PR do?
This PR adds a new distributed sampler that will provide a number of samples that is a round multiple of the batch size on all processes, by looping back to the beginning of the (shuffled) dataset. This is useful:
- for TPUs to avoid triggering a new XLA compilation for the last training batch
- for model parallelism to have batches of the same size on all processes
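The idea, in a rough sketch (just to illustrate the looping behaviour, not the actual implementation):

```python
from torch.utils.data.distributed import DistributedSampler

class LoopingDistributedSampler(DistributedSampler):
    """Sketch: pad each process's index list by wrapping back to the start of
    its (shuffled) shard so its length is a round multiple of batch_size."""

    def __init__(self, dataset, batch_size, **kwargs):
        super().__init__(dataset, **kwargs)
        self.batch_size = batch_size

    def __iter__(self):
        indices = list(super().__iter__())
        remainder = len(indices) % self.batch_size
        if remainder != 0:
            # loop back to the beginning to complete the last batch
            indices += indices[: self.batch_size - remainder]
        return iter(indices)
```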
This PR also refactors some logic regarding the world_size and process_rank in the `TrainingArguments`, and adds a test of the new `DistributedSamplerWithLoop`.
Tested on:
- single-GPU
- multi-GPU
- TPU
- SageMaker MP | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10746",
"html_url": "https://github.com/huggingface/transformers/pull/10746",
"diff_url": "https://github.com/huggingface/transformers/pull/10746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10746.patch",
"merged_at": 1615908159000
} |
https://api.github.com/repos/huggingface/transformers/issues/10745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10745/comments | https://api.github.com/repos/huggingface/transformers/issues/10745/events | https://github.com/huggingface/transformers/pull/10745 | 832,872,210 | MDExOlB1bGxSZXF1ZXN0NTkzOTYyOTMy | 10,745 | fix M2M100 example | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10745",
"html_url": "https://github.com/huggingface/transformers/pull/10745",
"diff_url": "https://github.com/huggingface/transformers/pull/10745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10745.patch",
"merged_at": 1615906201000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10744/comments | https://api.github.com/repos/huggingface/transformers/issues/10744/events | https://github.com/huggingface/transformers/pull/10744 | 832,872,035 | MDExOlB1bGxSZXF1ZXN0NTkzOTYyNzc3 | 10,744 | Remove old links to CDN | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"those are not valid files though (check the url template for the other models)",
"Oopsie, thanks for flagging!"
] | 1,615 | 1,615 | 1,615 | COLLABORATOR | null | # What does this PR do?
This PR removes a few links left pointing to `https://cdn.huggingface.co` instead of `https://huggingface.co` (purely cosmetic, since they are normally not actually used anymore). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10744/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10744",
"html_url": "https://github.com/huggingface/transformers/pull/10744",
"diff_url": "https://github.com/huggingface/transformers/pull/10744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10744.patch",
"merged_at": 1615906134000
} |
https://api.github.com/repos/huggingface/transformers/issues/10743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10743/comments | https://api.github.com/repos/huggingface/transformers/issues/10743/events | https://github.com/huggingface/transformers/pull/10743 | 832,865,921 | MDExOlB1bGxSZXF1ZXN0NTkzOTU3NTk4 | 10,743 | Fix DeBERTa + Conversational pipeline slow tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,615 | 1,615 | 1,615 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10743/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10743",
"html_url": "https://github.com/huggingface/transformers/pull/10743",
"diff_url": "https://github.com/huggingface/transformers/pull/10743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10743.patch",
"merged_at": 1615907900000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10742/comments | https://api.github.com/repos/huggingface/transformers/issues/10742/events | https://github.com/huggingface/transformers/issues/10742 | 832,824,859 | MDU6SXNzdWU4MzI4MjQ4NTk= | 10,742 | DialoGPT- cannot increase number of conversation turns | {
"login": "albusdemens",
"id": 276459,
"node_id": "MDQ6VXNlcjI3NjQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/276459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albusdemens",
"html_url": "https://github.com/albusdemens",
"followers_url": "https://api.github.com/users/albusdemens/followers",
"following_url": "https://api.github.com/users/albusdemens/following{/other_user}",
"gists_url": "https://api.github.com/users/albusdemens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albusdemens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albusdemens/subscriptions",
"organizations_url": "https://api.github.com/users/albusdemens/orgs",
"repos_url": "https://api.github.com/users/albusdemens/repos",
"events_url": "https://api.github.com/users/albusdemens/events{/privacy}",
"received_events_url": "https://api.github.com/users/albusdemens/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @albusdemens,\r\n\r\nSorry I don't follow here completely. Does the model crash after 6 turns or does it just give a qualitatively bad answer?",
"Hey @patrickvonplaten, the second option (the quality of the answers is noticeably worse). Often I get outputs like `?!!`, `!.!!?!?` and similar. Usually low quality outputs don't show up in the first 3-4 conversation turns. When I use the Microsoft model instead, I don't get low-quality results after a few conversation turns. \r\n\r\n Is there a way I can fix this?",
"This sounds very much like the model wasn't trained on long conversations to me...I'm not sure whether it's possible to enforce better quality without retraining the model",
"Thanks for your reply! Besides improving the quality of the training data,\ndo you think I should also increase the number of epochs?\n\nOn Tue, Mar 30, 2021, 6:15 AM Patrick von Platen ***@***.***>\nwrote:\n\n> This sounds very much like the model wasn't trained on long conversations\n> to me...I'm not sure whether it's possible to enforce better quality\n> without retraining the model\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10742#issuecomment-809942835>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AACDP25QF63SYDYBES5FT5TTGFUB5ANCNFSM4ZITFYLA>\n> .\n>\n"
] | 1,615 | 1,617 | 1,617 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): DialoGPT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I fine-tuned DialoGPT on some sitcom subtitles, and now I am trying to chat with it using the commands listed [here](https://gist.github.com/albusdemens/9cd5602f088720e403f84038e088d696) (in the example, I use the Microsoft fine-tuned model).
1. When I increase the number of conversation turns from 5 to 10, everything goes OK with the Microsoft model.
2. If I increase the number of conversation turns to 10 using my fine-tuned model, reply number six is something like
```
>> User:hello
OurBot: Hello.
>> User:how are things?
OurBot: They're fine.
>> User:cool. Did you have lunch already?
OurBot: I did, actually.
>> User:what did you have?
OurBot: Oh, um, i just wanted to toast.
>> User:where?
**OurBot: !!!hello!**
```
## Expected behavior
Using the DialoGPT model I fine-tuned, I would like to be able to have the same number of turns that I have using the Microsoft model.
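For anyone who cannot open the gist, the loop I mean is along the lines of the standard DialoGPT example sketched below (the gist may differ in details such as the sampling settings; the model path is a placeholder for my fine-tuned checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path-to-my-finetuned-dialogpt"  # placeholder; "microsoft/DialoGPT-small" reproduces the baseline
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

chat_history_ids = None
for step in range(10):  # 10 conversation turns
    user_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("OurBot: {}".format(reply))
```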
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10742/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10741/comments | https://api.github.com/repos/huggingface/transformers/issues/10741/events | https://github.com/huggingface/transformers/pull/10741 | 832,755,600 | MDExOlB1bGxSZXF1ZXN0NTkzODY0MjU4 | 10,741 | Fix S2T example | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No worries, the doctests will catch that when they're re-enabled! Hopefully sooner rather than later."
] | 1,615 | 1,615 | 1,615 | MEMBER | null | cc @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10741/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10741",
"html_url": "https://github.com/huggingface/transformers/pull/10741",
"diff_url": "https://github.com/huggingface/transformers/pull/10741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10741.patch",
"merged_at": 1615899307000
} |
https://api.github.com/repos/huggingface/transformers/issues/10740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10740/comments | https://api.github.com/repos/huggingface/transformers/issues/10740/events | https://github.com/huggingface/transformers/issues/10740 | 832,670,610 | MDU6SXNzdWU4MzI2NzA2MTA= | 10,740 | BigBird | {
"login": "slvcsl",
"id": 25265140,
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slvcsl",
"html_url": "https://github.com/slvcsl",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi @slvcsl \r\n\r\n`BigBird` is till WIP and not yet added into the lib,. You can follow the progress here #10183\r\n\r\n",
"Thank you for your response. I'll check it out!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.30
- Python version: 3.9.2
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help
@patrickvonplaten, @patil-suraj (guessing)
## Information
Model I am using **BigBird**:
The problem arises when using: the official example scripts: **seq2seq**
The tasks I am working on is: the official **BigPatent** dataset
## To reproduce
I'm trying to use BigBird for a summarization task (on the BigPatent dataset). I'm using the official seq2seq script, which I run as (all lengths/batches are small for testing)
```
python run_seq2seq.py \
--model_name_or_path google/bigbird-roberta-base \
--dataset_name big_patent \
--max_source_length 3 \
--max_target_length 3 \
--val_max_target_length 3 \
--do_eval --do_predict \
--per_gpu_train_batch_size 1 \
--per_gpu_eval_batch_size 1 \
--num_train_epochs 1 \
--output_dir tmp
```
However, I get the following error message:
```
Traceback (most recent call last):
File "/home/scasola/factuality/factuality/transformers/examples/seq2seq/run_seq2seq.py", line 657, in <module>
main()
File "/home/scasola/factuality/factuality/transformers/examples/seq2seq/run_seq2seq.py", line 344, in main
config = AutoConfig.from_pretrained(
File "/home/scasola/anaconda3/envs/factuality/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 382, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'big_bird'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10740/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10739/comments | https://api.github.com/repos/huggingface/transformers/issues/10739/events | https://github.com/huggingface/transformers/issues/10739 | 832,664,582 | MDU6SXNzdWU4MzI2NjQ1ODI= | 10,739 | Tokenizer becomes very slow after adding new tokens | {
"login": "shauli-ravfogel",
"id": 14981791,
"node_id": "MDQ6VXNlcjE0OTgxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/14981791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauli-ravfogel",
"html_url": "https://github.com/shauli-ravfogel",
"followers_url": "https://api.github.com/users/shauli-ravfogel/followers",
"following_url": "https://api.github.com/users/shauli-ravfogel/following{/other_user}",
"gists_url": "https://api.github.com/users/shauli-ravfogel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauli-ravfogel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauli-ravfogel/subscriptions",
"organizations_url": "https://api.github.com/users/shauli-ravfogel/orgs",
"repos_url": "https://api.github.com/users/shauli-ravfogel/repos",
"events_url": "https://api.github.com/users/shauli-ravfogel/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauli-ravfogel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @shauli-ravfogel, this should have been fixed on `master`. Can you try installing from source and let me know if it works better?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,621 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Hi,
When I try to add a large number (50k) of new tokens to BERT's tokenizer, the tokenizer becomes very slow, taking 29 seconds to tokenize a single short sentence.
## To reproduce
```
from transformers import BertTokenizer
import time
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
new_tokens = ["aaa"+str(i) for i in range(50000)]
tokenizer.add_tokens(new_tokens) # takes some time
sentence = "a short sentence."
start = time.time()
tokenizer.tokenize(sentence)
print(time.time() - start)
> 29.049
```
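For comparison, here is the same timing check with the fast (Rust-backed) `BertTokenizerFast`. This is purely a diagnostic sketch to see whether the slowdown is specific to the Python implementation; the class name is the only change from the snippet above.
```
from transformers import BertTokenizerFast
import time

# same setup as above, but with the Rust-backed tokenizer
fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
fast_tokenizer.add_tokens(["aaa" + str(i) for i in range(50000)])

start = time.time()
fast_tokenizer.tokenize("a short sentence.")
print(time.time() - start)
```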
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10739/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10738/comments | https://api.github.com/repos/huggingface/transformers/issues/10738/events | https://github.com/huggingface/transformers/issues/10738 | 832,657,611 | MDU6SXNzdWU4MzI2NTc2MTE= | 10,738 | load wav2vec model from local path | {
"login": "roboticsai",
"id": 19629749,
"node_id": "MDQ6VXNlcjE5NjI5NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/19629749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roboticsai",
"html_url": "https://github.com/roboticsai",
"followers_url": "https://api.github.com/users/roboticsai/followers",
"following_url": "https://api.github.com/users/roboticsai/following{/other_user}",
"gists_url": "https://api.github.com/users/roboticsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roboticsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roboticsai/subscriptions",
"organizations_url": "https://api.github.com/users/roboticsai/orgs",
"repos_url": "https://api.github.com/users/roboticsai/repos",
"events_url": "https://api.github.com/users/roboticsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/roboticsai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi @roboticsai \r\n\r\n`from_pretrained` expects the path of a directory where it can find the `config.json` and `pytorch_model.bin` files. It seems that you haven't saved the model using `save_pretrained`. \r\n\r\nTo use `from_pretrained`, the model should first be saved with the `save_pretrained` method.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | I'm trying to run the wav2vec based ASR on my machine. This is my code:
```
import soundfile as sf
import torch
from transformers import Wav2Vec2ForMaskedLM, Wav2Vec2Tokenizer
# load pretrained model
cp = "./my_model_directory/wav2vec_small.pt"
tokenizer = Wav2Vec2Tokenizer.from_pretrained(cp)
model = Wav2Vec2ForMaskedLM.from_pretrained(cp)
# load audio
audio_input, _ = sf.read("/home/robot/Music/dinesh.flac")
# transcribe
input_values = tokenizer(audio_input, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(transcription)
```
Here cp is the path to the local wav2vec model file. But when I try to run this, I get the error:
`- or './my_model_directory' is the correct path to a directory containing relevant tokenizer files`
When I use the model from the Hub instead, e.g. cp = "facebook/wav2vec2-base-960h", this works perfectly.
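For reference, here is a minimal sketch of the save-then-load flow that `from_pretrained` seems to expect (the directory name is just an example, and this assumes a one-time download from the Hub):
```
from transformers import Wav2Vec2ForMaskedLM, Wav2Vec2Tokenizer

local_dir = "./my_model_directory"  # any writable local folder

# one-time download from the Hub, then save locally
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForMaskedLM.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# afterwards, load from that directory without touching the Hub
tokenizer = Wav2Vec2Tokenizer.from_pretrained(local_dir)
model = Wav2Vec2ForMaskedLM.from_pretrained(local_dir)
```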
Isn't it possible to run the transformers wav2vec model without the cloud? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10738/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10737/comments | https://api.github.com/repos/huggingface/transformers/issues/10737/events | https://github.com/huggingface/transformers/issues/10737 | 832,585,020 | MDU6SXNzdWU4MzI1ODUwMjA= | 10,737 | `group_texts` duplicates special tokens | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ELECTRA or BERT are not pretrained using this option, so you should use `--line_by_line` to mimic their pretraining objective.\r\n\r\nAlso note that this is a generic example, not an actual pretraining script (for instance the BERT next sentence prediction objective is not there). Its purpose is to expose all data processing so you can easily tweak it to your needs.",
"Thank you for the clarification. Could you name models that are trained with this option (not --line_by_line)?",
"GPT and GPT-2 is trained this way for instance.",
"#11840 "
] | 1,615 | 1,621 | 1,616 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Linux-5.4.0-66-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L381) with a pdb breakpoint at line 409.
2. see value for `tokenized_datasets['train'][0]`
```
ipdb> tokenized_datasets['train'][0]
{'attention_mask': ..., 'input_ids': [2, 11699, 6139, 23923, 6354, 6216, 3, 2, 124, 6149, 6228, 6164, 5125, 27479, 6228, 11699, 6139, 23923, 6354, 6216, 11961, 9121, 9804, 10602, 10293, 5328, 6721, 121, 6997, 15520, 16117, 10602, 11302, 5328, 6721, 121, 6997, 13014, 6177, 22111, 25147,
6189, 6106, 6315, 6110, 5084, 5158, 6291, 12836, 6108, 15512, 6726, 18139, 25596, 12701, 6291, 6106, 6315, 6616, 112, 5171, 113, 4363, 6380, 14946, 13769, 13928, 17518, 10216, 12299, 12571, 12850, 26355, 5315, 6457, 6117, 6303, 6213, 19358, 117, 122, 5201, 6361, 6211, 6377, 6312, 22259, 6631, 9268, 112, 10538, 113, 10728, 22278, 117, 14870, 13905, 142, 15214, 112, 10538, 113, 10728, 22278, 117, 14870, 14934, 7575, 10524, 186, 14921, 30912, 10758, 118, 10022, 680, 6275, 117, 181, 9860, 186, 14921, 30912, 10758, 136, 20107, 18973, 6358, 118, 3, 2, 10022, 7539, 25147, 6189, 116, 24690, 6915, 128, 116, 11134, 14216, 15650, 15373, 117, 13531, 20100, 117, 10028, 6132, 117, 127, 112, 124, 3866, 113, 15798, 15650, 15373, 117,
13531, 20100, 117, 9840, 6403, 117, 23999, 25006, 6131, 112, 14604, 113, 5084, 15466, 112, 5171, 113, 4363, 6380, 14946, 6213, 13579, 10393, 11023, 6187, 9218, 13014, 6236, 23534, 4587, 12827, 11069, 9422, 25686, 9112, 9220, 12112, 13538, 10112, 9427, 9215, 9260, 19036, 10393, 13514, 6187, 10112, 14882, 6130, 20150, 9279, 118, 3, 2, 14233, 15466, 18609, 16080, 118, 3, 2, 25147, 6189, 24864, 28007, 13581, 6149, 6228, 6164, 5125, 27479, 6228, 124, 5134, 16109, 28372, 3814, 6224, 20116, 6158, 12221, 6595, 105, 3, 2, 5466, 11794, 10393, 4700, 6224, 12819, 10694, 6187, 4671, 6628, 6119, 5502, 24468, 5743, 6125, 7111, 18452, 105, 3, 2, 14368, 6164, 5125, 27479, 6228, 6309, 6221, 6139, 4174, 6428, 6243, 167, 11699, 6139, 13295, 16589, 18619, 15924, 6131, 22573, 19515, 11914, 23850, 11914, 11512, 11346, 25763, 5134, 16109, 28372, 5134, 16109, 28372, 7031, 6114, 6114, 6626, 7020, 118, 3, 2, 5476, 6214, 116, 4121, 7788, 6107, 7788, 118, 4822, 6503, 6236, 15053, 4606, 9117, 118, 3, 2, 5676, 26156, 4973, 7088, 6114, 23122, 6114, 25444, 6422, 4218, 14246, 11920, 6147, 12097, 4011, 9117, 118, 3, 2, 4417, 25703, 9205, 9271, 9165, 19235, 4202, 6115, 14810, 6187, 19915, 6164, 4839, 6361, 11721, 4378, 7063, 15482, 9156, 11976, 30627, 9291, 3788, 19018, 20146, 4202, 9172, 118, 9868, 16712, 29634, 6115, 5206, 6203, 4469, 5294, 11019, 10250, 4973, 6284, 6203, 9691, 118, 3, 2, 9310, 10574, 5330, 9799, 11042, 13237, 6149, 9237, 118, 3, 2, 4378, 7063, 9126, 9271, 9242, 9822, 6236, 15472, 23041, 16135, 18119, 15314, 118, 3, 2, 5134, 6341, 6187, 9159, 14990, 5656, 4241, 14059, 6139, 4913, 12802, 9822, 9181, 9841, 4788, 18037, 116, 14059, 10825, 5087, 5178, 11699, 6213, 15171, 6333, 9242, 4645, 10212, 9691, 118, 3, 2, 9620, 10021, 11699, 6139, 17626, 6236, 12258, 4378, 7063, 6185, 16269, 26623, 30683, 12901, 118, 3, 2, 18276, 6130, 4378, 7063, 9126, 11699, 6187, 9159, 9242, 9319, 13793, 17451, 6260, 4184, 9242, 17561, 10724, 20756, 12126, 4789, 9172, 118, 9567, 4144, 12062, 10780, 5466, 6803, 4202, 4378, 7063, 9126, 4422, 12303, 6164, 9165, 10350, 571, 9467, 20853, 7177, 11947, 10441, 9270, 18480, 3795, 9207, 12098, 11725, 118], 'special_tokens_mask': ... , 'token_type_ids': ...}
```
When `group_texts` is mapped over `tokenized_datasets`, whose examples already contain special tokens (e.g. `[CLS]` and `[SEP]`), the resulting grouped sequences have the format `[CLS] ... [SEP][CLS] ... [SEP][CLS] ... [SEP]`, i.e. the special tokens of every original example end up in the middle of the concatenated chunks.
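Here is a minimal, self-contained sketch of the concatenation step (simplified from the script; the sentences and the `max_seq_length` value are just illustrative):
```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
max_seq_length = 8

# two short "documents", each tokenized with special tokens already added
examples = {"input_ids": [tokenizer("first sentence.")["input_ids"],
                          tokenizer("second sentence.")["input_ids"]]}

# group_texts concatenates all examples and re-chunks them, so the
# [SEP][CLS] boundary between the documents survives inside the chunks
concatenated = {k: sum(v, []) for k, v in examples.items()}
total_length = (len(concatenated["input_ids"]) // max_seq_length) * max_seq_length
chunks = [concatenated["input_ids"][i : i + max_seq_length]
          for i in range(0, total_length, max_seq_length)]

print(tokenizer.decode(concatenated["input_ids"]))
# [CLS] first sentence. [SEP] [CLS] second sentence. [SEP]
print(len(chunks), "chunk(s) of length", max_seq_length)
```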
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The output of `group_texts` applied to `tokenized_datasets` should have the format `[CLS] ... [SEP]` or `[CLS] ... [SEP] ... [SEP]`.
<!-- A clear and concise description of what you would expect to happen. -->
The current input format is different from the original implementation, [ELECTRA](https://github.com/google-research/electra/blob/f93f3f81cdc13435dd3e85766852d00ff3e00ab5/build_pretraining_dataset.py#L100) for example. Is this a trivial issue? I think the downstream task performance of the model pretrained with the current script could tell if this is a serious bug or not. Could someone share the results? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10737/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10736/comments | https://api.github.com/repos/huggingface/transformers/issues/10736/events | https://github.com/huggingface/transformers/issues/10736 | 832,482,209 | MDU6SXNzdWU4MzI0ODIyMDk= | 10,736 | Position ids in RoBERTa | {
"login": "qhd1996",
"id": 24516022,
"node_id": "MDQ6VXNlcjI0NTE2MDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/24516022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qhd1996",
"html_url": "https://github.com/qhd1996",
"followers_url": "https://api.github.com/users/qhd1996/followers",
"following_url": "https://api.github.com/users/qhd1996/following{/other_user}",
"gists_url": "https://api.github.com/users/qhd1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qhd1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qhd1996/subscriptions",
"organizations_url": "https://api.github.com/users/qhd1996/orgs",
"repos_url": "https://api.github.com/users/qhd1996/repos",
"events_url": "https://api.github.com/users/qhd1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/qhd1996/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's a reason why position ids don't start at 0 for RoBERTa, see #5285",
"`RoBERTa` never uses position ids 0 and 1 for regular tokens: in `RoBERTa`, all pad tokens share the position id 1 (the padding index), and the remaining tokens get position ids in the range `(2, seq_length - num_pad_tokens)`. It's implemented like this to match the original implementation in fairseq.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,615 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 20.04
- Python version: Python 3.7.10
- PyTorch version (GPU?): 1.6.0_py3.7_cuda10.1.243_cudnn7.6.3_0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Position ids in RoBERTa are not implemented properly.
The problem arises when using:
create_position_ids_from_input_ids in transformers.models.roberta.modeling_roberta.py
Based on this function, position id 0 is never used. This may cause problems when the sequence is long, for example 512: a token at position >= 511 will not get its expected position id.
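A minimal check of what the function currently produces (the token ids below are just illustrative):
```
import torch
from transformers.models.roberta.modeling_roberta import create_position_ids_from_input_ids

pad = 1  # RoBERTa's padding_idx
input_ids = torch.tensor([[0, 31414, 232, 2, pad, pad]])  # <s> Hello world </s> <pad> <pad>
print(create_position_ids_from_input_ids(input_ids, padding_idx=pad))
# tensor([[2, 3, 4, 5, 1, 1]]) -> real tokens start at position id 2, pad tokens keep the padding index 1
```
Note that the pretrained RoBERTa configs use `max_position_embeddings = 514` (not 512), which appears to account for this offset.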
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10736/timeline | completed | null | null |