url (string, len 62–66) | repository_url (string, 1 class) | labels_url (string, len 76–80) | comments_url (string, len 71–75) | events_url (string, len 69–73) | html_url (string, len 50–56) | id (int64, 377M–2.15B) | node_id (string, len 18–32) | number (int64, 1–29.2k) | title (string, len 1–487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, nullable ⌀) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, len 0–234k, nullable ⌀) | reactions (dict) | timeline_url (string, len 71–75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9129/comments | https://api.github.com/repos/huggingface/transformers/issues/9129/events | https://github.com/huggingface/transformers/pull/9129 | 767,787,017 | MDExOlB1bGxSZXF1ZXN0NTQwMzgzMjUy | 9,129 | Fix TF Transfo XL | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes an issue in TFTransfoXL: the last layer was added to the complete list of `hidden_states` while already transposed, so it was then transposed a second time together with all the other states. The fix appends it at the end, only after all the other states have been transposed.
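For illustration, a minimal self-contained sketch of the bug pattern (shapes and names are assumptions, not the actual TFTransfoXL code):
```python
import tensorflow as tf

layer_outputs = [tf.zeros((5, 2, 8)) for _ in range(3)]  # intermediate states: [seq, batch, hidden]
last_hidden = tf.zeros((2, 5, 8))                        # final state, already [batch, seq, hidden]

# buggy order: appending first means `last_hidden` gets transposed a second time
buggy = tuple(tf.transpose(h, perm=(1, 0, 2)) for h in layer_outputs + [last_hidden])

# fixed order: transpose only the intermediate states, then append the final one
fixed = tuple(tf.transpose(h, perm=(1, 0, 2)) for h in layer_outputs) + (last_hidden,)
```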
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9129/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9129",
"html_url": "https://github.com/huggingface/transformers/pull/9129",
"diff_url": "https://github.com/huggingface/transformers/pull/9129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9129.patch",
"merged_at": 1608063417000
} |
https://api.github.com/repos/huggingface/transformers/issues/9128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9128/comments | https://api.github.com/repos/huggingface/transformers/issues/9128/events | https://github.com/huggingface/transformers/pull/9128 | 767,759,666 | MDExOlB1bGxSZXF1ZXN0NTQwMzY1MjAx | 9,128 | BartForCausalLM analogs to `ProphetNetForCausalLM` | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hy @sadakmed, \r\n\r\nlet me know if you need help on the issue or if you don't find the time to tackle it. I'll then just make it open to the \"public\" again :-) ",
"@patrickvonplaten The loss function it what I stuck on, thank you very much for your guidance.\r\n\r\n\r\n> Let me know if you need help or are stuck :-)\r\n\r\nfor sure I will ;-) ",
"Hey @sadakmed,\r\n\r\nDo you have an update on the PR? It's been three weeks now and it would be great to merge this soon. Sorry, we're very fast-moving in this lib and other community contributors have started asking for this feature. By next week, I'll probably have to take a look myself or redistribute the issue. ",
"Hi @patrickvonplaten my apologies, \r\n\r\nCould you Please see it now, lemme know if anything is missing. ",
"Hi @patrickvonplaten, working on the test 'BartStandaloneCausalLM': I dont know if the `self.model_tester` in 'setUp' should be 'BartDecoderTester' (needed to be implemented), or do u recommend something else. \r\n\r\n ",
"Hey @sadakmed, \r\n\r\nThanks a lot for you additions here :-) \r\nYes, we need a new `BartStandaloneDecoderModelTester` analog to how it's done for ProphetNet in `tests/test_modeling_prophetnet.py`. Do you want to give it a try? Otherwise, I can go into your PR and see how to add the tests :-) ",
"Hi @patrickvonplaten \r\n\r\n> Do you want to give it a try?\r\n\r\nof course, I'm working on it, ",
"Hi @patrickvonplaten, I just pushed the test, could you please check it out! \r\nthaaaanks ",
"Hey @sadakmed, \r\n\r\nI corrected `BartForCausalLM` and also added `MBartForCausalLM`. It would be awesome if you could take care of adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`. \r\n\r\nTo do so you can simply copy everything that was done for `MBart` in this PR 1-to-1 to the mentioned models above. Let me know if sounds feasible for you :-) \r\n\r\nThanks a lot for your help so far!",
"> adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`.\r\n\r\nimplementing it for use with EncoderDecoder or just the test? \r\n\r\nYes I would like to do it, with all pleasure.",
"> > adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`.\r\n> \r\n> implementing it for use with EncoderDecoder or just the test?\r\n> \r\n> Yes I would like to do it, with all pleasure.\r\n\r\nFor those models, there is no need to add a test to `EncoderDecoderModel`. We should only copy-paste the code that was added to MBart to those models and also copy-paste the test in `test_modeling_marian.py` e.g.",
"@patrickvonplaten Could you please check the test if it well, and about the test of `Decoder only` I didn't get what do you mean!!",
"It would be nice to fix the tests and also add tests for `Pegasus`, `Blenderbot`, and `BlenderbotSmall`",
"> It would be nice to fix the tests and also add tests for `Pegasus`, `Blenderbot`, and `BlenderbotSmall`\r\n\r\n@patrickvonplaten, exactly like the one was for Marian?",
"@patrickvonplaten could you check please!",
"**UPDATE:**\r\n\r\n@LysandreJik @sgugger \r\nThis PR enables all Bart-like models to be used in combination with the Encoder-Decoder framework. The model `BartForCausalLM` is added for Bart and then copied to all other models via the copying mechanism. Also, a new model tester is added for all those models. \r\nWhile working on this I found a small bug for a very edge-case scenario for Bart and corrected it here: https://github.com/huggingface/transformers/pull/9128/files#r569436360 . The newly added tests were failing, which made me aware of the bug. \r\nAlso, I had to slightly change the `check_repo.py` file so that it counts both classes from `all_model_classes` with 1 and 2 paratheses. \r\n",
"Great job @sadakmed ",
"> Great job @sadakmed\r\n\r\nwouldn't happen without you, thank you very much. see u in the next PR ;) "
] | 1,608 | 1,624 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
Implementing `BartForCausalLM`, analogous to `ProphetNetForCausalLM`.
Fixes #9066
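A hedged sketch of the intended usage (checkpoint name and kwargs are illustrative; details may differ from the merged code):
```python
from transformers import BartForCausalLM, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
# use the decoder standalone, without cross-attention to an encoder
model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False)
assert model.config.is_decoder

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # next-token prediction scores
```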
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9128",
"html_url": "https://github.com/huggingface/transformers/pull/9128",
"diff_url": "https://github.com/huggingface/transformers/pull/9128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9128.patch",
"merged_at": 1612428973000
} |
https://api.github.com/repos/huggingface/transformers/issues/9127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9127/comments | https://api.github.com/repos/huggingface/transformers/issues/9127/events | https://github.com/huggingface/transformers/pull/9127 | 767,752,194 | MDExOlB1bGxSZXF1ZXN0NTQwMzYwMzg3 | 9,127 | [Flax] Bugfixes in `run_mlm_flax.py` | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a few bugs I have observed when using `run_mlm_flax.py`:
- As discussed with @mfuntowicz, `jnp.split` is a lot slower than `np.split` on the first iteration, outright hanging in my tests on simplewiki (~20MB). Since this operation doesn't need to be traced, we can use `np.split` instead.
- When using an HF `datasets` dataset, the text column was also passed to the model as input, causing a bug. The PR removes the text column in `dataset.map` to avoid this.
- Finally, using `warmup_steps = 0` (as is default) causes the Flax optimizer to output NaNs. We use 1 as a minimum value for the same warmup-less behaviour. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9127",
"html_url": "https://github.com/huggingface/transformers/pull/9127",
"diff_url": "https://github.com/huggingface/transformers/pull/9127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9127.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9126/comments | https://api.github.com/repos/huggingface/transformers/issues/9126/events | https://github.com/huggingface/transformers/issues/9126 | 767,715,749 | MDU6SXNzdWU3Njc3MTU3NDk= | 9,126 | seq2seq finetuning scripts break before training (cannot import name ParallelMode) | {
"login": "apostolis19",
"id": 7759496,
"node_id": "MDQ6VXNlcjc3NTk0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7759496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apostolis19",
"html_url": "https://github.com/apostolis19",
"followers_url": "https://api.github.com/users/apostolis19/followers",
"following_url": "https://api.github.com/users/apostolis19/following{/other_user}",
"gists_url": "https://api.github.com/users/apostolis19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apostolis19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apostolis19/subscriptions",
"organizations_url": "https://api.github.com/users/apostolis19/orgs",
"repos_url": "https://api.github.com/users/apostolis19/repos",
"events_url": "https://api.github.com/users/apostolis19/events{/privacy}",
"received_events_url": "https://api.github.com/users/apostolis19/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Make sure you install a recent version of transformers, `ParallelMode` was added to the master branch some ~2 weeks ago.",
"Yes, as @KDercksen points out, you need an up-to-date install from source to be able to run the examples (as mentioned in the main examples folder README)."
] | 1,608 | 1,608 | 1,608 | NONE | null | File "finetune_trainer.py", line 24, in <module>
from seq2seq_trainer import Seq2SeqTrainer
File "/home/---/transformers/examples/seq2seq/seq2seq_trainer.py", line 35, in <module>
from transformers.training_args import ParallelMode
ImportError: cannot import name 'ParallelMode' from 'transformers.training_args'
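A quick sanity check (not from this thread) to confirm whether the installed release actually ships `ParallelMode`:
```python
import transformers

print(transformers.__version__)  # needs a recent source install, per the comments above
from transformers.training_args import ParallelMode  # raises ImportError on older releases
```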
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9126/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9125/comments | https://api.github.com/repos/huggingface/transformers/issues/9125/events | https://github.com/huggingface/transformers/issues/9125 | 767,685,804 | MDU6SXNzdWU3Njc2ODU4MDQ= | 9,125 | Predict single sentence for Glue Tasks | {
"login": "Nickil21",
"id": 8767964,
"node_id": "MDQ6VXNlcjg3Njc5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nickil21",
"html_url": "https://github.com/Nickil21",
"followers_url": "https://api.github.com/users/Nickil21/followers",
"following_url": "https://api.github.com/users/Nickil21/following{/other_user}",
"gists_url": "https://api.github.com/users/Nickil21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nickil21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nickil21/subscriptions",
"organizations_url": "https://api.github.com/users/Nickil21/orgs",
"repos_url": "https://api.github.com/users/Nickil21/repos",
"events_url": "https://api.github.com/users/Nickil21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nickil21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Here is how I managed to do it. I have considered a `pandas` dataframe, but you can easily extend it to predict individual sentence too.\r\n\r\n```\r\nimport pandas as pd \r\nimport numpy as np\r\nfrom datasets import Dataset, load_dataset\r\nfrom scipy.special import softmax\r\nfrom transformers import Trainer\r\nfrom transformers import BertTokenizer, BertForSequenceClassification\r\n\r\ntokenizer = BertTokenizer.from_pretrained(<model_name_or_path>)\r\nmodel = BertForSequenceClassification.from_pretrained(<model_name_or_path>)\r\n\r\ndef preprocess_function(examples):\r\n # Tokenize the texts\r\n result = tokenizer(examples['sentence'], padding=False, max_length=None, truncation=True, verbose=False)\r\n return result\r\n\r\ndef predict(dataframe):\r\n eval_dataset = Dataset.from_pandas(dataframe)\r\n eval_dataset = eval_dataset.map(preprocess_function, batched=False, load_from_cache_file=True)\r\n # Initialize our Trainer\r\n trainer = Trainer(model=model, tokenizer=tokenizer)\r\n predictions = trainer.predict(test_dataset=eval_dataset).predictions\r\n # Adding a softmax layer to get probabilities. If you want class labels instead - np.argmax(predictions, axis=1)\r\n predictions = np.array([softmax(element) for element in predictions])[:, 1]\r\n return predictions\r\n```",
"@Nickil21 \r\nI used the torch model in the trainer. It's much faster than using pandas and creating a dataset.\r\n\r\n`import torch\r\ndef test_2(trainer, sentence1, sentence2):\r\n id_tolabel = {0:'negative', 1: 'positive'}\r\n model = trainer.model.eval()\r\n tokenized = tokenizer(sentence1, sentence2, return_tensors='pt').to(model.device)\r\n with torch.no_grad():\r\n label = torch.argmax(trainer.model.forward(**tokenized).logits, dim=1)[0].cpu().item()\r\n return id_tolabel[label]\r\nprint(test_2(trainer, 'it is not possible', 'this is impossible'))`"
] | 1,608 | 1,624 | 1,608 | NONE | null | I have trained a custom binary classifier using `run_glue.py` and have the `pytorch_model.bin` file saved to a directory. Is there a way to predict for a given sentence and extract its label?
I know `trainer.predict(test_dataset)` does it, but I am having some trouble converting a raw string into the dataset format it expects.
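For reference, a minimal sketch of one way to do this (the path and label mapping are assumptions, not from this thread):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_dir = "path/to/run_glue_output"  # hypothetical: directory containing pytorch_model.bin
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()

inputs = tokenizer("a single sentence to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = int(logits.argmax(dim=-1))  # 0 or 1 for a binary classifier
```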
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9124/comments | https://api.github.com/repos/huggingface/transformers/issues/9124/events | https://github.com/huggingface/transformers/pull/9124 | 767,573,462 | MDExOlB1bGxSZXF1ZXN0NTQwMjM2MDYz | 9,124 | Improve BERT-like models performance with better self attention | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"A Python profiling call gives the following improvements:\r\n```\r\nmodel = TFBertModel.from_pretrained(\"bert-base-cased\")\r\n\r\n# With the improvements\r\ncProfile.run(\"model(model.dummy_inputs)\") \r\n54591 function calls (53774 primitive calls) in 0.064 seconds\r\n\r\n# Currently on master\r\ncProfile.run(\"model(model.dummy_inputs)\")\r\n76166 function calls (75204 primitive calls) in 0.095 seconds\r\n```",
"Thanks @patrickvonplaten !!\r\n\r\n1. Slow tests are passing for these models\r\n2. I confirm that \"Old\" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer design\r\n\r\nI haven't tested the tf1 models, you mean testing the `load_tf_weights_in_bert` in the `modeling_bert.py` file?",
"@jlei2 has confirmed that now everything works as expected in the profiler and benchmark 👍 https://github.com/huggingface/transformers/issues/6771#issuecomment-745786314",
"> 2\\. \"Old\" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer des\r\n\r\nYeah I mean loading a tf `.ckpt` file using the `from_pretrained(...)` method. The `from_pretrained(...)` method automatically uses the correct functions to load `.ckpt`. I think the easiest way would be to download one of the zips of the official google bert: https://github.com/google-research/bert#bert and quickly check that it can be loaded and that the output on this branch and on master is the same.",
"> > 2. \"Old\" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer des\r\n> \r\n> Yeah I mean loading a tf `.ckpt` file using the `from_pretrained(...)` method. The `from_pretrained(...)` method automatically uses the correct functions to load `.ckpt`. I think the easiest way would be to download one of the zips of the official google bert: https://github.com/google-research/bert#bert and quickly check that it can be loaded and that the output on this branch and on master is the same.\r\n\r\nOk as discussed offline TF1 checkpoints cannot even be loaded into TF2 at the moment (only if one goes through PT), so this PR is good to go for me!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR updates the way we implement the self-attention layers in order to align with the original BERT's performance. Small breaking change: this improvement needs at least TF 2.3. This change has already been discussed with @thomwolf, who agreed, but it still needs the approval of @LysandreJik, @patrickvonplaten and @sgugger.
@patrickvonplaten I have removed the comment for `check_copies` in the Longformer model because I don't know this model well enough to apply the proper changes. I will apply this update model by model for the ones I know, but can you take this one?
@jlei2 as I'm on Windows and GPU profiling is not yet available in WSL, can you clone this branch and make sure that everything works as expected with your benchmark? Thanks!!
Fixes #6771
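For context, a hedged sketch of the einsum-style attention projections this kind of alignment typically relies on (function and variable names are assumptions, not the PR's actual code):
```python
import tensorflow as tf

def attention_scores(hidden_states, w_q, w_k, head_size):
    # hidden_states: [batch, seq, hidden]; w_q / w_k: [hidden, heads, head_size]
    q = tf.einsum("bsh,hnd->bsnd", hidden_states, w_q)
    k = tf.einsum("bsh,hnd->bsnd", hidden_states, w_k)
    # contract over head_size d -> scores of shape [batch, heads, seq, seq]
    return tf.einsum("bsnd,btnd->bnst", q, k) / tf.math.sqrt(float(head_size))

scores = attention_scores(tf.zeros((2, 4, 8)), tf.zeros((8, 2, 4)), tf.zeros((8, 2, 4)), 4)
```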
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9124",
"html_url": "https://github.com/huggingface/transformers/pull/9124",
"diff_url": "https://github.com/huggingface/transformers/pull/9124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9124.patch",
"merged_at": 1608552616000
} |
https://api.github.com/repos/huggingface/transformers/issues/9123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9123/comments | https://api.github.com/repos/huggingface/transformers/issues/9123/events | https://github.com/huggingface/transformers/issues/9123 | 767,564,148 | MDU6SXNzdWU3Njc1NjQxNDg= | 9,123 | BART cannot accept -100 as ignored label | {
"login": "Huanghongru",
"id": 28702947,
"node_id": "MDQ6VXNlcjI4NzAyOTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/28702947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huanghongru",
"html_url": "https://github.com/Huanghongru",
"followers_url": "https://api.github.com/users/Huanghongru/followers",
"following_url": "https://api.github.com/users/Huanghongru/following{/other_user}",
"gists_url": "https://api.github.com/users/Huanghongru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huanghongru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huanghongru/subscriptions",
"organizations_url": "https://api.github.com/users/Huanghongru/orgs",
"repos_url": "https://api.github.com/users/Huanghongru/repos",
"events_url": "https://api.github.com/users/Huanghongru/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huanghongru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
- Platform: Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Bart: @patrickvonplaten
## Information
I'm using ``BartForConditionalGeneration`` for some natural language generation tasks. According to the [doc](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), I should be able to set some label tokens to -100 so that they are ignored. However, this raises an index-out-of-range error.
## To reproduce
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
b = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
t = AutoTokenizer.from_pretrained("facebook/bart-base")
s1 = "hello hello hello hello world"
inputs = t(s1, return_tensors="pt")
label = inputs["input_ids"].clone()
label[0, 2:3] = -100
outputs = b(**inputs, labels=label)
```
Then it raises the following error:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1032, in forward
> return_dict=return_dict,
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 915, in forward
> return_dict=return_dict,
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 564, in forward
> x = self.embed_tokens(input_ids) * self.embed_scale
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
> self.norm_type, self.scale_grad_by_freq, self.sparse)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
> IndexError: index out of range in self
Without -100 in the labels, the model returns the output correctly.
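A hedged workaround sketch, reusing `b`, `t` and `inputs` from the snippet above (the availability of `shift_tokens_right` in this version is an assumption): build `decoder_input_ids` from the unmasked labels yourself, so the -100 values are only ever seen by the loss.
```python
from transformers.models.bart.modeling_bart import shift_tokens_right  # assumption: present in 4.0.x

label_ids = inputs["input_ids"].clone()
decoder_input_ids = shift_tokens_right(label_ids, t.pad_token_id)

labels = label_ids.clone()
labels[0, 2:3] = -100  # ignored by the cross-entropy loss (ignore_index=-100)
outputs = b(**inputs, decoder_input_ids=decoder_input_ids, labels=labels)
```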
## Expected behavior
Should return the output correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9123/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9123/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9122/comments | https://api.github.com/repos/huggingface/transformers/issues/9122/events | https://github.com/huggingface/transformers/issues/9122 | 767,535,574 | MDU6SXNzdWU3Njc1MzU1NzQ= | 9,122 | RobertaTokenizer fails to do_lower_case, different behavior between version 2 and 3 | {
"login": "jingtaozhan",
"id": 54493610,
"node_id": "MDQ6VXNlcjU0NDkzNjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/54493610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingtaozhan",
"html_url": "https://github.com/jingtaozhan",
"followers_url": "https://api.github.com/users/jingtaozhan/followers",
"following_url": "https://api.github.com/users/jingtaozhan/following{/other_user}",
"gists_url": "https://api.github.com/users/jingtaozhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingtaozhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingtaozhan/subscriptions",
"organizations_url": "https://api.github.com/users/jingtaozhan/orgs",
"repos_url": "https://api.github.com/users/jingtaozhan/repos",
"events_url": "https://api.github.com/users/jingtaozhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingtaozhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you please also try with the most recent transformers release and report what happens?",
"Version 4.0.1 prints: ['Hug', 'ging', 'face'] ",
"Explicitly setting the attribute 'do_lower_case' to True solves the problem.\r\n\r\n```python\r\nfrom transformers import RobertaTokenizer\r\ntokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\", do_lower_case=True)\r\ntokenizer.do_lower_case = True\r\nprint(tokenizer.tokenize(\"Huggingface\"))\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"If we use the AutoTokenizer library, this still does not work. \r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", do_lower_case=True)\r\ntokenizer.do_lower_case = True\r\nprint(tokenizer.tokenize(\"Huggingface\"))\r\n```"
] | 1,608 | 1,654 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.4.0 / 2.8.0
- Platform: linux
- Python version: 3.8
### Who can help
@mfuntowicz
## Information
Tokenizer I am using: RobertaTokenizer
The tokenizer does not lower-case the text even though I explicitly set do_lower_case=True. The behavior differs between versions 2.8.0 and 3.4.0.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", do_lower_case=True)
print(tokenizer.tokenize("Huggingface"))
```
## Expected behavior
Version 3.4.0 prints: ['Hug', 'ging', 'face']
Version 2.8.0 prints: ['h', 'ug', 'ging', 'face']
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9122/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9121/comments | https://api.github.com/repos/huggingface/transformers/issues/9121/events | https://github.com/huggingface/transformers/issues/9121 | 767,468,311 | MDU6SXNzdWU3Njc0NjgzMTE= | 9,121 | [Generation] Add generation outputs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,608 | 1,609 | 1,609 | MEMBER | null | # 🚀 Feature request
We've had multiple issues asking for the possibility to output the scores/probabilities of each token during generation, see:
https://github.com/huggingface/transformers/issues/7654
https://github.com/huggingface/transformers/issues/3891
https://github.com/huggingface/transformers/issues/8656
Also, we should be able to output the model's attentions and `hidden_states` at each generation step, *i.e.* make use of those model outputs:
https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/models/bert/modeling_bert.py#L883 in generation as well.
To do so we should create a new generation output class for each "sub" generation function:
1) `GreedySearchDecoderOnlyOutput(output_ids, logits, attentions, hidden_states)` for decoder-only models, where `output_ids` are the current outputs of generate, `logits` are the logit vectors at each generation step (so should be of shape `Tuple((logits_1,), ..., (logits_max_length,))`), and `attentions` and `hidden_states` should be of shape `Tuple((attentions_1,), ..., (attentions_max_length,))`. As before, `attentions` and `hidden_states` will be output if the flag `output_attentions` or `output_hidden_states` is set to True, and for the logits we should add a flag `output_scores`. Also we should have a `GreedySearchEncoderDecoderOutput(output_ids, logits, encoder_attentions, decoder_attentions, encoder_hidden_states, decoder_hidden_states)` class with the respective encoder and decoder outputs.
2) `SampleDecoderOnlyOutput(output_ids, probabilities, attentions, hidden_states)` -> the same outputs, except that we replace the logits output with the softmax probabilities (of the same shape); same flags as in 1), and an encoder-decoder class as well
3) `BeamSearchDecoderOnlyOutput(output_ids, probs, attentions, hidden_states)` -> `probs` should be this tensor: https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/generation_utils.py#L1235 at each step; same flags as in 1), and an encoder-decoder class as well
Each output class should be derived from https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/file_utils.py#L1306 just as the model output classes are in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_outputs.py .
A PR should start with the "GreedySearchOutput" model classes and add them to `generation_utils.py` => then we should add the three flags to both `generate()` and `greedy_search()`. Then `SampleOutput` and `BeamSearchOutput` should be added. The PR should also include good documentation for each of the outputs, as is the case for the current model outputs.
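A minimal sketch of the first proposed class (field names follow this issue's text; the merged implementation may differ, e.g. in naming):
```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.file_utils import ModelOutput

@dataclass
class GreedySearchDecoderOnlyOutput(ModelOutput):
    output_ids: torch.LongTensor = None
    logits: Optional[Tuple[torch.FloatTensor]] = None         # one entry per generation step
    attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
    hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
```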
## Your contribution
I'm happy to help the contributor throughout the PR :-)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9121/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9120/comments | https://api.github.com/repos/huggingface/transformers/issues/9120/events | https://github.com/huggingface/transformers/pull/9120 | 767,427,627 | MDExOlB1bGxSZXF1ZXN0NTQwMTM4NjIy | 9,120 | Fix tf2.4 | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fix the tests to make them compliant with the new TF 2.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9120/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9120",
"html_url": "https://github.com/huggingface/transformers/pull/9120",
"diff_url": "https://github.com/huggingface/transformers/pull/9120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9120.patch",
"merged_at": 1608045047000
} |
https://api.github.com/repos/huggingface/transformers/issues/9119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9119/comments | https://api.github.com/repos/huggingface/transformers/issues/9119/events | https://github.com/huggingface/transformers/issues/9119 | 767,346,050 | MDU6SXNzdWU3NjczNDYwNTA= | 9,119 | Which dataset is used for training GPT, GPT2 from scratch? | {
"login": "arisohn",
"id": 307442,
"node_id": "MDQ6VXNlcjMwNzQ0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/307442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arisohn",
"html_url": "https://github.com/arisohn",
"followers_url": "https://api.github.com/users/arisohn/followers",
"following_url": "https://api.github.com/users/arisohn/following{/other_user}",
"gists_url": "https://api.github.com/users/arisohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arisohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arisohn/subscriptions",
"organizations_url": "https://api.github.com/users/arisohn/orgs",
"repos_url": "https://api.github.com/users/arisohn/repos",
"events_url": "https://api.github.com/users/arisohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/arisohn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi,
I checked the model cards of GPT and GPT2, but I can't find the datasets that were used for training.
Where can I find the datasets that were used?
https://huggingface.co/openai-gpt
https://huggingface.co/gpt2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9119/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9118/comments | https://api.github.com/repos/huggingface/transformers/issues/9118/events | https://github.com/huggingface/transformers/issues/9118 | 767,327,972 | MDU6SXNzdWU3NjczMjc5NzI= | 9,118 | Different inference results of a keras including transformer model on TPU vs CPU? | {
"login": "steindor",
"id": 3185711,
"node_id": "MDQ6VXNlcjMxODU3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3185711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steindor",
"html_url": "https://github.com/steindor",
"followers_url": "https://api.github.com/users/steindor/followers",
"following_url": "https://api.github.com/users/steindor/following{/other_user}",
"gists_url": "https://api.github.com/users/steindor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steindor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steindor/subscriptions",
"organizations_url": "https://api.github.com/users/steindor/orgs",
"repos_url": "https://api.github.com/users/steindor/repos",
"events_url": "https://api.github.com/users/steindor/events{/privacy}",
"received_events_url": "https://api.github.com/users/steindor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,608 | 1,608 | 1,608 | NONE | null | transformers version: 4.0.0
Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.11
Python version: 3.7.9
PyTorch version (GPU?): 1.6.0a0+bf2bbd9 (False)
Tensorflow version (GPU?): 2.3.1 (False)
Using GPU in script?: No
Using distributed or parallel set-up in script?: Distributed
I am building a Keras model which consists of a TFRobertaModel with 2 custom heads on top. One is a QA head which outputs span predictions and the other is a binary classification head. I train the model on TPUs and everything works fine with prediction and inference on the TPU with great model performance. The issue I am having is loading the saved model and/or weights and doing inference on CPU. I am getting completely different results compared to inference on the TPU.
I save the model using model.save and the weights with model.save_weights, and regardless of which one I load (using tf.keras.models.load_model), I get the same results. I do get a warning that there was an error saving the state of the optimizer, which is initialized at random on model loading. I figure this is not an issue with Keras, since I compile the model with a tf.keras.optimizers.Adam, which should be saved by model.save for a Keras-only model. I have also tried building the model from scratch and only loading the saved weights, but I get the same results.
This is a convoluted problem to reproduce but I was wondering if you had any pointers on how to debug this or if this was a known problem. Here is a sample of the model output on TPU vs CPU:
TPU - the first 10 binary answer predictions:
array([[9.9994159e-01, 5.8382138e-05],
[9.9990284e-01, 9.7181561e-05],
[9.9995410e-01, 4.5917721e-05],
[9.9996519e-01, 3.4784229e-05],
[9.9975628e-01, 2.4374224e-04],
[9.9997389e-01, 2.6103005e-05],
[9.9995828e-01, 4.1662061e-05],
[9.9998319e-01, 1.6824890e-05],
[7.0182599e-02, 9.2981732e-01],
[9.9993420e-01, 6.5814391e-05]], dtype=float32)
CPU - the first 10 binary answer predictions:
array([[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ]], dtype=float32)
The model building and compilation:
```
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from transformers import TFRobertaForQuestionAnswering

# `model_path`, `strategy` (a TPUStrategy) and `compute_loss` are defined elsewhere
dropout = False
def create_model():
input_shape = (None,)
model = TFRobertaForQuestionAnswering.from_pretrained(model_path, from_pt=True, trainable=True)
input_ids = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='input_ids')
attention_mask = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='attention_mask')
token_type_ids = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='token_type_ids')
outputs = model.roberta(input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
output_hidden_states=True)
seq_output = outputs[0]
logits = model.qa_outputs(seq_output)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1)
end_logits = tf.squeeze(end_logits, axis=-1)
#BINARY_ANSWER
concat_hidden_layers = tf.concat(tuple([outputs.hidden_states[i] for i in [-4, -3, -2, -1]]), axis=-1)
pooled_output = concat_hidden_layers[:, 0, :]
binary_answer_logits = tf.keras.layers.Dense(768,
kernel_initializer=tf.keras.initializers.truncated_normal(stddev=0.02),
activation="tanh",
name="dense_tanh")(pooled_output)
if dropout:
binary_answer_logits = tf.keras.layers.Dropout(0.1)(binary_answer_logits)
binary_answer_probs = tf.keras.layers.Dense(2, activation='softmax', name="binary_answer")(binary_answer_logits)
keras_model = Model(inputs={'input_ids':input_ids,
'attention_mask':attention_mask,
'token_type_ids':token_type_ids},
outputs={'start_logits':start_logits,
'end_logits':end_logits,
'binary_answer_probs':binary_answer_probs})
return keras_model
with strategy.scope():
keras_model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)
keras_model.compile(loss={'start_logits':compute_loss,
'end_logits':compute_loss,
'binary_answer_probs':tf.keras.losses.binary_crossentropy},
optimizer=optimizer)
```
I also get completely different results for span predictions on TPU vs CPU, which is not surprising given how different the binary prediction is. Any help or pointers are appreciated.
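One hedged debugging idea (not from the original thread): check whether the weights actually survive the save/load round trip on CPU before suspecting the architecture itself.
```python
import numpy as np

# assumption: weights were saved earlier with keras_model.save_weights("model_weights.h5")
cpu_model = create_model()
cpu_model.load_weights("model_weights.h5")

for w_tpu, w_cpu in zip(keras_model.get_weights(), cpu_model.get_weights()):
    if not np.allclose(w_tpu, w_cpu):
        print("mismatch in tensor of shape", w_tpu.shape)
```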
## Expected behavior
The same inference results on TPU vs CPU.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9117/comments | https://api.github.com/repos/huggingface/transformers/issues/9117/events | https://github.com/huggingface/transformers/pull/9117 | 767,301,892 | MDExOlB1bGxSZXF1ZXN0NTQwMDYxMDY3 | 9,117 | Tapas v4 (tres) | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
},
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Don't worry about the TF tests, these are because of TF2.4 which are fixed on `master`.",
"The conversion script currently includes a line in which I'm importing a local `vocab.txt`, I know this should be removed in the future."
] | 1,608 | 1,611 | 1,608 | CONTRIBUTOR | null | Here we are again, opening a new PR based on the former (#8988) which had some Github issues.
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9117/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9117",
"html_url": "https://github.com/huggingface/transformers/pull/9117",
"diff_url": "https://github.com/huggingface/transformers/pull/9117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9117.patch",
"merged_at": 1608070129000
} |
https://api.github.com/repos/huggingface/transformers/issues/9116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9116/comments | https://api.github.com/repos/huggingface/transformers/issues/9116/events | https://github.com/huggingface/transformers/issues/9116 | 767,240,112 | MDU6SXNzdWU3NjcyNDAxMTI= | 9,116 | Roberta training crashing due to position_id embedding | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One potential fix is to not use the same padding_idx for the position_ids embedding, why not just use 0? The least actual unpadded value of incremental_indices will be one so 0 is a valid pad.\r\n\r\nIn the `__init__` of `RobertaEmbeddings` (note this code is also duplicated, self.position_embeddings is intiialised twice!)\r\n\r\n self.position_embeddings = nn.Embedding(\r\n config.max_position_embeddings, config.hidden_size, padding_idx=0 # replaced self.padding_idx with 0\r\n )\r\n\r\nAnd then modify create_position_ids_from_input_ids\r\n\r\n```\r\ndef create_position_ids_from_input_ids(input_ids, padding_idx):\r\n \"\"\"\r\n Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols\r\n are ignored. This is modified from fairseq's `utils.make_positions`.\r\n\r\n Args:\r\n x: torch.Tensor x:\r\n\r\n Returns: torch.Tensor\r\n \"\"\"\r\n # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.\r\n mask = input_ids.ne(padding_idx).int()\r\n incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask\r\n return incremental_indices.long() # removed + padding_idx\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | I've been trying to work out why I keep getting a CUDA assert in a specific mini-batch when training RoBERTa from scratch. I finally tracked it down after switching to CPU.
I don't understand why `padding_idx` is added to `incremental_indices` below? - _edit: I do, in that the embedding needs a padding mask, but I'm not sure it's the correct way to do it._
In my case padding_idx=3. And one of my input_ids rows was truncated. Say I have input_ids = [[4,5,6],[4,3,3]], this results in mask=[[1,1,1],[1,0,0]] and incremental index=[[1,2,3],[1,0,0]]. Adding padding_idx then produces [[4,5,6],[4,3,3]].
The issue is `self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)`, so for any sequence which is truncated (i.e. at maximum length), adding anything to the indices results in an index outside the embedding table.
Perhaps you can argue that max_position_embeddings is supposed to be larger than the largest possible sequence so this doesn't happen? `run_mlm.py` does check `data_args.max_seq_length` against `tokenizer.model_max_length`, but it seems that, in actual fact, to avoid a very hard-to-track-down error you must ensure `config.max_position_embeddings > data_args.max_seq_length + padding_idx`.
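A minimal sketch reproducing the overflow with the numbers above (the sizes are hypothetical; `num_embeddings` is deliberately set to the raw sequence bound of 6):

```python
import torch
import torch.nn as nn

padding_idx = 3
position_embeddings = nn.Embedding(6, 4)  # valid indices are 0..5

input_ids = torch.tensor([[4, 5, 6], [4, 3, 3]])  # second row ends in padding
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
position_ids = incremental_indices.long() + padding_idx
print(position_ids)  # tensor([[4, 5, 6], [4, 3, 3]])

position_embeddings(position_ids)  # IndexError: index out of range in self (6 > 5)
```

For reference, the function in question: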
```
def create_position_ids_from_input_ids(input_ids, padding_idx):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor x:
Returns: torch.Tensor
"""
# The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() + padding_idx
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9116/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9116/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9115/comments | https://api.github.com/repos/huggingface/transformers/issues/9115/events | https://github.com/huggingface/transformers/pull/9115 | 767,169,636 | MDExOlB1bGxSZXF1ZXN0NTM5OTcwOTY2 | 9,115 | [doc] pytorch native amp leak fix landed in 1.7.1 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | update README with good news that the leak fix has been applied to pytorch-1.7.1 and not just 1.8.
Reference: https://github.com/pytorch/pytorch/issues/48049#issuecomment-742790722
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9115",
"html_url": "https://github.com/huggingface/transformers/pull/9115",
"diff_url": "https://github.com/huggingface/transformers/pull/9115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9115.patch",
"merged_at": 1608041442000
} |
https://api.github.com/repos/huggingface/transformers/issues/9114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9114/comments | https://api.github.com/repos/huggingface/transformers/issues/9114/events | https://github.com/huggingface/transformers/pull/9114 | 767,156,195 | MDExOlB1bGxSZXF1ZXN0NTM5OTYyMTE1 | 9,114 | Fix stack overflow | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | Currently calling `n_sequences` on a `BatchEncoding` results in a stack overflow. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9114/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9114",
"html_url": "https://github.com/huggingface/transformers/pull/9114",
"diff_url": "https://github.com/huggingface/transformers/pull/9114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9114.patch",
"merged_at": 1608041749000
} |
https://api.github.com/repos/huggingface/transformers/issues/9113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9113/comments | https://api.github.com/repos/huggingface/transformers/issues/9113/events | https://github.com/huggingface/transformers/issues/9113 | 767,131,705 | MDU6SXNzdWU3NjcxMzE3MDU= | 9,113 | Some Models do not support gradient checkpointing | {
"login": "jingtaozhan",
"id": 54493610,
"node_id": "MDQ6VXNlcjU0NDkzNjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/54493610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingtaozhan",
"html_url": "https://github.com/jingtaozhan",
"followers_url": "https://api.github.com/users/jingtaozhan/followers",
"following_url": "https://api.github.com/users/jingtaozhan/following{/other_user}",
"gists_url": "https://api.github.com/users/jingtaozhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingtaozhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingtaozhan/subscriptions",
"organizations_url": "https://api.github.com/users/jingtaozhan/orgs",
"repos_url": "https://api.github.com/users/jingtaozhan/repos",
"events_url": "https://api.github.com/users/jingtaozhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingtaozhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | NONE | null | Thanks for this wonderful library.
I found some models do not support gradient_checkpointing, which I believe is a very important feature. For example,
ElectraModel: ElectraConfig has no gradient_checkpointing option, but ElectraModel will use gradient checkpointing if `config.gradient_checkpointing = True` (see the sketch below).
DistilBERT: DistilBertConfig has no gradient_checkpointing option and DistilBertModel does not support gradient checkpointing.
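A minimal sketch of the Electra inconsistency described above; the flag must be set by hand because `ElectraConfig` never declares it, and the checkpoint name here is only an example:

```python
from transformers import ElectraConfig, ElectraModel

config = ElectraConfig.from_pretrained("google/electra-small-discriminator")
config.gradient_checkpointing = True  # undeclared in ElectraConfig, but read by the model
model = ElectraModel(config)
model.train()  # checkpointing is only applied during training-mode forward passes
```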
I assume all transformer-based models should be able to support gradient_checkpointing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9113/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9112/comments | https://api.github.com/repos/huggingface/transformers/issues/9112/events | https://github.com/huggingface/transformers/pull/9112 | 767,046,233 | MDExOlB1bGxSZXF1ZXN0NTM5ODkxMTQ1 | 9,112 | Add BORT | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"🔥 Looking forward to taking a look at the conversion script from GluonNLP/mxnet!",
"@patrickvonplaten I added some examples for both `modeling_bort.py` and modeling_tf_bort.py` :hugs: \r\n\r\n@julien-c The conversion script is also added - you just need to install `gluonnlp==0.8.3` and `mxnet==1.5.0`.\r\n\r\nThese versions are defined in the BORT [requirements file](https://github.com/alexa/bort/blob/master/requirements.txt). The conversion script also performs a version check.",
"We'll have to think a bit how to advertise this. Let me draft up a \"Contribution Proposal\" for the fine-tuning algorithm.",
"Hey @stefan-it,\r\n\r\nI've discussed a bit with @LysandreJik and @sgugger offline and I do agree with @LysandreJik after having thought about it again. I think it's better if we actually don't add any new code (besides the conversion script) that should be added to `src/transformers/models/bert/` and the docs page. I'm very sorry to have you asked to go down this road! I think however it does make more sense to not add any \"tokenizer\" or \"model\" code as those are exact copies of the `RobertaTokenizer` and `BertModel`. It's probably most efficient to open a new PR and only add the required files. Super sorry again!",
"Are we planning to implement the architectural optimization (FPTAS) or just the pre-trained models?",
"> Are we planning to implement the architectural optimization (FPTAS) or just the pre-trained models?\r\n\r\nGreat question! For now, we'll just add the model weights - see: #9813. A community contribution showing how to do FPTAS in a notebook would be extremely valuable though.",
"Closing in favor of #9813"
] | 1,607 | 1,611 | 1,611 | COLLABORATOR | null | Hi,
this PR adds the recently introduced BORT model from @adewynter and Daniel J. Perry from the Alexa team into Transformers.
----
BORT was introduced in the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499).
Details about BORT:
> We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
This should fix #8135 :hugs:
---
ToDo tasks:
* [x] Upload models (both PyTorch and TensorFlow model) to model hub
* [x] Add conversion script from Gluonnlp to Transformers
* [x] Enable unit tests (they are working and just wait for the model upload) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9112/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9112/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9112",
"html_url": "https://github.com/huggingface/transformers/pull/9112",
"diff_url": "https://github.com/huggingface/transformers/pull/9112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9112.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9111/comments | https://api.github.com/repos/huggingface/transformers/issues/9111/events | https://github.com/huggingface/transformers/issues/9111 | 766,989,362 | MDU6SXNzdWU3NjY5ODkzNjI= | 9,111 | Longformer `token_type_ids` Vocabulary Size is 1 But Documentation States Otherwise | {
"login": "HHousen",
"id": 11785397,
"node_id": "MDQ6VXNlcjExNzg1Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11785397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HHousen",
"html_url": "https://github.com/HHousen",
"followers_url": "https://api.github.com/users/HHousen/followers",
"following_url": "https://api.github.com/users/HHousen/following{/other_user}",
"gists_url": "https://api.github.com/users/HHousen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HHousen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HHousen/subscriptions",
"organizations_url": "https://api.github.com/users/HHousen/orgs",
"repos_url": "https://api.github.com/users/HHousen/repos",
"events_url": "https://api.github.com/users/HHousen/events{/privacy}",
"received_events_url": "https://api.github.com/users/HHousen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"It also might be a good idea to catch this error somewhere before `IndexError: index out of range in self` because that is not descriptive and makes debugging difficult.",
"You're right @HHousen - thanks for the note! Do you want to open a PR to fix the docs? That would be awesome :-) Otherwise, I can do it as well ",
"We are using longformer and we are passing (input_ids, attention_mask , global_attention_mask ,token_type_ids) as input. if we are passing token_type_ids as 0's we are not having any issues but when we try to pass token_type_ids as 1's, or 0's afor segments within the sequence it is throwing following error.\r\n**IndexError: index out of range in self**\r\n**C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: block: [52,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.**",
"i have created separate issue for **index out of range in self ** (https://github.com/huggingface/transformers/issues/9162) while using token_type_ids, from the above comment by @HHousen should i remove token_type_ids as parameter while passing it to model ?",
"@yuvarajvc Correct. The Longformer doesn't support `token_type_ids`, so you should not pass them to the model."
] | 1,607 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2 and latest (4.0.0)
- Platform: Linux (Ubuntu)
- Python version: 3.6.9 and 3.8.6
- PyTorch version (GPU?): 3.7.0 Tesla P100-PCIE-16GB and Nvidia RTX 3090
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes and No
- Using distributed or parallel set-up in script?: No
### Who can help
Possibly @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-base-4096')
model = AutoModel.from_pretrained('allenai/longformer-base-4096')
tokenizer_bert = AutoTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer("How old are you?", "I'm 6 years old", return_tensors="pt", return_token_type_ids=True, return_attention_mask=True)
inputs_bert = tokenizer_bert("How old are you?", "I'm 6 years old", return_tensors="pt", return_token_type_ids=True, return_attention_mask=True)
print(inputs)
print(inputs_bert)
```
```python
inputs = {'input_ids': tensor([[ 0, 6179, 793, 32, 47, 116, 2, 2, 100, 437, 231, 107,
793, 2]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
inputs_bert = {'input_ids': tensor([[ 101, 2129, 2214, 2024, 2017, 1029, 102, 1045, 1005, 1049, 1020, 2086,
2214, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
```python
inputs['token_type_ids'] = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]])
model.forward(**inputs)
```
Stack Trace:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-65-905d6a7d6135> in <module>()
----> 1 model.forward(**inputs)
6 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states)
995
996 embedding_output = self.embeddings(
--> 997 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
998 )
999
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
66
67 return super().forward(
---> 68 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
69 )
70
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
178 inputs_embeds = self.word_embeddings(input_ids)
179 position_embeddings = self.position_embeddings(position_ids)
--> 180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
```
## Expected behavior
The longformer documentation should be updated to state that the longformer does not support `token_type_ids` like RoBERTa. The `token_type_ids` [vocabulary size is 1](https://huggingface.co/allenai/longformer-base-4096/raw/main/config.json) (compared to [2 for BERT](https://huggingface.co/bert-base-uncased/raw/main/config.json)) for `allenai/longformer-base-4096`, which means `0` is the only valid input for `token_type_ids`. However, [the documentation](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerModel.forward) says `token_type_ids` can be selected in `[0, 1]` for the longformer. The documentation should specify that the longformer doesn't support `token_type_ids`. For instance, the [RoBERTa documentation](https://huggingface.co/transformers/model_doc/roberta.html) states "RoBERTa doesn’t have `token_type_ids`, you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `</s>`)." Should a similar message be added for the longformer since it is based on RoBERTa?
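In the meantime, a minimal workaround consistent with the comments above is simply to drop `token_type_ids` (sketch, reusing the tokenizer and model from the reproduction):

```python
inputs = tokenizer("How old are you?", "I'm 6 years old", return_tensors="pt")
inputs.pop("token_type_ids", None)  # type vocab size is 1, so only all-zeros is valid
outputs = model(**inputs)
```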
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9110/comments | https://api.github.com/repos/huggingface/transformers/issues/9110/events | https://github.com/huggingface/transformers/issues/9110 | 766,926,556 | MDU6SXNzdWU3NjY5MjY1NTY= | 9,110 | Not able to train RoBERTa language model from scratch | {
"login": "shubhamk0027",
"id": 31592194,
"node_id": "MDQ6VXNlcjMxNTkyMTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/31592194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamk0027",
"html_url": "https://github.com/shubhamk0027",
"followers_url": "https://api.github.com/users/shubhamk0027/followers",
"following_url": "https://api.github.com/users/shubhamk0027/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamk0027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamk0027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamk0027/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamk0027/orgs",
"repos_url": "https://api.github.com/users/shubhamk0027/repos",
"events_url": "https://api.github.com/users/shubhamk0027/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamk0027/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `run_mlm` script trains a `RobertaForMaskedLM` model, which does not have a pooler layer. That's why you get this warning when using this pretrained model to initialize a `RobertaModel`.",
"Thanks @sgugger! Now I get where the problem was. \r\nMoreover, I found some good tutorials on it. Here are the links for those who need it.\r\nhttps://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/\r\nhttps://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb"
] | 1,607 | 1,608 | 1,608 | NONE | null | I tried training a RoBERTa language model from scratch using
<code> !python ./run_mlm.py
--model_name_or_path roberta-base
--train_file './data_lm.txt'
--do_train
--line_by_line
--num_train_epochs 3
--output_dir ./roberta
</code>
Due to the limits on Colab storage, I deleted all the checkpoints generated during the training process. So finally my model directory has the following files:
1. config.json
2. merges.txt
3. pytorch_model.bin
4. special_tokens_map.json
5. tokenizer.config
6. vocab.json
But after loading my trained model using RobertaTokenizer and RobertaForSequenceClassification and fine-tuning it for my classification task, I am receiving almost the same accuracy as when loading and fine-tuning the readily available 'roberta-base'.
Also, when I try loading it as
`model = RobertaModel.from_pretrained('./roberta')`
I get the warning:
<code>
Some weights of RobertaModel were not initialized from the model checkpoint at ./roberta/ and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
</code>
So, my question is: is there something wrong in the training procedure of the language model on my dataset and the loading process? Or were the checkpoints that I deleted important?
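For reference: as noted in the comments above, `run_mlm.py` trains a `RobertaForMaskedLM`, which has no pooler, so loading the checkpoint with the matching head produces no warning. A sketch, assuming the output directory above:

```python
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("./roberta")
model = RobertaForMaskedLM.from_pretrained("./roberta")  # matches what run_mlm.py trained
```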
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9110/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9109/comments | https://api.github.com/repos/huggingface/transformers/issues/9109/events | https://github.com/huggingface/transformers/issues/9109 | 766,872,058 | MDU6SXNzdWU3NjY4NzIwNTg= | 9,109 | Cannot disable logging from trainer module | {
"login": "alexf-a",
"id": 12577961,
"node_id": "MDQ6VXNlcjEyNTc3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/12577961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexf-a",
"html_url": "https://github.com/alexf-a",
"followers_url": "https://api.github.com/users/alexf-a/followers",
"following_url": "https://api.github.com/users/alexf-a/following{/other_user}",
"gists_url": "https://api.github.com/users/alexf-a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexf-a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexf-a/subscriptions",
"organizations_url": "https://api.github.com/users/alexf-a/orgs",
"repos_url": "https://api.github.com/users/alexf-a/repos",
"events_url": "https://api.github.com/users/alexf-a/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexf-a/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you elaborate on the script you're using. `run_ner.py` does not report `eval_class_report` or `eval_predictions` and there are no print statements in it, nor are there in `Trainer.train` method.",
"Yes, it's a custom script. The script creates an `AutoModelForTokenClassification,` passes it to the `Trainer` and calls the `Trainer.train` method.\r\n\r\nWe are using WandB to plot a confusion matrix, so we define our own `compute_metrics` function which we also pass to the `Trainer` (sorry should have stated this earlier). `compute metrics` does return a dictionary with the keys `'predictions', 'class_report' and 'target'`. It looks like one that gets output from `Trainer` has the prefix `'eval_'` in front of each key produced by `compute_metrics`. \r\n\r\nI can't find anywhere in our script or our custom dependencies where we print or log this dictionary, and when I suppress console output from `Trainer` then the problem stops. \r\n\r\n",
"Is it just a matter of changing the log level? `run_ner.py` sets it to `INFO` for the main process (it does it twice - once for the root logger and another for transformers' logger:\r\n\r\nhttps://github.com/huggingface/transformers/blob/251eb70c979d74d3823e999236ff3621b07510a1/examples/token-classification/run_ner.py#L158-L168\r\n\r\n",
"Maybe your `Trainer` ends up with a `PrinterCallback` that prints all the logs. You can remove this with\r\n```\r\nform transformers.trainer_callback import PrinterCallback\r\ntrainer.remove_callback(PrinterCallback)\r\n```",
"Fixed! I had to upgrade to 4.0. Ended up having to upgrade to import the PrinterCalback, but I believe the upgrade itself fixed the problem. ",
"Even better if the upgrade fixes the problem! There were printing statements in older versions indeed.",
"Same issue here - How do I stop all prints coming from trainer.predict() ?"
] | 1,607 | 1,683 | 1,608 | NONE | null | @sgugger @stas00
- `transformers` version: 3.2.0
- Platform:
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0, Tesla V100
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Parallel
I am using the Hugging Face Trainer class for NER fine-tuning. Whenever I turn evaluation on with the `--do_eval` argument, my console gets overwhelmed with a printed dictionary that appears to be coming from evaluation that is happening inside of the Trainer.
The dictionary has the keys:
` 'eval_loss', 'eval_accuracy_score', 'eval_precision', 'eval_recall', 'eval_f1', 'eval_class_report', 'eval_predictions' `
It's especially hard to read the console with NER predictions, because `eval_predictions` is a list with each token receiving an IOB tag.
I tried suppressing logging from transformers with this solution https://github.com/huggingface/transformers/issues/3050. I also tried disabling all logging below CRITICAL level. The problem persisted, and I noticed that the console output of the evaluation dictionary appeared to be coming from a print statement.
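For reference, the suppression attempts looked roughly like this (sketch):

```python
import logging

logging.getLogger("transformers").setLevel(logging.CRITICAL)  # per issue #3050
logging.disable(logging.ERROR)  # drop everything below CRITICAL globally
```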
I tried suppressing all print statements from the Trainer's `train(...)` method, using this solution https://stackoverflow.com/questions/977840/redirecting-fortran-called-via-f2py-output-in-python/978264#978264. That worked, but now I have no logging of training at all :(.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9108/comments | https://api.github.com/repos/huggingface/transformers/issues/9108/events | https://github.com/huggingface/transformers/issues/9108 | 766,807,174 | MDU6SXNzdWU3NjY4MDcxNzQ= | 9,108 | Time for second encoding is much higher than first time | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you show the entire code, including how you instantiate the model, as well as your environment information, as mentioned in the template? Thank you.",
"that's ok?",
"With the following code:\r\n\r\n```py\r\nfrom transformers import TFBertModel, BertTokenizer\r\nfrom time import time\r\nimport tensorflow as tf\r\n\r\nprint(\"GPU Available:\", tf.test.is_gpu_available())\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\nbert_model = TFBertModel.from_pretrained('bert-base-cased',output_hidden_states=True)\r\n\r\ninput = tokenizer(\"Hey is this slow?\" * 100 , max_length=512,padding=\"max_length\",truncation=True, return_tensors=\"tf\")\r\n\r\nfor i in range(100):\r\n start = time()\r\n outputs = bert_model(input)\r\n print(time() - start)\r\n```\r\n\r\nRunning on CPU doesn't increase the time for me:\r\n\r\n```\r\nGPU Available: False\r\n0.5066382884979248\r\n0.5038580894470215\r\n0.5125613212585449\r\n0.5018391609191895\r\n0.4927494525909424\r\n0.5066125392913818\r\n0.49803781509399414\r\n0.5140326023101807\r\n0.501518726348877\r\n0.49771928787231445\r\n0.5038976669311523\r\n```\r\n\r\n[Running on GPU, no problem either:](https://colab.research.google.com/drive/1tfynzpOiQJKkEi0vkpKaDXTwhB5-k6Td?usp=sharing)\r\n\r\n```\r\nGPU Available: True\r\n0.09349918365478516\r\n0.09653115272521973\r\n0.09893131256103516\r\n0.10591268539428711\r\n0.09297466278076172\r\n0.09105610847473145\r\n0.10088920593261719\r\n0.0935661792755127\r\n0.09639692306518555\r\n0.10130929946899414\r\n0.0947415828704834\r\n0.09380221366882324\r\n```",
"Thanks. If I do exactly your code I observe an increasing time! Very strange, but I assume that this has something to do with the gpu memory not releasing?\r\n\r\n```\r\n0.07779383659362793\r\n0.20029330253601074\r\n0.2085282802581787\r\n0.22140789031982422\r\n0.23041844367980957\r\n0.22839117050170898\r\n0.23337340354919434\r\n0.22336935997009277\r\n0.22971582412719727\r\n0.22768259048461914\r\n0.22839140892028809\r\n0.22934865951538086\r\n0.23038363456726074\r\n0.22646212577819824\r\n0.23062443733215332\r\n0.22713351249694824\r\n0.24032235145568848\r\n0.24936795234680176\r\n0.24984216690063477\r\n0.2523007392883301\r\n0.2481672763824463\r\n0.2532966136932373\r\n0.24833273887634277\r\n0.2513241767883301\r\n0.2522923946380615\r\n0.2536492347717285\r\n0.25013017654418945\r\n0.25212621688842773\r\n0.24585843086242676\r\n0.25535058975219727\r\n0.2563152313232422\r\n0.2423419952392578\r\n0.6144394874572754\r\n0.647824764251709\r\n0.6494302749633789\r\n0.6406776905059814\r\n0.6507377624511719\r\n0.6411724090576172\r\n0.6513652801513672\r\n0.6484384536743164\r\n0.6489207744598389\r\n0.6405856609344482\r\n0.6493120193481445\r\n0.6484384536743164\r\n0.6372919082641602\r\n0.6494011878967285\r\n0.6433298587799072\r\n0.65077805519104\r\n0.6475985050201416\r\n0.6383304595947266\r\n0.6525297164916992\r\n0.6413178443908691\r\n0.6475212574005127\r\n0.6485188007354736\r\n0.64430832862854\r\n0.6478779315948486\r\n0.6457436084747314\r\n0.7288320064544678\r\n0.6573460102081299\r\n0.6572368144989014\r\n0.5861053466796875\r\n0.6324939727783203\r\n0.722456693649292\r\n0.6353938579559326\r\n0.6324222087860107\r\n0.6373186111450195\r\n0.6216456890106201\r\n0.6627655029296875\r\n0.7275354862213135\r\n0.6035926342010498\r\n0.6590445041656494\r\n0.5936176776885986\r\n0.6416335105895996\r\n0.6400752067565918\r\n1.1317992210388184\r\n1.2438006401062012\r\n1.2430295944213867\r\n1.2435650825500488\r\n1.2585129737854004\r\n1.2704930305480957\r\n1.2204067707061768\r\n1.2424969673156738\r\n1.2366819381713867\r\n1.2533769607543945\r\n1.2510595321655273\r\n1.2426464557647705\r\n1.2566087245941162\r\n1.2392685413360596\r\n```\r\n\r\nIf you don't mind, my issue regarding the usage for the input of the tokenizer is still open. :)\r\nhttps://github.com/huggingface/transformers/issues/7674",
"Indeed, I'll try and check the issue ASAP. Thanks for the reminder!",
"@LysandreJik . Thank you! But, do you have any idea for my issue? It seems it is a gpu issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | Hi,
using a BERT model on a single GPU to encode multiple times, one after another, like
```
bert_model = TFBertModel.from_pretrained('bert-base-cased', output_hidden_states=True)
input = tokenizer(data, max_length=MAX_SEQ_LEN, padding="max_length", truncation=True, return_tensors="tf")
outputs1 = bert_model(input)
###time1 : 0.1 seconds
outputs2 = bert_model(input)
### time2: 1.7 seconds
```
gives a disproportionately high time for the second encoding. If the first encoding takes just 0.1 seconds, I would expect every subsequent encoding to also take about 0.1 seconds. I ran this multiple times and there seems to be a pattern: the encodings after the first one are significantly slower.
Can someone explain this behaviour? I assume it is GPU-related.
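A per-call timing loop like the one below (sketch) makes the pattern easy to see; the first call is expected to be slower because of graph tracing and warm-up, but later calls should stay flat:

```python
from time import time

for i in range(10):
    start = time()
    _ = bert_model(input)
    print(i, time() - start)
```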
```
Env: win 10, python: 3.6, tensorflow 2.3, transformers 3.3.1
GPU: Nvidia mx 150
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9108/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9107/comments | https://api.github.com/repos/huggingface/transformers/issues/9107/events | https://github.com/huggingface/transformers/pull/9107 | 766,803,125 | MDExOlB1bGxSZXF1ZXN0NTM5NzE4MzQ2 | 9,107 | Fix T5 model parallel test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,608 | 1,608 | MEMBER | null | The model was defined in the wrong model tester. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9107",
"html_url": "https://github.com/huggingface/transformers/pull/9107",
"diff_url": "https://github.com/huggingface/transformers/pull/9107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9107.patch",
"merged_at": 1608043873000
} |
https://api.github.com/repos/huggingface/transformers/issues/9106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9106/comments | https://api.github.com/repos/huggingface/transformers/issues/9106/events | https://github.com/huggingface/transformers/issues/9106 | 766,623,461 | MDU6SXNzdWU3NjY2MjM0NjE= | 9,106 | Cannot load community model on local machine | {
"login": "logancyang",
"id": 4860545,
"node_id": "MDQ6VXNlcjQ4NjA1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4860545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logancyang",
"html_url": "https://github.com/logancyang",
"followers_url": "https://api.github.com/users/logancyang/followers",
"following_url": "https://api.github.com/users/logancyang/following{/other_user}",
"gists_url": "https://api.github.com/users/logancyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logancyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logancyang/subscriptions",
"organizations_url": "https://api.github.com/users/logancyang/orgs",
"repos_url": "https://api.github.com/users/logancyang/repos",
"events_url": "https://api.github.com/users/logancyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/logancyang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I believe this is so because this model uses the new weights system, which was introduced in v3.5.1. Please upgrade your transformers version to at least v3.5.1, we recommand the latest (v4.0.1):\r\n\r\n```\r\npip install -U transformers==4.0.1\r\n```",
"@LysandreJik Thanks for the quick reply! It works now 👍 "
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Probably @LysandreJik
## Information
Model I am using: https://huggingface.co/huggingtweets/xinqisu
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior: (this is the instruction on the model page)
```
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/xinqisu')
generator("My dream is", num_return_sequences=5)
```
It gives me
```
OSError: Can't load config for 'huggingtweets/xinqisu'. Make sure that:
- 'huggingtweets/xinqisu' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'huggingtweets/xinqisu' is the correct path to a directory containing a config.json file
```
## Expected behavior
The generator should work with the snippet above. I have trained other `huggingtweets` models and they still work with the same code. For example, the following still works and downloads the model successfully:
```
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/billgates')
generator("My dream is", num_return_sequences=5)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9106/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9105/comments | https://api.github.com/repos/huggingface/transformers/issues/9105/events | https://github.com/huggingface/transformers/pull/9105 | 766,535,158 | MDExOlB1bGxSZXF1ZXN0NTM5NTE5NzEz | 9,105 | Added TF OpenAi GPT1 Sequence Classification | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure, why test cases are failing for 'run_tests_tf'. @LysandreJik . \r\nLet me know any action require from my side.",
"Can you rebase on master, a fix has been recently merged.",
"The tests have already been fixed on `master`, merging! Thanks a lot @spatil6"
] | 1,607 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR implements Sequence classification for TF OpenAi GPT1 model.
TFOpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. Transformer XL ,GPT-2) do.
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @jplu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9105",
"html_url": "https://github.com/huggingface/transformers/pull/9105",
"diff_url": "https://github.com/huggingface/transformers/pull/9105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9105.patch",
"merged_at": 1608049629000
} |
https://api.github.com/repos/huggingface/transformers/issues/9104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9104/comments | https://api.github.com/repos/huggingface/transformers/issues/9104/events | https://github.com/huggingface/transformers/issues/9104 | 766,492,439 | MDU6SXNzdWU3NjY0OTI0Mzk= | 9,104 | Cannot load custom tokenizer for Trainer | {
"login": "ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ierezell",
"html_url": "https://github.com/ierezell",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"repos_url": "https://api.github.com/users/ierezell/repos",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After playing with it more and changing files, names etc... I managed to made it work with Roberta. I guess it was a stupid name error... Sorry for taking 5mn of you time reading this. \r\n \r\nI realize it couldn't work with distilbert (bert) as the tokenizers are differents. \r\nIn the end, the model is training. \r\n\r\nMaybe it will help someone else one day. \r\nHave a good day. ",
"Glad you could resolve your issue!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.9.13-zen1-1-zen-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
Model I am using: My own
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
I want to fine-tune a model on my own dataset. For now it doesn't matter if I fine-tune BERT, DistilBERT or another model; I just want good embeddings for text similarity (cosine distance).
## To reproduce
Steps to reproduce the behavior:
1. Read the [How to train tutorial](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb)
2. Train your own tokenizer (this works perfectly) and get `model-vocab.json` and `model-merges.txt`
3. Load and encode with:
```
tokenizer = ByteLevelBPETokenizer("./models/custom/my_model-vocab.json", "./models/custom/my_model-merges.txt")
```
This works nicely!
4. Try to do the same with a DistilBertTokenizerFast to use with the `Trainer` class
```
tokenizer = DistilBertTokenizerFast.from_pretrained('./models/custom', max_len=512)
```
5. Get the error `check that './models/custom' is the correct path to a directory containing relevant tokenizer files`
Note: I also tried to add a `config.json` file next to the merges and vocab files, which seemed to be missing, but it doesn't change anything.
I also tried a RobertaTokenizerFast (and the 'not fast' version) but got the same problem (see the sketch below for what eventually fixed it).
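Per the resolution in the comments above, the fix was saving the files under the default names `RobertaTokenizerFast` looks for. A rough sketch:

```python
# tokenizers' ByteLevelBPETokenizer writes vocab.json / merges.txt, with no custom prefix
tokenizer.save_model("./models/custom")

from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("./models/custom", max_len=512)
```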
## Expected behavior
Train a custom tokenizer and be able to load it with a ModelTokenizer for the Trainer.
(The BPE tokenizer, which works, does not have the `mask_token` attribute needed to work with the dataset loader.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9104/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9103/comments | https://api.github.com/repos/huggingface/transformers/issues/9103/events | https://github.com/huggingface/transformers/issues/9103 | 766,467,004 | MDU6SXNzdWU3NjY0NjcwMDQ= | 9,103 | Seq2Seq training calculate_rouge with precision and recall | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. Please note that this script is not maintained anymore and is provided as is. We only maintain the `finetune_trainer.py` script now.",
"Ok, I will switch to that one. Thank you"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: master
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): pytorch-lightning==1.0.4
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): bart-base
The tasks I am working on is:
summarization on XSUM
## To reproduce
Call the `calculate_rouge` function in `utils.py` with `return_precision_and_recall=True`.
Fine-tune any seq2seq model with the official script `finetune.py`:
```
!python3 $finetune_script \
--model_name_or_path facebook/bart-base \
--tokenizer_name facebook/bart-base \
--data_dir $data_dir \
--learning_rate 3e-5 --label_smoothing 0.1 --num_train_epochs 2 \
--sortish_sampler --freeze_embeds --adafactor \
--task summarization \
--do_train \
--max_source_length 1024 \
--max_target_length 60 \
--val_max_target_length 60 \
--test_max_target_length 100 \
--n_train 8 --n_val 2 \
--train_batch_size 2 --eval_batch_size 2 \
--eval_beams 2 \
--val_check_interval 0.5 \
--log_every_n_steps 1 \
--logger_name wandb \
--output_dir $output_dir \
--overwrite_output_dir \
--gpus 1
```
Throws the error
```
Validation sanity check: 100%|██████████| 1/1 [00:01<00:00, 1.67s/it]Traceback (most recent call last):
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 443, in <module>
main(args)
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 418, in main
logger=logger,
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/lightning_base.py", line 389, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
self.run_sanity_check(self.get_model())
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 650, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 597, in run_evaluation
num_dataloaders=len(dataloaders)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 196, in evaluation_epoch_end
deprecated_results = self.__run_eval_epoch_end(num_dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 247, in __run_eval_epoch_end
eval_results = model.validation_epoch_end(eval_results)
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in validation_epoch_end
k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in <dictcomp>
k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
File "/usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py", line 163, in _mean
ret = ret / rcount
TypeError: unsupported operand type(s) for /: 'dict' and 'int'
```
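The `TypeError` above happens because, with `return_precision_and_recall=True`, each `x[k]` is a dict of precision/recall/fmeasure rather than a scalar. A minimal sketch of one possible fix (the helper name is illustrative, not part of the script): flatten each metric dict into scalar-valued keys before they reach `validation_epoch_end`:
```python
# Hypothetical helper: turn {"rouge1": {"precision": p, "recall": r, "fmeasure": f}, ...}
# into {"rouge1_precision": p, "rouge1_recall": r, ...} so np.array(...).mean() works.
def flatten_rouge(metrics: dict) -> dict:
    flat = {}
    for name, value in metrics.items():
        if isinstance(value, dict):
            for sub_name, sub_value in value.items():
                flat[f"{name}_{sub_name}"] = sub_value
        else:
            flat[name] = value
    return flat

assert flatten_rouge({"rouge1": {"precision": 0.5, "recall": 0.4}})["rouge1_precision"] == 0.5
```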
From my understanding, the values gathered for `self.metric_names` need to be plain scalars for the mean to work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9103/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9102/comments | https://api.github.com/repos/huggingface/transformers/issues/9102/events | https://github.com/huggingface/transformers/issues/9102 | 766,448,186 | MDU6SXNzdWU3NjY0NDgxODY= | 9,102 | Unexpected logits shape on prediction with TFRobertaForSequenceClassification | {
"login": "steindor",
"id": 3185711,
"node_id": "MDQ6VXNlcjMxODU3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3185711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steindor",
"html_url": "https://github.com/steindor",
"followers_url": "https://api.github.com/users/steindor/followers",
"following_url": "https://api.github.com/users/steindor/following{/other_user}",
"gists_url": "https://api.github.com/users/steindor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steindor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steindor/subscriptions",
"organizations_url": "https://api.github.com/users/steindor/orgs",
"repos_url": "https://api.github.com/users/steindor/repos",
"events_url": "https://api.github.com/users/steindor/events{/privacy}",
"received_events_url": "https://api.github.com/users/steindor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The main issue here is that your arrays are of shape `(seq_length)`, whereas they should be of shape `(batch_size, seq_length)`, even if the batch size is 1.\r\n\r\nUpdating your code to reflect that:\r\n\r\n```py\r\nfrom transformers import TFRobertaForSequenceClassification, RobertaConfig\r\nimport numpy as np\r\n\r\nbs = 1\r\nseq_len = 510\r\n\r\nclassifier = TFRobertaForSequenceClassification(RobertaConfig())\r\n\r\n#create random inputs for demo\r\ninput_ids = np.random.randint(0,10000, size=(bs, seq_len,))\r\nattention_mask = np.random.randint(0,2, size=(bs, seq_len,))\r\ntoken_type_ids = np.random.randint(0,2, size=(bs, seq_len,))\r\n\r\n#make a prediction with batch_size of 1\r\noutput = classifier.predict([input_ids, attention_mask, token_type_ids])\r\n\r\nprint(output.logits.shape) # -> outputs (1, 2)\r\n```\r\n\r\nHowever, there seems to be an error as the model cannot handle a sequence length of 512 when used this way. @jplu running the above code with a sequence length of 512 results in the following error:\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,510] = 512 is not in [0, 512)\r\n\t [[node tf_roberta_for_sequence_classification/roberta/embeddings/position_embeddings/embedding_lookup (defined at /home/lysandre/Workspaces/Python/transformers/src/transformers/models/roberta/modeling_tf_roberta.py:199) ]] [Op:__inference_predict_function_8030]\r\n\r\nErrors may have originated from an input operation.\r\nInput Source operations connected to node tf_roberta_for_sequence_classification/roberta/embeddings/position_embeddings/embedding_lookup:\r\n tf_roberta_for_sequence_classification/roberta/embeddings/add (defined at /home/lysandre/Workspaces/Python/transformers/src/transformers/models/roberta/modeling_tf_roberta.py:122)\r\n\r\nFunction call stack:\r\npredict_function\r\n```\r\n\r\nUsing a smaller sequence length doesn't raise the error. Do you mind weighing in on the issue?",
"Yep, you are limited to 510 tokens + 2 extra tokens (beginning + end)",
"After talking about it a bit offline with @jplu we realize there might be an issue with the `predict` method when passing in the values as a list. Could you try passing them as a dictionary instead?\r\n\r\nDoing this instead:\r\n\r\n```py\r\noutput = classifier.predict({\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"token_type_ids\": token_type_ids})\r\n```",
"Hei! Thank you for the feedback. I passed the parameters as a dict with everything else unchanged but still get the output as (seq_len, num_labels) unfortunately. ",
"Can you try this:\r\n```\r\nfrom transformers import TFRobertaForSequenceClassification, RobertaConfig\r\nimport numpy as np\r\n\r\nbs = 1\r\nseq_len = 510\r\n\r\nclassifier = TFRobertaForSequenceClassification(RobertaConfig())\r\ninput_ids = np.random.randint(0,10000, size=(bs, seq_len,))\r\nattention_mask = np.random.randint(0,2, size=(bs, seq_len,))\r\ntoken_type_ids = np.zeros(shape=(bs, seq_len,))\r\nclassifier.predict({\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"token_type_ids\": token_type_ids})\r\n```",
"Yes, this works with seq_len = 510. It might help stating this behaviour in the docs or perhaps raise an error or show a warning when one tries to input an unbatched sample. Also a bit confusing that seq_len needs to be 510 and not 512 to account for the extra tokens (and the error received when one tries with 512 is a bit murky). Anyway, thanks for the help. I'll go ahead and close this."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0a0+bf2bbd9 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Distributed
I am using TFRobertaForSequenceClassification to create a classifier. According to the documentation, the logits output should have a shape of (batch_size, num_labels), which makes sense. However, I get (seq_length, num_labels) instead (see below).
Code to reproduce:
```
from transformers import TFRobertaForSequenceClassification, RobertaConfig
import numpy as np
seq_len = 512
classifier = TFRobertaForSequenceClassification(RobertaConfig())
#create random inputs for demo
input_ids = np.random.randint(0,10000, size=(seq_len,))
attention_mask = np.random.randint(0,2, size=(seq_len,))
token_type_ids = np.random.randint(0,2, size=(seq_len,))
#make a prediction with batch_size of 1
output = classifier.predict([input_ids, attention_mask, token_type_ids])
print(output.logits.shape)  # -> prints (512, 2)
```
## Expected behavior
Logits with shape (batch_size, num_labels), i.e. (1, 2).
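Per the resolution in the comments above, a minimal sketch of a call that returns the documented shape: the inputs carry an explicit batch dimension, are passed as a dict, and stay within 510 tokens (510 plus 2 special tokens hits the 512-position limit):
```python
from transformers import TFRobertaForSequenceClassification, RobertaConfig
import numpy as np

bs, seq_len = 1, 510
classifier = TFRobertaForSequenceClassification(RobertaConfig())
inputs = {
    "input_ids": np.random.randint(0, 10000, size=(bs, seq_len)),
    "attention_mask": np.random.randint(0, 2, size=(bs, seq_len)),
    "token_type_ids": np.zeros(shape=(bs, seq_len), dtype=np.int32),
}
output = classifier.predict(inputs)
print(output.logits.shape)  # (1, 2), i.e. (batch_size, num_labels)
```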
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9102/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9101/comments | https://api.github.com/repos/huggingface/transformers/issues/9101/events | https://github.com/huggingface/transformers/pull/9101 | 766,406,402 | MDExOlB1bGxSZXF1ZXN0NTM5NDMyODA0 | 9,101 | Fix a broken link in documentation | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Fixes a broken link to the BERTology example in documentation
Fixes #9100
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9101/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9101",
"html_url": "https://github.com/huggingface/transformers/pull/9101",
"diff_url": "https://github.com/huggingface/transformers/pull/9101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9101.patch",
"merged_at": 1607955148000
} |
https://api.github.com/repos/huggingface/transformers/issues/9100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9100/comments | https://api.github.com/repos/huggingface/transformers/issues/9100/events | https://github.com/huggingface/transformers/issues/9100 | 766,403,605 | MDU6SXNzdWU3NjY0MDM2MDU= | 9,100 | Link to BERTology example is broken | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Link to BERTology example is broken in Documentation (https://huggingface.co/transformers/bertology.html)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9099/comments | https://api.github.com/repos/huggingface/transformers/issues/9099/events | https://github.com/huggingface/transformers/issues/9099 | 766,293,079 | MDU6SXNzdWU3NjYyOTMwNzk= | 9,099 | bug with _load_optimizer_and_scheduler in trainer.py | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @rabeehk ,\r\n\r\nIn `_load_optimizer_and_scheduler` if `model_path` exists and `optimizer` and `scheduler` `state_dict` is found then that means you are loading from a saved checkpoint and continue training from there, so the `lr` is read from the `scheduler` and used instead of the set LR. This is expected behaviour.",
"Hi Suraj,\nthanks for the reply. I have a couple of questions on this, 1) I see this\nis ignoring the training epochs in when loading from the saved checkpoints,\nso it does not train for the epochs set, how could I resolve it? Also, if I\nwant to change the lr, could I load from checkpoint but change the lr?could\nyou give me some information how loading from trained optimizer, could\nhelp?\n\nto explain better, I train a model for X epochs, then I want to finetune it\non other datasets with extra Y epochs with different learning rate, for\nthis I pass the updated model to trainer, but then should I pass the\nmodel_path so it loads from the saved checkpoint of optimizer? and why this\nis ignoring the set number of epochs?\nthanks\n\nOn Mon, Dec 14, 2020 at 11:07 AM Suraj Patil <[email protected]>\nwrote:\n\n> Hi @rabeehk <https://github.com/rabeehk> ,\n>\n> In _load_optimizer_and_scheduler if model_path exists and optimizer and\n> scheduler state_dict is found then that means you are loading from a\n> saved checkpoint and continue training from there, so the lr is read from\n> the scheduler and used instead of the set LR. This is expected behaviour.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9099#issuecomment-744366176>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCDDDMJGNROAD3AKZFTSUXWVJANCNFSM4U2SCXGA>\n> .\n>\n",
"If you want to fine-tune the saved checkpoint on another dataset then you could save it in diff path or remove the saved `optimizer` and `scheduler` files.\r\n\r\nAlso @sgugger might have a better answer here.",
"Hi Suraj,\r\n\r\nI am having a similar problem here.\r\nWhen the trainer continues from a checkpoint, i.e. `trainer.train(check_point_path)`, I notice a peek in the learning curve. I suspect that is related to what Rabeeh has mentioned.\r\n\r\nPlease have a look at the learning curve I got after I had to resume the training twice.\r\n\r\n\r\n\r\nAny ideas?",
"> this I pass the updated model to trainer, but then should I pass the model_path so it loads from the saved checkpoint of optimizer? and why this is ignoring the set number of epochs?\r\n\r\nPassing a `model_path` to the train method is done when you want to resume an interrupted training, which is why it does not do all epochs (it resumes the training from where you where). If you want to do a new training, you should not use that argument and manually pass the optimizer/scheduler you want to use at init.\r\n\r\n@abdullah-alnahas I have no idea what your plot is since you haven't told us how you generated it. ",
"Thanks for your response @sgugger , and sorry for not making myself clear.\r\n\r\nI am training an Electra model from scratch using the [`Trainer`] API.(https://huggingface.co/transformers/main_classes/trainer.html). I have interrupted the trainer twice, then resumed training by `trainer.train(latest_checkpoint_path)`.\r\nAfter that, I have generated the learning curve plot from `{latest_checkpoint_path}/trainer_state.json`'s `log_history` using `step` as the x axis, and `loss` as the y axis.\r\n\r\nMy question: Is it normal that the learning curve peaks after resuming the training from a checkpoint after an interruption?",
"The loss is reinitialized to 0 (it's not saved with the checkpoints) so it could come from this. There were also some recent changes in how the loss is logged so having your transformers version would help. The CI tests the final values of the weights of a (small) model are the same with a full training or resumed training, so I think this is just some weird reporting of the loss.",
"thanks Suraj and everyone, makes sense not to initialize the optimizers.",
"Hi @sgugger, \r\n\r\nI encountered the same issue on Transformers 4.3.0. I think the problem is not the loss being reinitialized as 0, but that the model is not being loaded from model_path. Only `TrainerState` is loaded but not the model weights. I looked through the code before concluding this, but as a sanity check, the current code will run even if `pytorch_model.bin` is not in the checkpoint directory, confirming that its not being loaded at all. It's odd that the CI tests are passing....\r\n\r\nAnyway I modified `trainer.py:train()`under the code block:\r\n\r\n```\r\n# Check if continuing training from a checkpoint\r\nif model_path and os.path.isfile(os.path.join(model_path, \"trainer_state.json\")):\r\n...\r\n self._globalstep_last_logged = self.state.global_step \r\n\r\n if isinstance(self.model, PreTrainedModel):\r\n model = model.from_pretrained(model_path)\r\n if not self.is_model_parallel:\r\n model = model.to(self.args.device)\r\n else:\r\n state_dict = torch.load(os.path.join(model_path, WEIGHTS_NAME))\r\n model.load_state_dict(state_dict)\r\n```\r\n\r\n`self._globalstep_last_logged = self.state.global_step` ensures the first logging of the loss is correct. `self._globalstep_last_logged` should not be 0 (that line is removed in the later part of the code)\r\n\r\nThe training is properly resumed after this. \r\n\r\n",
"`Trainer` does not handle the reloading of the model indeed, which can be confusing. So l'll add that functionality this afternoon!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
Text Generation: @patrickvonplaten
## Information
I am using `finetune_trainer.py`. What I observe is that in `trainer.py`, when `_load_optimizer_and_scheduler` is called and the `model_path` folder exists, the user's learning rate is ignored: training continues with the learning rate saved in the scheduler rather than the one the user set. Could you have a look, please? Thanks.
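Per the discussion in the comments above, a minimal sketch of the workaround for fine-tuning a saved checkpoint on a new dataset with a fresh learning rate: delete the saved optimizer/scheduler state so new ones are built (the checkpoint path is illustrative):
```python
import os

checkpoint = "output/checkpoint-1000"  # illustrative path to a saved checkpoint
for state_file in ("optimizer.pt", "scheduler.pt"):
    state_path = os.path.join(checkpoint, state_file)
    if os.path.exists(state_path):
        os.remove(state_path)  # Trainer will then create a fresh optimizer/scheduler
# Then call trainer.train() without model_path, so the learning rate you set is used.
```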
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9099/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9098/comments | https://api.github.com/repos/huggingface/transformers/issues/9098/events | https://github.com/huggingface/transformers/pull/9098 | 766,268,406 | MDExOlB1bGxSZXF1ZXN0NTM5MzQ0MzA0 | 9,098 | [RAG, Bart] Align RAG, Bart cache with T5 and other models of transformers | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | # What does this PR do?
In Transformers, the cache should always have the same structure. This becomes especially important for composite models like `RAG` and `EncoderDecoder` that expect all models to have the same cache.
Bart and T5 had different caches, with Bart deviating the most from the library's standard cache.
This PR aligns the `past_key_values` cache of Bart/Rag with all other models in the library. In general, the philosophy should be:
the `past_key_values` should have exactly one level for each layer, no matter whether the model is decoder-only (e.g., GPT2) or seq2seq (e.g., BART). This was not correctly refactored in BART (it should have been implemented 1-to-1 as in T5). No breaking changes here, though.
- `past_key_value` tuple for each layer should always be a tuple of tensors, **not** a tuple of a tuple
- for decoder-only models (GPT2), the tuple for each layer contains 2 tensors: key and value states
- for seq2seq (BART/T5), the tuple for each layer contains 4 tensors: key and value states of uni-directional self-attention, saved key and value states for cross-attention
This doesn't break any backward compatibility and should fix some RAG problems (@ratthachat). All RAG and Bart slow tests are passing, and the changes affect only the tuple structure.
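To make the described layout concrete, a minimal sketch that inspects the cache of a tiny random BART checkpoint after this change (the checkpoint name is just an example):
```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("sshleifer/bart-tiny-random")
input_ids = torch.tensor([[0, 31, 232, 2]])
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, use_cache=True)

layer_cache = outputs.past_key_values[0]  # one entry per decoder layer
print(len(outputs.past_key_values))       # number of decoder layers
print(len(layer_cache))                   # 4 tensors: self-attn k/v + cross-attn k/v
print(all(torch.is_tensor(t) for t in layer_cache))  # True: no nested tuples
```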
This PR is blocking the TFBart refactor, so it will be merged right away.
cc @LysandreJik, @sgugger, @patil-suraj for info.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9098/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9098/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9098",
"html_url": "https://github.com/huggingface/transformers/pull/9098",
"diff_url": "https://github.com/huggingface/transformers/pull/9098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9098.patch",
"merged_at": 1607945547000
} |
https://api.github.com/repos/huggingface/transformers/issues/9097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9097/comments | https://api.github.com/repos/huggingface/transformers/issues/9097/events | https://github.com/huggingface/transformers/issues/9097 | 766,190,803 | MDU6SXNzdWU3NjYxOTA4MDM= | 9,097 | Is the LayoutLM working now? | {
"login": "shaonanqinghuaizongshishi",
"id": 75976629,
"node_id": "MDQ6VXNlcjc1OTc2NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75976629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaonanqinghuaizongshishi",
"html_url": "https://github.com/shaonanqinghuaizongshishi",
"followers_url": "https://api.github.com/users/shaonanqinghuaizongshishi/followers",
"following_url": "https://api.github.com/users/shaonanqinghuaizongshishi/following{/other_user}",
"gists_url": "https://api.github.com/users/shaonanqinghuaizongshishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaonanqinghuaizongshishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaonanqinghuaizongshishi/subscriptions",
"organizations_url": "https://api.github.com/users/shaonanqinghuaizongshishi/orgs",
"repos_url": "https://api.github.com/users/shaonanqinghuaizongshishi/repos",
"events_url": "https://api.github.com/users/shaonanqinghuaizongshishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaonanqinghuaizongshishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @shaonanqinghuaizongshishi \r\n\r\nCould you please post the code snippet, stack trace and your env info so that we can take a look ?",
"I am working on:\r\nubuntu 16.04\r\ntorch 1.5.0\r\ntransformers 3.4.0\r\n\r\n\r\n```\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\ntokenizer = LayoutLMTokenizer.from_pretrained(model_path)\r\nmodel = LayoutLMForTokenClassification.from_pretrained(model_path, num_labels=config.num_labels).to(device)\r\noutputs = model(b_input_ids, bbox=b_boxes, token_type_ids=None,\r\n attention_mask=b_input_mask, labels=b_labels)\r\n```\r\n\r\n\r\nThen I run CUDA_LAUNCH_BLOCKING=1 python layoutLM.py, and got the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"layoutLM.py\", line 275, in <module>\r\n train(train_dataloader, validation_dataloader)\r\n File \"layoutLM.py\", line 162, in train\r\n attention_mask=b_input_mask, labels=b_labels)\r\n File \"/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py\", line 864, in forward\r\n return_dict=return_dict,\r\n File \"/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py\", line 701, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py\", line 118, in forward\r\n + token_type_embeddings\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\n```\r\n\r\n",
"Hi there!\r\n\r\nI have been investigating the model by making [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318), and turns out it outputs the same tensors as the original repository on the same input data, so there are no issues (tested this both for the base model - `LayoutLMModel` as well as the models with heads on top - `LayoutLMForTokenClassification` and `LayoutLMForSequenceClassification`).\r\n\r\nHowever, the model is poorly documented in my opinion, I needed to first look at the original repository to understand everything. I made a demo notebook that showcases how to fine-tune HuggingFace's `LayoutLMForTokenClassification` on the FUNSD dataset (a sequence labeling task): https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb\r\n\r\nLet me know if this helps you!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | Getting endless errors when trying to use `LayoutLMForTokenClassification` from transformers for an NER task. Is it just me doing something wrong, or is the class still a work in progress?
I would really appreciate it if anyone could share some information. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9097/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9097/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9096/comments | https://api.github.com/repos/huggingface/transformers/issues/9096/events | https://github.com/huggingface/transformers/pull/9096 | 766,066,751 | MDExOlB1bGxSZXF1ZXN0NTM5MjMwOTMx | 9,096 | Fix variable name in TrainingArguments docstring | {
"login": "navjotts",
"id": 8072161,
"node_id": "MDQ6VXNlcjgwNzIxNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8072161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navjotts",
"html_url": "https://github.com/navjotts",
"followers_url": "https://api.github.com/users/navjotts/followers",
"following_url": "https://api.github.com/users/navjotts/following{/other_user}",
"gists_url": "https://api.github.com/users/navjotts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navjotts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navjotts/subscriptions",
"organizations_url": "https://api.github.com/users/navjotts/orgs",
"repos_url": "https://api.github.com/users/navjotts/repos",
"events_url": "https://api.github.com/users/navjotts/events{/privacy}",
"received_events_url": "https://api.github.com/users/navjotts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Corrects a var name in the docstring for `TrainingArguments` (there is no `ignore_skip_data`)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9096/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9096",
"html_url": "https://github.com/huggingface/transformers/pull/9096",
"diff_url": "https://github.com/huggingface/transformers/pull/9096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9096.patch",
"merged_at": 1607954575000
} |
https://api.github.com/repos/huggingface/transformers/issues/9095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9095/comments | https://api.github.com/repos/huggingface/transformers/issues/9095/events | https://github.com/huggingface/transformers/issues/9095 | 766,022,721 | MDU6SXNzdWU3NjYwMjI3MjE= | 9,095 | [TorchScript] Received several warning during Summarization model conversion | {
"login": "lanking520",
"id": 11890922,
"node_id": "MDQ6VXNlcjExODkwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/11890922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanking520",
"html_url": "https://github.com/lanking520",
"followers_url": "https://api.github.com/users/lanking520/followers",
"following_url": "https://api.github.com/users/lanking520/following{/other_user}",
"gists_url": "https://api.github.com/users/lanking520/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanking520/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanking520/subscriptions",
"organizations_url": "https://api.github.com/users/lanking520/orgs",
"repos_url": "https://api.github.com/users/lanking520/repos",
"events_url": "https://api.github.com/users/lanking520/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanking520/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have you tried removing the `strict=False`, and instead specify `return_dict=False` when you initialize the model with `from_pretrained`? Can you let me know if this fixes your issue?",
"> Have you tried removing the `strict=False`, and instead specify `return_dict=False` when you initialize the model with `from_pretrained`? Can you let me know if this fixes your issue?\r\n\r\nThanks. It seemed the error message is gone. However, I still receive the warning messages. Is there anyway I can modify the script and make it work without warning?",
"Usually these do not impact the result, as they are python values that do not change over time. Have you seen an error in prediction?",
"@LysandreJik Sounds good. Haven't seen anything wrong yet. :)",
"I got this error too, when converting .safetensors to TorchScript. It's a model called ced( https://github.com/jimbozhang/hf_transformers_custom_model_ced.git )\r\n\r\n```\r\n device = torch.device(\"cpu\")\r\n example_input = torch.rand(1, 64, 301).to(device)\r\n model.eval() \r\n traced_script_module = torch.jit.trace(model, example_input)\r\n traced_script_module.save(\"ced_tiny_trace_torch_script.pt\")\r\n print('TorchScript trace saved')\r\n```\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/luis/PycharmProjects/hf_transformers_custom_model_ced/convert_pytorch_mobile.py\", line 72, in <module>\r\n convert2(model)\r\n File \"/Users/luis/PycharmProjects/hf_transformers_custom_model_ced/convert_pytorch_mobile.py\", line 44, in convert2\r\n traced_script_module = torch.jit.trace(model, example_input)\r\n File \"/Users/luis/PycharmProjects/hf_transformers_custom_model_ced/venv/lib/python3.9/site-packages/torch/jit/_trace.py\", line 794, in trace\r\n return trace_module(\r\n File \"/Users/luis/PycharmProjects/hf_transformers_custom_model_ced/venv/lib/python3.9/site-packages/torch/jit/_trace.py\", line 1056, in trace_module\r\n module._c._create_method_from_trace(\r\nRuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.\r\n\r\n```"
] | 1,607 | 1,705 | 1,608 | NONE | null | ## Environment info
Using Transformers 4.0.1 and PyTorch 1.6.0.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-6-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-6-6")
# model = BartModel.from_pretrained("sshleifer/bart-tiny-random")
input_ids = decoder_input_ids = torch.tensor([19 * [1] + [model.config.eos_token_id]])
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids), strict=False)
traced_model.save("distilbart.pt")
```
I have to disable strict checking in order for the trace to pass. (Error message without disabling the strict flag):
```
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
```
Here are the warning messages:
```
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:232: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not padding_mask.any():
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:175: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if decoder_padding_mask is not None and decoder_padding_mask.shape[1] > 1:
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:716: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert key_padding_mask is None or key_padding_mask.shape == (bsz, src_len)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:718: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_weights.size() == (bsz * self.num_heads, tgt_len, src_len)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:736: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:287: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if torch.isinf(x).any() or torch.isnan(x).any():
```
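As an aside, the workaround suggested in the comments above is to initialize the model with `return_dict=False` instead of passing `strict=False` to the tracer; a minimal sketch (the tracer warnings above still appear, but the dict-output error goes away):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "sshleifer/distilbart-cnn-6-6", return_dict=False  # tuple outputs trace cleanly
)
input_ids = decoder_input_ids = torch.tensor([19 * [1] + [model.config.eos_token_id]])
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids))
traced_model.save("distilbart.pt")
```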
If these warnings are accurate, then the model I traced is tightly tied to the dummy input I provided, which would give inaccurate inference results... Any thoughts on how to improve it? @sshleifer Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9095/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9095/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9094/comments | https://api.github.com/repos/huggingface/transformers/issues/9094/events | https://github.com/huggingface/transformers/issues/9094 | 766,005,039 | MDU6SXNzdWU3NjYwMDUwMzk= | 9,094 | head mask issue transformers==3.5.1 | {
"login": "jingyonglin",
"id": 14811163,
"node_id": "MDQ6VXNlcjE0ODExMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/14811163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jingyonglin",
"html_url": "https://github.com/jingyonglin",
"followers_url": "https://api.github.com/users/jingyonglin/followers",
"following_url": "https://api.github.com/users/jingyonglin/following{/other_user}",
"gists_url": "https://api.github.com/users/jingyonglin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jingyonglin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jingyonglin/subscriptions",
"organizations_url": "https://api.github.com/users/jingyonglin/orgs",
"repos_url": "https://api.github.com/users/jingyonglin/repos",
"events_url": "https://api.github.com/users/jingyonglin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jingyonglin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: windows & linux
- Python version: python 3.7
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes both CPU and GPU
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Hi
I am using tiny ALBERT Chinese as an encoder, and I've also tried to use the ALBERT transformer directly in my code.
The thing is, I have to change the source code a little to avoid some head-mask issues.
See `transformers/modeling_albert.py`,
line 387: layer_output = albert_layer(hidden_states, attention_mask, **head_mask[layer_index]**, output_attentions)
However, a few lines above, the default head_mask is None, so **TypeError: 'NoneType' object is not subscriptable** is raised.
It's not a deep bug and can easily be avoided by passing a `torch.ones` head mask. I just want to bring it up in case it helps others who encounter the same problem.
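A minimal sketch of the workaround described above: pass an explicit all-ones head mask so the per-layer indexing never hits `None` (the checkpoint name is illustrative):
```python
import torch
from transformers import AlbertModel

model = AlbertModel.from_pretrained("voidful/albert_chinese_tiny")  # illustrative checkpoint
input_ids = torch.tensor([[101, 2769, 102]])
# One mask row per layer, one entry per head; all ones keeps every head active.
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
outputs = model(input_ids, head_mask=head_mask)
```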
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9093/comments | https://api.github.com/repos/huggingface/transformers/issues/9093/events | https://github.com/huggingface/transformers/issues/9093 | 766,003,906 | MDU6SXNzdWU3NjYwMDM5MDY= | 9,093 | Not able to load T5 tokenizer | {
"login": "adithyaan-creator",
"id": 54103522,
"node_id": "MDQ6VXNlcjU0MTAzNTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/54103522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adithyaan-creator",
"html_url": "https://github.com/adithyaan-creator",
"followers_url": "https://api.github.com/users/adithyaan-creator/followers",
"following_url": "https://api.github.com/users/adithyaan-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/adithyaan-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adithyaan-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adithyaan-creator/subscriptions",
"organizations_url": "https://api.github.com/users/adithyaan-creator/orgs",
"repos_url": "https://api.github.com/users/adithyaan-creator/repos",
"events_url": "https://api.github.com/users/adithyaan-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/adithyaan-creator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @adithyaan-creator, \r\n\r\nas the error message says you need to install the sentence piece library :-) \r\n\r\nIf you run:\r\n\r\n```\r\npip install sentencepiece==0.1.91\r\n```\r\n\r\nbefore, it should work.",
"Thanks @patrickvonplaten . ",
"Hi @patrickvonplaten \r\nI installed sentencepiece, but it still doesnt seem to be working for me. Please see the snapshot below. Please help.\r\n\r\n\r\n",
"@DesiKeki try sentencepiece version 0.1.94.",
"Thanks @adithyaan-creator , it worked!",
"Hello @patrickvonplaten \r\nI have gone through the issue and the suggestions given above. However, I am facing the same issue and for some reason, none of the above solutions are proving fruitful.\r\n\r\nThe issue I am facing is exactly the same as the one stated above:\r\n`from transformers import T5Tokenizer,T5ForConditionalGeneration,Adafactor`\r\n`!pip install sentencepiece==0.1.91`\r\n`tokenizer = T5Tokenizer.from_pretrained(\"t5-base\")`\r\n`print(tokenizer)`\r\n\r\nThe output of the above code is: None. \r\nI tried using other versions of sentencepiece as well (as the one suggested above 0.1.94 and others as well). But it is still not working.\r\n\r\n\r\n",
"Did you restart your kernel after installing `sentencepiece`? See conversation in https://github.com/huggingface/transformers/issues/10797",
"> Did you restart your kernel after installing `sentencepiece`? See conversation in #10797\r\n\r\nit works for me, thank you",
"> Did you restart your kernel after installing `sentencepiece`? See conversation in #10797\r\n\r\nIt works for me. Thanks a lot."
] | 1,607 | 1,689 | 1,607 | NONE | null | Transformers==4.0.0
torch == 1.7.0+cu101
tensorflow == 2.3.0
Platform = Colab notebook
@julien-c @patrickvonplaten
Not able to load T5 tokenizer using
`tokenizer = T5Tokenizer.from_pretrained('t5-base')`
Getting error -

I am able to download the pre-trained model though. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9092/comments | https://api.github.com/repos/huggingface/transformers/issues/9092/events | https://github.com/huggingface/transformers/pull/9092 | 765,923,369 | MDExOlB1bGxSZXF1ZXN0NTM5MTQ1MzM2 | 9,092 | Patch *ForCausalLM model with TF resize_token_embeddings | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | cc @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9092",
"html_url": "https://github.com/huggingface/transformers/pull/9092",
"diff_url": "https://github.com/huggingface/transformers/pull/9092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9092.patch",
"merged_at": 1607924396000
} |
https://api.github.com/repos/huggingface/transformers/issues/9091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9091/comments | https://api.github.com/repos/huggingface/transformers/issues/9091/events | https://github.com/huggingface/transformers/issues/9091 | 765,804,168 | MDU6SXNzdWU3NjU4MDQxNjg= | 9,091 | Chinese | {
"login": "Cheng-Lily",
"id": 45684630,
"node_id": "MDQ6VXNlcjQ1Njg0NjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/45684630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cheng-Lily",
"html_url": "https://github.com/Cheng-Lily",
"followers_url": "https://api.github.com/users/Cheng-Lily/followers",
"following_url": "https://api.github.com/users/Cheng-Lily/following{/other_user}",
"gists_url": "https://api.github.com/users/Cheng-Lily/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cheng-Lily/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cheng-Lily/subscriptions",
"organizations_url": "https://api.github.com/users/Cheng-Lily/orgs",
"repos_url": "https://api.github.com/users/Cheng-Lily/repos",
"events_url": "https://api.github.com/users/Cheng-Lily/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cheng-Lily/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9090/comments | https://api.github.com/repos/huggingface/transformers/issues/9090/events | https://github.com/huggingface/transformers/issues/9090 | 765,786,990 | MDU6SXNzdWU3NjU3ODY5OTA= | 9,090 | run_clm example gives `CUDA out of memory. Tried to allocate` error | {
"login": "massanishi",
"id": 4588926,
"node_id": "MDQ6VXNlcjQ1ODg5MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4588926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/massanishi",
"html_url": "https://github.com/massanishi",
"followers_url": "https://api.github.com/users/massanishi/followers",
"following_url": "https://api.github.com/users/massanishi/following{/other_user}",
"gists_url": "https://api.github.com/users/massanishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/massanishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massanishi/subscriptions",
"organizations_url": "https://api.github.com/users/massanishi/orgs",
"repos_url": "https://api.github.com/users/massanishi/repos",
"events_url": "https://api.github.com/users/massanishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/massanishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should try to reduce the batch size. This will reduce the memory usage.",
"Yup. What @LysandreJik said is correct. Use the following:\r\n`--per_device_train_batch_size x \\`\r\n`--per_device_eval_batch_size x \\`\r\nReplace x with your preferred batch size, I would recommend the highest power of 2 your GPU memory allows.",
"It worked! With the Colab's GPU memory size of 12.72GB, the batch size worked at:\r\n\r\n`--per_device_train_batch_size 2 \\`\r\n`--per_device_eval_batch_size 16 \\`\r\n\r\nThanks for the quick response guys."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
Google Colab with GPU runtime.
- Python version: 3.6.9
## Information
I'm trying to run the GPT2 training example from `https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py`.
The problem arises when using:
- Run CLM language modeling example.
## To reproduce
Steps to reproduce the behavior:
1. Open Google Colab with GPU on
2. Run
```
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
%cd examples
%cd language-modeling
!pip install -r requirements.txt
```
```
!python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir test
```
Log:
```
[INFO|trainer.py:668] 2020-12-14 02:09:02,049 >> ***** Running training *****
[INFO|trainer.py:669] 2020-12-14 02:09:02,049 >> Num examples = 2318
[INFO|trainer.py:670] 2020-12-14 02:09:02,049 >> Num Epochs = 3
[INFO|trainer.py:671] 2020-12-14 02:09:02,049 >> Instantaneous batch size per device = 8
[INFO|trainer.py:672] 2020-12-14 02:09:02,049 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:673] 2020-12-14 02:09:02,049 >> Gradient Accumulation steps = 1
[INFO|trainer.py:674] 2020-12-14 02:09:02,049 >> Total optimization steps = 870
0% 0/870 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_clm.py", line 357, in <module>
main()
File "run_clm.py", line 327, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 767, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1096, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1120, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 740, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 295, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 239, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 166, in _attn
w = torch.matmul(q, k)
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.90 GiB total capacity; 14.75 GiB already allocated; 185.88 MiB free; 14.81 GiB reserved in total by PyTorch)
0% 0/870 [00:00<?, ?it/s]
```
## Expected behavior
Outputs the model in the output_dir with no memory error.
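Per the resolution in the comments, the run fits within Colab's ~12.7GB GPU once explicit batch-size flags are added — a sketch combining the original command with those values:
```
!python run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 16 \
    --output_dir test
```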
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9089/comments | https://api.github.com/repos/huggingface/transformers/issues/9089/events | https://github.com/huggingface/transformers/pull/9089 | 765,771,646 | MDExOlB1bGxSZXF1ZXN0NTM5MDc0MTg0 | 9,089 | Fix a bug in eval_batch_retrieval of eval_rag.py | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"html_url": "https://github.com/yoshitomo-matsubara",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq - feel free to merge if you're ok with the PR"
] | 1,607 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Following the instructions in [RAG example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#retrieval-evaluation), I was trying to evaluate retrieval against DPR evaluation data.
`pipenv run python eval_rag.py --model_name_or_path facebook/rag-sequence-nq --model_type rag_sequence --evaluation_set output/biencoder-nq-dev.questions --gold_data_path output/biencoder-nq-dev.pages --predictions_path output/retrieval_preds.tsv --eval_mode retrieval --k 1`
With the above command, I faced the following error and confirmed that `question_enc_outputs` is a tuple whose length is 1.
```
...
loading weights file https://huggingface.co/facebook/rag-sequence-nq/resolve/main/pytorch_model.bin from cache at /home/ubuntu/.cache/huggingface/transformers/9456ce4ba210322153f704e0f26c6228bd6c0caad60fe1b3bdca001558adbeca.ee816b8e716f9741a2ac602bb9c6f4d84eff545b0b00a6c5353241bea6dec221
All model checkpoint weights were used when initializing RagSequenceForGeneration.
All the weights of RagSequenceForGeneration were initialized from the model checkpoint at facebook/rag-sequence-nq.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RagSequenceForGeneration for predictions without further training.
initializing retrieval
Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4
7it [00:00, 212.77it/s]
Traceback (most recent call last):
File "eval_rag.py", line 315, in <module>
main(args)
File "eval_rag.py", line 301, in main
answers = evaluate_batch_fn(args, model, questions)
File "eval_rag.py", line 99, in evaluate_batch_retrieval
question_enc_pool_output = question_enc_outputs.pooler_output
AttributeError: 'tuple' object has no attribute 'pooler_output'
```
With this simple change (`question_enc_outputs.pooler_output` -> `question_enc_outputs[0]`), I was able to run the evaluation code and confirmed
`INFO:__main__:Precision@1: 70.74`
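In code, the fix is a one-line change in `evaluate_batch_retrieval` (a sketch — only this assignment changes):
```python
# before: fails, since question_enc_outputs is a plain tuple here
# question_enc_pool_output = question_enc_outputs.pooler_output  # AttributeError

# after: take the pooled output as the first element of the tuple
question_enc_pool_output = question_enc_outputs[0]
```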
## Environments
- Ubuntu 18.04 LTS
- Python 3.7.7
- transformers 4.0.1
- torch: 1.7.1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
@ola13 (confirmed by `git blame`) @patrickvonplaten @lhoestq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9089",
"html_url": "https://github.com/huggingface/transformers/pull/9089",
"diff_url": "https://github.com/huggingface/transformers/pull/9089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9089.patch",
"merged_at": 1608040016000
} |
https://api.github.com/repos/huggingface/transformers/issues/9088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9088/comments | https://api.github.com/repos/huggingface/transformers/issues/9088/events | https://github.com/huggingface/transformers/issues/9088 | 765,630,751 | MDU6SXNzdWU3NjU2MzA3NTE= | 9,088 | run_clm.py Early stopping with ^C | {
"login": "Clickative",
"id": 35162301,
"node_id": "MDQ6VXNlcjM1MTYyMzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/35162301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Clickative",
"html_url": "https://github.com/Clickative",
"followers_url": "https://api.github.com/users/Clickative/followers",
"following_url": "https://api.github.com/users/Clickative/following{/other_user}",
"gists_url": "https://api.github.com/users/Clickative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Clickative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Clickative/subscriptions",
"organizations_url": "https://api.github.com/users/Clickative/orgs",
"repos_url": "https://api.github.com/users/Clickative/repos",
"events_url": "https://api.github.com/users/Clickative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Clickative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"^C means you have hit Ctrl + C on your machine and stops the command running. You should re-run the command without hitting Ctrl + C.",
"Yup I am aware that ^C is a halt command. I am running this on colab and I have tried to run this 5-7 times now, not hitting Ctrl+C once. For some reason it appears itself and halts the execution. ",
"There might be something in colab that aborts bash command after some time then, or it happens when the session disconnects. But there is absolutely nothing in the script that triggers a cancel like this, so there is nothing we can do to fix this.\r\n\r\nNote that the scripts are not meant to be run on Colab, we have [notebook versions](https://github.com/huggingface/notebooks/tree/master/examples) of them for that.",
"I think I have figured out the issue. This is happening because the dataset is large and when the full thing is loaded, colab crashes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,618 | 1,618 | NONE | null | - `transformers` version: 4.0.1
- Platform: Colab
- Python version: 3.6.9
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using: GPT2
The problem arises when using:
`run_clm.py`
## To reproduce
```
!python ./transformers/examples/language-modeling/run_clm.py \
--model_name_or_path ./GPT2_PRETRAINED_LOCAL \
--dataset_name bookcorpusopen \
--dataset_config_name plain_text \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--block_size 128 \
--gradient_accumulation_steps 1 \
--overwrite_output_dir \
--do_train \
--do_eval \
--num_train_epochs 20 \
--save_steps 50000 \
--save_total_limit 1 \
--output_dir ./GPT2-trained-save
```
Output:
```
2020-12-13 20:02:51.391764: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
12/13/2020 20:02:53 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
12/13/2020 20:02:53 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./RPT-trained-save', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=20.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec13_20-02-53_7d34d2e22bee', logging_first_step=False, logging_steps=500, save_steps=50000, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./RPT-trained-save', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
Downloading: 4.00kB [00:00, 2.73MB/s]
Downloading: 2.10kB [00:00, 1.47MB/s]
Downloading and preparing dataset book_corpus_open/plain_text (download: 2.24 GiB, generated: 6.19 GiB, post-processed: Unknown size, total: 8.43 GiB) to /root/.cache/huggingface/datasets/book_corpus_open/plain_text/1.0.0/5cc3e4620a202388e77500f913b37532be8b036287436f3365e066671a1bd97e...
Downloading: 100% 2.40G/2.40G [02:41<00:00, 14.9MB/s]
9990 examples [01:04, 149.50 examples/s]^C
```
The ^C automatically appears and the script stops.
## Expected behavior
The training process takes place as normal.
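Per the diagnosis in the comments (loading the full `bookcorpusopen` dataset exhausts Colab's RAM), a quick sanity check is to rerun the same command against a much smaller dataset — a sketch:
```
!python ./transformers/examples/language-modeling/run_clm.py \
    --model_name_or_path ./GPT2_PRETRAINED_LOCAL \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir ./GPT2-trained-save
```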
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9087/comments | https://api.github.com/repos/huggingface/transformers/issues/9087/events | https://github.com/huggingface/transformers/issues/9087 | 765,588,724 | MDU6SXNzdWU3NjU1ODg3MjQ= | 9,087 | BertForSequenceClassification finetune training loss and accuracy have some problem | {
"login": "good74152",
"id": 39672039,
"node_id": "MDQ6VXNlcjM5NjcyMDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/39672039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/good74152",
"html_url": "https://github.com/good74152",
"followers_url": "https://api.github.com/users/good74152/followers",
"following_url": "https://api.github.com/users/good74152/following{/other_user}",
"gists_url": "https://api.github.com/users/good74152/gists{/gist_id}",
"starred_url": "https://api.github.com/users/good74152/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/good74152/subscriptions",
"organizations_url": "https://api.github.com/users/good74152/orgs",
"repos_url": "https://api.github.com/users/good74152/repos",
"events_url": "https://api.github.com/users/good74152/events{/privacy}",
"received_events_url": "https://api.github.com/users/good74152/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests, rather than help with training.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v4.0.0
- Platform: colab pro
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
@sgugger
@JetRunner
## Information
I followed the paper https://arxiv.org/pdf/2003.02245.pdf to do augmentation and test the performance.
Model I am using: Bert, via BertTokenizer, BertForMaskedLM, and BertForSequenceClassification.
The problem arises when using:
Using Trainer to fine-tune on both the training set and the concatenated set of training and augmentation data, the training loss log shows "No log" or 0.683592, and accuracy is always 0.8.
The tasks I am working on is:
An official GLUE task: sst2, loaded via the huggingface datasets package.
The details:
For the Trainer setting I followed examples/text_classification.ipynb to build the compute_metrics function and the tokenize mapping function, but the training loss and accuracy look buggy.
my tokenized datasets format:

compute_function, little modify by examples/text_classification.ipynb

bert_finetuned_setting

fine_tuned result

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9087/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9086/comments | https://api.github.com/repos/huggingface/transformers/issues/9086/events | https://github.com/huggingface/transformers/issues/9086 | 765,557,582 | MDU6SXNzdWU3NjU1NTc1ODI= | 9,086 | Getting a 404 error when loading TFXLMRobertaModel from 'xlm-roberta-large' | {
"login": "vinaygeorgeroy",
"id": 23115208,
"node_id": "MDQ6VXNlcjIzMTE1MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/23115208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinaygeorgeroy",
"html_url": "https://github.com/vinaygeorgeroy",
"followers_url": "https://api.github.com/users/vinaygeorgeroy/followers",
"following_url": "https://api.github.com/users/vinaygeorgeroy/following{/other_user}",
"gists_url": "https://api.github.com/users/vinaygeorgeroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinaygeorgeroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinaygeorgeroy/subscriptions",
"organizations_url": "https://api.github.com/users/vinaygeorgeroy/orgs",
"repos_url": "https://api.github.com/users/vinaygeorgeroy/repos",
"events_url": "https://api.github.com/users/vinaygeorgeroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinaygeorgeroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try with the flag `from_pt=True` when using `from_pretrained`? ",
"That worked, thanks"
] | 1,607 | 1,607 | 1,607 | NONE | null | Getting a 404 when trying to load the model.
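Per the comments, the workaround is to load the PyTorch weights into the TF class by passing `from_pt=True` — a minimal sketch:
```python
from transformers import TFXLMRobertaModel

# from_pt=True converts the available PyTorch checkpoint on the fly
model = TFXLMRobertaModel.from_pretrained('xlm-roberta-large', from_pt=True)
```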
Manually checked the https://huggingface.co repository for xlm-roberta-large and was only able to find the PyTorch models. Why aren't the TF models available for this, and if they're not, why is it not explicitly mentioned in the documentation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9086/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9085/comments | https://api.github.com/repos/huggingface/transformers/issues/9085/events | https://github.com/huggingface/transformers/issues/9085 | 765,463,737 | MDU6SXNzdWU3NjU0NjM3Mzc= | 9,085 | Adding to docs how to train CTRL Model with control codes. | {
"login": "ludoro",
"id": 21291898,
"node_id": "MDQ6VXNlcjIxMjkxODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/21291898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ludoro",
"html_url": "https://github.com/ludoro",
"followers_url": "https://api.github.com/users/ludoro/followers",
"following_url": "https://api.github.com/users/ludoro/following{/other_user}",
"gists_url": "https://api.github.com/users/ludoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ludoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ludoro/subscriptions",
"organizations_url": "https://api.github.com/users/ludoro/orgs",
"repos_url": "https://api.github.com/users/ludoro/repos",
"events_url": "https://api.github.com/users/ludoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/ludoro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Did anyone figure out how to do this?"
] | 1,607 | 1,651 | 1,619 | NONE | null | # 🚀 Feature request
At the moment there is no explanation in the docs of how to train a CTRL model with user-defined `control codes`.
## Motivation
At the moment there is no explanation in the docs of how to train a CTRL model with user-defined `control codes`. I think it should be added because control codes are an important part of the CTRL model.
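A minimal sketch of what such training could look like (assuming control codes behave as plain prefix tokens, as in the CTRL paper — this is not an official recipe):
```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained('ctrl')
model = CTRLLMHeadModel.from_pretrained('ctrl')

# 'Links' is a control code from the CTRL paper; a user-defined code
# would first need to be added to the tokenizer's vocabulary
text = 'Links My training sentence.'
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs, labels=inputs['input_ids'])
outputs.loss.backward()  # then step an optimizer as usual
```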
## Your contribution
I am currently struggling to come up with ideas for how to do that using the transformers interface, but I'd love to open a PR once I understand how to do it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9085/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9084/comments | https://api.github.com/repos/huggingface/transformers/issues/9084/events | https://github.com/huggingface/transformers/issues/9084 | 765,347,063 | MDU6SXNzdWU3NjUzNDcwNjM= | 9,084 | Problem with Token Classification models | {
"login": "Joerg99",
"id": 27426431,
"node_id": "MDQ6VXNlcjI3NDI2NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/27426431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joerg99",
"html_url": "https://github.com/Joerg99",
"followers_url": "https://api.github.com/users/Joerg99/followers",
"following_url": "https://api.github.com/users/Joerg99/following{/other_user}",
"gists_url": "https://api.github.com/users/Joerg99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joerg99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joerg99/subscriptions",
"organizations_url": "https://api.github.com/users/Joerg99/orgs",
"repos_url": "https://api.github.com/users/Joerg99/repos",
"events_url": "https://api.github.com/users/Joerg99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joerg99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests, rather than help with training-related issues.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll have better answers over there.\r\n\r\nThanks!"
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Win10
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3 CPU
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
examples/token-classification: @stefan-it
## Information
I followed this tutorial https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities for token classification, but the results were really bad. So I changed the dataset to conll2003 and simplified the data a little (removed sentences without entities, kept only sentences of a certain length), as I saw that the model should perform well on this data. Unfortunately the results are still bad; for example, after epoch two with the bert model set trainable=True:
Conf mat: (rows are prediction, columns are the labels)
[[ 1 3 21 10 5 10 1 16 172]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 1 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 2 7 7 13 7 6 0 9 162]]
Classification report
(' precision recall f1-score support\n'
'\n'
' 0 0.33 0.00 0.01 239\n'
' 1 0.00 0.00 0.00 0\n'
' 2 0.00 0.00 0.00 0\n'
' 3 0.00 0.00 0.00 1\n'
' 4 0.00 0.00 0.00 0\n'
' 5 0.00 0.00 0.00 0\n'
' 6 0.00 0.00 0.00 0\n'
' 7 0.00 0.00 0.00 0\n'
' 8 0.49 0.76 0.59 213\n'
'\n'
' accuracy 0.36 453\n'
' macro avg 0.09 0.08 0.07 453\n'
'weighted avg 0.40 0.36 0.28 453\n')
I tried a lot of things and checked pre-processing and post-processing multiple times and can't find a bug in there.
The model is close to the tutorial (in the tutorial it's a DistilBert model, but as it performed in the same manner I changed to its bigger brother), yet it seems like it's not learning at all, even though it should perform well on conll data and this model has shown good results in other tutorials (for example: https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here's the code I use (not the complete preprocessing):
```
import numpy as np
import tensorflow as tf
from pprint import pprint
from sklearn.metrics import classification_report, confusion_matrix
from transformers import BertTokenizerFast, TFBertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
train_dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels))
val_dataset = tf.data.Dataset.from_tensor_slices((dict(val_encodings), val_labels))

model = TFBertForTokenClassification.from_pretrained('bert-base-cased', num_labels=len(unique_tags))  # unique_tags are inferred from the training data
model.layers[0].trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss)  # or tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

for epoch in range(8):
    model.fit(train_dataset.shuffle(64).batch(16), batch_size=16, verbose=1, epochs=10)
    predictions = model.predict(val_dataset)
    # some post-processing: predictions holds the logits; for the metrics I only
    # consider the tags that are not -100 (which are supposed to be ignored).
    good_indexes = [i for i, l in enumerate(val_labels) if l != -100]
    list_preds = []
    for logi in predictions['logits']:
        list_preds.append(np.argmax(logi))  # argmax over the flattened logits of one example
    pred_post = [list_preds[j] for j in good_indexes]
    # label_post: the gold labels at good_indexes (its construction is omitted here)
    print(confusion_matrix(pred_post, label_post))
    report = classification_report(pred_post, label_post)
    pprint(report)
```
## Expected behavior
Better performance i.e. reasonable F1 Scores (for example: https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9083/comments | https://api.github.com/repos/huggingface/transformers/issues/9083/events | https://github.com/huggingface/transformers/issues/9083 | 764,430,548 | MDU6SXNzdWU3NjQ0MzA1NDg= | 9,083 | Image rendering not working in example notebook | {
"login": "darigovresearch",
"id": 30328618,
"node_id": "MDQ6VXNlcjMwMzI4NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/30328618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darigovresearch",
"html_url": "https://github.com/darigovresearch",
"followers_url": "https://api.github.com/users/darigovresearch/followers",
"following_url": "https://api.github.com/users/darigovresearch/following{/other_user}",
"gists_url": "https://api.github.com/users/darigovresearch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darigovresearch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darigovresearch/subscriptions",
"organizations_url": "https://api.github.com/users/darigovresearch/orgs",
"repos_url": "https://api.github.com/users/darigovresearch/repos",
"events_url": "https://api.github.com/users/darigovresearch/events{/privacy}",
"received_events_url": "https://api.github.com/users/darigovresearch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: n/a
- Platform: n/a
- Python version: n/a
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
As advised by looking at the git blame: @mfuntowicz, @n1t0, could you advise?
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to https://github.com/huggingface/transformers/blob/master/notebooks/02-transformers.ipynb
2. Go to section `Want it lighter? Faster? Let's talk distillation!`
3. You should see that there is an image which is not rendering like the below

<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
An image should render here to explain the concept.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9083/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9082/comments | https://api.github.com/repos/huggingface/transformers/issues/9082/events | https://github.com/huggingface/transformers/pull/9082 | 764,401,820 | MDExOlB1bGxSZXF1ZXN0NTM4NDgyNzk1 | 9,082 | Add parallelization support for T5EncoderModel | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very cool! Could you also enable the parallelization tests for these models? You can check how it was done in the initial model parallel PR, [here's the commit](https://github.com/huggingface/transformers/pull/8696/commits/cde47f0d110176d3834b736dac27bc9bc2a4de43) related to the tests. You can just add the `T5EncoderModel` to the `all_parallelizable_model_classes` attribute of the `T5ModelTester` class.",
"> Very cool! Could you also enable the parallelization tests for these models? You can check how it was done in the initial model parallel PR, [here's the commit](https://github.com/huggingface/transformers/pull/8696/commits/cde47f0d110176d3834b736dac27bc9bc2a4de43) related to the tests. You can just add the `T5EncoderModel` to the `all_parallelizable_model_classes` attribute of the `T5ModelTester` class.\r\n\r\nThanks for the tip.\r\nDone, please let me know if anything else is needed from my side.",
"Also it would be great if you could run `make style && make quality` or `make fixup` to solve the quality issues.",
"> This LGTM. Looking into it it seems we have an error in `T5Stask` as it is creating the device map with `torch.cuda.device_count()`, rather than the `range` of that value like you're doing it here. Since we're always passing the device map to `T5Stack` (it's never used as a standalone model) we don't see it, but it doesn't seem correct.\r\n> \r\n> What do you think? If you think this is true, do you mind adding a `range` in `T5Stack` so that we can merge it together? Thanks!\r\n\r\nYes, you are correct, T5Stack should also use range. Since \"get_device_map\" function apply len to it .\r\nI have updated T5Stack using range.",
"> Also it would be great if you could run `make style && make quality` or `make fixup` to solve the quality issues.\r\n\r\nDone and passed the code quality testing.",
"Wonderful!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Extend T5EncoderModel to support model parallelization across different GPUs.
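A usage sketch (mirroring the device-map API from the original model-parallel PR; the split below assumes `t5-large`'s 24 encoder blocks and two visible GPUs):
```python
import torch
from transformers import T5EncoderModel

model = T5EncoderModel.from_pretrained('t5-large')
# spread the 24 encoder blocks over two GPUs; model.parallelize() with no
# argument picks an even split automatically
device_map = {0: list(range(0, 12)), 1: list(range(12, 24))}
model.parallelize(device_map)

input_ids = torch.tensor([[37, 423, 215, 1]]).to('cuda:0')
outputs = model(input_ids=input_ids)
model.deparallelize()  # moves everything back to the CPU
```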
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
T5: @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9082",
"html_url": "https://github.com/huggingface/transformers/pull/9082",
"diff_url": "https://github.com/huggingface/transformers/pull/9082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9082.patch",
"merged_at": 1607965246000
} |
https://api.github.com/repos/huggingface/transformers/issues/9081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9081/comments | https://api.github.com/repos/huggingface/transformers/issues/9081/events | https://github.com/huggingface/transformers/issues/9081 | 764,338,511 | MDU6SXNzdWU3NjQzMzg1MTE= | 9,081 | Segmentation fault (core dumped) running run_qa.py | {
"login": "piecurus",
"id": 8821811,
"node_id": "MDQ6VXNlcjg4MjE4MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8821811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piecurus",
"html_url": "https://github.com/piecurus",
"followers_url": "https://api.github.com/users/piecurus/followers",
"following_url": "https://api.github.com/users/piecurus/following{/other_user}",
"gists_url": "https://api.github.com/users/piecurus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piecurus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piecurus/subscriptions",
"organizations_url": "https://api.github.com/users/piecurus/orgs",
"repos_url": "https://api.github.com/users/piecurus/repos",
"events_url": "https://api.github.com/users/piecurus/events{/privacy}",
"received_events_url": "https://api.github.com/users/piecurus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same problem here? what is going on because I run glue smoothly.it seems that the problem related to the script itself.",
"The problem is in your JSON file. The squad v2 JSON file is not in a format the datasets library can directly preprocess, so you need to make it compliant with it. You should take this issue to the [`datasets`](https://github.com/huggingface/datasets) library and explain what your needs is.\r\n\r\nYou can also check the mock data file used in the [tests](https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/SQUAD/sample.json) to see the expected format. A datasets expert would know better than me but I think the problem is that the squad JSON file has lists of dicts for the \"answers\" field when datasets expects a dictionary keys to list.",
"The problem is not the JSON file that I have and I was able to solve it by using Transformers 3.x with no issues.",
"Transformers v4 does not support training on SQuAD v2 via its example training script. For now, you have to use Transformers v3.",
"Yes you could run it with the older script which was parsing the JSON differently. The new version uses the datasets library and requires the JSON to be organized differently (for compatibility with Arrow). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased (but other bert variants do the same)
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. mkdir squad
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O squad/train-v2.0.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O squad/dev
2. python run_qa.py \
--model_name_or_path distilbert-base-uncased \
--do_train \
--train_file ./squad/train-v2.0.json \
--per_device_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./models/ \
--overwrite_output_dir
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
12/12/2020 21:22:50 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
12/12/2020 21:22:50 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./models/', overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec12_21-22-50_piero-laptop', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./models/', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
Using custom data configuration default
Downloading and preparing dataset json/default-0b904584a9578d6f (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/piero/.cache/huggingface/datasets/json/default-0b904584a9578d6f/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
## Note:
I would like to test the script on the downloaded SQuAD dataset before applying it to my own dataset. If I run it as below, everything works fine:

```
python run_qa.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 4 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./models \
  --overwrite_output_dir
```
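
For reference, here is a minimal, hedged conversion sketch for the format mismatch called out in the comments above: the `datasets` JSON loader wants the "answers" field as a dictionary of lists rather than a list of dicts. This is not the official preprocessing; the field names follow the standard SQuAD v2 layout, the output file name is a placeholder, and the exact schema should be double-checked against the test fixture linked in the comments.

```python
import json

# Hedged sketch: flatten SQuAD-style records so that "answers" becomes a
# dict of lists ({"text": [...], "answer_start": [...]}), which is the shape
# the `datasets` JSON loader can ingest directly.
with open("squad/train-v2.0.json") as f:
    squad = json.load(f)

with open("squad/train-v2.0-converted.json", "w") as f:  # placeholder name
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                record = {
                    "id": qa["id"],
                    "question": qa["question"],
                    "context": paragraph["context"],
                    "answers": {
                        "text": [a["text"] for a in qa["answers"]],
                        "answer_start": [a["answer_start"] for a in qa["answers"]],
                    },
                }
                f.write(json.dumps(record) + "\n")
```

Each output line is one flat example, which sidesteps the nested structure that the Arrow-backed loader cannot infer on its own.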
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9081/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9081/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9080/comments | https://api.github.com/repos/huggingface/transformers/issues/9080/events | https://github.com/huggingface/transformers/issues/9080 | 763,950,672 | MDU6SXNzdWU3NjM5NTA2NzI= | 9,080 | Fine tune GPT-2 pytorch | {
"login": "contribcode",
"id": 24355946,
"node_id": "MDQ6VXNlcjI0MzU1OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/24355946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/contribcode",
"html_url": "https://github.com/contribcode",
"followers_url": "https://api.github.com/users/contribcode/followers",
"following_url": "https://api.github.com/users/contribcode/following{/other_user}",
"gists_url": "https://api.github.com/users/contribcode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/contribcode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/contribcode/subscriptions",
"organizations_url": "https://api.github.com/users/contribcode/orgs",
"repos_url": "https://api.github.com/users/contribcode/repos",
"events_url": "https://api.github.com/users/contribcode/events{/privacy}",
"received_events_url": "https://api.github.com/users/contribcode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This repo(https://github.com/kifish/GPT4NLG) will help you out.",
"Have you taken a look at the [`run_clm.py` script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in the examples? It seems to be doing exactly what you're looking for.\r\n\r\nErratum: I just realized that your link to `library` was actually a link to `run_clm.py`. What do you mean you didn't find the training script? Do you mean to say you would like an understandable guide on how to fine-tune GPT-2? Then [this guide](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) on DialoGPT, which is based off of the GPT-2 architecture, may be helpful.",
"@kifish thank you for your reply, it is an interesting repository.\r\n\r\nI am doing this for the first time so I am looking for something more simple, although I got some ideas reading the code which I will use.",
"@LysandreJik thank you , for the helpful tutorial.\r\n\r\nWhen I mentioned that I didn't find the training script, I meant that in line 327 of the `run_clm.py`, the `main` method calls the `train` method of a `Trainer`, but I couldn't find the code of the `train` method.\r\n\r\nIn addition, for the data preparation as input for the model, except for the extra tokens that I mentioned in my initial post, I thought to also add a `bos` and `eos` token at the begging and at the end of each text respectively, so that the model learns when a text starts and ends. The GPT-2 Tokenizer has already these tokens but they have the same id, they both have the id 50256. What is the reason behind this? \r\nIn order to deal with this, another way to prepare data is to use just the `eos` token to denote the end of a text, since the model should basically learn when a text ends. Can you please explain briefly?\r\n\r\nThank you in advance.\r\n\r\n",
"The code for the `train` method is in the `Trainer` class that you can find [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L565). It's a bit encapsulated so it might be a bit hard to follow - but we're working on simpler examples that show a basic PyTorch training loop and which should be out in a few months.\r\n\r\nYou may also find [this notebook interesting, which goes into finer detail on how to train a language model.](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb)\r\n\r\nRegarding your question on the extra tokens, adding a `bos` and `eos` token depends on how the model was pre-trained. BERT requires these as they were used during its pre-training. However, GPT-2 has very few special tokens during pre-training: a single `<|endoftext|>` token that was placed between sequences. I recommend you read the GPT-2 paper to get an idea of their pre-processing; we try to stay as close to the original implementation as possible.",
"@LysandreJik thank you for your reply.\r\n\r\nI am writing the code for training GPT-2 and firstly, as you suggested, I concatenated the input texts separated by `<|endoftext|>` token and then split it into fixed lengths, which is the model input.\r\n\r\nIn the GPT-2 casual language modelling, the model input is also the labels of the model, so to compute the loss, the input and labels arguments of the loss function are the same. The [tutorial](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) that you mentioned in the previous post, when references the model input and labels talks about a shifting on the left or on the right.\r\n\r\nTo calculate the loss correctly, are the model input and labels arguments of the loss function the same, or am I missing something?",
"If you pass the labels to the model, you should pass exactly the same value as the input IDs. The model then shifts the label IDs and calculates the loss on its own.",
"@LysandreJik I have a question regarding training. \r\n\r\nFor a Language Model which are indicative metrics to monitor training? Because for now I just consider the loss. For example, in classification it is usually considered the F1 score.\r\nAdditionally, in the tutorial that you shared, no early stopping was used in training, and the model was trained for three epochs. I have a small dataset of ~8K small texts (2MB). Do you have any suggestions on how to train the model, i.e. if there is no early stopping, how to decide when to stop training and how can I evaluate training?\r\n\r\nYou have been very helpful,\r\nregards.",
"Hello @kifish \r\n\r\nthe GPT4NLG repo that you have shared back then was very helpful but I am no longer able to see it. Can you do anything about that?\r\n\r\nThank you in advance.",
"> Hello @kifish\r\n> \r\n> the GPT4NLG repo that you have shared back then was very helpful but I am no longer able to see it. Can you do anything about that?\r\n> \r\n> Thank you in advance.\r\n\r\nhttps://github.com/kifish/GPT4NLG/tree/github",
" https://github.com/kifish/GPT4NLG/tree/github\r\n\r\nThank you @kifish. If possible, let it visible for some days.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | Hello,
I want to fine-tune GPT-2 (the PyTorch version) on a custom dataset. Words or small phrases of the dataset are marked, for example:
_some text [ss] word / small phrase [se] some other text._
I want to generate this kind of text with GPT-2, so my first thought was to add [ss] and [se] as special tokens.
I am looking for a sample training script for GPT-2, to see how to prepare the data as input for the model (whether preprocessing or a specific format is needed), which type of loss to use, etc., but I cannot find any. I also looked through the [library](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) and didn't find the training script.
Are there any suggestions?
Thank you in advance.
P.S. If this is not the appropriate place for this question, feel free to direct me accordingly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9080/timeline | completed | null | null |
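Following the thread above, here is a minimal, hedged sketch of the fine-tuning setup it describes: register the issue's markers as special tokens, concatenate texts separated by GPT-2's single `<|endoftext|>` pre-training token, split the id stream into fixed-length blocks, and pass the same ids as `labels` (the model shifts them internally to compute the loss). The corpus and block size are placeholders, and this assumes a transformers version whose model outputs expose `.loss`.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the [ss] / [se] markers from the issue and resize the embeddings.
tokenizer.add_special_tokens({"additional_special_tokens": ["[ss]", "[se]"]})
model.resize_token_embeddings(len(tokenizer))

texts = ["some text [ss] word [se] some other text."]  # placeholder corpus
block_size = 8  # placeholder; something like 512 or 1024 in practice

# Concatenate everything, separating texts with the single special token
# GPT-2 saw during pre-training, then cut the id stream into fixed blocks.
ids = []
for t in texts:
    ids.extend(tokenizer.encode(t) + [tokenizer.eos_token_id])
blocks = [ids[i : i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]

if blocks:
    input_ids = torch.tensor(blocks)
    # Labels are the same ids; the model shifts them internally for the loss.
    loss = model(input_ids=input_ids, labels=input_ids).loss
    loss.backward()
```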
https://api.github.com/repos/huggingface/transformers/issues/9079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9079/comments | https://api.github.com/repos/huggingface/transformers/issues/9079/events | https://github.com/huggingface/transformers/issues/9079 | 763,905,784 | MDU6SXNzdWU3NjM5MDU3ODQ= | 9,079 | T5 fails on many datasets with [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi\r\nI traced the error and this is happening in this line:\r\n\r\nlabel_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)\r\nhttps://github.com/rabeehk/debug-seq2seq/blob/1bcadb4b5497a0cbab6c2778e87335c5edcbd0a2/seq2seq/metrics/metrics.py#L99\r\n\r\nhere is the format of label_ids\r\n\r\n```\r\npred.label_ids [[10747 7 15 1]\r\n [10998 1 0 0]\r\n [10998 1 0 0]\r\n ...\r\n [10998 1 0 -100]\r\n [10998 1 0 -100]\r\n [10998 1 0 -100]] (3269, 4)\r\n\r\n```\r\ncould you please have a look? this is really blocking me, as T5 tokenizer fails for many datasets. thanks ",
"I understood the issue now, previously boolq dataset had labels of 0/1 => max_decoding length of 3, now they changed it to True/False => max decoding length of 4, which causes the bug in my codes for decoding since max_decoding length was set to 3. this is solved now. thanks @lhoestq ",
"Glad you resolved your issue."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: 3.7
- PyTorch version (GPU?): 1.0.4
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
TextGeneration: @TevenLeScao
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
examples/seq2seq: @patil-suraj
## Information
Hi
I am testing a seq2seq model with T5 on different datasets and always get the following bug; this is really blocking me, as it fails for many datasets. Could you have a look please? Thanks.
```
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
```
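For context, the resolution traced in the comments above is twofold: `pred.label_ids` still contains -100 (the padding value the loss ignores), which is not a valid sentencepiece id, and the maximum decoding length has to cover boolq's new True/False labels (4 tokens instead of 3). A minimal, hedged guard before decoding, reusing the array values shown in the comments, might look like:

```python
import numpy as np
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Stand-in for `pred.label_ids` from the metrics function in the comments:
# padded positions hold -100, which sentencepiece cannot decode.
label_ids = np.array([[10747, 7, 15, 1], [10998, 1, 0, -100]])

# Swap -100 for the pad token id before decoding so no negative index
# ever reaches sentencepiece.
label_ids = np.where(label_ids == -100, tokenizer.pad_token_id, label_ids)
label_str = tokenizer.batch_decode(label_ids.tolist(), skip_special_tokens=True)
print(label_str)
```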
To reproduce the error, please run on 1 GPU:
```
git clone [email protected]:rabeehk/debug-seq2seq.git
python setup.py develop
cd seq2seq
python finetune_t5_trainer.py temp.json
```
Full output of the program:
```
(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json
2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 
'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 
'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow
12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2
{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 2.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.43it/s]
12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow
12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch
0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 2.0}
0%| | 0/2 [00:00<?, ?it/s]
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq
12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9078/comments | https://api.github.com/repos/huggingface/transformers/issues/9078/events | https://github.com/huggingface/transformers/issues/9078 | 763,885,583 | MDU6SXNzdWU3NjM4ODU1ODM= | 9,078 | Add Definition of a transformer to the glossary | {
"login": "darigovresearch",
"id": 30328618,
"node_id": "MDQ6VXNlcjMwMzI4NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/30328618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/darigovresearch",
"html_url": "https://github.com/darigovresearch",
"followers_url": "https://api.github.com/users/darigovresearch/followers",
"following_url": "https://api.github.com/users/darigovresearch/following{/other_user}",
"gists_url": "https://api.github.com/users/darigovresearch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/darigovresearch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/darigovresearch/subscriptions",
"organizations_url": "https://api.github.com/users/darigovresearch/orgs",
"repos_url": "https://api.github.com/users/darigovresearch/repos",
"events_url": "https://api.github.com/users/darigovresearch/events{/privacy}",
"received_events_url": "https://api.github.com/users/darigovresearch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you looking for https://huggingface.co/transformers/model_summary.html ?",
"@cronoik thanks for taking a look! Wasn't looking for that page, was trying to find just a general definition of what a transformer is in terms of the general concept",
"Do you have something like this in mind: https://github.com/huggingface/transformers/blob/master/notebooks/02-transformers.ipynb ?",
"No, wasn't looking for a notebook, just a one line description/explanation of what a transformer is\r\n\r\nHow would you describe it to someone who doesn't know about transformers?",
"Maybe simply as self-attention based deep learning model architecture.",
"@cronoik That's a good start, `self-attention` & `deep learning` aren't yet defined in the glossary\r\n\r\nHow would you define those?",
"`self-attention`: each element of the input finds out which other elements of the input they should attend to.\r\n`deep learning`: machine learning algorithms which uses NN which several layers.\r\n\r\n@darigovresearch ",
"@cronoik thanks for that! Would you like to put in a pull request so that your definitions go into the transformers glossary and the set of flashcards that we built or would you like us to do it?\n\nI'm sure those definitions would be welcome and easily merged by the maintainers\n\nhttps://www.darigovresearch.com/huggingface-transformers-glossary-flashcards",
"Thanks @sgugger for merging the pull request, @cronoik your definitions are now on the glossary page and I have also added them to the flashcards so this issue can now be closed. Thank you both for your help!\n\nGlossary https://huggingface.co/transformers/glossary.html\n\nFlashcards https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards",
"Thanks for your commits ;)."
] | 1,607 | 1,616 | 1,616 | CONTRIBUTOR | null | Thought it may be helpful to have an easy to understand definition for what a transformer is in the [glossary](https://huggingface.co/transformers/glossary.html) for any new joiners.
@sgugger any thoughts?
Happy to add a definition if provided with one in the open pull request #8949 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9078/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9076/comments | https://api.github.com/repos/huggingface/transformers/issues/9076/events | https://github.com/huggingface/transformers/pull/9076 | 763,593,947 | MDExOlB1bGxSZXF1ZXN0NTM4MDEwNjYx | 9,076 | Clarify use of TrainingArguments.disable_tqdm in Jupyter Notebooks | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the suggestions! I've included them so should be good to go :)",
"Thanks a lot!"
] | 1,607 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
Closes #8831 and adds some minor tweaks / improvements to the `TrainingArguments` classes.
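A quick hedged illustration of the behavior this PR documents; the snippet assumes only that `disable_tqdm` keeps the semantics described in the updated docstrings:
```python
from transformers import TrainingArguments

# Silence the tqdm progress bars, e.g. inside a Jupyter notebook where
# the default bars can clutter the cell output.
args = TrainingArguments(
    output_dir="out",
    disable_tqdm=True,  # when left unset, transformers derives a default from its logging level
)
```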
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9076",
"html_url": "https://github.com/huggingface/transformers/pull/9076",
"diff_url": "https://github.com/huggingface/transformers/pull/9076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9076.patch",
"merged_at": 1608040819000
} |
https://api.github.com/repos/huggingface/transformers/issues/9075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9075/comments | https://api.github.com/repos/huggingface/transformers/issues/9075/events | https://github.com/huggingface/transformers/issues/9075 | 763,592,962 | MDU6SXNzdWU3NjM1OTI5NjI= | 9,075 | Zero Shot Classification Pipeline gives poor results locally than online demo | {
"login": "nerdimite",
"id": 28258052,
"node_id": "MDQ6VXNlcjI4MjU4MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/28258052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nerdimite",
"html_url": "https://github.com/nerdimite",
"followers_url": "https://api.github.com/users/nerdimite/followers",
"following_url": "https://api.github.com/users/nerdimite/following{/other_user}",
"gists_url": "https://api.github.com/users/nerdimite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nerdimite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nerdimite/subscriptions",
"organizations_url": "https://api.github.com/users/nerdimite/orgs",
"repos_url": "https://api.github.com/users/nerdimite/repos",
"events_url": "https://api.github.com/users/nerdimite/events{/privacy}",
"received_events_url": "https://api.github.com/users/nerdimite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @joeddav has an idea!",
"The pipeline output is sorted from highest to lowest scores, so in your code `pred_idx` will always be `0` and `pred_cls` will always be `\"Single Patient\"`. Instead you want,\r\n\r\n```python\r\npred_cls = results['labels'][0]\r\npred_idx = labels.index(pred_cls)\r\n```",
"Oh lol, I didn't know it was that simple xD. Thanks @joeddav that increased the accuracy to 73% (though less than online demo) which is good enough. Thank you so much!"
] | 1,607 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0 Yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@julien-c @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I have a small dataset of 26 examples and I want to classify them into 2 classes. I first ran all the examples in the [online demo](https://huggingface.co/zero-shot/) and got around 80% accuracy.
2. Then I ran the code on Colab and got only 53% accuracy, which I think amounts to random guessing between the labels.
3. I am aware that this issue has been opened before and resolved, but the fix isn't working for me. ([This is the previous issue](https://github.com/huggingface/transformers/issues/8122))
```python
import numpy as np
import pandas as pd
from tqdm import tqdm
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
classifier = pipeline(task='zero-shot-classification', model=model, tokenizer=tokenizer)
hypothesis_template = 'This text is about {}.'
labels = ['Single Patient', 'Multiple Patient']
def predict(sequence, labels, hypothesis_template):
results = classifier(sequence, labels,
hypothesis_template=hypothesis_template)
pred_idx = np.array(results['scores']).argmax()
pred_cls = labels[pred_idx]
return pred_idx, pred_cls
def evaluate(dataset, labels, hypothesis_template):
n_correct = 0
for sequence, label in tqdm(dataset.values):
_, pred = predict(sequence, labels, hypothesis_template)
n_correct += (pred == label)
acc = n_correct / len(dataset)
print('Accuracy:', acc)
patients = pd.read_csv('patient_classification.csv')
evaluate(patients, labels, hypothesis_template)
```
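For reference, a minimal corrected sketch of the prediction step (the pipeline returns `labels` already sorted by score, as pointed out in the comments above); it assumes the same `classifier` and `labels` as in the snippet above:
```python
def predict(sequence, labels, hypothesis_template):
    results = classifier(sequence, labels,
                         hypothesis_template=hypothesis_template)
    pred_cls = results['labels'][0]    # highest-scoring label comes first
    pred_idx = labels.index(pred_cls)  # map back to the original label order
    return pred_idx, pred_cls
```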
While loading the model I get this warning message.
```
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
The results of the online demo and my local code (Colab) are supposed to be the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9072/comments | https://api.github.com/repos/huggingface/transformers/issues/9072/events | https://github.com/huggingface/transformers/issues/9072 | 763,524,445 | MDU6SXNzdWU3NjM1MjQ0NDU= | 9,072 | get type error when I run the example code of token classification | {
"login": "ZihaoZheng98",
"id": 22414831,
"node_id": "MDQ6VXNlcjIyNDE0ODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/22414831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihaoZheng98",
"html_url": "https://github.com/ZihaoZheng98",
"followers_url": "https://api.github.com/users/ZihaoZheng98/followers",
"following_url": "https://api.github.com/users/ZihaoZheng98/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihaoZheng98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihaoZheng98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihaoZheng98/subscriptions",
"organizations_url": "https://api.github.com/users/ZihaoZheng98/orgs",
"repos_url": "https://api.github.com/users/ZihaoZheng98/repos",
"events_url": "https://api.github.com/users/ZihaoZheng98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihaoZheng98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, could you provide the full error, as well as the command you use to launch the script? Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux and macOS
- Python version: 3.7 and 3.8
- PyTorch version (GPU?): 1.7.0
### Who can help
## Information
The problem arises when using `hf_argparser.py`: at line 64, the type `Optional[bool]` never enters the branch
`elif field.type is bool or field.type is Optional[bool]`
but after I change `is` to `==`, the error disappears.
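A small hedged sketch of why `==` is the safer comparison here: equality between `typing` constructs is well defined, while identity depends on the interpreter's internal caching.
```python
from typing import Optional, Union

a = Optional[bool]    # Optional[bool] is just Union[bool, None]
b = Union[bool, None]

print(a == b)  # True: equality of typing constructs is guaranteed
# `a is b` also happens to be True on CPython because typing caches
# these objects, but annotations resolved elsewhere (another module,
# postponed evaluation, a different process) may not hit that cache,
# so a check like `field.type is Optional[bool]` can silently fail
# where `field.type == Optional[bool]` still succeeds.
print(a is b)
```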
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9071/comments | https://api.github.com/repos/huggingface/transformers/issues/9071/events | https://github.com/huggingface/transformers/issues/9071 | 763,239,498 | MDU6SXNzdWU3NjMyMzk0OTg= | 9,071 | attention_mask size | {
"login": "Jiaxin-Wen",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiaxin-Wen",
"html_url": "https://github.com/Jiaxin-Wen",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"have implemented that by myself.\r\nIf needed, I can make a pull request",
"Hi, I'm also trying to utilize such a customized mask. Would you mind sharing your implementation? Thank you!"
] | 1,607 | 1,611 | 1,611 | NONE | null | # 🚀 Feature request
The current `attention_mask` argument is a tensor of shape `[batch_size, sequence_length]`.
I'd like it to also accept a tensor of shape `[batch_size, from_seq_length, to_seq_length]`, so that a different attention mask can be set for each position.
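A minimal hedged sketch (shapes as in the request) of how such a per-position mask could be folded into the additive bias that BERT-style attention adds to the raw scores:
```python
import torch

def extend_3d_mask(mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    # mask: [batch_size, from_seq_length, to_seq_length], 1 = attend, 0 = ignore
    extended = mask[:, None, :, :].to(dtype)  # -> [batch, 1, from_seq, to_seq]
    return (1.0 - extended) * -10000.0        # 0.0 keeps a position, -1e4 masks it

mask = torch.ones(2, 5, 5)
mask[:, 0, 3:] = 0            # e.g. position 0 may not attend to positions 3-4
bias = extend_3d_mask(mask)   # add this to the attention scores before softmax
```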
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9071/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9071/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9070/comments | https://api.github.com/repos/huggingface/transformers/issues/9070/events | https://github.com/huggingface/transformers/pull/9070 | 763,028,822 | MDExOlB1bGxSZXF1ZXN0NTM3NTE5ODMx | 9,070 | [CI doc] safely testing experimental CI features | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,672 | 1,607 | CONTRIBUTOR | null | After causing a few CI workflow disruptions with my recent attempts to figure out how to get CircleCI to do something new (skip heavy builds on doc-only PRs), I realized that future experiments of this kind can be much smoother and cause close to zero annoyance to anybody involved in submitting and handling PRs.
This PR documents my idea of how to do it given the current limitations of CircleCI and GitHub Actions, so that we can continue doing such experiments in the future without interfering with anything.
That said, please vote here:
* GitHub Actions: https://github.com/actions/runner/issues/2347
* CircleCI: https://ideas.circleci.com/ideas/CCI-I-344 (unfortunately requires a free account to vote)
to get much simpler support for a failing step that shouldn't impact the overall PR status.
@LysandreJik, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9070/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9070",
"html_url": "https://github.com/huggingface/transformers/pull/9070",
"diff_url": "https://github.com/huggingface/transformers/pull/9070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9070.patch",
"merged_at": 1607960099000
} |
https://api.github.com/repos/huggingface/transformers/issues/9069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9069/comments | https://api.github.com/repos/huggingface/transformers/issues/9069/events | https://github.com/huggingface/transformers/pull/9069 | 762,919,302 | MDExOlB1bGxSZXF1ZXN0NTM3NDE5Njkw | 9,069 | Fix some typos | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"@patil-suraj this pull request can also be closed! \r\nMost of the typos were already fixed, the remaining ones were fixed in [this pull request](https://github.com/huggingface/transformers/pull/10989)\r\n\r\n"
] | 1,607 | 1,617 | 1,617 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9069",
"html_url": "https://github.com/huggingface/transformers/pull/9069",
"diff_url": "https://github.com/huggingface/transformers/pull/9069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9069.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9068/comments | https://api.github.com/repos/huggingface/transformers/issues/9068/events | https://github.com/huggingface/transformers/pull/9068 | 762,886,799 | MDExOlB1bGxSZXF1ZXN0NTM3Mzg5Nzc5 | 9,068 | [wip] [ci] experiment for documentation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is clear now - ready to document how to do it right: https://github.com/huggingface/transformers/pull/9070\r\n"
] | 1,607 | 1,651 | 1,607 | CONTRIBUTOR | null | please ignore for now. thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9068",
"html_url": "https://github.com/huggingface/transformers/pull/9068",
"diff_url": "https://github.com/huggingface/transformers/pull/9068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9068.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9067/comments | https://api.github.com/repos/huggingface/transformers/issues/9067/events | https://github.com/huggingface/transformers/pull/9067 | 762,702,963 | MDExOlB1bGxSZXF1ZXN0NTM3MjIyOTkx | 9,067 | Fix min_null_pred in the run_qa script | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
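In miniature, a hedged sketch of the comparison pattern this PR corrects (the `predictions` list of score dicts is an assumption for illustration): keeping the *minimum*-scoring prediction requires comparing the stored score with `>`.
```python
def min_scoring(predictions):
    # keep the prediction with the LOWEST score: replace the stored one
    # whenever its score is GREATER than the candidate's
    min_null_prediction = None
    for pred in predictions:
        if min_null_prediction is None or min_null_prediction["score"] > pred["score"]:
            min_null_prediction = pred
    return min_null_prediction

print(min_scoring([{"score": 0.9}, {"score": 0.1}, {"score": 0.5}]))  # -> {'score': 0.1}
```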
The `min_null_prediction` variable in the `run_qa` script was actually the maximum because the < was in the wrong direction... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9067/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9067",
"html_url": "https://github.com/huggingface/transformers/pull/9067",
"diff_url": "https://github.com/huggingface/transformers/pull/9067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9067.patch",
"merged_at": 1607721965000
} |
https://api.github.com/repos/huggingface/transformers/issues/9066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9066/comments | https://api.github.com/repos/huggingface/transformers/issues/9066/events | https://github.com/huggingface/transformers/issues/9066 | 762,576,595 | MDU6SXNzdWU3NjI1NzY1OTU= | 9,066 | Add BartForCausalLM analogous to `ProphetNetForCausalLM` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"I don't know if @MeRajat claimed this issue; however, if not, **I want to take this issue**.",
"Usually one opens a PR to claim the issue (the PR does not have to be finished) - so I think it's still open. ",
"@patrickvonplaten This FR has been open for some time, so I thought of working on it and have almost completed development. Should I raise a PR?\nAs @sadakmed is also working on it, I thought I'd ask.",
"Hey @spatil6,\r\n\r\nI think the PR is already in an advanced stage, so I hope the PR is finished by next week. If not, I'll ping you again :-) "
] | 1,607 | 1,612 | 1,612 | MEMBER | null | # 🚀 Feature request
Bart is a seq2seq model, but there might be applications where one would like to use only the pre-trained BartDecoder in an EncoderDecoder setting with a "long" encoder, such as
```python
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "facebook/bart-large")
# fine-tune model ...
```
This is already possible for ProphetNet:
```python
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
input_ids = torch.tensor([10 * [1]])
labels = torch.tensor([10 * [0]])
loss = model(input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```
, but not yet for Bart. This "Good first/second issue" is about implementing a `BartForCausalLM` analogous to the one in ProphetNet here:
https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/src/transformers/models/prophetnet/modeling_prophetnet.py#L1882
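For orientation, a hedged sketch of what the requested class could look like in use; the `BartForCausalLM` name mirrors the ProphetNet analog and is hypothetical until a PR lands:
```python
import torch
from transformers import BartForCausalLM  # hypothetical import until implemented

model = BartForCausalLM.from_pretrained("facebook/bart-large")
input_ids = torch.tensor([10 * [1]])
loss = model(input_ids, labels=input_ids).loss  # plain causal-LM loss
loss.backward()
```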
To verify that the feature works as expected, one should make sure that the following tests are added:
- A `BartStandaloneDecoderModelTest` class as is done in https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/tests/test_modeling_prophetnet.py#L1072
- And an encoder-decoder test class as it's done here: https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/tests/test_modeling_encoder_decoder.py#L758 (a minimal sketch of the core cache check such tests exercise follows below)
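For orientation, a hedged sketch of the past-key-values consistency check at the heart of such a standalone-decoder test (model handling, shapes, and the tolerance are assumptions):
```python
import torch

def check_decoder_model_past(model, input_ids):
    # full forward pass over [input_ids + one extra token]
    next_token = torch.randint(0, model.config.vocab_size, (input_ids.shape[0], 1))
    full = model(torch.cat([input_ids, next_token], dim=-1))

    # incremental pass: cache the prefix, then feed only the new token
    past = model(input_ids, use_cache=True).past_key_values
    cached = model(next_token, past_key_values=past)

    # the last-position logits of both paths must agree
    assert torch.allclose(full.logits[:, -1], cached.logits[:, -1], atol=1e-3)
```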
## Motivation
Mentioned above: e.g., long-range seq2seq warm-starting.
## Your contribution
I'm more than happy to guide someone through this issue!
It's a bit more advanced so I'll give it both "Good first issue" and "Good second issue".
You can claim the issue by writing it below and/or opening a PR :-)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9066/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9065/comments | https://api.github.com/repos/huggingface/transformers/issues/9065/events | https://github.com/huggingface/transformers/pull/9065 | 762,472,016 | MDExOlB1bGxSZXF1ZXN0NTM3MDExMTk0 | 9,065 | Remove docs only check | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | Remove the docs only check as it can result in [crashes](https://app.circleci.com/pipelines/github/huggingface/transformers/17209/workflows/d3807ea6-9697-4699-a114-98e6b4d2c4d0/jobs/135576).
Will revert if you disagree @stas00. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9065/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9065/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9065",
"html_url": "https://github.com/huggingface/transformers/pull/9065",
"diff_url": "https://github.com/huggingface/transformers/pull/9065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9065.patch",
"merged_at": 1607700452000
} |
https://api.github.com/repos/huggingface/transformers/issues/9064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9064/comments | https://api.github.com/repos/huggingface/transformers/issues/9064/events | https://github.com/huggingface/transformers/issues/9064 | 762,453,674 | MDU6SXNzdWU3NjI0NTM2NzQ= | 9,064 | Embedding documents on multi-GPU single-node Docker using pretrained models of huggingface transformers and pytorch DistributedDataParallel | {
"login": "ntaherkhani",
"id": 34975127,
"node_id": "MDQ6VXNlcjM0OTc1MTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/34975127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntaherkhani",
"html_url": "https://github.com/ntaherkhani",
"followers_url": "https://api.github.com/users/ntaherkhani/followers",
"following_url": "https://api.github.com/users/ntaherkhani/following{/other_user}",
"gists_url": "https://api.github.com/users/ntaherkhani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ntaherkhani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntaherkhani/subscriptions",
"organizations_url": "https://api.github.com/users/ntaherkhani/orgs",
"repos_url": "https://api.github.com/users/ntaherkhani/repos",
"events_url": "https://api.github.com/users/ntaherkhani/events{/privacy}",
"received_events_url": "https://api.github.com/users/ntaherkhani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | Hi,
This is a question.
I am trying to embed some documents, each containing a couple of sentences, using Hugging Face transformers models. I have a single node with multiple GPUs, and I want to run the embedding in parallel, distributed across all 8 GPUs. I tried to use PyTorch DistributedDataParallel, but I think all sentences are being sent to all GPUs, and one tensor is returned for all sentences. This is a sample of the code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import time
import argparse
import os
from transformers import AlbertTokenizer, AlbertModel
import numpy
from tqdm import tqdm
from torch.utils.data import DataLoader,TensorDataset
def parse_args():
parse = argparse.ArgumentParser()
parse.add_argument(
'--local_rank',
dest = 'local_rank',
type = int,
default = 0,
)
parse.add_argument("--gpu", type=str, default='None',
help="choose gpu device.")
return parse.parse_args()
def train():
args = parse_args()
if not args.gpu == 'None':
device = torch.device("cuda")
os.environ["CUDA_VISIBLE_DEVICES"]=args.gpu
else:
device = torch.device("cpu")
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(
backend='nccl',
init_method='env://',
)
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
sentences=['I love tea',
'He hates tea',
'We love tea',
'python coder',
'geeksforgeeks',
'coder in geeksforgeeks']
sentence_tokens = []
for sent in (sentences):
token_id = tokenizer.encode(sent, max_length=128, add_special_tokens=True, pad_to_max_length=True)
sentence_tokens.append(token_id)
original_sentences = torch.tensor(sentence_tokens)
train_dataset = TensorDataset(original_sentences)
#setup training sampler
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,num_replicas=len(sentences))
#setup training data loader with the train sampler setup
train_dataloader = DataLoader(train_dataset, batch_size=16,sampler=train_sampler, shuffle=False)
model = AlbertModel.from_pretrained('albert-xxlarge-v2', return_dict=True)
model = model.to(device)
model = nn.parallel.DistributedDataParallel(model,
device_ids = [args.local_rank, ],
output_device = args.local_rank,\
find_unused_parameters=True
)
for batch in (train_dataloader):
batch_input_tensors = batch[0].to('cuda')
outputs = model(batch_input_tensors)
last_hidden_states = outputs.last_hidden_state
average= torch.mean(last_hidden_states,dim=1)
if __name__ == "__main__":
train()
All of the sentences are sent to all 8 GPUs, and the output `last_hidden_states` is only one tensor. I took the average of the tensor elements because I thought they should be the same at the end, but they aren't. How can I do this in a distributed way, so that the sentences are distributed to the GPUs and embedded there, and so that I end up with one feature-vector tensor per sentence (or, for my final use case, per document)? Thanks.
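A hedged sketch of one way to get per-sentence embeddings back from a sharded run; it assumes the `torch.distributed` setup from the script above, that every rank holds the same local batch size, and that `num_replicas` for the `DistributedSampler` is the number of processes (world size) rather than the number of sentences:
```python
import torch
import torch.distributed as dist

def gather_embeddings(local_emb: torch.Tensor) -> torch.Tensor:
    # local_emb: [local_batch, hidden] -- this rank's mean-pooled embeddings
    world_size = dist.get_world_size()
    buffers = [torch.zeros_like(local_emb) for _ in range(world_size)]
    dist.all_gather(buffers, local_emb)
    # [world_size * local_batch, hidden]; rows follow rank order, so map
    # them back to sentences via the indices the sampler assigned each rank
    return torch.cat(buffers, dim=0)
```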
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9064/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9063/comments | https://api.github.com/repos/huggingface/transformers/issues/9063/events | https://github.com/huggingface/transformers/pull/9063 | 762,451,834 | MDExOlB1bGxSZXF1ZXN0NTM2OTkyODkw | 9,063 | Fix T5 and BART for TF | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I should have addressed everybody's comments :)"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the TensorFlow implementations of T5 and BART to make them compliant with graph compilation and execution, so that a SavedModel can be created for each of them.
The slow tests `test_saved_model_with_hidden_states_output` and `test_saved_model_with_attentions_output` are now passing for both models.
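A hedged sketch of the export this enables (the checkpoint name and save path are placeholders):
```python
import tensorflow as tf
from transformers import TFT5Model

model = TFT5Model.from_pretrained("t5-small")
# with the call graph compiling and executing, the model can be exported
tf.saved_model.save(model, "saved_model/t5-small")
```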
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9063/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9063",
"html_url": "https://github.com/huggingface/transformers/pull/9063",
"diff_url": "https://github.com/huggingface/transformers/pull/9063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9063.patch",
"merged_at": 1607968021000
} |
https://api.github.com/repos/huggingface/transformers/issues/9062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9062/comments | https://api.github.com/repos/huggingface/transformers/issues/9062/events | https://github.com/huggingface/transformers/pull/9062 | 762,448,350 | MDExOlB1bGxSZXF1ZXN0NTM2OTg5NzEy | 9,062 | Bump notebook from 6.1.4 to 6.1.5 in /examples/research_projects/movement-pruning/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9062",
"html_url": "https://github.com/huggingface/transformers/pull/9062",
"diff_url": "https://github.com/huggingface/transformers/pull/9062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9062.patch",
"merged_at": 1607700764000
} |
https://api.github.com/repos/huggingface/transformers/issues/9061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9061/comments | https://api.github.com/repos/huggingface/transformers/issues/9061/events | https://github.com/huggingface/transformers/issues/9061 | 762,393,270 | MDU6SXNzdWU3NjIzOTMyNzA= | 9,061 | CharacterBERT | {
"login": "helboukkouri",
"id": 36409068,
"node_id": "MDQ6VXNlcjM2NDA5MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/36409068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helboukkouri",
"html_url": "https://github.com/helboukkouri",
"followers_url": "https://api.github.com/users/helboukkouri/followers",
"following_url": "https://api.github.com/users/helboukkouri/following{/other_user}",
"gists_url": "https://api.github.com/users/helboukkouri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helboukkouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helboukkouri/subscriptions",
"organizations_url": "https://api.github.com/users/helboukkouri/orgs",
"repos_url": "https://api.github.com/users/helboukkouri/repos",
"events_url": "https://api.github.com/users/helboukkouri/events{/privacy}",
"received_events_url": "https://api.github.com/users/helboukkouri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"After reading the paper again, I'm really excited to pre-train models for another domain 🤗 do you know when the pre-training code will be available 🤔",
"@stefan-it glad to hear that you enjoyed our work. I haven't released the pre-training code yet as it is not as user friendly as I would want it to be. But it just happens that I'm planning to work on releasing a first version some time **this week**, so good timing 😊.\r\n\r\nYou can subscribe to the following issue if you want to be notified: https://github.com/helboukkouri/character-bert/issues/4\r\n\r\nCheers!",
"Sounds great @helboukkouri! Let us know if we can help in any way, we'd love to see character BERT in `transformers`!",
"Hey @helboukkouri , really cool PR for the upcoming model integration :hugs: \r\n\r\nI've already looked at it, and have a question about the `CharacterMapper` implementation. So in the current implementation it supports a maximum word length of 50 (so all word representations are padded to this length, if I'm correctly reading it). Do you think it would decrease training (and later fine-tuning) time, when using a smaller value :thinking: \r\n\r\nSo e.g. in German we could have really long words such as \"Bezirksschornsteinfegermeister\", but 50 is really long (but I think this is [language-dependend](https://arxiv.org/pdf/1207.2334.pdf)).",
"Hey @stefan-it, thanks! 😊\r\n\r\n> Do you think it would decrease training (and later fine-tuning) time, when using a smaller value 🤔\r\n\r\nWhen we compute some stats around model speed, we find that while CharacterBERT is twice as slow as BERT during pre-training (108% slower), it is not as slow during downstream task fine-tuning (19% on avg.) This means that most of the \"slowness\" happens during pre-training, which makes us think that the Masked Language Modeling output layer is at fault here. In particular, the differences with BERT are: (1) no parameter sharing between the wordpiece embedding matrix and the output layer and (2) a larger output layer (we use top 100k tokens in the pre-training corpus as a vocabulary) since we want to be able to predict a reasonably high number of tokens so that MLM can be beneficial.\r\n\r\nSo to answer your question: reducing the maximum word length might reduce overall speeds but this change will probably negligible when compared to the effects listed above.\r\n\r\nYou may wonder why we used 50 character long representations. To be honest, we didn't want to tweak this `CharacterCNN` too much as it is originally the same layer that is used in ELMo. We just trusted the guys from AllenAI to have done a good work choosing the architecture and just re-used it 😄",
"Hi @helboukkouri thanks for your detailed answer! This explains the whole training time/speed topic really great :hugs: ",
"> After reading the paper again, I'm really excited to pre-train models for another domain 🤗 do you know when the pre-training code will be available 🤔\r\n\r\nCode is out! Feel free to open issues if you have any problems using it.",
"Hi @helboukkouri, I have read the paper with great interest. I am currently working on the same topic. I tried to reproduce the result with our custom data. We could complete phase 1. Now we are heading towards fine-tuning of pretrained model for MLM and NSP tasks. Would you consider sharing research materials for the same. ",
"Hi @pradeepsinghnitk, thanks for your interest.\r\n\r\nCould you be more specific about what you mean by `phase 1` and also if by `fine-tuning of pretrained model for MLM and NSP tasks` you mean pre-training or actual task-specific finetuning (e.g. on text classification tasks)?\r\n\r\nIn any case, check this code as it gives basic context for loading a model and running an inference. Fine-tuning it on any task should be straightforward (as you would with BERT basically) : https://github.com/helboukkouri/character-bert\r\n\r\nAnd for NSP and MLM (which is usually what is called `pre-training`), the code is here: https://github.com/helboukkouri/character-bert-pretraining\r\n\r\nUnfortunately, the import of CharacterBERT in the `transformers` library did not really succeed. It's been a while but if I remember well the issues were related to the different tests failing due to character-based tokenization being not very well supported at the time. \r\n\r\nI'll notify everybody if I ever go back to working on this again.\r\n\r\nCheers!",
"Thank you for your response. \r\nTo be specific about phase 1; bash $WORKDIR/bash_scripts/run_pretraining.character_bert.step_1.sh (phase 1: maximum input length of 128 and maximum number of masked tokens per input of 20.) we could successfully execute this for char_bert pertaining and also for bert_based pretraining. \r\nNow, we would like to reproduce https://github.com/helboukkouri/character-bert-finetuning. But there was no code uploaded here. \r\n\r\n\r\n\"And for NSP and MLM (which is usually what is called pre-training), the code is here: https://github.com/helboukkouri/character-bert-pretraining\". this part of the scripts we have already executed\r\n",
"Looking forward to this integration since December 2020!",
"@stefan-it Hi Stefan, I saw it on your twitter account that you finished training German version of CharacterBERT. It is not on Huggingface yet, but I am writing my master thesis on OCR post correction on historical german corpus, and can really use it! Can you tell me how I can have access to your model? Thank you so much! Greetings from Stuttgart!",
"Is it still not supported by transformers?"
] | 1,607 | 1,677 | null | NONE | null | # 🌟 New model addition
## Model description
**CharacterBERT** is a **variant of BERT** that uses a CharacterCNN module **instead of** WordPieces. As a result, the model:
1. Does not require/rely on a WordPiece vocabulary
2. Produces a single embedding for any (reasonable) input token
3. Is more robust to misspellings
Paper: https://www.aclweb.org/anthology/2020.coling-main.609/
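For intuition, a minimal hedged sketch of the character-level input idea (illustrative only, not the actual `CharacterMapper`; the special ids and byte offsets are assumptions):
```python
# Illustrative sketch -- every token becomes a fixed-length row of character
# ids, so any string can be embedded without a subword vocabulary.
MAX_WORD_LENGTH = 50                   # the ELMo-inherited value discussed above
PAD_ID, BOW_ID, EOW_ID = 0, 257, 258   # hypothetical special ids

def word_to_char_ids(word: str, max_len: int = MAX_WORD_LENGTH) -> list:
    byte_ids = [b + 1 for b in word.encode("utf-8")[: max_len - 2]]  # shift so 0 stays padding
    ids = [BOW_ID] + byte_ids + [EOW_ID]
    return ids + [PAD_ID] * (max_len - len(ids))

print(len(word_to_char_ids("misspeling")))  # always 50, even for misspelled words
```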
<!-- Important information -->
## Open source status
* [x] the model implementation is available: https://github.com/helboukkouri/character-bert
* [x] the model weights are available: https://github.com/helboukkouri/character-bert/blob/main/download.py#L16
* [x] who are the authors: @helboukkouri @osf9018 @Jekub @hiroshinoji @PierreZweigenbaum and Junichi Tsujii
I am willing to work on a PR but I will probably need some guidance 😊 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9061/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9061/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9060/comments | https://api.github.com/repos/huggingface/transformers/issues/9060/events | https://github.com/huggingface/transformers/issues/9060 | 762,345,056 | MDU6SXNzdWU3NjIzNDUwNTY= | 9,060 | ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' - SAVE_STATE_WARNING has been removed from pytorch | {
"login": "dbonner",
"id": 241474,
"node_id": "MDQ6VXNlcjI0MTQ3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/241474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbonner",
"html_url": "https://github.com/dbonner",
"followers_url": "https://api.github.com/users/dbonner/followers",
"following_url": "https://api.github.com/users/dbonner/following{/other_user}",
"gists_url": "https://api.github.com/users/dbonner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbonner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbonner/subscriptions",
"organizations_url": "https://api.github.com/users/dbonner/orgs",
"repos_url": "https://api.github.com/users/dbonner/repos",
"events_url": "https://api.github.com/users/dbonner/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbonner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
".I can see you have fixed this in the source code.",
"I just upgraded to torch 1.8 and I got this error. \r\n\r\n```ImportError while importing test module '/home/dwalter/Documents/projects/lm/lm_ml/modules/quantization/tests/test_quantize.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\n/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/importlib/__init__.py:126: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\ntests/test_quantize.py:7: in <module>\r\n import quantization.fused_nn as qnni\r\nquantization/__init__.py:2: in <module>\r\n from .fused_nn import ConvNL2d\r\nquantization/fused_nn.py:4: in <module>\r\n from .nn import Conv2d\r\nquantization/nn/__init__.py:8: in <module>\r\n import transformers.modeling_bert as bert\r\n/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/__init__.py:626: in <module>\r\n from .trainer import Trainer\r\n/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/trainer.py:69: in <module>\r\n from .trainer_pt_utils import (\r\n/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/trainer_pt_utils.py:40: in <module>\r\n from torch.optim.lr_scheduler import SAVE_STATE_WARNING\r\nE ImportError: cannot import name 'SAVE_STATE_WARNING'\r\n```\r\nIs there something I need to fix in my code or did I not upgrade correctly?\r\nupgraded with `pip install --upgrade torch`",
"@dwalterlm you're probably on an older Transformers version. This was fixed in https://github.com/huggingface/transformers/pull/8979, could you try upgrading to a more recent version, like `v4.3.0`?",
"> \r\n\r\nthe version of torch is too high,try use : torch 1.7.1"
] | 1,607 | 1,663 | 1,607 | NONE | null | ERROR:
..../my37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 40, in <module>
from torch.optim.lr_scheduler import SAVE_STATE_WARNING
ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' (......./my37/lib/python3.7/site-packages/torch/optim/lr_scheduler.py)
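For reference, a hedged sketch of the kind of version-guarded import that resolves this (the actual fix merged into `transformers` may differ in its details):
```python
# Hedged sketch: SAVE_STATE_WARNING was removed on torch master, so guard the
# import by version instead of assuming the symbol exists (older releases may
# need a different bound).
from packaging import version
import torch

if version.parse(torch.__version__) < version.parse("1.8.0"):
    from torch.optim.lr_scheduler import SAVE_STATE_WARNING
else:
    SAVE_STATE_WARNING = ""  # removed upstream; an empty placeholder suffices here
```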
Please update transformers to be compatible with the latest pytorch source code (built from the master branch):
'SAVE_STATE_WARNING' was removed from pytorch a few days ago. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9059/comments | https://api.github.com/repos/huggingface/transformers/issues/9059/events | https://github.com/huggingface/transformers/issues/9059 | 762,305,303 | MDU6SXNzdWU3NjIzMDUzMDM= | 9,059 | overflow_to_sample_mapping missing in in documentation | {
"login": "schelv",
"id": 13403863,
"node_id": "MDQ6VXNlcjEzNDAzODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/13403863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schelv",
"html_url": "https://github.com/schelv",
"followers_url": "https://api.github.com/users/schelv/followers",
"following_url": "https://api.github.com/users/schelv/following{/other_user}",
"gists_url": "https://api.github.com/users/schelv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schelv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schelv/subscriptions",
"organizations_url": "https://api.github.com/users/schelv/orgs",
"repos_url": "https://api.github.com/users/schelv/repos",
"events_url": "https://api.github.com/users/schelv/events{/privacy}",
"received_events_url": "https://api.github.com/users/schelv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Indeed! Do you want to open a PR with a fix?",
"No not really. \r\nBut I've found that the documentation is the same because the `PreTrainedTokenizerFast` inherits the `__call__` method as well as the documentation from `PreTrainedTokenizerBase`. \r\nThe `__call__` method documentation is a concatenation of two different documentations.\r\nSo changing that single line of documentation is more complicated than I expected.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I think that it does?"
] | 1,607 | 1,620 | null | NONE | null | In the [documentation](https://huggingface.co/transformers/master/main_classes/tokenizer.html#transformers.PreTrainedTokenizerFast.__call__) of the fast tokenizer, the `overflow_to_sample_mapping` field is missing.
Instead, `overflowing_tokens` is listed there, which is only part of the base tokenizer.
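For readers landing here, a small hedged usage sketch of the field in question (the model name is just an example):
```python
# With a fast tokenizer, return_overflowing_tokens=True also returns
# overflow_to_sample_mapping: entry i says which input example produced chunk i.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
enc = tok(
    ["a very long sentence " * 64, "a short one"],
    truncation=True, max_length=32, stride=8,
    return_overflowing_tokens=True,
)
print(enc["overflow_to_sample_mapping"])  # e.g. [0, 0, ..., 0, 1]
```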
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9059/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9059/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9058/comments | https://api.github.com/repos/huggingface/transformers/issues/9058/events | https://github.com/huggingface/transformers/issues/9058 | 762,298,982 | MDU6SXNzdWU3NjIyOTg5ODI= | 9,058 | "resize_token_embeddings" in BertForeMaskedLM won't change last linear layer "output dimension" | {
"login": "HenryPaik1",
"id": 42961175,
"node_id": "MDQ6VXNlcjQyOTYxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryPaik1",
"html_url": "https://github.com/HenryPaik1",
"followers_url": "https://api.github.com/users/HenryPaik1/followers",
"following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions",
"organizations_url": "https://api.github.com/users/HenryPaik1/orgs",
"repos_url": "https://api.github.com/users/HenryPaik1/repos",
"events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryPaik1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, My mistakes."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BertForMaskedLM
## To reproduce
`resize_token_embeddings` cannot change decoder output feature dimension.
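A hedged side note before the reproduction below: the repro calls the resize on the `model.bert` submodule, which only touches the word embeddings. Calling it on the top-level model goes through `get_output_embeddings()` and should update the decoder as well; a sketch:
```python
# Hedged fix sketch: resize via the top-level BertForMaskedLM, not model.bert,
# so the tied MLM decoder is resized together with the word embeddings.
model.resize_token_embeddings(len(tokenizer))
print(model.get_output_embeddings().weight.shape)  # expected: torch.Size([30541, 768])
```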
```
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained(tokenizer_path) # len(tokenizer) == 30541 (I added some new tokens)
model.bert.resize_token_embeddings(len(tokenizer))
>>>
BertForMaskedLM(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30541, 768) ######################### This is correct, but the decoder is wrong.
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
...
...
(cls): BertOnlyMLMHead(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=30522, bias=True) ########## out_features not changed
)
)
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9057/comments | https://api.github.com/repos/huggingface/transformers/issues/9057/events | https://github.com/huggingface/transformers/issues/9057 | 762,271,120 | MDU6SXNzdWU3NjIyNzExMjA= | 9,057 | Having to specify too many `ignore_keys` in `Trainer.prediction_step` | {
"login": "Fraser-Greenlee",
"id": 8402500,
"node_id": "MDQ6VXNlcjg0MDI1MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fraser-Greenlee",
"html_url": "https://github.com/Fraser-Greenlee",
"followers_url": "https://api.github.com/users/Fraser-Greenlee/followers",
"following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}",
"gists_url": "https://api.github.com/users/Fraser-Greenlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fraser-Greenlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fraser-Greenlee/subscriptions",
"organizations_url": "https://api.github.com/users/Fraser-Greenlee/orgs",
"repos_url": "https://api.github.com/users/Fraser-Greenlee/repos",
"events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fraser-Greenlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"No, this is too simplistic. First of all, all question-answering models return two logits called `start_logits` and `end_logits`. Then a user might want to get the predictions for all their `all_hidden_states` or `all_attentions` when the model has the proper config keys, which is why the `Trainer` gather all the tensors different from the loss.",
"Ahh I see, guess there's no simple answer here. Thanks for the info!"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Since all model output dicts that have logits expose them under the key `logits`, I think this code could be simplified to just use the `logits` key (rather than having to specify a bunch of `ignore_keys`).
from:
https://github.com/huggingface/transformers/blob/e20ac6611df97f66148ce8b7886f01ffe9d17484/src/transformers/trainer.py#L1471-L1473
to:
```python
if isinstance(outputs, dict):
loss = outputs["loss"].mean().detach()
logits = (outputs.get('logits', None),)
```
This prevents other keys from being sent to `nested_concat`, which currently causes an error:
https://github.com/huggingface/transformers/blob/e20ac6611df97f66148ce8b7886f01ffe9d17484/src/transformers/trainer.py#L1367-L1368
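For context, a hedged usage sketch of the per-call escape hatch (assuming the `trainer` object from the issue context, and that this `transformers` version already exposes the `ignore_keys` argument on `evaluate`/`predict`; the keys listed are illustrative):
```python
# evaluate()/predict() accept ignore_keys, so extra output tensors such as
# hidden states or attentions can be dropped without touching prediction_step.
metrics = trainer.evaluate(ignore_keys=["hidden_states", "attentions"])
```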
I'd be happy to make this change, let me know if I'm missing something here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9057/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9056/comments | https://api.github.com/repos/huggingface/transformers/issues/9056/events | https://github.com/huggingface/transformers/issues/9056 | 762,234,519 | MDU6SXNzdWU3NjIyMzQ1MTk= | 9,056 | Token classification example (run_ner.py) should work without fast tokenizers | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this script only supports models that have a fast tokenizer (there is now a clear assert of that after the tokenizer is loaded). The old script will work with models that only have a slow tokenizer.",
"> Yes, this script only supports models that have a fast tokenizer (there is no a clear assert of that after the tokenizer is loaded). The old script will work with models that only have a slow tokenizer.\r\n\r\nSomebody wrote an assert four days ago. There are certain inconveniences to the old script, e.g. it doesn't utilize the `datasets` library. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am facing same problem in BioGPT\r\n",
"I also have the same problem, has anyone found any solution for that? ",
"No solution found yet, I also have this error in BioGPT",
"Can you try `pip install tokenizers`? "
] | 1,607 | 1,695 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
The token classification example ( [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py) ) calls the Tokenizer with `return_offsets_mapping=True` (line 279).
This is not allowed for Python tokenizers and raises the error `NotImplementedError: return_offset_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.`
run_ner.py should align tokens and labels even if a fast tokenizer is not available.
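As a point of reference, offset-free alignment is feasible by tokenizing word by word (roughly what the legacy script did); a hedged sketch, using the model named below as an example:
```python
# Hedged sketch: encode each word separately, label its first sub-token, and
# mask the remaining sub-tokens with -100 so the loss ignores them.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")  # slow tokenizer

def align(words, labels):
    input_ids, label_ids = [tokenizer.cls_token_id], [-100]
    for word, label in zip(words, labels):
        sub_ids = tokenizer.encode(word, add_special_tokens=False)
        input_ids.extend(sub_ids)
        label_ids.extend([label] + [-100] * (len(sub_ids) - 1))
    input_ids.append(tokenizer.sep_token_id)
    label_ids.append(-100)
    return input_ids, label_ids
```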
## Motivation
There isn't a fast tokenizer available for `vinai/bertweet-base`, and I guess this may apply to a few other models as well.
Passing `vinai/bertweet-base` as `model_name_or_path` to `run_ner.py` instantly raises `NotImplementedError`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9056/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9055/comments | https://api.github.com/repos/huggingface/transformers/issues/9055/events | https://github.com/huggingface/transformers/issues/9055 | 762,195,911 | MDU6SXNzdWU3NjIxOTU5MTE= | 9,055 | Can't load mt5 model after resizing token embedding | {
"login": "alecoutre1",
"id": 17927714,
"node_id": "MDQ6VXNlcjE3OTI3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/17927714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alecoutre1",
"html_url": "https://github.com/alecoutre1",
"followers_url": "https://api.github.com/users/alecoutre1/followers",
"following_url": "https://api.github.com/users/alecoutre1/following{/other_user}",
"gists_url": "https://api.github.com/users/alecoutre1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alecoutre1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alecoutre1/subscriptions",
"organizations_url": "https://api.github.com/users/alecoutre1/orgs",
"repos_url": "https://api.github.com/users/alecoutre1/repos",
"events_url": "https://api.github.com/users/alecoutre1/events{/privacy}",
"received_events_url": "https://api.github.com/users/alecoutre1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @alecoutre1 I think this was fixed very recently. \r\n\r\nI cannot reproduce your error on master -> could you try to pip install the master version and see if the error persists? \r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
- Platform: macOS-10.15.6-x86_64-i386-64bit
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@patrickvonplaten
## Description
I am having issues reloading a saved mt5 model when the token embeddings have been resized. This error doesn't appear with the t5 model. I receive the following error:
`Error(s) in loading state_dict for MT5ForConditionalGeneration:
size mismatch for lm_head.weight: copying a param with shape torch.Size([250112, 768]) from checkpoint, the shape in current model is torch.Size([250102, 768]).`
Is there something different between the models that I am missing?
## To reproduce:
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer, T5ForConditionalGeneration
model_class = MT5ForConditionalGeneration #T5ForConditionalGeneration
model_path = "google/mt5-base" # "t5-base"
model = model_class.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.add_tokens(['<tok1>', '<tok2>'])
model.resize_token_embeddings(len(tokenizer))
SAVING_PATH = "/tmp/test_model"
model.save_pretrained(SAVING_PATH)
tokenizer.save_pretrained(SAVING_PATH)
new_model = model_class.from_pretrained(SAVING_PATH)
```
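A hedged diagnostic sketch for the report above: mT5 does not tie its LM head to the input embeddings (`config.tie_word_embeddings` is `False`), so both shapes must change for reloading to succeed.
```python
# On affected versions only the input embedding is resized; the untied lm_head
# keeps its original 250112 rows, which is exactly what the size-mismatch shows.
print(model.get_input_embeddings().weight.shape)   # torch.Size([250102, 768])
print(model.get_output_embeddings().weight.shape)  # torch.Size([250112, 768]) on buggy versions
```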
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9055/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9054/comments | https://api.github.com/repos/huggingface/transformers/issues/9054/events | https://github.com/huggingface/transformers/pull/9054 | 762,180,503 | MDExOlB1bGxSZXF1ZXN0NTM2NzU1MTYz | 9,054 | [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Update: The init and `.from_pretrained()` should now be more aligned in Flax.\r\nNotably, the user never has to call `init(...)` himself. This is done automatically in `FlaxBertModel(config)` just as it's done in PT.\r\nThe `from_pretrained(...)` method now yields explicit warnings which weights are randomly initialized and which ones were correctly loaded, just as it's done in PT.\r\n\r\nWould be awesome if @mfuntowicz @sgugger @LysandreJik, you guys could do a second review. If this design is good for you, I'd be keen to merge this PR and think about a more general convert method.\r\n\r\n```python\r\nfrom transformers import FlaxBertModel, BertConfig\r\n\r\nmodel = FlaxBertModel(BertConfig())\r\nhid_states = model(np.ones((1, 1))) # init was done automatically \r\n\r\n# one can also add the input shape used for the init to keep flexibility\r\nmodel = FlaxBertModel(BertConfig(), input_shape=((16, 128))\r\nhid_states = model(np.ones((1, 1))) # init was done automatically \r\n\r\n# also the from_pretrained method now yields an explicit warning when weights are loaded, just as it's done in PT:\r\nmodel = FlaxBertModel.from_pretrained(\"roberta-base\")\r\n# -> rnd initializes 'bias', 'dense.kernel', 'dense.bias', 'layer_norm.bias', 'decoder.weight', 'layer_norm.weight' with explicit warning\r\n```\r\n"
] | 1,607 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR:
1) Implements a flax from_pretrained and save_pretrained method and lets `FlaxPreTrainedModel.from_pretrained()` default to Flax instead of PyTorch. Tests are added and `bert-base-cased` and `roberta-base` model weights have been uploaded to the model hub. I gave the flax model file the name `flax_model.msgpack`, similar to `pytorch_model.bin`.
2) Corrects FlaxBertForMaskedLM to align it with BertForMaskedLM: Some weights were incorrectly transposed and the activation function was different to Bert.
3) Adds `FlaxBertPretrainedModel` to Bert (and Roberta resp.) as it's done in PT.
4) Refactors the tests a bit. It's relatively easy to init a FlaxModel now I think without going over PyTorch (see tests).
5) Enforces the naming convention that every model has a corresponding `Module` class. As discussed with @mfuntowicz, in Flax it does not seem possible to make `PreTrainedModel` an `nn.Module`, because `nn.Module` should by design be stateless and not contain a `self.params` attribute; thus we always require a `....Module` in addition to every `....Model` class in Flax (@mfuntowicz can probably explain why better than I can, and I guess we should have an offline discussion about it). I started this design principle now in the PR. Let me know what you think @mfuntowicz @sgugger @LysandreJik @thomwolf
Might be a good idea to go into the PR and play with the tests a bit.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9054/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9054",
"html_url": "https://github.com/huggingface/transformers/pull/9054",
"diff_url": "https://github.com/huggingface/transformers/pull/9054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9054.patch",
"merged_at": 1608120213000
} |
https://api.github.com/repos/huggingface/transformers/issues/9053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9053/comments | https://api.github.com/repos/huggingface/transformers/issues/9053/events | https://github.com/huggingface/transformers/issues/9053 | 762,144,521 | MDU6SXNzdWU3NjIxNDQ1MjE= | 9,053 | TFTrainingArguments | {
"login": "tangzhy",
"id": 36271700,
"node_id": "MDQ6VXNlcjM2MjcxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36271700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tangzhy",
"html_url": "https://github.com/tangzhy",
"followers_url": "https://api.github.com/users/tangzhy/followers",
"following_url": "https://api.github.com/users/tangzhy/following{/other_user}",
"gists_url": "https://api.github.com/users/tangzhy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tangzhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tangzhy/subscriptions",
"organizations_url": "https://api.github.com/users/tangzhy/orgs",
"repos_url": "https://api.github.com/users/tangzhy/repos",
"events_url": "https://api.github.com/users/tangzhy/events{/privacy}",
"received_events_url": "https://api.github.com/users/tangzhy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh this is a typo, do you want to open a PR to fix it?"
] | 1,607 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
@sgugger @jplu @stefan-it
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
[ ] the official example scripts: (give details below)
[x] my own modified scripts: (give details below)
The tasks I am working on is:
[ ] an official GLUE/SQUaD task: (give the name)
[x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Specify training args.
2. Run the trainer.
3. An `AttributeError` is raised because `trainer_tf.py` looks up `evaluation_strategy` under the misspelled attribute name `evaluate_strategy`.
```python
training_args = TFTrainingArguments(
output_dir="/root/Data/marco-passage-ranking/results",
overwrite_output_dir=True,
do_train=True,
do_eval=True,
do_predict=False,
evaluation_strategy="no",
eval_steps=1000,
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
learning_rate=1e-6,
max_steps=400000,
warmup_steps=40000,
logging_dir="./tmp/log",
logging_steps=1000,
save_steps=1000,
fp16=False,
# eval_steps=1000,
xla =False
)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_ds.take(100000),
eval_dataset=dev_ds.take(10000),
compute_metrics=compute_metrics,
)
trainer.train()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-19-25eb465360cc> in <module>
7 )
8
----> 9 trainer.train()
~/Softwares/anaconda3/envs/tf2.0/lib/python3.7/site-packages/transformers/trainer_tf.py in train(self)
562 if (
563 self.args.eval_steps > 0
--> 564 and self.args.evaluate_strategy == EvaluationStrategy.STEPS
565 and self.global_step % self.args.eval_steps == 0
566 ):
AttributeError: 'TFTrainingArguments' object has no attribute 'evaluate_strategy'
```
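Until the attribute name is fixed in `trainer_tf.py`, a hedged stop-gap sketch (it works because the args instance accepts new attributes):
```python
# Hedged workaround sketch: alias the misspelled attribute the TF trainer
# reads, then build and run the TFTrainer exactly as above.
training_args.evaluate_strategy = training_args.evaluation_strategy
```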
I think this is a bug: the inconsistent evaluation-strategy attribute name raises an exception. Any advice? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9053/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9053/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9052/comments | https://api.github.com/repos/huggingface/transformers/issues/9052/events | https://github.com/huggingface/transformers/issues/9052 | 762,139,921 | MDU6SXNzdWU3NjIxMzk5MjE= | 9,052 | Add caching mechanism to BERT/RoBERTa/GPT2 for Seq2Seq accelerated generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten Hi, I'd like to ask why the decoding only reuses previous key and values but no query. Since if the model parameter rests static, the query vector can be reused as well.\r\n\r\nAppreciate for your reply.",
"Hi @liyucheng09 \r\n\r\nGood question!\r\nWe don't need to reuse query states because when caching is enabled we just need the query states for the current last token since only the last query vector is needed to predict the next token.\r\n\r\nHope this makes it clear.",
"These blogs might actually help as well: \r\n- https://huggingface.co/blog/encoder-decoder\r\n- https://jalammar.github.io/illustrated-gpt2/ \r\n\r\nto better understand the difference between query, key and value :-) "
] | 1,607 | 1,619 | 1,608 | MEMBER | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
All Seq2Seq models that make use of `generate()` usually allow `past_key_values` to be cached for both the cross-attention layer and the uni-directional decoder self-attention layer. For this feature request we should implement the feature for Bert2Bert and Roberta2Roberta.
We should implement this feature analogously to how it is implemented in Bart (a minimal sketch of the caching idea follows the list below). This means that we should
- 1) add the caching mechanism in the AttentionLayer as shown here for Bart: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L234
- 2) pass the `past_key_values` as a tuple through the layers, making sure that it's optional for the cross-attention layer: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L433
- 3) Adapt the mask correspondingly. The easiest option is probably to just copy how it's done in Bart and remove the old attention_masking logic (making sure that all tests pass): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L91 and https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L76
- 4) Add a test for `BertLMHeadModel` and `RobertaForCausalLM` that verifies that the caching mechanism works as expected:
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/tests/test_modeling_bart.py#L287
- 5) "Turn on" caching for Encoder-Decoder (this should be the last step and this might cause some other problems - happy to help here!): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L427
This might be a good issue for you @patil-suraj if interested :-)
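A minimal, self-contained sketch of the caching idea in step 1 (illustrative PyTorch, not the Bart/Bert attention code; multi-head splitting and masking are omitted):
```python
import torch
import torch.nn as nn

def cached_self_attention(q_proj, k_proj, v_proj, hidden_states, past_key_value=None):
    # hidden_states holds only the newest token(s) once caching is active
    q, k, v = q_proj(hidden_states), k_proj(hidden_states), v_proj(hidden_states)
    if past_key_value is not None:
        k = torch.cat([past_key_value[0], k], dim=1)  # reuse cached keys
        v = torch.cat([past_key_value[1], v], dim=1)  # reuse cached values
    present_key_value = (k, v)  # returned so the next step can reuse it
    scores = q @ k.transpose(-1, -2) / (q.size(-1) ** 0.5)
    out = scores.softmax(dim=-1) @ v
    return out, present_key_value

# usage: the first step passes the prompt, later steps pass one token at a time
dim = 8
q_proj, k_proj, v_proj = (nn.Linear(dim, dim) for _ in range(3))
prompt = torch.randn(1, 5, dim)
_, past = cached_self_attention(q_proj, k_proj, v_proj, prompt)
next_token = torch.randn(1, 1, dim)
out, past = cached_self_attention(q_proj, k_proj, v_proj, next_token, past)
```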
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9051/comments | https://api.github.com/repos/huggingface/transformers/issues/9051/events | https://github.com/huggingface/transformers/pull/9051 | 762,113,534 | MDExOlB1bGxSZXF1ZXN0NTM2Njk1MTgx | 9,051 | update tatoeba workflow | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | # What does this PR do?
Update the tatoeba model upload workflow for our new git-based system.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9051",
"html_url": "https://github.com/huggingface/transformers/pull/9051",
"diff_url": "https://github.com/huggingface/transformers/pull/9051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9051.patch",
"merged_at": 1607698756000
} |
https://api.github.com/repos/huggingface/transformers/issues/9050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9050/comments | https://api.github.com/repos/huggingface/transformers/issues/9050/events | https://github.com/huggingface/transformers/pull/9050 | 762,063,145 | MDExOlB1bGxSZXF1ZXN0NTM2NjQ5NTg4 | 9,050 | yuk | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | NONE | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9050/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9050",
"html_url": "https://github.com/huggingface/transformers/pull/9050",
"diff_url": "https://github.com/huggingface/transformers/pull/9050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9050.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9049/comments | https://api.github.com/repos/huggingface/transformers/issues/9049/events | https://github.com/huggingface/transformers/pull/9049 | 761,898,966 | MDExOlB1bGxSZXF1ZXN0NTM2NTAzMTA3 | 9,049 | New version of flax requires frozen dicts | {
"login": "KristianHolsheimer",
"id": 8200332,
"node_id": "MDQ6VXNlcjgyMDAzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8200332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KristianHolsheimer",
"html_url": "https://github.com/KristianHolsheimer",
"followers_url": "https://api.github.com/users/KristianHolsheimer/followers",
"following_url": "https://api.github.com/users/KristianHolsheimer/following{/other_user}",
"gists_url": "https://api.github.com/users/KristianHolsheimer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KristianHolsheimer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KristianHolsheimer/subscriptions",
"organizations_url": "https://api.github.com/users/KristianHolsheimer/orgs",
"repos_url": "https://api.github.com/users/KristianHolsheimer/repos",
"events_url": "https://api.github.com/users/KristianHolsheimer/events{/privacy}",
"received_events_url": "https://api.github.com/users/KristianHolsheimer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @KristianHolsheimer,\r\n\r\nSorry I saw your PR a bit too late...I think this is solved now on master no?",
"Okay no worries. Sorry I should've tagged you. Thanks for the reply"
] | 1,607 | 1,609 | 1,609 | CONTRIBUTOR | null | Small update to maintain compatibility with the new version of flax.
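For context, a hedged sketch of what the frozen-dict requirement looks like (the nested parameter values are made up for the example):

```python
# Newer flax versions expect model params as an immutable FrozenDict.
from flax.core.frozen_dict import freeze, unfreeze

params = {"dense": {"kernel": [1.0, 2.0]}}
frozen = freeze(params)      # immutable mapping, as the new flax API expects
editable = unfreeze(frozen)  # convert back to a plain dict when mutation is needed
```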
@mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9049/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9049",
"html_url": "https://github.com/huggingface/transformers/pull/9049",
"diff_url": "https://github.com/huggingface/transformers/pull/9049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9049.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9048/comments | https://api.github.com/repos/huggingface/transformers/issues/9048/events | https://github.com/huggingface/transformers/issues/9048 | 761,754,157 | MDU6SXNzdWU3NjE3NTQxNTc= | 9,048 | 🐛 [TFBART] LayerDrop not working on TPU | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems to work if I completely remove the `LayerDrop` (by commenting out the `if` clause, in both encoder and decoder).",
"Hey @astariul-colanim, \r\n\r\nI think a fix in #9029 (replacing the if-else by a `continue` statement) should do the trick.\r\n\r\nCould you try again from the branch and let me know?\r\n\r\nThanks!",
"So far #9029 seems working perfectly !\r\nLet's close this issue when #9029 is merged :)\r\n\r\nThanks for the fix !",
"@patrickvonplaten Finally the model crash during evaluation..\r\n\r\n<details>\r\n<summary> Full stack trace (click to view)</summary>\r\n\r\n```\r\n2020/12/15 01:19:09 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****\r\n2020/12/15 01:19:09 - INFO - transformers_addons.trainer_tf - Batch size = 8\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru\r\ne)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru\r\ne)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru\r\ne)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nTraceback (most recent call last):\r\n File \"train.py\", line 190, in main\r\n trainer.train()\r\n File \"/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py\", line 339, in train\r\n result = self.evaluate()\r\n File \"/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py\", line 281, in evaluate\r\n output = self._prediction_loop(eval_dataset, description=\"Evaluation\", prediction_loss_only=prediction_loss_only)\r\n File \"/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py\", line 207, in _prediction_loop\r\n loss, logits = self._evaluate_steps(features, labels)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py\", line 580, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py\", line 627, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py\", line 506, in _initialize\r\n *args, **kwds))\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py\", line 2446, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py\", line 2777, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py\", line 2667, in _create_graph_function\r\n capture_by_value=self._capture_by_value),\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py\", line 981, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py\", line 441, in wrapped_fn\r\n return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py\", line 3299, in bound_method_wrapper\r\n return 
wrapped_fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py\", line 968, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nValueError: in user code:\r\n\r\n/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *\r\n\tper_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(\r\ntrain.py:29 _run_model *\r\n\tout = self.model(features, training=training, **labels)\r\n/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:97 call *\r\n\toutputs = super().call(inputs[\"input_ids\"],\r\n/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1222 call *\r\n\toutputs = self.model(\r\n/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1062 call *\r\n\tinputs[\"encoder_outputs\"] = self.encoder(\r\n/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:719 call *\r\n\tfor encoder_layer in self.layers:\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt\r\n\tbasic_symbol_names, composite_symbol_names)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt\r\n\terror_checking_orelse)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func\r\n\treturn func(*args, **kwargs)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond\r\n\treturn cond_v2.cond_v2(pred, true_fn, false_fn, name)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2 \r\n\top_return_value=pred)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func\r\n\tfunc_outputs = python_func(*func_args, **func_kwargs)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse\r\n\tbasic_symbol_names + composite_symbol_names)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars\r\n\tfunctools.partial(_verify_single_cond_var, name), body_var, orelse_var)\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure\r\n\tstructure[0], [func(*x) for x in entries],\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>\r\n\tstructure[0], [func(*x) for x in entries],\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:242 _verify_single_cond_var\r\n\traise ValueError('\"{}\" is None at the end of the TRUE branch.'.format(name)) \r\n\r\nValueError: \"all_attentions\" is None at the end of the TRUE branch.\r\n```\r\n\r\n</details>",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No (TPU)
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
When I try to run TFBart on TPU, I'm getting the following error:
> ValueError: "attn" is None at the end of the TRUE branch.
It seems to come from the LayerDrop operation:
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_tf_bart.py#L387-L391
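For reference, a self-contained, hedged sketch of the `continue`-based pattern mentioned in the comments (the layers and shapes are dummies): skipping the layer in Python avoids an if/else whose two branches TF autograph would have to reconcile into a single `tf.cond`.

```python
# Self-contained sketch of LayerDrop via `continue` (dummy layers/inputs):
# the layer is skipped entirely instead of branching into code paths that
# return differently typed/shaped values.
import random
import tensorflow as tf

layers = [tf.keras.layers.Dense(4) for _ in range(3)]
layerdrop, training = 0.5, True
x = tf.random.uniform((2, 4))

for layer in layers:
    if training and random.uniform(0, 1) < layerdrop:
        continue  # skip this layer entirely
    x = layer(x)
```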
<details>
<summary> Full stack trace (click to expand...)</summary>
>2020/12/11 00:35:34 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****
2020/12/11 00:35:34 - INFO - transformers_addons.trainer_tf - Batch size = 8
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "train.py", line 203, in <module>
main()
File "train.py", line 194, in main
result = trainer.evaluate()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 281, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation", prediction_loss_only=prediction_loss_only)
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 207, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
>
> /home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
train.py:29 _run_model *
out = self.model(features, training=training, **labels)
/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:88 call *
outputs = super().call(inputs["input_ids"],
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1110 call *
outputs = self.model(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:977 call *
inputs["encoder_outputs"] = self.encoder(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:388 call *
if training and (dropout_probability < self.layerdrop): # skip the layer
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt
basic_symbol_names, composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt
error_checking_orelse)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2
op_return_value=pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse
basic_symbol_names + composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars
functools.partial(_verify_single_cond_var, name), body_var, orelse_var)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:242 _verify_single_cond_var
raise ValueError('"{}" is None at the end of the TRUE branch.'.format(name))
>
> ValueError: "attn" is None at the end of the TRUE branch.
</details>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9048/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9047/comments | https://api.github.com/repos/huggingface/transformers/issues/9047/events | https://github.com/huggingface/transformers/pull/9047 | 761,748,731 | MDExOlB1bGxSZXF1ZXN0NTM2MzY5NTc4 | 9,047 | Change nn.dropout to layer.Dropout in TFBart | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @astariul-colanim \r\n\r\nThanks for the fix! Looks good to me if it solves the error on TPU. Also cc @jplu "
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
This PR changes all the `tf.nn.dropout` calls in `modeling_tf_bart.py` to use `tf.keras.layers.Dropout` instead.
This is more consistent with `modeling_tf_roberta.py`.
Fixes #9045
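For illustration, a minimal before/after of the pattern being changed (the 0.1 rate and tensor shape are made up for the example):

```python
import tensorflow as tf

x = tf.random.uniform((2, 4))

# Before: functional call, with the rate resolved at call time
# x = tf.nn.dropout(x, rate=0.1 if training else 0.0)

# After: a layer built once, with `training` handled by Keras itself
dropout = tf.keras.layers.Dropout(rate=0.1)
x = dropout(x, training=True)
```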
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9047/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9047",
"html_url": "https://github.com/huggingface/transformers/pull/9047",
"diff_url": "https://github.com/huggingface/transformers/pull/9047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9047.patch",
"merged_at": 1607679625000
} |
https://api.github.com/repos/huggingface/transformers/issues/9046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9046/comments | https://api.github.com/repos/huggingface/transformers/issues/9046/events | https://github.com/huggingface/transformers/issues/9046 | 761,745,858 | MDU6SXNzdWU3NjE3NDU4NTg= | 9,046 | BlenderBot RuntimeError: CUDA error: device-side assert triggered | {
"login": "manzar96",
"id": 38495091,
"node_id": "MDQ6VXNlcjM4NDk1MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/38495091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manzar96",
"html_url": "https://github.com/manzar96",
"followers_url": "https://api.github.com/users/manzar96/followers",
"following_url": "https://api.github.com/users/manzar96/following{/other_user}",
"gists_url": "https://api.github.com/users/manzar96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manzar96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manzar96/subscriptions",
"organizations_url": "https://api.github.com/users/manzar96/orgs",
"repos_url": "https://api.github.com/users/manzar96/repos",
"events_url": "https://api.github.com/users/manzar96/events{/privacy}",
"received_events_url": "https://api.github.com/users/manzar96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @manzar96,\r\n\r\nIt would be awesome if you could provide a full code snippet that I can copy paste and run to reproduce the error. I am not able to do so with your code above. \r\n\r\nThanks a lot!",
"I made an example:\r\n\r\n```import torch\r\nfrom transformers import BlenderbotSmallTokenizer, \\\r\n BlenderbotForConditionalGeneration\r\n\r\nDEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\nmodel = BlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-90M')\r\nmodel.to(DEVICE)\r\ninputs = torch.tensor([[14, 49, 42, 626, 2727, 1063, 5, 0, 0, 0, 0, 0, 0, 0],\r\n [14, 1322, 7, 1427, 13, 7, 153, 384, 5, 14,\r\n 18, 64, 7261, 5]], device=DEVICE)\r\n\r\ninputs_att = torch.tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]],\r\n device=DEVICE)\r\n\r\nrepl_targets = torch.tensor([[ 46, 15, 3283, 20, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100],\r\n [ 121, 54, 37, 53, 60, 12, 447, 10, 1427, 15, 51, 11,\r\n 598, 20]], device=DEVICE)\r\n\r\npad_targets = torch.tensor([[ 46, 15, 3283, 20, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0],\r\n [ 121, 54, 37, 53, 60, 12, 447, 10, 1427, 15, 51, 11,\r\n 598, 20]], device=DEVICE)\r\n\r\n\r\noutputs=model.forward(input_ids=inputs, attention_mask=inputs_att,\r\n labels=repl_targets, return_dict=True)\r\nimport ipdb;ipdb.set_trace()\r\n```\r\n\r\n\r\nIf you try printing the outputs['loss'] the error occurs. However, if you replace the `repl_targets` with the `pad_targets` variable everything works fine (but the loss does not mask 0, so that's not always correct for use).",
"@patrickvonplaten \r\n\r\nThis is a bug, in bart `decoder_input_ids` are prepared by shifting the `labels` to right, but it doesn't replace -100 with `pad_token_id`. \r\nhttps://github.com/huggingface/transformers/blob/6587cf9f8448b5573cf4a1c639ef4857472d1da0/src/transformers/models/bart/modeling_bart.py#L65-L73\r\n\r\nIn T5 we automatically replace -100 with `pad_token_id` when preparing `decoder_input_ids`.\r\nhttps://github.com/huggingface/transformers/blob/6587cf9f8448b5573cf4a1c639ef4857472d1da0/src/transformers/models/t5/modeling_t5.py#L740-L756",
"You're right @patil-suraj - do you want to open a PR to fix it in Bart? :-) ",
"Yeah!"
] | 1,607 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-56-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (GTX 1060 6GB)
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): I am using the BlenderbotForConditionalGeneration ('facebook/blenderbot-90M') along with the relevant small tokenizer.
The problem arises when using:
I am using my own trainer implementation. I think that the problem has to do with the indices used for the labels. More specifically, when I am using:
```outputs = self.model(input_ids=inputs, attention_mask=inputs_att, labels=pad_targets, return_dict=True)```
everything works fine as the "pad_targets" are the targets using 0 as the index for masked (padded) tokens.
However when I am using:
```outputs = self.model(input_ids=inputs, attention_mask=inputs_att, labels=repl_targets, return_dict=True)```
and then print `outputs['loss']`, the following error occurs:
`RuntimeError: CUDA error: device-side assert triggered`
as the "repl_targets" are the targets using the -100 as the index for masked (padded) tokens.
The aforementioned error also occurs when using the argument:
`decoder_input_ids=repl_targets`
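As a stop-gap until a fix lands, here is a hedged workaround sketch. It reuses `model`, `inputs`, `inputs_att`, `pad_targets` and `repl_targets` from the setup above: pass the 0-padded labels so the internally derived `decoder_input_ids` stay valid token ids, and compute the -100-masked loss manually.

```python
# Hedged workaround sketch: let the model shift the 0-padded labels into
# decoder_input_ids, then recompute the loss ourselves with ignore_index=-100
# so the padding positions do not contribute to it.
import torch.nn.functional as F

outputs = model(input_ids=inputs, attention_mask=inputs_att,
                labels=pad_targets, return_dict=True)
masked_loss = F.cross_entropy(
    outputs.logits.view(-1, outputs.logits.size(-1)),
    repl_targets.view(-1),
    ignore_index=-100,
)
```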
The task I am working on is:
Dialogue generation on the Empathetic Dialogues dataset.
## Expected behavior
I think that there is a problem with how the -100 padding index is handled, but I am not sure :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9046/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9045/comments | https://api.github.com/repos/huggingface/transformers/issues/9045/events | https://github.com/huggingface/transformers/issues/9045 | 761,731,843 | MDU6SXNzdWU3NjE3MzE4NDM= | 9,045 | 🐛 [TF_BART] "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No (TPU)
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
When I try to run TFBart on TPU, I'm getting the following error:
> TypeError: "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch. TensorFlow control flow requires that they are the same.
It seems to come from the dropout operation:
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_tf_bart.py#L373
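If that diagnosis is right, the minimal fix is to keep both branches of the conditional the same dtype. A hedged sketch of the two options follows (the 0.1 rate is illustrative; option (b) matches what #9047 adopts):

```python
# Both branches of the conditional must share a dtype: a literal `0` is int32
# while `self.dropout` is float32, hence the TF control-flow error above.
import tensorflow as tf

x = tf.random.uniform((2, 4))
training = False

# (a) dtype-consistent functional form: 0.0, not 0
x = tf.nn.dropout(x, rate=0.1 if training else 0.0)

# (b) layer form, which lets Keras handle `training` itself (as in #9047)
dropout = tf.keras.layers.Dropout(0.1)
x = dropout(x, training=training)
```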
<details>
<summary> Full stack trace (click to expand...)</summary>
>2020/12/11 00:00:55 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****
2020/12/11 00:00:55 - INFO - transformers_addons.trainer_tf - Batch size = 8
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "train.py", line 203, in <module>
main()
File "train.py", line 194, in main
result = trainer.evaluate()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 281, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation", prediction_loss_only=prediction_loss_only)
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 207, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
>
> /home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
train.py:29 _run_model *
out = self.model(features, training=training, **labels)
/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:88 call *
outputs = super().call(inputs["input_ids"],
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1110 call *
outputs = self.model(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:977 call *
inputs["encoder_outputs"] = self.encoder(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:373 call *
x = tf.nn.dropout(x, rate=self.dropout if training else 0)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt
basic_symbol_names, composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt
error_checking_orelse)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2
op_return_value=pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse
basic_symbol_names + composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars
functools.partial(_verify_single_cond_var, name), body_var, orelse_var)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:267 _verify_single_cond_var
orelse_var.dtype.name))
>
> TypeError: "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch. TensorFlow control flow requires that they are the same.
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9045/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9044/comments | https://api.github.com/repos/huggingface/transformers/issues/9044/events | https://github.com/huggingface/transformers/issues/9044 | 761,665,180 | MDU6SXNzdWU3NjE2NjUxODA= | 9,044 | XLNet ONNX model giving error: "Attempting to broadcast an axis by a dimension other than 1" | {
"login": "singhn27",
"id": 1694751,
"node_id": "MDQ6VXNlcjE2OTQ3NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1694751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singhn27",
"html_url": "https://github.com/singhn27",
"followers_url": "https://api.github.com/users/singhn27/followers",
"following_url": "https://api.github.com/users/singhn27/following{/other_user}",
"gists_url": "https://api.github.com/users/singhn27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singhn27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singhn27/subscriptions",
"organizations_url": "https://api.github.com/users/singhn27/orgs",
"repos_url": "https://api.github.com/users/singhn27/repos",
"events_url": "https://api.github.com/users/singhn27/events{/privacy}",
"received_events_url": "https://api.github.com/users/singhn27/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-4.14.193-113.317.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@TevenLeScao @mfuntowicz @patil-suraj
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Trained HuggingFace Transformers model XLNetForSequenceClassification on custom dataset with PyTorch backend.
2. Used provided `convert_graph_to_onnx.py` script to convert model (from saved checkpoint) to ONNX format.
3. Loaded the model with ONNXRuntime
4. When feeding in int64 numpy arrays `input_ids` and `attention_masks`, the model returns the following error except when both inputs have shape (x, 1) or (x, 6). There is nothing in the configuration of the model or the structure of the training data from my end that would require shape (x, 1) or (x, 6).
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_26' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:361 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 5 by 6
```
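Not a confirmed diagnosis, but one thing worth checking is whether the exported graph really has dynamic sequence axes, or whether a dimension of 1 or 6 got baked in at export time. A hedged inspection sketch (the file path is illustrative):

```python
# Print each graph input's dimensions: symbolic names mean dynamic axes,
# plain integers mean the size was fixed during export.
import onnx

onnx_model = onnx.load("xlnet.onnx")  # illustrative path
for inp in onnx_model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```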
## Expected behavior
The expected behavior is for the model to return predictions successfully (i.e. probabilities for all classes). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9043/comments | https://api.github.com/repos/huggingface/transformers/issues/9043/events | https://github.com/huggingface/transformers/issues/9043 | 761,652,516 | MDU6SXNzdWU3NjE2NTI1MTY= | 9,043 | The example code does not work | {
"login": "fang19911030",
"id": 17070830,
"node_id": "MDQ6VXNlcjE3MDcwODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/17070830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fang19911030",
"html_url": "https://github.com/fang19911030",
"followers_url": "https://api.github.com/users/fang19911030/followers",
"following_url": "https://api.github.com/users/fang19911030/following{/other_user}",
"gists_url": "https://api.github.com/users/fang19911030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fang19911030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fang19911030/subscriptions",
"organizations_url": "https://api.github.com/users/fang19911030/orgs",
"repos_url": "https://api.github.com/users/fang19911030/repos",
"events_url": "https://api.github.com/users/fang19911030/events{/privacy}",
"received_events_url": "https://api.github.com/users/fang19911030/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The example code in the documentation of version 4 works with transformers version 4. You can find the examples for older versions (since you seem to be running v3.3.1) by clicking on the navigation bar at the left of the documentation pages. [Here](https://huggingface.co/transformers/v3.3.1/) is a direct link to v3.3.1."
] | 1,607 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.4.0-154-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.12
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @sgugger
## Information
When I run the Question Answering code from the documentation (https://huggingface.co/transformers/task_summary.html), an error is reported.
## To reproduce
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
if __name__ == "__main__":
questions = [
"How many pretrained models are available in 🤗 Transformers?",
"What does 🤗 Transformers provide?",
"🤗 Transformers provides interoperability between which frameworks?",
]
for question in questions:
inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}")
```
## Error message
File "/home/pxf109/LegalContractModel/example.py", line 22, in <module>
answer_start_scores = outputs.start_logits
AttributeError: 'tuple' object has no attribute 'start_logits
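As the reply notes, the v4 documentation examples match v4 behavior; on v3.3.1 the forward pass returns a tuple by default, so `outputs.start_logits` fails. A minimal sketch of two ways to adapt the snippet to that version:

```python
# On transformers v3.3.1 the forward pass returns a tuple by default.
# Option 1: request a structured output explicitly.
outputs = model(**inputs, return_dict=True)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits

# Option 2: unpack the tuple positionally (start_logits, end_logits).
answer_start_scores, answer_end_scores = model(**inputs)[:2]
```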
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9043/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9042/comments | https://api.github.com/repos/huggingface/transformers/issues/9042/events | https://github.com/huggingface/transformers/pull/9042 | 761,649,163 | MDExOlB1bGxSZXF1ZXN0NTM2Mjg0Mjk5 | 9,042 | [finetune_trainer] enhancements and fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Unfortunately the naming has been done a long time ago and even if it's not ideal, we can't break it like this as people rely on the names of the keys in their code. I would advocate for the renaming to be done in the script directly and not inside `Trainer`.\r\n\r\nIf there is really a lot of people asking for it, we can think of a strategy to rename those keys progressively with some kind of deprecation warning, but since it's merely cosmetic, I would leave that for scripts using Trainer.",
"I see what you mean that someone relying on \"eval_loss\" when doing predict would have their code broken. Yes, we can't do that.\r\nI moved this fix back into the finetune trainer as it was originally.\r\n\r\nCould we set a a target for when we could do breaking changes and fix this bug?\r\n\r\nI also find it strange that we use `--n_val` but `eval_`\r\n\r\nAnd then `predict` vs `test_`.\r\n\r\nThe callbacks are inconsistent too :(\r\n\r\nI'd plan a design session where we first collect all the rough edges and inputs on what needs to be polished and then adjust the trainer so that it's not limping for the rest of its life. Can this be done?",
"> Could we set a a target for when we could do breaking changes and fix this bug?\r\n\r\nLike I said, unless there is strong demand for it, I think we're just going to leave it as it. It's not the ideal naming choice but we have to deal with it now (kind of like PretrainedConfig vs PreTrainedConfig).\r\n\r\n> I also find it strange that we use `--n_val` but `eval_`\r\n> \r\n> And then `predict` vs `test_`.\r\n\r\nI don't understand that part. Also `predict` could be used for test or evaluation, so `predict` does not mean test.\r\n\r\n> The callbacks are inconsistent too \r\n\r\nCould you elaborate? If it's the evaluate vs predict you mentioned, there is a reason. `prediction_step` is called both in `predict` and `evaluate` whereas the `on_evaluate` is only called at `evaluate`.\r\n",
"> > Could we set a a target for when we could do breaking changes and fix this bug?\r\n> \r\n> Like I said, unless there is strong demand for it, I think we're just going to leave it as it. It's not the ideal naming choice but we have to deal with it now (kind of like PretrainedConfig vs PreTrainedConfig).\r\n\r\nI'm not sure how this is similar. I call `trainer.predict()` and get in return `eval_` metrics - this is very confusing.\r\n \r\n> > I also find it strange that we use `--n_val` but `eval_`\r\n> > And then `predict` vs `test_`.\r\n> \r\n> I don't understand that part. Also `predict` could be used for test or evaluation, so `predict` does not mean test.\r\n\r\nI suppose from the perspective of the existing trainer like finetune they are the same. But surely this is much less of an issue than `val` vs `eval`.\r\n\r\n> > The callbacks are inconsistent too\r\n> \r\n> Could you elaborate? If it's the evaluate vs predict you mentioned, there is a reason. `prediction_step` is called both in `predict` and `evaluate` whereas the `on_evaluate` is only called at `evaluate`.\r\n\r\nAh, I see, thank you for clarifying that - then why is there no `on_predict` to match `on_evaluate`? I assumed it was the former.\r\n",
"There is no `on_predict` event because the training loop never calls `Trainer.predict`. It does however call `Trainer.evaluate`. I guess we could add the `on_predict` event that would be called at the end of a `Trainer.predict` method.\r\n\r\n> But surely this is much less of an issue than `val` vs `eval`.\r\n\r\nCould you please clarifying that part? I'm not sure what you mean by this.\r\n\r\n> I'm not sure how this is similar. I call `trainer.predict()` and get in return `eval_` metrics - this is very confusing.\r\n\r\nIf we go down that road, `trainer.predict` should only return predictions and not even the metrics (which we won't do either as it's a bigger breaking change but it would definitely make sense to me). Predict and evaluate do not mean test vs evaluation, it's really a matter of getting the predictions of the model vs evaluating the metrics on a given dataset (which could be train/eval/test).\r\n\r\nI can get behind adding a prefix argument to those method that defaults to `None` and will be used to prefix the metrics. If one is passed, it's used (so it's easier to get the `test_` prefix you want and does not require ugly post-processing) otherwise `eval_` is used to avoid any breaking changes. Would that work for you?",
"> \r\n> \r\n> But surely this is much less of an issue than val vs eval.\r\n> \r\n> Could you please clarifying that part? I'm not sure what you mean by this.\r\n\r\nOf course, we have `--n_val` (mnemonic validation), but then we return `eval_(foo|bar)` as the metrics for \"validation\". But see below.\r\n\r\nSo now that you have further expanded on eval+predict not being correlated to validation+testing (thank you!), I think I'm not approaching the problem in the right way. \r\n\r\nReally, then there is no bug with both `predict` and `evaluate` returning metrics with `eval_`-prefixed keys and the bug is really in the end use in `finetune_reader.py`. Here is what I'm thinking:\r\n\r\n1. It shouldn't be `eval_bleu` and `test_bleu`, it should be `val_bleu` and `test_bleu` - because these are both evaluation report on 2 different splits so `--n_val` dataset should lead to `val_bleu` metrics, and `--n_test` to `test_bleu` (not sure of `valid` or `val` - probably `val` to match `--n_val`)\r\n2. Ideally that whole `eval_` prefix should be removed altogether, since it just has a potential at being confused with `val` as in `validation`, and there are no other metrics in that context - the trainer code forcefully adds `eval_` to all metrics - but as we said it's not possible to do w/o a breaking change, and it's not really a problem anyway. these are just evaluation metrics - no problem here.\r\n3. What the interface could use then is getting a `split` argument which it could prepend to the metrics keys, so if someone is doing evaluation on the validation dataset the metrics will be returned could start with `val_eval_`.\r\n\r\nSo if my logic makes sense practically we can either:\r\n1) leave trainer alone and recode `finetune_reader.py` to prefix `eval_` with the split name - so it'll be `val_eval_bleu` and `test_eval_bleu`\r\n2) add an optional trainer argument `split` for `evaluate` and `predict` and have the trainer arrange the split name prefixed in the metrics as in the option above.\r\n\r\nProbably the 1st one is simpler, since it gives the user full flexibility.\r\n\r\nThe same should happen to the results file naming - we need to choose whether those are `(val|test )_results.json`or `(eval|predict)_results.json` - and not the currently confusing pair `eval_results.json`, but `test_results.json`.\r\n",
"If you're happy with `val_eval_bleu` and `test_eval_bleu`, it's fine by me. I'd rather name `split` `prefix` in solution 2 unless I understand badly what you mean by it. It's also fine by me and could be a feature other users find useful (if they don't want `eval_xxx` as names).",
"> If you're happy with `val_eval_bleu` and `test_eval_bleu`, it's fine by me. I'd rather name `split` `prefix` in solution 2 unless I understand badly what you mean by it. It's also fine by me and could be a feature other users find useful (if they don't want `eval_xxx` as names).\r\n\r\nOK, 3 follow up questions:\r\n\r\n1. I suggested split `since` it's typically either `train|val|test`, but `prefix` works just as well. Except it's unclear then in the function API - `prefix` to what? `metrics_key_prefix`?\r\n\r\n2. So we are discussing to optionally prefix `eval_bleu`, etc. with something and not replace `eval_`, yes? So the end result is f`\"{prefix}_eval_bleu\"`, etc.\r\n\r\n3. If so, should the prefix include the separator `_` (`test_`) or just be (`test`) and trainer will `\"_\".join([prefix, key])`? I suppose the latter\r\n\r\nWhat do you think @patrickvonplaten + @patil-suraj? I think @sgugger's priority is the trainer itself, but what do you think about what would be ideal for seq2seq examples domain?",
"For 1, yes `metric_key_prefix` sounds better. For 2, I was thinking of replacing the `eval_` actually, which goes with 3, the prefix should not have the `_`. ",
"@sgugger, please have a look - do we want None as the default and use `eval` in the code or `eval` in the function signature - I suppose the latter, right? I'm a bit confused here with the optional, having non-None default and keeping the API unbroken. Help?",
"And so then the only remaining thing that I'm stuck with - do we call the results of `evaluate` in finetune trainer `val` or `eval`? Since we call it `test` for `predict` - so confusing. Are those results run on dataset splits and then should be `val` and `test` or results on functionality they check and then they should be `eval` and `predict` - but `predict` doesn't work, since the results are evaluation results.\r\n\r\nI think they should be `val` and `test` since both sets are evaluation results on 2 different splits.",
"As the comments issue is unrelated to this PR - how about I just let you edit those comments as you think would be the best, @sgugger. Anything you choose works for me. Thank you.",
"It doesn't look like others are going to review this PR. I didn't want to force anybody by asking to review, just tagging. Is it better to ask for a review explicitly?\r\n\r\n@sgugger, please let me know if you still want to adjust unrelated to this PR comments or should I merge it and you will deal with it later.\r\n\r\nThank you!",
"> Left some nits\r\n\r\nThank you, @patrickvonplaten!\r\n\r\nI went on and removed the `optional` word in the docs section as well to match the function signature. You haven't suggested I do that, so just want to make sure I did the right thing.",
"> I went on and removed the `optional` word in the docs section as well to match the function signature. You haven't suggested I do that, so just want to make sure I did the right thing.\r\n\r\nSo that was wrong - thank you for fixing that, @sgugger \r\n\r\n- So we are removing `Optional` from the function signature because `Optional == Union[..., None] `and we have no None here\r\n- but we are documenting that the argument is `optional` to the user\r\n\r\n"
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | The main need was to add speed metrics so that we can run speed regression checks, but along the way a bunch of other things got worked on as well. Hopefully you will find the proposed changes useful.
This PR changes `Trainer`:
* [x] adds an optional `metric_key_prefix` argument to the `evaluate` and `predict` functions so that metrics are returned with a key prefix chosen by the user rather than the default `eval_`.
This PR changes `finetune_trainer`:
* [x] utils: sort json keys when dumping to filesystem
* [x] renames s/eval/val/ for the validation dataset results
* [x] adds speed metrics for all: train/eval/test (samples_per_second/runtime/n_objs)
* [x] refactors logging/saving code for each mode
* [x] renames internal variables to make clear which hold metrics and which hold outputs that contain more than just metrics
* [x] fixes a bug where all_results.json wasn't getting saved in the right place
* [x] rounds loss values to 4 decimal places - before it was `"eval_loss": 368.2950744628906,` - not sure if this is better done upstream in the trainer?
Here is a sample of `all_results.json` after this change:
```
{
"epoch": 1.0,
"test_bleu": 22.8548,
"test_gen_len": 35.9,
"test_loss": 734.8612,
"test_n_objs": 10,
"test_runtime": 2.5185,
"test_samples_per_second": 3.971,
"train_n_objs": 200,
"train_runtime": 24.9101,
"train_samples_per_second": 8.029,
"val_bleu": 26.581,
"val_gen_len": 31.4,
"val_loss": 738.3438,
"val_n_objs": 200,
"val_runtime": 33.9329,
"val_samples_per_second": 5.894
}
```
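To illustrate the new `Trainer` argument, a minimal usage sketch (`trainer` and `test_dataset` are assumed to exist; the exact call sites in `finetune_trainer.py` may differ slightly):
```python
# Metrics now come back as val_loss/val_bleu/... and test_loss/test_bleu/...
# instead of everything carrying the eval_ prefix.
val_metrics = trainer.evaluate(metric_key_prefix="val")
test_output = trainer.predict(test_dataset, metric_key_prefix="test")
```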
@sgugger, @patil-suraj, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9042",
"html_url": "https://github.com/huggingface/transformers/pull/9042",
"diff_url": "https://github.com/huggingface/transformers/pull/9042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9042.patch",
"merged_at": 1607996734000
} |
https://api.github.com/repos/huggingface/transformers/issues/9041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9041/comments | https://api.github.com/repos/huggingface/transformers/issues/9041/events | https://github.com/huggingface/transformers/issues/9041 | 761,647,334 | MDU6SXNzdWU3NjE2NDczMzQ= | 9,041 | google/bert2bert_L-24_wmt_de_en doesn't match official implementation | {
"login": "bkj",
"id": 6086781,
"node_id": "MDQ6VXNlcjYwODY3ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6086781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkj",
"html_url": "https://github.com/bkj",
"followers_url": "https://api.github.com/users/bkj/followers",
"following_url": "https://api.github.com/users/bkj/following{/other_user}",
"gists_url": "https://api.github.com/users/bkj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkj/subscriptions",
"organizations_url": "https://api.github.com/users/bkj/orgs",
"repos_url": "https://api.github.com/users/bkj/repos",
"events_url": "https://api.github.com/users/bkj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @bkj,\r\n\r\nThanks for the very in-detailed issue. It would be awesome if you could also share your custom scripts here to evaluate on the entire dataset. This indeed seems like a problem, I'll look into it",
"@patrickvonplaten Thanks for the quick response.\r\n\r\nCode to run inference w/ the two models can be found here: \r\n https://github.com/bkj/hf_bert2bert_debug\r\n\r\nBy default, it just runs one batch to save time -- you can run on the whole test dataset by setting `QUICKRUN = False` in each of the files.\r\n\r\nBLEU scores on this batch are ~ 23 for HF and ~ 35 for TF.\r\n\r\nLet me know what you think! I'm not super familiar w/ `transformers`, so it's possible I'm making some pre/post-processing mistake -- so likely a good idea to double check my glue code.",
"Hey @bkj,\r\n\r\nI'll try to allocate time to solve this problem. I think it is indeed a fundamental difference between the two implementations - will try to investigate. Thanks for your response!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"Sorry for replying that late!\r\n\r\nThe problem is that the original code for those translation models is not published so that debugging isn't really possible. The original github can be found here: https://github.com/google-research/google-research/tree/master/bertseq2seq and the pretrained weights here: https://tfhub.dev/google/bertseq2seq/roberta24_bbc/1 in case someone is very motivated to take a deeper look.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,623 | 1,623 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-1030-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten ; maybe @patil-suraj
## Information
I'm trying to run the `transformers` implementation of WMT14 DE->EN translation, using the `google/bert2bert_L-24_wmt_de_en` checkpoint and [instructions](https://huggingface.co/google/bert2bert_L-24_wmt_de_en).
The BLEU scores I get from the `transformers` implementation are substantially lower than those from [the official TensorFlow model](https://github.com/google-research/google-research/tree/master/bertseq2seq) -- 24.7 w/ HF vs 34.0 w/ the official implementation.
## To reproduce
The following snippet shows qualitative differences in the output of the models:
```python
import datasets
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# --
# Load dataset
dataset = datasets.load_dataset("wmt14", "de-en", split="test")
sentence = dataset[20]['translation']['de']
target = dataset[20]['translation']['en']
print(target)
# If the street is clear, the pedestrian obtains a green light immediately, if not, there is a delay of around 15 seconds.
# --
# HF model
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_de_en", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_de_en")
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
output_str = tokenizer.decode(output_ids, skip_special_tokens=True)
print(output_str)
# the road is free, it takes about 15 seconds if not directly for the footganger.
# --
# TF model
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
import tensorflow_text as tf_text
tf.disable_eager_execution()
# Load model
model = hub.Module('https://tfhub.dev/google/bertseq2seq/bert24_de_en/1')
# Setup session
sess = tf.InteractiveSession()
sess.run(tf.tables_initializer())
sess.run(tf.global_variables_initializer())
# Define graph
src = tf.placeholder(tf.string, shape=[None])
translate = model(src)
# Translate
output_str = sess.run(translate, feed_dict = {
src : [sentence]
})
print(output_str[0])
# "If the road is clear, there is a green area for the pedestrian, if not it takes about 15 seconds."
```
I can also share the (custom) scripts I'm using to run inference on the entire dataset and compute BLEU scores. Note I am using the same BLEU code for both implementations.
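A minimal sketch of the scoring step, assuming `sacrebleu` as the scorer (`hypotheses` and `references` are placeholder lists of detokenized strings; the actual scripts are in the repo linked in the comments above):
```python
import sacrebleu

# Corpus-level BLEU of the system outputs against the WMT14 references.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```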
## Expected behavior
I would expect the BLEU scores and the quality of the translations to be comparable.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9041/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9040/comments | https://api.github.com/repos/huggingface/transformers/issues/9040/events | https://github.com/huggingface/transformers/issues/9040 | 761,635,284 | MDU6SXNzdWU3NjE2MzUyODQ= | 9,040 | Zero Shot Classification Pipeline fails when running in CPU-only Docker container | {
"login": "ravisurdhar",
"id": 29689587,
"node_id": "MDQ6VXNlcjI5Njg5NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/29689587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravisurdhar",
"html_url": "https://github.com/ravisurdhar",
"followers_url": "https://api.github.com/users/ravisurdhar/followers",
"following_url": "https://api.github.com/users/ravisurdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/ravisurdhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravisurdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravisurdhar/subscriptions",
"organizations_url": "https://api.github.com/users/ravisurdhar/orgs",
"repos_url": "https://api.github.com/users/ravisurdhar/repos",
"events_url": "https://api.github.com/users/ravisurdhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravisurdhar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,607 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.0.0
- Platform: MacOS 10.15.7 (2018 MacBook Pro 15")
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0 CPU Only
- Tensorflow version (GPU?): N/A
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help
Maybe @LysandreJik ?
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Pull and start the official [transformers-pytorch-cpu](https://hub.docker.com/r/huggingface/transformers-pytorch-cpu/dockerfile) container.
2. `docker exec -it huggingface bash`
3. `python3`
4. `from transformers import pipeline`
5. `classifier = pipeline("zero-shot-classification", model='facebook/bart-large-mnli', tokenizer='facebook/bart-large-mnli', device=-1)`
Step 4 above results in the following warning being printed:
```
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
```
Step 5 above results in the models being downloaded, then `Killed` is printed and the Python interpreter exits.
## Expected behavior
When running locally in a Jupyter Notebook or directly in the terminal (not in a container), the following works correctly and the warning about the CUDA initialization isn't printed:
```
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='facebook/bart-large-mnli', tokenizer='facebook/bart-large-mnli', device=-1)
```
The problem seems to be limited to either the zero shot classification pipeline, or the facebook/bart-large-mnli model, since the following works correctly in the container (though the warning from Step 4 about the CUDA initialization is still printed):
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
```
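One plausible (unverified) explanation: `Killed` usually means the kernel's OOM killer terminated the process, and the ~1.6 GB `facebook/bart-large-mnli` checkpoint can exceed the container's default memory limit, while the much smaller default sentiment model fits. A quick check sketch, assuming `psutil` is available in the container:
```python
import psutil

# How much RAM does the container actually see before loading the model?
print(f"available memory: {psutil.virtual_memory().available / 1e9:.1f} GB")
```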
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9040/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9040/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9039/comments | https://api.github.com/repos/huggingface/transformers/issues/9039/events | https://github.com/huggingface/transformers/issues/9039 | 761,633,746 | MDU6SXNzdWU3NjE2MzM3NDY= | 9,039 | BERT outputs are different with the same input in training mode | {
"login": "lamthuy",
"id": 8089862,
"node_id": "MDQ6VXNlcjgwODk4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamthuy",
"html_url": "https://github.com/lamthuy",
"followers_url": "https://api.github.com/users/lamthuy/followers",
"following_url": "https://api.github.com/users/lamthuy/following{/other_user}",
"gists_url": "https://api.github.com/users/lamthuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamthuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamthuy/subscriptions",
"organizations_url": "https://api.github.com/users/lamthuy/orgs",
"repos_url": "https://api.github.com/users/lamthuy/repos",
"events_url": "https://api.github.com/users/lamthuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamthuy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there! The [forum](https://discuss.huggingface.co/) is a better place for those kinds of general questions, as we keep the issues for bugs and feature requests only.\r\n\r\nTo answer your question, this is because most Deep Learning models (including BERT) use a technique called [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) to generalize better, which randomly zeros some activations during training. This randomness is the reason you are getting different results for the same inputs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @lamthuy were you able to investigate and find out the cause?",
"Same question here!",
"I am also facing the same error. \r\n@sgugger I thought the dropout is deactivated once I call model.eval()\r\n\r\n",
"Not sure what your code sample is @lava18 . The code sample above (fixed like below) always returns the same value once the line `model.train()` is removed.\r\n```py\r\nimport torch\r\nfrom transformers import BertModel, BertTokenizer\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\na = tokenizer.encode(\"Hello how are you?\", return_tensors='pt')\r\n\r\ntorch.mean(model(a).to_tuple()[0])\r\n```",
"You need to call `model.eval()` after training (or before inference). That should deactivate the dropouts and you will always get the same output for the same input. "
] | 1,607 | 1,681 | 1,619 | NONE | null | When training mode is enabled, the BERT model returns different outputs even for the same input; any idea why this happens?
```python
import torch
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
a = tokenizer.encode("Hello how are you?", return_tensors='pt')
model.train()
torch.mean(model(a)[0])
# tensor(-0.0167, grad_fn=<MeanBackward0>)
torch.mean(model(a)[0])
# tensor(-0.0162, grad_fn=<MeanBackward0>)
torch.mean(model(a)[0])
# tensor(-0.0156, grad_fn=<MeanBackward0>)
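# Added sketch (per the comments above): the variation comes from dropout
# being active in train mode; eval mode makes the forward pass deterministic.
model.eval()
torch.mean(model(a)[0])
# same tensor value on every call now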
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9038/comments | https://api.github.com/repos/huggingface/transformers/issues/9038/events | https://github.com/huggingface/transformers/pull/9038 | 761,620,804 | MDExOlB1bGxSZXF1ZXN0NTM2MjYwNTEz | 9,038 | Fix typo #9012 (#1) | {
"login": "NatLun137",
"id": 66668418,
"node_id": "MDQ6VXNlcjY2NjY4NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/66668418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NatLun137",
"html_url": "https://github.com/NatLun137",
"followers_url": "https://api.github.com/users/NatLun137/followers",
"following_url": "https://api.github.com/users/NatLun137/following{/other_user}",
"gists_url": "https://api.github.com/users/NatLun137/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NatLun137/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NatLun137/subscriptions",
"organizations_url": "https://api.github.com/users/NatLun137/orgs",
"repos_url": "https://api.github.com/users/NatLun137/repos",
"events_url": "https://api.github.com/users/NatLun137/events{/privacy}",
"received_events_url": "https://api.github.com/users/NatLun137/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | There is a tiny typo in the code "transformers/examples/language-modeling/run_mlm_wwm.py" at line 284. [Details.](https://github.com/huggingface/transformers/issues/9012)
# What does this PR do?
Fixes #9012
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9038/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9038",
"html_url": "https://github.com/huggingface/transformers/pull/9038",
"diff_url": "https://github.com/huggingface/transformers/pull/9038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9038.patch",
"merged_at": 1607636460000
} |
https://api.github.com/repos/huggingface/transformers/issues/9037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9037/comments | https://api.github.com/repos/huggingface/transformers/issues/9037/events | https://github.com/huggingface/transformers/pull/9037 | 761,588,866 | MDExOlB1bGxSZXF1ZXN0NTM2MjM0NDQw | 9,037 | fix the typo 9012 | {
"login": "NatLun137",
"id": 66668418,
"node_id": "MDQ6VXNlcjY2NjY4NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/66668418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NatLun137",
"html_url": "https://github.com/NatLun137",
"followers_url": "https://api.github.com/users/NatLun137/followers",
"following_url": "https://api.github.com/users/NatLun137/following{/other_user}",
"gists_url": "https://api.github.com/users/NatLun137/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NatLun137/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NatLun137/subscriptions",
"organizations_url": "https://api.github.com/users/NatLun137/orgs",
"repos_url": "https://api.github.com/users/NatLun137/repos",
"events_url": "https://api.github.com/users/NatLun137/events{/privacy}",
"received_events_url": "https://api.github.com/users/NatLun137/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I think you reverted your commit. The PR shows no diff.",
"> Hello! I think you reverted your commit. The PR shows no diff.\r\n\r\nHi! Yes, unfortunately, I was too quick... The first commit does the fix.",
"I opened [PR](https://github.com/huggingface/transformers/pull/9038) with correct changes."
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | # What does this PR do?
Fixes #9012
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9037",
"html_url": "https://github.com/huggingface/transformers/pull/9037",
"diff_url": "https://github.com/huggingface/transformers/pull/9037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9037.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9036/comments | https://api.github.com/repos/huggingface/transformers/issues/9036/events | https://github.com/huggingface/transformers/issues/9036 | 761,489,268 | MDU6SXNzdWU3NjE0ODkyNjg= | 9,036 | [docs] missing info on call back registry | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I guess the corresponding test demonstrates the usage: https://github.com/huggingface/transformers/blob/5c0bf39782c9eac8df55b89518f61c430862a7f6/tests/test_trainer_callback.py\r\n\r\n\r\n",
"I don't think this issue needs to be closed, one more example could be added to the documentatio! Let's make a good first issue out of it, maybe a contributor could help there :-)",
"Hi @sgugger. I'm new and I'd like to start contributing. Can I work on this issue?",
"Sure!",
"Thanks. Would an example like this be okay? - \r\n```\r\nclass MyCallback(TrainerCallback):\r\n \"A callback that prints a message at the beginning of training\"\r\n \r\n def on_train_begin(self, args, state, control, **kwargs):\r\n print(\"Starting training\")\r\n\r\ntrainer = Trainer(\r\n model,\r\n args,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n callbacks=[MyCallback]\r\n)\r\n```\r\nAlso, should I add the example to https://huggingface.co/transformers/main_classes/callback.html, or would the [training tutorial](https://huggingface.co/transformers/training.html) be a better place?",
"I think is should also show how to use `add_callback` as an alternative too, otherwise, that's the gist of it. The callbacks page is perfect for this I think.",
"Thanks, I've added an example for `add_callback` too and opened a PR [here](https://github.com/huggingface/transformers/pull/10928). It's failing some tests right now but I'm not sure why since I've only modified the `callback.rst` file. Could you please help me figure out why this might be happening?",
"The CI is flakey at times, I restarted the failing jobs. It looks like network problems, may have to restart again later if it still fails.\r\n\r\nedit: no luck, still network issues, will try again later, but do not worry, as long as the docs job passes and it does - you're good.",
"Cool, thanks!"
] | 1,607 | 1,617 | 1,617 | CONTRIBUTOR | null | https://huggingface.co/transformers/main_classes/callback.html is missing instructions/examples on how to register a callback. Thanks.
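For anyone landing here, the gist, mirroring the example worked out in the comments above (`model`, `args`, and `train_dataset` are assumed to be defined):
```python
from transformers import Trainer, TrainerCallback

class MyCallback(TrainerCallback):
    "A callback that prints a message at the beginning of training"

    def on_train_begin(self, args, state, control, **kwargs):
        print("Starting training")

# Register at construction time...
trainer = Trainer(model, args, train_dataset=train_dataset, callbacks=[MyCallback])
# ...or after the fact:
trainer.add_callback(MyCallback())
```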
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9035/comments | https://api.github.com/repos/huggingface/transformers/issues/9035/events | https://github.com/huggingface/transformers/issues/9035 | 761,415,148 | MDU6SXNzdWU3NjE0MTUxNDg= | 9,035 | Improve coverage of the documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I added docs for Bertweet [here](https://github.com/huggingface/transformers/pull/9379), first contribution, let me know if there is anything missing",
"Reopening the issue are not all of the items are fixed yet!",
"Hello @sgugger,\r\n\r\nIf no one else is working on it yet, I'd like to work on the `Bert Japanese` document.\r\n(I also am interested in working on `Data collators`, but I’d like to do that one by one. If there is someone else who would like to work on, please give priority to that person.)\r\n\r\n",
"Go ahead @forest1988 :-)",
"Thanks, I'll do my best!",
"I deeply apologize for my delay in opening a PR for Bert Japanese.\r\nI've just opened the PR.\r\n\r\nhttps://github.com/huggingface/transformers/pull/11219\r\n\r\nIf you find any flaws, please let me know. I'll correct it soon.\r\n\r\n"
] | 1,607 | 1,618 | 1,618 | COLLABORATOR | null | Currently, some public classes are not documented anywhere because we didn't create the corresponding doc pages. Those missing pages are:
- Benchmark classes
- Bert Japanese
- Data collators
If someone feels like working on one of those, please tag yourself with a comment on this issue. Once the objects are properly documented, they can be removed from the `SHOULD_BE_DOCUMENTED` constant in [this file](https://github.com/huggingface/transformers/blob/1310e1a758edc8e89ec363db76863c771fbeb1de/utils/check_repo.py#L374).
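As a concrete illustration of that last step (the entry names below are hypothetical placeholders; check the actual constant before editing), documenting a class means deleting its entry from that list:
```python
# utils/check_repo.py (sketch): drop an entry once its doc page exists.
SHOULD_BE_DOCUMENTED = [
    # "BertJapaneseTokenizer",  # removed: now documented (hypothetical example)
    "PyTorchBenchmark",  # hypothetical remaining entry
]
```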
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9035/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9035/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9034/comments | https://api.github.com/repos/huggingface/transformers/issues/9034/events | https://github.com/huggingface/transformers/pull/9034 | 761,410,039 | MDExOlB1bGxSZXF1ZXN0NTM2MDc5MjEw | 9,034 | Refactor FLAX tests | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | COLLABORATOR | null | # What does this PR do?
This PR refactors the FLAX model tests into a `test_modeling_flax_common` file and speeds them up by using small random models instead of pretrained ones. It will hopefully speed up the CI and make it less flaky! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9034",
"html_url": "https://github.com/huggingface/transformers/pull/9034",
"diff_url": "https://github.com/huggingface/transformers/pull/9034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9034.patch",
"merged_at": 1607633860000
} |
https://api.github.com/repos/huggingface/transformers/issues/9033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9033/comments | https://api.github.com/repos/huggingface/transformers/issues/9033/events | https://github.com/huggingface/transformers/pull/9033 | 761,371,140 | MDExOlB1bGxSZXF1ZXN0NTM2MDQ2MTU1 | 9,033 | Make ProphetNetModel really compatible with EncoderDecoder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | MEMBER | null | The interesting part of ProphetNet is its decoder which can do n-gram causal language modeling. So it could be very interesting to load a pre-trained prophetnet decoder model into an encoder-decoder design with - let's say - a longformer encoder for long-range sequence modeling.
Due to some narrow-minded thinking on my part, this didn't work previously.
```python
from transformers import EncoderDecoderModel
EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
```
As one can see, none of the pre-trained **decoder** weights are loaded into the model. The reason is that the decoder was badly modularized in `ProphetNetForCausalLM`.
Merging this PR would make it possible to load any ProphetNet decoder into an encoder-decoder model, and fine-tuning a "build-it-yourself" encoder-decoder would become much easier, *e.g.*:
```python
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
input_ids = torch.tensor([10 * [1]])
labels = torch.tensor([10 * [0]])
loss = model(input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```
The above use-case might also be interesting for @ibeltagy actually.
## Breaking changes
This does introduce a pretty heavy breaking change to `ProphetNetForCausalLM`. However, the only reason this class was created was to make it usable with `EncoderDecoderModel`, and this arguably failed a bit the first time since it made it way too difficult to load pretrained ProphetNet models into the `EncoderDecoderModel`. I guess I see this more as solving a bug than as a "new design". Also, there are no pre-trained `ProphetNetForCausalLM` models on the model hub and I highly doubt anybody has really used this class.
I want to use the same pattern for BartForCausalLM and T5ForCausalLM, so it'd be great to get this merged even though there are some breaking changes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9033",
"html_url": "https://github.com/huggingface/transformers/pull/9033",
"diff_url": "https://github.com/huggingface/transformers/pull/9033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9033.patch",
"merged_at": 1607702395000
} |
https://api.github.com/repos/huggingface/transformers/issues/9032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9032/comments | https://api.github.com/repos/huggingface/transformers/issues/9032/events | https://github.com/huggingface/transformers/issues/9032 | 761,277,835 | MDU6SXNzdWU3NjEyNzc4MzU= | 9,032 | ImportError: cannot import name 'DPRReader' from 'transformers' | {
"login": "hiteshsom",
"id": 17461216,
"node_id": "MDQ6VXNlcjE3NDYxMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/17461216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hiteshsom",
"html_url": "https://github.com/hiteshsom",
"followers_url": "https://api.github.com/users/hiteshsom/followers",
"following_url": "https://api.github.com/users/hiteshsom/following{/other_user}",
"gists_url": "https://api.github.com/users/hiteshsom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hiteshsom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hiteshsom/subscriptions",
"organizations_url": "https://api.github.com/users/hiteshsom/orgs",
"repos_url": "https://api.github.com/users/hiteshsom/repos",
"events_url": "https://api.github.com/users/hiteshsom/events{/privacy}",
"received_events_url": "https://api.github.com/users/hiteshsom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The DPR model is part of the 3.1.0 release. Please update your transformers library (4.0.1 is the current release btw :)).\r\n\r\nhttps://github.com/huggingface/transformers/releases/tag/v3.1.0",
"Hi @cronoik , Thanks for answering. I tried this \r\n```\r\npip install transformers==4.0.1\r\n```\r\nbut got this error\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nsentence-transformers 0.3.9 requires transformers<3.6.0,>=3.1.0, but you have transformers 4.0.1 which is incompatible.\r\ndpr 0.1.0 requires transformers<3.1.0,>=3.0.0, but you have transformers 4.0.1 which is incompatible.\r\n```\r\n\r\nSo then I installed version 3.1.0 as follows:\r\n\r\n```\r\npip install transformers==3.1.0\r\n```\r\nBut still getting dependency error.\r\n\r\n\r\n```\r\nCollecting transformers==3.1.0\r\n Using cached transformers-3.1.0-py3-none-any.whl (884 kB)\r\nRequirement already satisfied: requests in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (2.25.0)\r\nRequirement already satisfied: sacremoses in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (0.0.43)\r\nRequirement already satisfied: tqdm>=4.27 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (4.48.0)\r\nRequirement already satisfied: numpy in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (1.18.5)\r\nRequirement already satisfied: filelock in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (3.0.12)\r\nRequirement already satisfied: packaging in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (20.4)\r\nRequirement already satisfied: sentencepiece!=0.1.92 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (0.1.94)\r\nRequirement already satisfied: regex!=2019.12.17 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (2020.11.13)\r\nRequirement already satisfied: six in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from packaging->transformers==3.1.0) (1.15.0)\r\nRequirement already satisfied: pyparsing>=2.0.2 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from packaging->transformers==3.1.0) (2.4.7)\r\nRequirement already satisfied: chardet<4,>=3.0.2 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from requests->transformers==3.1.0) (3.0.4)\r\nRequirement already satisfied: certifi>=2017.4.17 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from requests->transformers==3.1.0) (2020.11.8)\r\nRequirement already satisfied: idna<3,>=2.5 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from requests->transformers==3.1.0) (2.10)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from requests->transformers==3.1.0) (1.25.10)\r\nRequirement already satisfied: regex!=2019.12.17 in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (2020.11.13)\r\nRequirement already satisfied: six in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from packaging->transformers==3.1.0) (1.15.0)\r\nRequirement already satisfied: click in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from sacremoses->transformers==3.1.0) (7.1.2)\r\nRequirement already satisfied: joblib in c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from sacremoses->transformers==3.1.0) (0.17.0)\r\nRequirement already satisfied: tqdm>=4.27 in 
c:\\users\\hiteshsom\\documents\\env\\lib\\site-packages (from transformers==3.1.0) (4.48.0)\r\nCollecting tokenizers==0.8.1.rc2\r\n Using cached tokenizers-0.8.1rc2-cp38-cp38-win_amd64.whl (1.9 MB)\r\nInstalling collected packages: tokenizers, transformers\r\n Attempting uninstall: tokenizers\r\n Found existing installation: tokenizers 0.9.4\r\n Uninstalling tokenizers-0.9.4:\r\n Successfully uninstalled tokenizers-0.9.4\r\n Attempting uninstall: transformers\r\n Found existing installation: transformers 4.0.1\r\n Uninstalling transformers-4.0.1:\r\n Successfully uninstalled transformers-4.0.1\r\nSuccessfully installed tokenizers-0.8.1rc2 transformers-3.1.0\r\n\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndpr 0.1.0 requires transformers<3.1.0,>=3.0.0, but you have transformers 3.1.0 which is incompatible.\r\n```",
"Well, you have a package (`dpr`) installed that requires transformers<3.1.0,>=3.0.0. You can now do what you always do in such a dependency conflict situation:\r\n\r\n1. Ask yourself if you need this package, if not uninstall it.\r\n2. Create a virtual environment and install the transformers library there.\r\n3. Force pip to install the package anyway but keep in mind that this might break the `dpr` package.\r\n\r\nDo the one that suits your needs the most.\r\n",
"Hi, I installed `transformers==3.0.0` which I think installed `dpr` but gave dependency error on `sentence transformers` and then I installed `transformers==3.1.0` which only gives dependency error in `dpr` and now when I do `pip freeze` and I get both the packages. \r\n\r\nAfter this I ran the example script and it gave this output\r\n```\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=231508.0, style=ProgressStyle(descripti…\r\n\r\n\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=484.0, style=ProgressStyle(description_…\r\n\r\n\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=437998572.0, style=ProgressStyle(descri…\r\n\r\n\r\nSome weights of DPRReader were not initialized from the model checkpoint at facebook/dpr-reader-single-nq-base and are newly initialized: ['span_predictor.encoder.bert_model.embeddings.position_ids']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-1-2f646c4dd41f> in <module>\r\n 9 )\r\n 10 outputs = model(**encoded_inputs)\r\n---> 11 start_logits = outputs.stat_logits\r\n 12 end_logits = outputs.end_logits\r\n 13 relevance_logits = outputs.relevance_logits\r\n\r\nAttributeError: 'tuple' object has no attribute 'stat_logits'\r\n```\r\n\r\nIts still an error but attribute error and not Import error so may be we can close this issue.",
"That is because the class output objects were introduced in a later transformer version. For 3.1.0 the variable `outputs` is still a tuple and you need to check the documentation of DPRReader to figure out which element of the tuple is `stat_logits`, `end_logits` and `relevance_logits`.\r\n\r\nBut I have just checked the installed packages in a virtual environment with 3.1.0 and 4.0.0 and both had no package called `dpr` installed. You probably got it from somewhere else and can remove it.",
"`dpr` may come by installing `transformers` version `3.0.0`",
"> \r\n> \r\n> That is because the class output objects were introduced in a later transformer version. For 3.1.0 the variable `outputs` is still a tuple and you need to check the documentation of DPRReader to figure out which element of the tuple is `stat_logits`, `end_logits` and `relevance_logits`.\r\n\r\n\r\n\r\nThanks for this. I will check documentation\r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,607 | 1,614 | 1,614 | NONE | null | Hi, I am trying to run below code, it can be found at this [link](https://huggingface.co/transformers/model_doc/dpr.html#dprreader)
```python
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
But I got an error; the full traceback is included under "To reproduce" below.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Windows (GCP Instance)
- Python version: 3.8.6
- PyTorch version (GPU?): '1.7.0+cpu'
- Tensorflow version (GPU?): Not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am just trying to run the DPR model.
## To reproduce
Steps to reproduce the behavior:
1. Execute this in your python
```python
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The error traceback is below:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-2f646c4dd41f> in <module>
----> 1 from transformers import DPRReader, DPRReaderTokenizer
2 tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
3 model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
4 encoded_inputs = tokenizer(
5 questions=["What is love ?"],
ImportError: cannot import name 'DPRReader' from 'transformers' (C:\<some_path>\env\lib\site-packages\transformers\__init__.py)
```
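Given the traceback above, the root cause is simply that the installed 3.0.2 predates `DPRReader`, so a version guard fails faster and more readably. The 3.1.0 floor below is inferred from this thread (the import succeeded there), not from official release notes.

```python
# Illustrative guard: per this thread the import works on 3.1.0 but not 3.0.2.
# The minimum version is inferred from the thread, so treat it as an assumption.
from packaging import version

import transformers

if version.parse(transformers.__version__) < version.parse("3.1.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for DPRReader; "
        "try pip install 'transformers>=3.1.0'."
    )

from transformers import DPRReader, DPRReaderTokenizer
```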
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I haven't executed this, but I would expect no output, since all the results are stored in variables. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9031/comments | https://api.github.com/repos/huggingface/transformers/issues/9031/events | https://github.com/huggingface/transformers/issues/9031 | 761,274,078 | MDU6SXNzdWU3NjEyNzQwNzg= | 9,031 | GPT2 attention mask | {
"login": "Jiaxin-Wen",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiaxin-Wen",
"html_url": "https://github.com/Jiaxin-Wen",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"Thanks for reminding! \nhttps://discuss.huggingface.co/t/dynamic-attention-mask-during-gpt-2-training/2789\nOn 12/11/2020 06:16,Lysandre Debut<[email protected]> wrote:\n\nHello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\nCould you ask your question on the forum instead?\n\nThanks!\n\n—\nYou are receiving this because you authored the thread.\nReply to this email directly, view it on GitHub, or unsubscribe."
] | 1,607 | 1,607 | 1,607 | NONE | null | I want to use gpt2 to generate a list of options
> an option is a sentence starting with a special token '<option>'.
Since I don't want later options to depend on earlier ones, I think I should mask all the previous options.
I could simply implement that during generation by producing one option at a time, but I don't know how to do that during training; a sketch of one possible training-time mask follows this record. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9031/timeline | completed | null | null |
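A sketch for the training-time masking question in the record above. It only constructs the boolean attention pattern (causal, with each option allowed to see the shared context and itself, but not earlier options); the segment ids are invented for illustration, and since stock GPT-2 only accepts a 2-D padding `attention_mask`, actually applying a per-query mask like this would require modifying the model's attention code.

```python
import torch

# Invented example: positions 0-2 are shared context, 3-5 option 1, 6-8 option 2.
option_ids = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])
seq_len = option_ids.size(0)

# Standard causal (lower-triangular) mask: True where attention is allowed.
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# same_option[q, k]: key k belongs to the same option as query q.
same_option = option_ids.unsqueeze(1) == option_ids.unsqueeze(0)
# is_context[q, k]: key k is part of the shared context (segment 0).
is_context = (option_ids == 0).unsqueeze(0).expand(seq_len, seq_len)

allowed = causal & (same_option | is_context)
print(allowed.int())  # row q, column k: 1 where position q may attend to k
```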
https://api.github.com/repos/huggingface/transformers/issues/9030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9030/comments | https://api.github.com/repos/huggingface/transformers/issues/9030/events | https://github.com/huggingface/transformers/pull/9030 | 761,143,292 | MDExOlB1bGxSZXF1ZXN0NTM1ODU1MTM5 | 9,030 | Initial README for `t5-small-indonesian-summarization-cased` model | {
"login": "panggi",
"id": 249637,
"node_id": "MDQ6VXNlcjI0OTYzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/249637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panggi",
"html_url": "https://github.com/panggi",
"followers_url": "https://api.github.com/users/panggi/followers",
"following_url": "https://api.github.com/users/panggi/following{/other_user}",
"gists_url": "https://api.github.com/users/panggi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panggi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panggi/subscriptions",
"organizations_url": "https://api.github.com/users/panggi/orgs",
"repos_url": "https://api.github.com/users/panggi/repos",
"events_url": "https://api.github.com/users/panggi/events{/privacy}",
"received_events_url": "https://api.github.com/users/panggi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Awesome! @panggi - did you check out the new `mt5` model as well by any chance? It should work better for your use-case I think :-) ",
"Thanks @patrickvonplaten, i just knew it from you about `mt5` and definitely will check it out! :)",
"Thanks for sharing @panggi "
] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Initial README for Indonesian T5 Summarization Small Model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9030",
"html_url": "https://github.com/huggingface/transformers/pull/9030",
"diff_url": "https://github.com/huggingface/transformers/pull/9030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9030.patch",
"merged_at": 1607696250000
} |
https://api.github.com/repos/huggingface/transformers/issues/9029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9029/comments | https://api.github.com/repos/huggingface/transformers/issues/9029/events | https://github.com/huggingface/transformers/pull/9029 | 761,143,045 | MDExOlB1bGxSZXF1ZXN0NTM1ODU0OTI0 | 9,029 | [TF Bart] Refactor TFBart | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No speed regression on GPU brutasse in graph mode. PR is ready for review IMO.",
"> Awesome!! Thanks for taking care of this part!!\r\n> \r\n> Should we merge #9063 before or after this one?\r\n\r\nLet's merge after your PR. I'll take the merge conflicts from you :-)\r\nAlso this way I can play around a bit with the new not-existing-cast-bool functionality, yaaay!"
] | 1,607 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Mirror of #8900 for TFBart.
The same improvements are done for Bart except adding torchscript functionality (as it does not exist in tf bart).
- [x] Keep dims consistent within the model -> no switching around between time x batch_size and batch_size x time. We can just stick to batch_size x time throughout the whole forward pass just like other models do too.
- [x] Clean the Attention layer: Replace the dict cache with a `past_key_values` tuple (consistent with other models, and stateless, which is better IMO); a shape sketch of this cache layout follows this record. Break up the complicated if-else cascade and remove unnecessary parameters.
- [x] Correct error with past_key_values/decoder_input_ids/use_cache
- [x] Add input_embeds to Bart
- [x] (very subjectively) better naming
- [x] Check that all slow tests are passing
- [x] Update docstring and final design change check
- [x] Refactor Bart tests
- [x] should solve https://github.com/huggingface/transformers/issues/9048
- [x] Check no speed regression | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9029/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9029",
"html_url": "https://github.com/huggingface/transformers/pull/9029",
"diff_url": "https://github.com/huggingface/transformers/pull/9029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9029.patch",
"merged_at": 1608049889000
} |
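To make the cache change in the PR above concrete, here is a hedged shape sketch of the `past_key_values` convention it moves to. The per-layer tuple contents are an assumption based on the sibling PyTorch Bart refactor (#8900), so verify them against the merged code.

```python
import tensorflow as tf

# Assumed layout after the refactor (mirroring the PyTorch Bart refactor):
# one 4-tuple per decoder layer, holding
# (self_attn_key, self_attn_value, cross_attn_key, cross_attn_value),
# each tensor shaped (batch_size, num_heads, seq_len, head_dim).
batch_size, num_heads, past_len, src_len, head_dim, num_layers = 2, 16, 5, 7, 64, 6

def layer_cache():
    self_k = tf.zeros((batch_size, num_heads, past_len, head_dim))
    self_v = tf.zeros((batch_size, num_heads, past_len, head_dim))
    cross_k = tf.zeros((batch_size, num_heads, src_len, head_dim))
    cross_v = tf.zeros((batch_size, num_heads, src_len, head_dim))
    return (self_k, self_v, cross_k, cross_v)

past_key_values = tuple(layer_cache() for _ in range(num_layers))
print(len(past_key_values), [t.shape for t in past_key_values[0]])
```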
https://api.github.com/repos/huggingface/transformers/issues/9028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9028/comments | https://api.github.com/repos/huggingface/transformers/issues/9028/events | https://github.com/huggingface/transformers/pull/9028 | 761,136,408 | MDExOlB1bGxSZXF1ZXN0NTM1ODQ5MzYx | 9,028 | Initial README for `t5-base-indonesian-summarization-cased` model | {
"login": "panggi",
"id": 249637,
"node_id": "MDQ6VXNlcjI0OTYzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/249637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panggi",
"html_url": "https://github.com/panggi",
"followers_url": "https://api.github.com/users/panggi/followers",
"following_url": "https://api.github.com/users/panggi/following{/other_user}",
"gists_url": "https://api.github.com/users/panggi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panggi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panggi/subscriptions",
"organizations_url": "https://api.github.com/users/panggi/orgs",
"repos_url": "https://api.github.com/users/panggi/repos",
"events_url": "https://api.github.com/users/panggi/events{/privacy}",
"received_events_url": "https://api.github.com/users/panggi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,607 | 1,607 | 1,607 | CONTRIBUTOR | null | Initial README for Indonesian T5 Summarization Base Model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9028",
"html_url": "https://github.com/huggingface/transformers/pull/9028",
"diff_url": "https://github.com/huggingface/transformers/pull/9028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9028.patch",
"merged_at": 1607696297000
} |
https://api.github.com/repos/huggingface/transformers/issues/9027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9027/comments | https://api.github.com/repos/huggingface/transformers/issues/9027/events | https://github.com/huggingface/transformers/issues/9027 | 761,097,907 | MDU6SXNzdWU3NjEwOTc5MDc= | 9,027 | Uber AI plug and play language model (PPLM) | {
"login": "ajay01994",
"id": 31235529,
"node_id": "MDQ6VXNlcjMxMjM1NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/31235529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajay01994",
"html_url": "https://github.com/ajay01994",
"followers_url": "https://api.github.com/users/ajay01994/followers",
"following_url": "https://api.github.com/users/ajay01994/following{/other_user}",
"gists_url": "https://api.github.com/users/ajay01994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajay01994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajay01994/subscriptions",
"organizations_url": "https://api.github.com/users/ajay01994/orgs",
"repos_url": "https://api.github.com/users/ajay01994/repos",
"events_url": "https://api.github.com/users/ajay01994/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajay01994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think we host a specific pplm model in the model hub (cc @julien-c). \r\n\r\nAlso we don't really plan on continuing to support PPLM in the future",
"@ajay01994 are you looking for the code? https://github.com/huggingface/transformers/tree/master/examples/text-generation/pplm\r\n\r\nOr you might be referring to https://transformer.huggingface.co/model/pplm – but indeed we don't host the inference anymore (cc @LysandreJik) as it was a bit costly to support.",
"Thanks for your quick reply.Actually the PPLM model is having issues due to use of transformers lib - 3.1.0 which is old and thus need certain changes to upgrade. Do you know any better or similar model than PPLM for controlling text in GPT-2 ? that would be of great help \r\n\r\nRegards \r\n\r\nAjay ",
"not at the top of my head, but maybe @mimosavvy or @w4nderlust knows!",
"ok....closing this for now ,thanks for your help :)",
"> Thanks for your quick reply.Actually the PPLM model is having issues due to use of transformers lib - 3.1.0 which is old and thus need certain changes to upgrade. Do you know any better or similar model than PPLM for controlling text in GPT-2 ? that would be of great help\r\n> \r\n> Regards\r\n> \r\n> Ajay\r\n\r\nCan you be more specific about the issues? There has been a PR that should have solved the change to dictionary as returns to the model."
] | 1,607 | 1,607 | 1,607 | NONE | null | Hi Team,
Thanks for the Hugging Face repo, and I appreciate your great efforts towards adding datasets and models. I was trying to find the PPLM model on the model page, but it returned a 404 error. Could you please check whether the model is still available and let me know?
Thanks
Ajay | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9027/timeline | completed | null | null |