url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
β | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
β | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3912/comments | https://api.github.com/repos/huggingface/transformers/issues/3912/events | https://github.com/huggingface/transformers/issues/3912 | 605,252,709 | MDU6SXNzdWU2MDUyNTI3MDk= | 3,912 | ImportError: cannot import name 'DataCollatorForLanguageModeling'_File "run_language_modeling.py" | {
"login": "Kittisaksam",
"id": 51687539,
"node_id": "MDQ6VXNlcjUxNjg3NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/51687539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kittisaksam",
"html_url": "https://github.com/Kittisaksam",
"followers_url": "https://api.github.com/users/Kittisaksam/followers",
"following_url": "https://api.github.com/users/Kittisaksam/following{/other_user}",
"gists_url": "https://api.github.com/users/Kittisaksam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kittisaksam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kittisaksam/subscriptions",
"organizations_url": "https://api.github.com/users/Kittisaksam/orgs",
"repos_url": "https://api.github.com/users/Kittisaksam/repos",
"events_url": "https://api.github.com/users/Kittisaksam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kittisaksam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Unfortunatelly I have the same error\r\n\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 29, in <module>\r\n from transformers import (\r\nImportError: cannot import name 'DataCollatorForLanguageModeling' from 'transformers' (E:\\PycharmProjects\\Ancient_BERT\\venv\\lib\\site-packages\\transformers\\__init__.py)",
"> Unfortunatelly I have the same error\r\n> \r\n> Traceback (most recent call last):\r\n> File \"run_language_modeling.py\", line 29, in\r\n> from transformers import (\r\n> ImportError: cannot import name 'DataCollatorForLanguageModeling' from 'transformers' (E:\\PycharmProjects\\Ancient_BERT\\venv\\lib\\site-packages\\transformers__init__.py)\r\n\r\nI tried to do this (instead of !pip install transformers)\r\n\r\n!git clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\n\r\nAnd I get the following error:\r\n\r\n**Traceback (most recent call last):\r\nFile \"run_language_modeling.py\", line 280, in\r\nmain()\r\nFile \"run_language_modeling.py\", line 225, in main\r\nif training_args.do_train\r\nFile \"run_language_modeling.py\", line 122, in get_dataset\r\ntokenizer=tokenizer, file_path=file_path, block_size=args.block_size, local_rank=local_rank\r\nFile \"/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py\", line 84, in init\r\nassert os.path.isfile(file_path)\r\nAssertionError**\r\n\r\nI think it should be about **tokenizers-0.7.0.**\r\nBut I now still don't know how to fix it.",
"Both PyCharm running and example in Colab script has the same problem:\r\nhttps://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb\r\n\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 29, in <module>\r\n from transformers import (\r\nImportError: cannot import name 'DataCollatorForLanguageModeling'\r\nCPU times: user 36.2 ms, sys: 22.9 ms, total: 59.1 ms\r\nWall time: 15.2 s",
"I change in Colab installation command to: \r\n!pip install git+https://github.com/huggingface/transformers\r\n\r\nand its running :), now only to solve problem with out of memory",
"> !pip install git+https://github.com/huggingface/transformers\r\n\r\nYou have to change batch size.\r\n\r\ncmd =\t\"\"\"\r\n python run_language_modeling.py\r\n --train_data_file ./oscar.eo.txt\r\n --output_dir ./EsperBERTo-small-v1\r\n\t--model_type roberta\r\n\t--mlm\r\n\t--config_name ./EsperBERTo\r\n\t--tokenizer_name ./EsperBERTo\r\n\t--do_train\r\n\t--line_by_line\r\n\t--learning_rate 1e-4\r\n\t--num_train_epochs 1\r\n\t--save_total_limit 2\r\n\t--save_steps 2000\r\n\t**--per_gpu_train_batch_size 4**\r\n\t--seed 42\r\n\"\"\".replace(\"\\n\", \" \")\r\n\r\nAnd Thank you for your help.",
"Maybe somone had error (error occurs after reaching save step value=2000):\r\n\r\nIteration: 16% 1997/12500 [10:38<49:42, 3.52it/s]\r\nIteration: 16% 1998/12500 [10:39<47:03, 3.72it/s]\r\n \r\n{\"learning_rate\": 8.4e-05, \"loss\": 7.684231301307678, \"step\": 2000}\r\nEpoch: 0% 0/1 [10:39<?, ?it/s]\r\nIteration: 16% 1999/12500 [10:39<49:56, 3.50it/s]04/23/2020 11:55:42 - INFO - transformers.trainer - Saving model checkpoint to ./EsperBERTo-small-v1/checkpoint-2000\r\n04/23/2020 11:55:42 - INFO - transformers.configuration_utils - Configuration saved in ./EsperBERTo-small-v1/checkpoint-2000/config.json\r\n04/23/2020 11:55:42 - INFO - transformers.modeling_utils - Model weights saved in ./EsperBERTo-small-v1/checkpoint-2000/pytorch_model.bin\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 280, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 254, in main\r\n trainer.train(model_path=model_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 363, in train\r\n self._rotate_checkpoints()\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 458, in _rotate_checkpoints\r\n checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 443, in _sorted_checkpoints\r\n regex_match = re.match(\".*{}-([0-9]+)\".format(checkpoint_prefix), path)\r\n File \"/usr/lib/python3.6/re.py\", line 172, in match\r\n return _compile(pattern, flags).match(string)\r\nTypeError: expected string or bytes-like object\r\n\r\nEpoch: 0% 0/1 [10:40<?, ?it/s]\r\nIteration: 16% 1999/12500 [10:40<56:04, 3.12it/s]\r\nCPU times: user 2.78 s, sys: 928 ms, total: 3.71 s\r\nWall time: 11min 33s\r\n\r\n\r\nMy configuration is:\r\nconfig = {\r\n\t\"architectures\": [\r\n\t\t\"RobertaForMaskedLM\"\r\n\t],\r\n\t\"attention_probs_dropout_prob\": 0.1,\r\n\t\"hidden_act\": \"gelu\",\r\n\t\"hidden_dropout_prob\": 0.1,\r\n\t\"hidden_size\": 768,\r\n\t\"initializer_range\": 0.02,\r\n\t\"intermediate_size\": 3072,\r\n\t\"layer_norm_eps\": 1e-05,\r\n\t\"max_position_embeddings\": 514,\r\n\t\"model_type\": \"roberta\",\r\n\t\"num_attention_heads\": 12,\r\n\t\"num_hidden_layers\": 6,\r\n\t\"type_vocab_size\": 1,\r\n\t\"vocab_size\": 52000\r\n}\r\ncmd = \"\"\"\r\npython run_language_modeling.py\r\n--train_data_file ./oscar.eo.txt\r\n--output_dir ./EsperBERTo-small-v1\r\n--model_type roberta\r\n--mlm\r\n--config_name ./EsperBERTo\r\n--tokenizer_name ./EsperBERTo\r\n--do_train\r\n--line_by_line\r\n--learning_rate 1e-4\r\n--num_train_epochs 1\r\n--save_total_limit 2\r\n--save_steps 2000\r\n--per_gpu_train_batch_size 4 \r\n--seed 42\r\n\"\"\".replace(\"\\n\", \" \")\r\n",
"Closing in favor of #3920 ",
"How to solve this? Why is this closed?"
] | 1,587 | 1,657 | 1,587 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
_https://huggingface.co/blog/how-to-train_
I followed everything in this colab without changing anything.
And this is the problem I encountered.
**Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling'
CPU times: user 21.5 ms, sys: 18 ms, total: 39.5 ms
Wall time: 4.52 s**
How can I fix this problem?
Thank you for your kindness and support
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3912/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3911/comments | https://api.github.com/repos/huggingface/transformers/issues/3911/events | https://github.com/huggingface/transformers/issues/3911 | 605,228,871 | MDU6SXNzdWU2MDUyMjg4NzE= | 3,911 | Type Hints for modeling_utils.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,587 | 1,590 | 1,590 | CONTRIBUTOR | null | add [type hints](https://docs.python.org/3/library/typing.html) to the methods for better readability and autocomplete.
If possible, check your work with an automated tool and tell us how you did it! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3911/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3910/comments | https://api.github.com/repos/huggingface/transformers/issues/3910/events | https://github.com/huggingface/transformers/issues/3910 | 605,167,683 | MDU6SXNzdWU2MDUxNjc2ODM= | 3,910 | GPT2 generations with specific words | {
"login": "prabalbansal",
"id": 30004110,
"node_id": "MDQ6VXNlcjMwMDA0MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30004110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabalbansal",
"html_url": "https://github.com/prabalbansal",
"followers_url": "https://api.github.com/users/prabalbansal/followers",
"following_url": "https://api.github.com/users/prabalbansal/following{/other_user}",
"gists_url": "https://api.github.com/users/prabalbansal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabalbansal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabalbansal/subscriptions",
"organizations_url": "https://api.github.com/users/prabalbansal/orgs",
"repos_url": "https://api.github.com/users/prabalbansal/repos",
"events_url": "https://api.github.com/users/prabalbansal/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabalbansal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"These `next_token_logits[batch_idx, banned_tokens[batch_idx]]` are logits in the `range(-inf, inf)` not probabilities. You can try to set all other logits except your banned_tokens to `-float(\"inf\")`"
] | 1,587 | 1,591 | 1,591 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
@patrickvonplaten
Just curious to generate a text with the specified words.
Generate function has a parameter bad_words, instead of removing these, I want to include these in generations. I try to edit in src/transformers/modeling_utils.py line 1194; to
next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")
changed to :
next_token_logits[batch_idx, banned_tokens[batch_idx]] = 1
But this didn't work. Is there is any way we can do this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3910/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3909/comments | https://api.github.com/repos/huggingface/transformers/issues/3909/events | https://github.com/huggingface/transformers/pull/3909 | 605,134,545 | MDExOlB1bGxSZXF1ZXN0NDA3NTk2ODg2 | 3,909 | Shuffle train subset for summarization example | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=h1) Report\n> Merging [#3909](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3909 +/- ##\n==========================================\n+ Coverage 78.45% 78.47% +0.01% \n==========================================\n Files 111 111 \n Lines 18521 18521 \n==========================================\n+ Hits 14531 14534 +3 \n+ Misses 3990 3987 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.86% <0.00%> (+0.36%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=footer). Last update [cb3c221...3361612](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | Fix #3892
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3909",
"html_url": "https://github.com/huggingface/transformers/pull/3909",
"diff_url": "https://github.com/huggingface/transformers/pull/3909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3909.patch",
"merged_at": 1587729335000
} |
https://api.github.com/repos/huggingface/transformers/issues/3908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3908/comments | https://api.github.com/repos/huggingface/transformers/issues/3908/events | https://github.com/huggingface/transformers/pull/3908 | 605,119,098 | MDExOlB1bGxSZXF1ZXN0NDA3NTg0MTA3 | 3,908 | MarianMTModel.from_pretrained('Helsinki-NLP/opus-marian-en-de') | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"I think that's awesome! Such little code changes to include that many models is amazing :-) \r\nDoes the model with the pre-trained weights produce 1-to-1 the same results as the official Marian models in C++?",
"@patrickvonplaten the generations are very sensible, but not identical to marian, due to slightly different generation parameters: \r\n\r\nEquivalent (4/7):\r\n\r\nC+: Ich bin ein kleiner Frosch.\r\nHF: Ich bin ein kleiner Frosch.\r\n\r\nC+: Tom bat seinen Lehrer um Rat.\r\nHF: Tom bat seinen Lehrer um Rat.\r\n\r\nC+: Tom bewunderte Marias Mut wirklich.\r\nHF: Tom bewunderte Marias Mut wirklich.\r\n\r\nC+: Umdrehen und die Augen schlieΓen.\r\nHF: Umdrehen und die Augen schlieΓen.\r\n\r\nNot Equivalent (3/7):\r\nC+: Jetzt kann ich die 100 WΓΆrter vergessen, die ich kenne.\r\nHF: Jetzt kann ich die 100 WΓΆrter **des Deutschen** vergessen, die ich kenne.\r\n\r\nC+: **O** (Input=\"O\")\r\nHF: \r\n\r\nC+: So wΓΌrde ich das **machen**.\r\nHF: So wΓΌrde ich das **tun**.\r\n\r\nI'm investigating.",
"> @patrickvonplaten the generations are very sensible, but not identical to marian, due to slightly different generation parameters:\r\n> \r\n> Equivalent (4/7):\r\n> \r\n> C+: Ich bin ein kleiner Frosch.\r\n> HF: Ich bin ein kleiner Frosch.\r\n> \r\n> C+: Tom bat seinen Lehrer um Rat.\r\n> HF: Tom bat seinen Lehrer um Rat.\r\n> \r\n> C+: Tom bewunderte Marias Mut wirklich.\r\n> HF: Tom bewunderte Marias Mut wirklich.\r\n> \r\n> C+: Umdrehen und die Augen schlieΓen.\r\n> HF: Umdrehen und die Augen schlieΓen.\r\n> \r\n> Not Equivalent (3/7):\r\n> C+: Jetzt kann ich die 100 WΓΆrter vergessen, die ich kenne.\r\n> HF: Jetzt kann ich die 100 WΓΆrter **des Deutschen** vergessen, die ich kenne.\r\n> \r\n> C+: **O** (Input=\"O\")\r\n> HF:\r\n> \r\n> C+: So wΓΌrde ich das **machen**.\r\n> HF: So wΓΌrde ich das **tun**.\r\n> \r\n> I'm investigating.\r\n\r\nSounds all very coherent :-) ",
"Merging. Will add docs in next PR, when I add tons of models.",
"@sshleifer, can you elaborate on \"investigate discrepancies in generation parameters: length_penalty, decoder_start_token_id, etc. These may explain small differences in generations (detailed in comment below)\"? I see https://github.com/huggingface/transformers/pull/3908#issuecomment-618473222 but was this resolved?",
"_TLDR Was not resolved, the investigation continues._\r\nOur lib has many parameters to control generations, `length_penalty`, `repetition_penalty`, `bad_word_ids`, `decoder_start_token_id`.\r\nThey are documented [here](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration.generate).\r\nThe marian lib has `normalize` and `word-penalty`, which are not identical, but similar. Part of the discrepancy may involve configuring our parameters to be closer to theirs.\r\n\r\n",
"I see, thanks!",
"Could anyone help with this issue: #5040 ?"
] | 1,587 | 1,592 | 1,588 | CONTRIBUTOR | null | Adds support for `opus/marian-en-de` translation models:
- There are 900 models with this `MarianSentencePieceTokenizer`, `MarianMTModel` setup. - This PR only adds en-de to avoid massive S3 maintenance if names/other things change.
- There is no formal connection to the bart authors, but the bart code is well-tested and fast and I didn't want to rewrite it. You can still read everything in one file: modeling_bart.py.
- unittest failures are from isort.
The only differences from BART are:
- static (sinusoid) positional embeddings (`config.static_position_embeddings`)
- a new `final_logits_bias` (`config.add_bias_logits`)
- no `layernorm_embedding` (`config.normalize_embedding`)
I could have also used T5 but that has more prefix token logic and relative position bias.
Solutions that split the changes across two files were much harder to read.
Why MarianSentencePieceTokenizer instead of MarianTokenizer?
- about 10% of the models (another 100) use BPE instead of SentencePiece. We will use that tokenizer (or integrate it with this one and rename) in a future PR.
### Future PRs
- load in many models
- support BPE tokenizer in some way.
- make lm_labels on the fly for training loop, like T5.
- `TranslationPipeline`
`TODO[Code]`:
- [x] investigate discrepancies in generation parameters: `length_penalty`, `decoder_start_token_id`, etc. These may explain small differences in generations (detailed in comment below).
- [x] isort
`TODO[Docs]`:
- [x] Naming: `MarianMTModel`, `MarianSentencePieceTokenizer`
- [x] put en-de model in S3, update integration tests.
- [ ] docstring examples
- [ ] notebook to show?
- [ ] marian.rst
- [ ] README.md
- [ ] `AutoModelWithLMHead`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3908/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 4,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3908/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3908",
"html_url": "https://github.com/huggingface/transformers/pull/3908",
"diff_url": "https://github.com/huggingface/transformers/pull/3908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3908.patch",
"merged_at": 1588112557000
} |
https://api.github.com/repos/huggingface/transformers/issues/3907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3907/comments | https://api.github.com/repos/huggingface/transformers/issues/3907/events | https://github.com/huggingface/transformers/pull/3907 | 605,105,346 | MDExOlB1bGxSZXF1ZXN0NDA3NTcyNzYw | 3,907 | Fix TF optimization classes and apply the changes in the NER TF script | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=h1) Report\n> Merging [#3907](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3907 +/- ##\n=======================================\n Coverage 78.45% 78.45% \n=======================================\n Files 111 111 \n Lines 18521 18521 \n=======================================\n Hits 14531 14531 \n Misses 3990 3990 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=footer). Last update [cb3c221...f75b213](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,589 | 1,589 | CONTRIBUTOR | null | This PR fix the AdamW and GradientAccumulator optimization classes for Tensorflow. Furthermore, we apply these new changes in the NER TF script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3907",
"html_url": "https://github.com/huggingface/transformers/pull/3907",
"diff_url": "https://github.com/huggingface/transformers/pull/3907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3907.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3906/comments | https://api.github.com/repos/huggingface/transformers/issues/3906/events | https://github.com/huggingface/transformers/pull/3906 | 605,098,176 | MDExOlB1bGxSZXF1ZXN0NDA3NTY3MDc2 | 3,906 | [Cleanup] Fix typos in modeling_utils.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3906",
"html_url": "https://github.com/huggingface/transformers/pull/3906",
"diff_url": "https://github.com/huggingface/transformers/pull/3906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3906.patch",
"merged_at": 1588008354000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3905/comments | https://api.github.com/repos/huggingface/transformers/issues/3905/events | https://github.com/huggingface/transformers/issues/3905 | 605,073,477 | MDU6SXNzdWU2MDUwNzM0Nzc= | 3,905 | BERT TF-Lite conversion not working in TensorFlow 2.2.0 | {
"login": "r4ghu",
"id": 5736976,
"node_id": "MDQ6VXNlcjU3MzY5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5736976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r4ghu",
"html_url": "https://github.com/r4ghu",
"followers_url": "https://api.github.com/users/r4ghu/followers",
"following_url": "https://api.github.com/users/r4ghu/following{/other_user}",
"gists_url": "https://api.github.com/users/r4ghu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r4ghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r4ghu/subscriptions",
"organizations_url": "https://api.github.com/users/r4ghu/orgs",
"repos_url": "https://api.github.com/users/r4ghu/repos",
"events_url": "https://api.github.com/users/r4ghu/events{/privacy}",
"received_events_url": "https://api.github.com/users/r4ghu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # π Bug
Model conversion succeeds but inputs and outputs are not recognized.
## Information
Model I am using (Bert, XLNet ...): BERT (`bert-base-uncased`)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
- TensorFlow 2.1.0 + Transformers 2.8.0 - has no problem converting `bert-base-uncased` model to tflite version.
- TensorFlow 2.2.0rc3 + Transformers 2.8.0 - has issues with interoperability.
The tasks I am working on is:
* Convert BERT models to TF-Lite format to use it in mobile apps.
* Trying to use the latest TF-Lite package version for Android in the place of TF-Lite package provided in the repo `huggingface/tflite-android-transformers`.
## To reproduce
Please execute the following code with TensorFlow versions 2.1.0 and 2.2.0-rc3
```
import transformers
from transformers import TFBertModel, BertConfig
import tensorflow as tf
print('TensorFlow version =', tf.__version__)
print('Transformers version =', transformers.__version__)
MODEL_DIR = 'bert-base-uncased'
MAX_SEQ_LEN = 50
# Read the model
config = BertConfig.from_pretrained(MODEL_DIR)
model = TFBertModel(config)
# Set input Spec
input_spec = [
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32),
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32),
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32)
]
model._set_inputs(input_spec, training=False)
print(model.inputs)
print(model.outputs)
```
- For TensorFlow 2.2.0-rc3: Model outputs and inputs are **None**
```
TensorFlow version = 2.2.0-rc3
Transformers version = 2.8.0
None
None
```
- For TensorFlow 2.1.0:
```
TensorFlow version = 2.1.0
Transformers version = 2.8.0
...
[<tf.Tensor 'input_1:0' shape=(None, 50) dtype=int32>, <tf.Tensor 'input_2:0' shape=(None, 50) dtype=int32>, <tf.Tensor 'input_3:0' shape=(None, 50) dtype=int32>]
[<tf.Tensor 'tf_bert_model/Identity:0' shape=(None, 50, 768) dtype=float32>, <tf.Tensor 'tf_bert_model/Identity_1:0' shape=(None, 768) dtype=float32>]
```
## Expected behavior
- I expect the BERT model conversion to work properly to TensorFlow 2.2.0-rc{1/2/3}
- Preferably BERT should use the default TF-Lite supported layers just like MobileBERT model provided by Google.
- Image - MobileBERT from Google's bert-qa android example (left) vs BERT converted using the above script using TensorFlow v2.1.0 (right)

<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Windows
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 (No GPU)
- Tensorflow version (GPU?): 2.1.0 (working), 2.2.0-rc3 (not working) (no GPU for both versions)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3905/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3904/comments | https://api.github.com/repos/huggingface/transformers/issues/3904/events | https://github.com/huggingface/transformers/issues/3904 | 605,041,334 | MDU6SXNzdWU2MDUwNDEzMzQ= | 3,904 | how i pass from an AutoModelWithLMHead to a GPT2DoubleHeadsModel ? | {
"login": "nikkon3",
"id": 41228217,
"node_id": "MDQ6VXNlcjQxMjI4MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/41228217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikkon3",
"html_url": "https://github.com/nikkon3",
"followers_url": "https://api.github.com/users/nikkon3/followers",
"following_url": "https://api.github.com/users/nikkon3/following{/other_user}",
"gists_url": "https://api.github.com/users/nikkon3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikkon3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikkon3/subscriptions",
"organizations_url": "https://api.github.com/users/nikkon3/orgs",
"repos_url": "https://api.github.com/users/nikkon3/repos",
"events_url": "https://api.github.com/users/nikkon3/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikkon3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It will load the weights, but will not load the multiple choice head as it's not in the checkpoint. That's why fine-tuning it is a good idea, so that this head's weights get trained!"
] | 1,587 | 1,589 | 1,589 | NONE | null | I used your script of run_language_modeling.py to train a gpt2 model from scratch. Here i save it and upload:
from transformers import AutoModelWithLMHead, AutoTokenizer
import os
directory = "./checkpoint"
model = AutoModelWithLMHead.from_pretrained(directory)
tokenizer = AutoTokenizer.from_pretrained(directory)
out = "gpt_2"
os.makedirs(out, exist_ok=True)
model.save_pretrained(out)
tokenizer.save_pretrained(out)
!transformers-cli upload ./gpt_2/
After if i want to use it for fine-tuning in a specific task and i want to use a GPT2DoubleHeadsModel
i can just load it like this?
from transformers import GPT2DoubleHeadsModel,GPT2Tokenizer
model = GPT2DoubleHeadsModel.from_pretrained("ex/gpt_2")
tokenizer = GPT2Tokenizer.from_pretrained("ex/gpt_2")
Or it will not load the weights? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3903/comments | https://api.github.com/repos/huggingface/transformers/issues/3903/events | https://github.com/huggingface/transformers/issues/3903 | 604,996,755 | MDU6SXNzdWU2MDQ5OTY3NTU= | 3,903 | T5 shows weird behaviour in a sanity check of its pre-training task. | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @yuvalkirstain,\r\n\r\n\r\nThanks for your message! \r\n\r\nWhen you put in `input_ids` and `lm_labels` at the some time the following happens:\r\nthe `decoder_input_ids` are created automatically depending on the `lm_labels` (shift `lm_labels` to the right and pre-prend `<PAD>`) and look as follows: \r\n`<PAD> <extra_id_1> cute dog <extra_id_2> the <extra_id_3>`. \r\n\r\nsee https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/modeling_t5.py#L1060 \r\n\r\nNow in order for `<extra_id_1>` to show up, it should be guessed from `<PAD>`, but I'm not sure whether this makes much sense. After `<extra_id_1>` has been processed though from the input, it's much more likely that another `<extra_id_2>` is in the labels, so it does make more sense to me that `<extra_id_2>` is guessed.\r\n\r\nLet me know if it does not makes sense to you :-) \r\n",
"ahh ok. Thank you so much and sorry that I bothered you with it. Really enjoying working with it so far :)"
] | 1,587 | 1,588 | 1,587 | CONTRIBUTOR | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* no script, just a few simple commands. watch below.
## To reproduce
Steps to reproduce the behavior:
1.
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt')
lm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')
outputs = model(input_ids=input_ids, lm_labels=lm_labels)
probs, preds = torch.topk(outputs[1], 10)
preds = preds[0]
for ind_lst in preds:
tokenizer.convert_ids_to_tokens(ind_lst)
```
This yeilds the following:
```
['βThe', 'βWalking', 'βPark', 'βWalk', 'β', 'βwalking', 'βthe', 'βWe', 'βIn', 'βLa']
['βpark', 'βPark', 'βparks', 'βWalking', 'βNational', 'βwalk', 'β', 'βForest', 'βwalking', 'park']
['βpark', '<extra_id_2>', 'βPark', 'βwalks', 'βwalking', 'βparks', 'βnature', 'βwalk', 'βWalking', 'βand']
['βpark', 'βwalks', '<extra_id_2>', 'βparks', 'βPark', 'βwalk', 'βwalking', 'walk', 'park', 'βalso']
['βthe', 'βThe', 'β', 'βpark', 'βto', 'βPark', 'βand', 'βat', 'βwalking', 'βin']
['βpark', '<extra_id_3>', 'βPark', 'βThe', 'βparks', 'βparking', 'β', 'βcar', 'βbeautiful', 'βthe']
['βpark', 'βPark', 'βWalking', 'βparks', 'βwalk', 'βwalking', 'βWalk', 'park', 'βNational', 'βwalks']
```
## Expected behavior
I would expect that '<extra_id_1>' will show up as a possible top10 prediction.
during pre-training T5 gets supervision to return the first token to be <extra_id_1>. So if it doesn't return it, it is weird behavior. I hope that I didn't get it wrong. Thank you. @patrickvonplaten
## Environment info
- `transformers` version: 2.8.0
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3903/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3902/comments | https://api.github.com/repos/huggingface/transformers/issues/3902/events | https://github.com/huggingface/transformers/issues/3902 | 604,995,241 | MDU6SXNzdWU2MDQ5OTUyNDE= | 3,902 | RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237 | {
"login": "rvoak",
"id": 42851812,
"node_id": "MDQ6VXNlcjQyODUxODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/42851812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rvoak",
"html_url": "https://github.com/rvoak",
"followers_url": "https://api.github.com/users/rvoak/followers",
"following_url": "https://api.github.com/users/rvoak/following{/other_user}",
"gists_url": "https://api.github.com/users/rvoak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rvoak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rvoak/subscriptions",
"organizations_url": "https://api.github.com/users/rvoak/orgs",
"repos_url": "https://api.github.com/users/rvoak/repos",
"events_url": "https://api.github.com/users/rvoak/events{/privacy}",
"received_events_url": "https://api.github.com/users/rvoak/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How big are your batches? BERT can only accept tensors of maximum length of 512. Is it possible one of your inputs is longer than 512?",
"I had the same issue while my `per_gpu_eval_batch_size=8` and `per_gpu_train_batch_size=8 `. ",
"Yes, it's unrelated to batch size. Is your sequence length longer than 512?",
"Also having this issue, getting a different error when I try to use GPU but I understand that the error is caused by the same thing and the CPU stack trace is just more informative. My sequence lengths are definitely <512",
"> I had the same issue while my `per_gpu_eval_batch_size=8` and `per_gpu_train_batch_size=8 `.\r\n\r\nI got the same error when I use batch_size = 8. After I change it to 64, then no errors. ",
"> Also having this issue, getting a different error when I try to use GPU but I understand that the error is caused by the same thing and the CPU stack trace is just more informative. My sequence lengths are definitely <512\r\n\r\ndid you find the solution?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
" return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n\r\nRuntimeError: index out of range: Tried to access index 0 out of table with 202 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237\r\n\r\nis there anyone knows what's the problem ? , I checked the embedding matrix of shape (203, 16), it shows index 0 out of range, how it is possible???"
] | 1,587 | 1,612 | 1,605 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
I am trying to fine-tune BERT for sentiment analysis on the IMDB Dataset. Most of the code is based on this blog: http://mccormickml.com/2019/07/22/BERT-fine-tuning/
I create my model as follows:
```
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
```
In my training loop, I do the following:
```
for step, batch in enumerate(train_dataloader):
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
loss, logits = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
```
I get the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-77ffde63bb0d> in <module>()
109 token_type_ids=None,
110 attention_mask=b_input_mask,
--> 111 labels=b_labels)
112
113 # Accumulate the training loss over all of the batches so that we can
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
1030 position_ids=position_ids,
1031 head_mask=head_mask,
-> 1032 inputs_embeds=inputs_embeds)
1033
1034 pooled_output = outputs[1]
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
733 head_mask = [None] * self.config.num_hidden_layers
734
--> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
736 encoder_outputs = self.encoder(embedding_output,
737 attention_mask=extended_attention_mask,
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
185 if inputs_embeds is None:
186 inputs_embeds = self.word_embeddings(input_ids)
--> 187 position_embeddings = self.position_embeddings(position_ids)
188 token_type_embeddings = self.token_type_embeddings(token_type_ids)
189
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1465 # remove once script supports set_grad_enabled
1466 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1468
1469
RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:237
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Mac OS
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0, no GPU
- Tensorflow version (GPU?): 1.2.0, no GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3902/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3901/comments | https://api.github.com/repos/huggingface/transformers/issues/3901/events | https://github.com/huggingface/transformers/issues/3901 | 604,975,183 | MDU6SXNzdWU2MDQ5NzUxODM= | 3,901 | How to only use part of pretrained model | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How did you obtain your pre-trained model? Does your pre-train model have a classification head? What is `model_class`? Is it `BertForSequenceClassification`?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,595 | 1,595 | NONE | null | Hi
I am using BERT for classification, I have a pretrained model, and my question is when I load the model like this:
model = model_class.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
cache_dir=args.cache_dir if args.cache_dir else None,
)
How can I not load the classifier, and only load the BERT encoder.
thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3900/comments | https://api.github.com/repos/huggingface/transformers/issues/3900/events | https://github.com/huggingface/transformers/pull/3900 | 604,949,987 | MDExOlB1bGxSZXF1ZXN0NDA3NDQ1OTg5 | 3,900 | quick fix wording readme for community models | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3900/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3900",
"html_url": "https://github.com/huggingface/transformers/pull/3900",
"diff_url": "https://github.com/huggingface/transformers/pull/3900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3900.patch",
"merged_at": 1587665986000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3899/comments | https://api.github.com/repos/huggingface/transformers/issues/3899/events | https://github.com/huggingface/transformers/issues/3899 | 604,832,353 | MDU6SXNzdWU2MDQ4MzIzNTM= | 3,899 | NLQ application | {
"login": "thiagomoeng",
"id": 64150563,
"node_id": "MDQ6VXNlcjY0MTUwNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiagomoeng",
"html_url": "https://github.com/thiagomoeng",
"followers_url": "https://api.github.com/users/thiagomoeng/followers",
"following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}",
"gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions",
"organizations_url": "https://api.github.com/users/thiagomoeng/orgs",
"repos_url": "https://api.github.com/users/thiagomoeng/repos",
"events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiagomoeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I assume you concatenate all paragraphs in a single text string, encode it with your question, and give the string as an input to your QA model (or pipeline). What you get back, is an answer span: indexes of the first and last character of the answer in the input text. To find a paragraph it belongs to, compute indexes of the paragraph spans in your concatenated text strings. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,594 | 1,594 | NONE | null | # β Questions & Help
I have a pdf extractor and from this I got a dataframe with 2 columns (sections,paragraphs).
Is there any easy way to do a question and get answer like: (example)
Question: "where is the book?"
Answer: "It's on the bookshelf."
Section: "1.2.3 The Book"
Paragraph: "(Full section paragraph)"
Sorry for bad english. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3898/comments | https://api.github.com/repos/huggingface/transformers/issues/3898/events | https://github.com/huggingface/transformers/pull/3898 | 604,827,892 | MDExOlB1bGxSZXF1ZXN0NDA3MzQ3OTQx | 3,898 | Bump tokenizers version to final 0.7.0 | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3898/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3898",
"html_url": "https://github.com/huggingface/transformers/pull/3898",
"diff_url": "https://github.com/huggingface/transformers/pull/3898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3898.patch",
"merged_at": 1587567750000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3897/comments | https://api.github.com/repos/huggingface/transformers/issues/3897/events | https://github.com/huggingface/transformers/issues/3897 | 604,774,339 | MDU6SXNzdWU2MDQ3NzQzMzk= | 3,897 | How to fine-tune DialoGPT with your own data? | {
"login": "avinregmi",
"id": 32203792,
"node_id": "MDQ6VXNlcjMyMjAzNzky",
"avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avinregmi",
"html_url": "https://github.com/avinregmi",
"followers_url": "https://api.github.com/users/avinregmi/followers",
"following_url": "https://api.github.com/users/avinregmi/following{/other_user}",
"gists_url": "https://api.github.com/users/avinregmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avinregmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinregmi/subscriptions",
"organizations_url": "https://api.github.com/users/avinregmi/orgs",
"repos_url": "https://api.github.com/users/avinregmi/repos",
"events_url": "https://api.github.com/users/avinregmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/avinregmi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"DialoGPT is fine-tuned in the same way GPT2 in fine-tuned. Please take a look at the paper: \r\nhttps://arxiv.org/pdf/1911.00536.pdf . \r\n\r\nI will also add some training tips to the doc string."
] | 1,587 | 1,588 | 1,588 | NONE | null | # π How to fine-tune DialoGPT with your own data?
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
I really liked the new DialoGPT that allows to make chatbots, but how do i fine-tune this for own dataset?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3896/comments | https://api.github.com/repos/huggingface/transformers/issues/3896/events | https://github.com/huggingface/transformers/issues/3896 | 604,681,806 | MDU6SXNzdWU2MDQ2ODE4MDY= | 3,896 | ImportError: cannot import name 'DataCollatorForLanguageModeling' in run_language_modeling.py | {
"login": "ishaansharma",
"id": 8963395,
"node_id": "MDQ6VXNlcjg5NjMzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8963395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishaansharma",
"html_url": "https://github.com/ishaansharma",
"followers_url": "https://api.github.com/users/ishaansharma/followers",
"following_url": "https://api.github.com/users/ishaansharma/following{/other_user}",
"gists_url": "https://api.github.com/users/ishaansharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishaansharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishaansharma/subscriptions",
"organizations_url": "https://api.github.com/users/ishaansharma/orgs",
"repos_url": "https://api.github.com/users/ishaansharma/repos",
"events_url": "https://api.github.com/users/ishaansharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishaansharma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #3893 \r\n\r\nPlease install from source until we release a new version of the library on PyPI"
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug ImportError: cannot import name 'DataCollatorForLanguageModeling' in run_language_modeling.py
## Information
> Traceback (most recent call last):
> File "transformers/examples/run_language_modeling.py", line 29, in <module>
> from transformers import (
> ImportError: cannot import name 'DataCollatorForLanguageModeling'
Model I am using (Bert, XLNet ...): roberta, Trying to build the model from scratch using the tutorial.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
transformers/examples/run_language_modeling.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
My own Data set
## To reproduce
Steps to reproduce the behavior:
Follow the tutorial [here](https://huggingface.co/blog/how-to-train)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: transformers-2.8.0 sacremoses-0.0.41 sentencepiece-0.1.85 tokenizers-0.5.2
- Platform:
- Python version: 3.x
- PyTorch version (GPU?):1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: None
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3896/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3895/comments | https://api.github.com/repos/huggingface/transformers/issues/3895/events | https://github.com/huggingface/transformers/issues/3895 | 604,659,995 | MDU6SXNzdWU2MDQ2NTk5OTU= | 3,895 | run_bertology: AttributeError: 'NoneType' object has no attribute 'abs' | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Sorry, I think it is the same issue as #4103. ",
"Please don't open duplicate issues. If you have more info about this issue, post it here. Also, please use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).",
"> Please don't open duplicate issues. If you have more info about this issue, post it here. Also, please use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).\r\n\r\nSorry for this.",
"Facing the same issue - \r\nEnvironment - \r\n\r\n- transformers version:2.2.2\r\n- Platform: \r\n- Python version:3.7.3\r\n- PyTorch version (GPU?): 1.3.1 (no GPU)\r\n- Tensorflow version (GPU?): 1.14.0 (no GPU)\r\n- Using GPU in script?:No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nWhile computing the initial importance score (When the head_mask is None) I am getting the following error. -\r\n \r\n File \"/Users/user/Desktop/org//prune_attention_heads.py\", line 226, in compute_heads_importance\r\n head_importance += head_mask.grad.abs().detach()\r\nAttributeError: 'NoneType' object has no attribute 'abs'\r\n\r\nOn putting the line above in a try and except block and printing the head_mask I get the following - \r\n\r\ntensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],\r\n [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], requires_grad=True)\r\n\r\nHere, the head_mask has requires_grad=True and I am passing the head_mask into the model and calling loss.backward() as in the bertology script. \r\n",
"I met the same problem, the head_mask has no grad at the second time.",
"Seems that during the second pruning, the head_mask tensor becomes a non-leaf node, when the grad is None, I printed the `head_mask.is_leaf` attribute and get the warning(PyTorch 1.5.0) as below:\r\n> UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.\r\n warnings.warn(\"The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad \"\r\nhead_mask is leaf: False",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,595 | 1,595 | NONE | null | # π Bug
## Information
One more question did run_bertology also support albert model?
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...):English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
* [ ] my own modified scripts: (give details below)
python3.7/site-packages/transformers/data/processors/glue.py
# Because the dataset SciEntsBank-3way labeled as "correct", "incorrect", "contradictory".
-223 return ["contradiction", "entailment", "neutral"]
+223 return ["correct", "incorrect", "contradictory"]
# Because the dataset SciEntsBank-3way structure. label at first position, text_a at second position and text_b at third position.
-232 text_a = line[8]
-233 text_b = line[9]
-234 label = line[-1]
+232 text_a = line[1]
+233 text_b = line[2]
+234 label = line[0]
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
mnli
* [ ] my own task or dataset: (give details below)
dataset: SciEntsBank-3way(https://www.cs.york.ac.uk/semeval-2013/task7.html)
## To reproduce
Steps to reproduce the behavior:
python ./run_bertology.py --data_dir SciEntsBank-3way
--model_name bert-base-uncased
--task_name mnli
--max_seq_length 128
--output_dir ./tmp/$TASK_NAME/
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Iteration: 100%|ββββββββββ| 4561/4561 [02:15<00:00, 33.57it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 3.03255 2.82196 1.77876 1.64802 3.27255 2.91101 3.34266 3.03600 2.73255 3.09043 1.35738 2.52412
INFO:__main__:layer 2: 2.73629 1.11241 2.86221 2.44852 0.95509 2.39331 0.45580 2.82749 2.93869 2.88269 2.19532 2.48865
INFO:__main__:layer 3: 0.05847 1.66529 1.91624 2.79214 2.31408 2.67645 2.18180 2.62745 2.48442 0.05168 2.52636 2.49648
INFO:__main__:layer 4: 1.54150 2.90387 2.40694 2.06858 2.77907 0.80181 2.69664 2.88957 2.70095 1.19583 2.33666 1.83265
INFO:__main__:layer 5: 2.34246 2.64519 2.03515 1.37404 2.88754 1.67422 2.14421 1.41457 2.03571 2.69347 1.98139 1.44582
INFO:__main__:layer 6: 1.71052 1.10676 2.28401 1.87228 2.55920 1.75916 1.22450 1.35704 1.92916 1.02535 1.67920 1.60766
INFO:__main__:layer 7: 1.63887 1.93625 1.83002 1.20811 1.58296 1.65662 1.55572 2.38742 2.09030 1.69326 1.42275 1.08153
INFO:__main__:layer 8: 1.95536 1.73146 1.59791 1.17307 1.12128 1.95980 1.11606 1.11680 1.97816 1.64787 1.53183 1.28007
INFO:__main__:layer 9: 1.54698 1.96436 1.45466 2.03807 1.60202 1.44075 1.36014 2.32559 2.59592 2.09076 1.75704 1.85274
INFO:__main__:layer 10: 2.00444 1.91784 2.12478 1.99289 1.58305 2.48627 2.08822 1.69971 2.70500 1.71860 2.03850 2.38604
INFO:__main__:layer 11: 2.76158 1.53031 1.99278 2.26007 1.97855 1.66471 1.90139 2.13217 2.45516 1.83803 1.99372 2.15438
INFO:__main__:layer 12: 1.73656 2.10304 2.72498 1.85723 2.04607 2.20456 2.16210 1.82173 2.18728 2.71702 1.84256 1.83663
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.30328 0.05899 0.06971 0.03727 0.13938 1.00000 0.04436 0.03679 0.22807 0.07911 0.19918 0.05241
INFO:__main__:layer 2: 0.04867 0.02256 0.21194 0.04069 0.23058 0.15942 0.65188 0.38251 0.47535 0.40172 0.10869 0.34316
INFO:__main__:layer 3: 0.06349 0.08003 0.56604 0.41141 0.38410 0.16264 0.29070 0.37301 0.28161 0.18325 0.45048 0.02401
INFO:__main__:layer 4: 0.74869 0.15986 0.29754 0.02072 0.20961 0.06570 0.35717 0.44580 0.01144 0.11113 0.26962 0.28707
INFO:__main__:layer 5: 0.05413 0.58029 0.29859 0.64154 0.25539 0.11611 0.36774 0.05591 0.19390 0.34493 0.04906 0.02742
INFO:__main__:layer 6: 0.24067 0.06599 0.45376 0.22384 0.40461 0.53808 0.06806 0.21937 0.04209 0.13334 0.19226 0.57838
INFO:__main__:layer 7: 0.33972 0.12576 0.31489 0.10031 0.29630 0.19341 0.28052 0.29937 0.78337 0.09395 0.23640 0.05812
INFO:__main__:layer 8: 0.23342 0.27415 0.27682 0.22111 0.23234 0.79778 0.03235 0.09092 0.40418 0.01651 0.21795 0.22528
INFO:__main__:layer 9: 0.01306 0.88878 0.08858 0.45180 0.04019 0.08035 0.13417 0.15899 0.39753 0.01761 0.10785 0.01428
INFO:__main__:layer 10: 0.01597 0.01365 0.08691 0.04718 0.01268 0.32052 0.00453 0.05614 0.81534 0.00000 0.02659 0.66734
INFO:__main__:layer 11: 0.86446 0.00818 0.05306 0.12751 0.13587 0.00293 0.06480 0.22173 0.21643 0.04838 0.48050 0.32190
INFO:__main__:layer 12: 0.08048 0.32489 0.56753 0.28201 0.37204 0.09334 0.26549 0.07130 0.00372 0.53481 0.24909 0.36108
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 41 108 102 123 81 0 119 124 62 100 72 114
INFO:__main__:layer 2: 116 129 70 121 61 79 8 28 17 25 89 35
INFO:__main__:layer 3: 107 99 13 22 27 77 46 29 49 76 20 128
INFO:__main__:layer 4: 6 78 44 130 71 105 33 21 138 88 53 47
INFO:__main__:layer 5: 112 10 43 9 55 87 31 111 73 34 115 126
INFO:__main__:layer 6: 57 104 18 64 23 14 103 67 120 84 75 11
INFO:__main__:layer 7: 36 86 40 91 45 74 50 42 5 92 58 109
INFO:__main__:layer 8: 59 52 51 66 60 4 125 94 24 132 68 63
INFO:__main__:layer 9: 136 1 95 19 122 98 83 80 26 131 90 134
INFO:__main__:layer 10: 133 135 96 118 137 39 140 110 3 143 127 7
INFO:__main__:layer 11: 2 139 113 85 82 142 106 65 69 117 16 38
INFO:__main__:layer 12: 97 37 12 48 30 93 54 101 141 15 56 32
Iteration: 100%|ββββββββββ| 4561/4561 [01:54<00:00, 39.80it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 2: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 3: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 4: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 5: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 6: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 7: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 8: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 9: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 11: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 12: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.30328 0.05899 0.06971 0.03727 0.13938 1.00000 0.04436 0.03679 0.22807 0.07911 0.19918 0.05241
INFO:__main__:layer 2: 0.04867 0.02256 0.21194 0.04069 0.23058 0.15942 0.65188 0.38251 0.47535 0.40172 0.10869 0.34316
INFO:__main__:layer 3: 0.06349 0.08003 0.56604 0.41141 0.38410 0.16264 0.29070 0.37301 0.28161 0.18325 0.45048 0.02401
INFO:__main__:layer 4: 0.74869 0.15986 0.29754 0.02072 0.20961 0.06570 0.35717 0.44580 0.01144 0.11113 0.26962 0.28707
INFO:__main__:layer 5: 0.05413 0.58029 0.29859 0.64154 0.25539 0.11611 0.36774 0.05591 0.19390 0.34493 0.04906 0.02742
INFO:__main__:layer 6: 0.24067 0.06599 0.45376 0.22384 0.40461 0.53808 0.06806 0.21937 0.04209 0.13334 0.19226 0.57838
INFO:__main__:layer 7: 0.33972 0.12576 0.31489 0.10031 0.29630 0.19341 0.28052 0.29937 0.78337 0.09395 0.23640 0.05812
INFO:__main__:layer 8: 0.23342 0.27415 0.27682 0.22111 0.23234 0.79778 0.03235 0.09092 0.40418 0.01651 0.21795 0.22528
INFO:__main__:layer 9: 0.01306 0.88878 0.08858 0.45180 0.04019 0.08035 0.13417 0.15899 0.39753 0.01761 0.10785 0.01428
INFO:__main__:layer 10: 0.01597 0.01365 0.08691 0.04718 0.01268 0.32052 0.00453 0.05614 0.81534 0.00000 0.02659 0.66734
INFO:__main__:layer 11: 0.86446 0.00818 0.05306 0.12751 0.13587 0.00293 0.06480 0.22173 0.21643 0.04838 0.48050 0.32190
INFO:__main__:layer 12: 0.08048 0.32489 0.56753 0.28201 0.37204 0.09334 0.26549 0.07130 0.00372 0.53481 0.24909 0.36108
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 41 108 102 123 81 0 119 124 62 100 72 114
INFO:__main__:layer 2: 116 129 70 121 61 79 8 28 17 25 89 35
INFO:__main__:layer 3: 107 99 13 22 27 77 46 29 49 76 20 128
INFO:__main__:layer 4: 6 78 44 130 71 105 33 21 138 88 53 47
INFO:__main__:layer 5: 112 10 43 9 55 87 31 111 73 34 115 126
INFO:__main__:layer 6: 57 104 18 64 23 14 103 67 120 84 75 11
INFO:__main__:layer 7: 36 86 40 91 45 74 50 42 5 92 58 109
INFO:__main__:layer 8: 59 52 51 66 60 4 125 94 24 132 68 63
INFO:__main__:layer 9: 136 1 95 19 122 98 83 80 26 131 90 134
INFO:__main__:layer 10: 133 135 96 118 137 39 140 110 3 143 127 7
INFO:__main__:layer 11: 2 139 113 85 82 142 106 65 69 117 16 38
INFO:__main__:layer 12: 97 37 12 48 30 93 54 101 141 15 56 32
INFO:__main__:Pruning: original score: 0.091866, threshold: 0.082679
INFO:__main__:Heads to mask: [117, 125, 140, 114, 121, 44, 112, 96, 109, 107, 108, 93, 105, 39]
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 2: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 3: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 4: 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 5: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 6: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 7: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 8: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 9: 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 11: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 12: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
Iteration: 100%|ββββββββββ| 4561/4561 [01:54<00:00, 39.68it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 2: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 3: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 4: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 5: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 6: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 7: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 8: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 9: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 11: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 12: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.39574 0.02468 0.08140 0.12466 0.12766 1.00000 0.09364 0.03733 0.21125 0.05515 0.27669 0.01294
INFO:__main__:layer 2: 0.04871 0.01492 0.02147 0.00766 0.25008 0.16705 0.73248 0.48019 0.40388 0.39327 0.06609 0.38033
INFO:__main__:layer 3: 0.14803 0.02220 0.64758 0.29125 0.45867 0.02242 0.34411 0.33109 0.30959 0.29897 0.41782 0.01806
INFO:__main__:layer 4: 0.82395 0.20768 0.33463 0.05595 0.30457 0.01353 0.38665 0.33563 0.02096 0.12274 0.15990 0.30594
INFO:__main__:layer 5: 0.24467 0.77639 0.26634 0.43066 0.28580 0.15136 0.29888 0.09479 0.20161 0.38652 0.07106 0.09292
INFO:__main__:layer 6: 0.24192 0.04294 0.19242 0.18251 0.64465 0.64657 0.03439 0.17273 0.05866 0.20935 0.14715 0.51240
INFO:__main__:layer 7: 0.24221 0.11722 0.54783 0.09908 0.30887 0.33625 0.18271 0.09798 0.76243 0.19917 0.26639 0.02415
INFO:__main__:layer 8: 0.34639 0.10483 0.42852 0.23310 0.20756 0.85146 0.05960 0.06187 0.25805 0.12922 0.14193 0.28091
INFO:__main__:layer 9: 0.02696 0.97099 0.08023 0.36748 0.05116 0.06451 0.07015 0.23535 0.39404 0.14999 0.01570 0.01164
INFO:__main__:layer 10: 0.02307 0.01918 0.09727 0.05241 0.03105 0.32034 0.02875 0.08710 0.92011 0.00000 0.05011 0.59763
INFO:__main__:layer 11: 0.81794 0.00753 0.05141 0.14622 0.09715 0.00008 0.03071 0.09413 0.17420 0.05189 0.69652 0.31542
INFO:__main__:layer 12: 0.09312 0.36467 0.56134 0.25867 0.37935 0.06446 0.34758 0.09796 0.03789 0.56186 0.25654 0.37430
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 24 126 100 85 84 0 96 120 64 111 52 138
INFO:__main__:layer 2: 117 136 131 140 58 75 8 18 23 26 104 29
INFO:__main__:layer 3: 79 130 10 49 19 129 36 40 43 47 22 134
INFO:__main__:layer 4: 4 66 39 110 46 137 27 38 132 86 76 45
INFO:__main__:layer 5: 59 6 54 20 50 77 48 94 68 28 102 98
INFO:__main__:layer 6: 61 118 70 72 12 11 121 74 109 65 80 17
INFO:__main__:layer 7: 60 87 16 89 44 37 71 90 7 69 53 127
INFO:__main__:layer 8: 35 88 21 63 67 3 108 107 56 83 82 51
INFO:__main__:layer 9: 125 1 101 32 115 105 103 62 25 78 135 139
INFO:__main__:layer 10: 128 133 92 112 122 41 124 99 2 143 116 13
INFO:__main__:layer 11: 5 141 114 81 93 142 123 95 73 113 9 42
INFO:__main__:layer 12: 97 33 15 55 30 106 34 91 119 14 57 31
INFO:__main__:Masking: current score: 0.092085, remaning heads 130 (90.3 percents)
INFO:__main__:Heads to mask: [15, 11, 41, 13, 106, 35, 14, 25, 29, 83, 1, 126, 66, 7]
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 2: 1.00000 0.00000 0.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 3: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 4: 1.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 5: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 6: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 7: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 8: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 9: 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 11: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 12: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
Iteration: 0%| | 0/4561 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 421, in main
head_mask = mask_heads(args, model, eval_dataloader)
File "run_bertology.py", line 179, in mask_heads
args, model, eval_dataloader, compute_entropy=False, head_mask=new_head_mask
File "run_bertology.py", line 104, in compute_heads_importance
head_importance += head_mask.grad.abs().detach()
AttributeError: 'NoneType' object has no attribute 'abs'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3894/comments | https://api.github.com/repos/huggingface/transformers/issues/3894/events | https://github.com/huggingface/transformers/issues/3894 | 604,647,290 | MDU6SXNzdWU2MDQ2NDcyOTA= | 3,894 | FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv' | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Well, it looks like you don't have the \"mnli/dev_matched.tsv\" file downloaded? You can download the GLUE datasets using [utils/download_glue_data.py](https://github.com/huggingface/transformers/blob/master/utils/download_glue_data.py)"
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug
## Information
A strange errorοΌ
INFO:transformers.data.datasets.glue:Creating features from dataset file at mnli
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 407, in main
eval_dataset = GlueDataset(args, tokenizer=tokenizer, evaluate=True, local_rank=args.local_rank)
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 100, in __init__
if evaluate
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/glue.py", line 219, in get_dev_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched")
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/utils.py", line 115, in _read_tsv
with open(input_file, "r", encoding="utf-8-sig") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv'
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
mnli
## To reproduce
Steps to reproduce the behavior:
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME
--model_name bert-base-uncased
--task_name $TASK_NAME
--max_seq_length 128
--output_dir ./tmp/$TASK_NAME/
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
INFO:transformers.data.datasets.glue:Creating features from dataset file at mnli
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 407, in main
eval_dataset = GlueDataset(args, tokenizer=tokenizer, evaluate=True, local_rank=args.local_rank)
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 100, in __init__
if evaluate
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/glue.py", line 219, in get_dev_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched")
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/utils.py", line 115, in _read_tsv
with open(input_file, "r", encoding="utf-8-sig") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3893/comments | https://api.github.com/repos/huggingface/transformers/issues/3893/events | https://github.com/huggingface/transformers/issues/3893 | 604,468,068 | MDU6SXNzdWU2MDQ0NjgwNjg= | 3,893 | Can not import DataCollatorForLanguageModeling | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I was installing from pip and it isn't updated yet. Building from source solved this.",
"I am trying to run this from source and still I am getting the same error! \r\nPlease have a look at this [here](https://github.com/huggingface/transformers/issues/3896#issue-604681806)",
"It's because the pip package hasn't been updated. The script to train is changed fundamentally. so you can try building from scratch using \r\n`git clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .` \r\nor \r\nYou can use old script of `run_language_modeling.py` from previous commits. "
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | # π Bug
## Information
Model I am using (ALBERT):
Language I am using the model on (Sanskrit, Hindi):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. In Google Colab
2. ` !python /content/transformers/examples/run_language_modeling.py \
--train_data_file /content/corpus/train/full.txt \
--eval_data_file /content/corpus/valid/full_val.txt \
--model_type albert-base-v2 \ `
3. This worked yesterdy, bbut the latest added `DataCollatorForLanguageModeling` can't be imported.
The error i am getting
`2020-04-22 05:12:25.640328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "/content/transformers/examples/run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling'`
So, I checked if it can be imported directly.
`from transformers import DataCollatorForLanguageModeling`
ERROR
`from transformers import DataCollatorForLanguageModeling
ImportError: cannot import name 'DataCollatorForLanguageModeling' `
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 2.2.0-rc3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3892/comments | https://api.github.com/repos/huggingface/transformers/issues/3892/events | https://github.com/huggingface/transformers/issues/3892 | 604,404,617 | MDU6SXNzdWU2MDQ0MDQ2MTc= | 3,892 | β Summarization example : Why no shuffling ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"No good reason.\r\nDo you feel comfortable sending a PR that shuffles for train loader only?\r\n"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | # β Questions & Help
Usually, when loading using Dataloader, data is shuffled for the training set. But in the case of the summarization example, data is not shuffled for training set :
https://github.com/huggingface/transformers/blob/1dc9b3c7847269961458c059ad8ad443b26bf60d/examples/summarization/bart/finetune.py#L105-L108
---
**Why data is not shuffled ?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3891/comments | https://api.github.com/repos/huggingface/transformers/issues/3891/events | https://github.com/huggingface/transformers/issues/3891 | 604,330,183 | MDU6SXNzdWU2MDQzMzAxODM= | 3,891 | Allow one to return encoder attentions in seq2seq generation | {
"login": "aced125",
"id": 44452903,
"node_id": "MDQ6VXNlcjQ0NDUyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44452903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aced125",
"html_url": "https://github.com/aced125",
"followers_url": "https://api.github.com/users/aced125/followers",
"following_url": "https://api.github.com/users/aced125/following{/other_user}",
"gists_url": "https://api.github.com/users/aced125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aced125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aced125/subscriptions",
"organizations_url": "https://api.github.com/users/aced125/orgs",
"repos_url": "https://api.github.com/users/aced125/repos",
"events_url": "https://api.github.com/users/aced125/events{/privacy}",
"received_events_url": "https://api.github.com/users/aced125/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @aced125, I agree that this functionality should be provided :-) \r\n\r\nI think in the PR, one has to include a `output_attention` argument to the `generate()` function and then make sure that the output idx are correct! \r\nBefore starting this PR, this PR should probably be solved before: https://github.com/huggingface/transformers/issues/3880",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"want to take a look at this soon",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"#6735 is a first step to allow for this feature",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,608 | 1,608 | NONE | null | # π Feature request
Please could we have the ability to return attention weights from the decoded generated tokens to the encoded source?
## Motivation
To attribute the decoded text. E.g in the summarization task we want to see where the decoder was paying attention to in the source.
## Your contribution
May be able to look into a PR but stretched for time at the minute.
FairSeq has implemented this capability I believe. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3890/comments | https://api.github.com/repos/huggingface/transformers/issues/3890/events | https://github.com/huggingface/transformers/pull/3890 | 604,297,515 | MDExOlB1bGxSZXF1ZXN0NDA2OTE4MzM4 | 3,890 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=h1) Report\n> Merging [#3890](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eb5601b0a5a88824a2598956f96e06e7f2422bce&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3890 +/- ##\n==========================================\n- Coverage 78.57% 78.53% -0.04% \n==========================================\n Files 106 106 \n Lines 17962 17962 \n==========================================\n- Hits 14113 14106 -7 \n- Misses 3849 3856 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.28% <0.00%> (-1.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=footer). Last update [eb5601b...fe606bb](https://codecov.io/gh/huggingface/transformers/pull/3890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | Model: TinyBERT-spanish-uncased-finetuned-ner | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3890",
"html_url": "https://github.com/huggingface/transformers/pull/3890",
"diff_url": "https://github.com/huggingface/transformers/pull/3890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3890.patch",
"merged_at": 1587581803000
} |
https://api.github.com/repos/huggingface/transformers/issues/3889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3889/comments | https://api.github.com/repos/huggingface/transformers/issues/3889/events | https://github.com/huggingface/transformers/pull/3889 | 604,296,355 | MDExOlB1bGxSZXF1ZXN0NDA2OTE3Mzky | 3,889 | Update comparison table | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3889",
"html_url": "https://github.com/huggingface/transformers/pull/3889",
"diff_url": "https://github.com/huggingface/transformers/pull/3889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3889.patch",
"merged_at": 1587581658000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3888/comments | https://api.github.com/repos/huggingface/transformers/issues/3888/events | https://github.com/huggingface/transformers/issues/3888 | 604,260,241 | MDU6SXNzdWU2MDQyNjAyNDE= | 3,888 | encode_for_summarization function did actually add CLS and SEP to separate sentences | {
"login": "xdwang0726",
"id": 16963017,
"node_id": "MDQ6VXNlcjE2OTYzMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/16963017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xdwang0726",
"html_url": "https://github.com/xdwang0726",
"followers_url": "https://api.github.com/users/xdwang0726/followers",
"following_url": "https://api.github.com/users/xdwang0726/following{/other_user}",
"gists_url": "https://api.github.com/users/xdwang0726/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xdwang0726/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xdwang0726/subscriptions",
"organizations_url": "https://api.github.com/users/xdwang0726/orgs",
"repos_url": "https://api.github.com/users/xdwang0726/repos",
"events_url": "https://api.github.com/users/xdwang0726/events{/privacy}",
"received_events_url": "https://api.github.com/users/xdwang0726/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | https://github.com/huggingface/transformers/blob/d32585a304107cb9f42ccb0e1278405aa3eb6c9c/examples/summarization/bertabs/utils_summarization.py#L130
Hi,
Could you please take a look at this part of the codes? I think this part might not act the function to separate the sentences. The function doesn't actually add CLS and SEP to sentences. Thank you in advance for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3887/comments | https://api.github.com/repos/huggingface/transformers/issues/3887/events | https://github.com/huggingface/transformers/issues/3887 | 604,229,904 | MDU6SXNzdWU2MDQyMjk5MDQ= | 3,887 | pytorch lightning examples doesn't work in multi gpu's with backend=dp | {
"login": "leslyarun",
"id": 5101854,
"node_id": "MDQ6VXNlcjUxMDE4NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5101854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leslyarun",
"html_url": "https://github.com/leslyarun",
"followers_url": "https://api.github.com/users/leslyarun/followers",
"following_url": "https://api.github.com/users/leslyarun/following{/other_user}",
"gists_url": "https://api.github.com/users/leslyarun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leslyarun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leslyarun/subscriptions",
"organizations_url": "https://api.github.com/users/leslyarun/orgs",
"repos_url": "https://api.github.com/users/leslyarun/repos",
"events_url": "https://api.github.com/users/leslyarun/events{/privacy}",
"received_events_url": "https://api.github.com/users/leslyarun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I get the below error:\r\n\r\n```\r\nValidation sanity check: 0%| | 0/5 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"run_pl_glue.py\", line 186, in <module>\r\n trainer = generic_train(model, args)\r\n File \"/home/jupyter/transformers/examples/transformer_base.py\", line 307, in generic_train\r\n trainer.fit(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 701, in fit\r\n self.dp_train(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py\", line 540, in dp_train\r\n self.run_pretrain_routine(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 843, in run_pretrain_routine\r\n False)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 262, in _evaluate\r\n output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 430, in evaluation_forward\r\n output = model(*args)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py\", line 66, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 165, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n res = gather_map(outputs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 62, in gather_map\r\n for k in out))\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 62, in <genexpr>\r\n for k in out))\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 55, in gather_map\r\n return Gather.apply(target_device, dim, *outputs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/_functions.py\", line 54, in forward\r\n assert all(map(lambda i: i.is_cuda, inputs))\r\nAssertionError\r\n```\r\n@nateraw @williamFalcon",
"update to the latest lightning version?\r\n0.7.4rc1",
"@williamFalcon doesn't work with lightning version 0.7.4rc1, 0.7.4rc2 and even 0.7.3, 0.7.1\r\n",
"ok, can you share a colab here? happy to take a look",
"@williamFalcon Thanks. I'm running the code as per the given instructions in https://github.com/huggingface/transformers/tree/master/examples/glue\r\nI didn't make any changes, I just ran the same official example script in multi gpu's - https://github.com/huggingface/transformers/blob/master/examples/glue/run_pl.sh \r\nIt works in CPU and single GPU, but doesn't work in multi-gpu's ",
"It is a bit unclear what is going on in there: the bash script installs lightning but the python code doesn't seem to use it?",
"I am also facing the error but on a different custom learning model. My code is working properly on a single GPU, however, if I increase the number of GPUs to 2, it gives me the above error. I checked both PL 0.7.3 and 0.7.4rc3 \r\n\r\n**Update: Interestingly when I changed ``distributed_backend`` to ``ddp`` then it worked perfectly without any error** I think there is an issue with the **``dp``** distributed_backend",
"\r\n\r\nrun_pl.sh runs fine.\r\n\r\nI ran without ANY changes to the file. Did you guys change anything in the file?",
"@williamFalcon Didn't change anything, hope you ran it in multi-gpu's. The code seems to run fine in ddp, but not in dp, as mentioned by @mmiakashs . \r\n\r\nWhen I debugged, I found that when using dp (DataParallel) with 8 gpu's, it generates 8 different losses and since the training_step can't gather 8 losses, it showed error like this:\r\n``` TypeError: zip argument #1 must support iteration ```\r\n",
"Ummm, yeah not sure... It looks ok to me.\r\n\r\n\r\n\r\n\r\n\r\n\r\nTry running dp on 2 GPUs? This test is on 2 GPUs",
"It looks like hf sets ddp as the backend which is great because dp has a bunch of issues (this is a PyTorch problem, not lightning). Both PyTorch and lightning discourage dp use.\r\n\r\nJust ran this with the default ddp and it works well (although the run_pl.sh script has a bunch of usability issues, ie: i need the data in a different part of the cluster but that script doesn't do that, so I had to run from that directory in the cluster. Ideally --data_dir solves this issue but it doesn't).",
"I can confirm that the issue occurs only when using multi-gpu's with dp as backend. Using ddp solves the issues.\r\n\r\nI found one more issue. If I use fast tokenizers with ddp as backend, I get the below error:\r\n\r\n```\r\nINFO:lightning:GPU available: True, used: True\r\nINFO:lightning:CUDA_VISIBLE_DEVICES: [0,1]\r\n/opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/warnings.py:18: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.\r\n warnings.warn(*args, **kwargs)\r\nTraceback (most recent call last):\r\n File \"run_pl_glue.py\", line 187, in <module>\r\n trainer = generic_train(model, args)\r\n File \"/home/jupyter/transformers/examples/transformer_base.py\", line 310, in generic_train\r\n trainer.fit(model)\r\n File \"/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 734, in fit\r\n mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 162, in spawn\r\n process.start()\r\n File \"/opt/conda/lib/python3.7/multiprocessing/process.py\", line 112, in start\r\n self._popen = self._Popen(self)\r\n File \"/opt/conda/lib/python3.7/multiprocessing/context.py\", line 284, in _Popen\r\n return Popen(process_obj)\r\n File \"/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py\", line 32, in __init__\r\n super().__init__(process_obj)\r\n File \"/opt/conda/lib/python3.7/multiprocessing/popen_fork.py\", line 20, in __init__\r\n self._launch(process_obj)\r\n File \"/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py\", line 47, in _launch\r\n reduction.dump(process_obj, fp)\r\n File \"/opt/conda/lib/python3.7/multiprocessing/reduction.py\", line 60, in dump\r\n ForkingPickler(file, protocol).dump(obj)\r\nTypeError: can't pickle Tokenizer objects\r\n```",
">\r\n> I found one more issue. If I use fast tokenizers with ddp as backend, I get the below error:\r\n> \r\n@leslyarun I am also facing a similar issue with ddp backend (not exactly the same): [github issue](https://github.com/PyTorchLightning/pytorch-lightning/issues/1578)\r\nMy guess is that maybe there is an issue with the callback and the saving objects with pickle. At this moment I will try to manually save checkpoint without using the callbacks.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@mmiakashs did that end up working?",
"> @mmiakashs did that end up working?\r\n\r\ncurrently, I am using ddp_spwan mode and it is working fine. ",
"@sshleifer can confirm A) the Lightning examples don't work at all with `dp` B) does run, but needs significant editing with `ddp`\r\n\r\nFor examples I've looked at it's not as simple as turning `ddp` on and all great. It seems whomever wrote the Lightning examples never tried multi-GPU. Happy to elaborate or share (though mine are not in great shape at the moment).\r\n\r\nAnd `ddp_spawn` definitely does not work for me. Gives several spawn-based errors -- says my model is not compliant. ",
"A) don't know but that sounds very likely. @williamFalcon told me \"Dont use dp\".\r\n\r\nB) `examples/seq2seq/finetune.py` works in multigpu with two caveats:\r\n(a) versions need to be transformers=master, pl=0.8.1. \r\n(b) you cannot pass `--do_predict`. (`pl.Trainer.test` is broken for multi-gpu)\r\n\r\nFor the other two pl examples: ner, and glue, I haven't tested multi-gpu, but they should be at least close to working because they inherit from the same `BaseTransformer`. Which one of those were you trying to run/ are you interesting in running?\r\n\r\n",
"Thanks @sshleifer. We're fine using `ddp` for everything -- only need one version to work, not multiple ways to do the same thing. Also according to the docs, `ddp` is the only one that works with FP16 anyway (have not tested yet, will do soon).\r\nhttps://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html\r\n\r\nI'm working off of `transformers` from GitHub... so should be a recent version. If that's not what you are saying couple you please be more specific?\r\n\r\nWe also don't necessarily \"need\" Lightning... but would be great if it worked (in single set of settings) for multi-GPU. As it is great having reasonable out of the box options for LR schedule, model synchronization, gradient accumulation, and all those other things I've grown tired of implementing for every project. ",
"@moscow25 dp is NOT recommended by PyTorch\r\n\r\nhttps://pytorch.org/docs/master/generated/torch.nn.DataParallel.html\r\n\r\n\r\n\r\n2. The current base transformers has a few issues which I've submitted a PR for.\r\n3. Please let me know what example you are using / what code i can look at to reproduce the issues.",
"> @sshleifer can confirm A) the Lightning examples don't work at all with `dp` B) does run, but needs significant editing with `ddp`\r\n> \r\n> For examples I've looked at it's not as simple as turning `ddp` on and all great. It seems whomever wrote the Lightning examples never tried multi-GPU. Happy to elaborate or share (though mine are not in great shape at the moment).\r\n> \r\n> And `ddp_spawn` definitely does not work for me. Gives several spawn-based errors -- says my model is not compliant.\r\n\r\nddp doesn't work for me and ddp_spawn gives a lot of errors. On using ddp, no error is shown but it doesn't start anything on the GPU - just the notebook cell being busy indefinitely. I am using the DistilBertTokenizer and DistilBertModel - has anyone been able to run pytorch lightning on multipe gpus with Distilbert?",
"I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.",
"> I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.\r\n\r\nWhy does running the code in Jupyter notebook create a problem? I was able to run the BertModels like SequenceClassification in the Jupyter notebook on multiple gpus without any problem - but running into this multiple gpu problem using pytorch lightning. It is nice to be able to use Pytorch lightning given all the built in options. It makes it easier to build the models interactively on the Jupyter notebook",
"> > I suspect that your issue is ddp+jupyter rather than distillbert. Try running your command from the terminal.\r\n> \r\n> Why does running the code in Jupyter notebook create a problem? I was able to run the BertModels like SequenceClassification in the Jupyter notebook on multiple gpus without any problem - but running into this multiple gpu problem using pytorch lightning. It is nice to be able to use Pytorch lightning given all the built in options. It makes it easier to build the models interactively on the Jupyter notebook\r\n\r\nLooks like usage of ddp doesn't work in Jupyter notebook. and transformers don't work with dp parameter of pytorch lightning in Jupyter notebook. So looks like the only option to use pytorch lightning, multiple gpus and transformer is to run it as a python script.\r\nhttps://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html\r\nJupyter Notebooks\r\nUnfortunately any ddp_ is not supported in jupyter notebooks. Please use dp for multiple GPUs. This is a known Jupyter issue. If you feel like taking a stab at adding this support, feel free to submit a PR!",
"i believe @nateraw is almost done updating the examples with the latest version of PL. \r\n\r\ncan you share the model that does work with multiple gpus in a jupyter notebook?",
"I read somewhere on the pytorch lightning documents about being careful to checkpoint models when running on DDP mode - can't find that documentation now but is there something I need to be careful about checkpointing while running DDP on a single machine with 8 GPUs? It was something about the model getting split among multiple machines - not sure if that is valid if DDP used on a single machine. ",
"nothing you have to worry about... we save the checkpoint correctly automatically ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,603 | 1,603 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: run_pl.sh (run_pl_glue.py)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Glue
## To reproduce
Steps to reproduce the behavior:
1. run_pl.sh script with multi-gpu's (ex:8 gpu's)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Glue training should happen
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DataParallel
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3886/comments | https://api.github.com/repos/huggingface/transformers/issues/3886/events | https://github.com/huggingface/transformers/issues/3886 | 604,178,238 | MDU6SXNzdWU2MDQxNzgyMzg= | 3,886 | How to find a correct place of original word from the list of predicted words from GPT-2 model? | {
"login": "states786",
"id": 64096105,
"node_id": "MDQ6VXNlcjY0MDk2MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/64096105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/states786",
"html_url": "https://github.com/states786",
"followers_url": "https://api.github.com/users/states786/followers",
"following_url": "https://api.github.com/users/states786/following{/other_user}",
"gists_url": "https://api.github.com/users/states786/gists{/gist_id}",
"starred_url": "https://api.github.com/users/states786/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/states786/subscriptions",
"organizations_url": "https://api.github.com/users/states786/orgs",
"repos_url": "https://api.github.com/users/states786/repos",
"events_url": "https://api.github.com/users/states786/events{/privacy}",
"received_events_url": "https://api.github.com/users/states786/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"It won't be that easy since some words will be split into multiple tokens so you have to make two forward passes. \r\n\r\nIf you limit your `original_word` to just one token words (you can check that simply with `len(tokenizer.encode(original_word))==1`. Then your idea here should work. \r\n\r\nIf not it's gonna be trickier. Also this issue might be helpful: \r\nhttps://github.com/huggingface/transformers/issues/2311",
"Thanks @patrickvonplaten for your response. \r\nYes, the code works for `len(tokenizer.encode(original_word))==1`, but not for those `original_word` , which consist of more than one tokens.\r\n\r\nI look at the shared issue, but I am confused, which selected word id, should I pass to the model again, as `next_word_logits.topk(5)` gives me 5 token ids?\r\n\r\nCan you please share any code snippet, which will work for the second part?",
"Hi @patrickvonplaten,\r\n\r\ncan u plz let me know about any update?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Hi,
I would like to calculate at which place correct word option lies in the top 5 predicted words from GPT-2 model?
For this purpose, I am using following code snippet:
```
subseq = "The car moves very" #sample sequence
orignal_word="fast"
sequence = tokenizer.encode(subseq, return_tensors="pt")
next_word_id = tokenizer.encode(orignal_word, return_tensors="pt")
next_word = tokenizer.decode(next_word_id[0])
next_word_logits = model(sequence)[0][0, -1].detach()
probabilities, word_ids = next_word_logits.topk(5) #Getting top 5 next word options
rank=1.0
for word_id in word_ids:
word = tokenizer.decode([word_id])
if word == next_word:
break;
rank=rank+1.0
print("Rank of Correct option is "+ str(rank))
```
I am not sure whether it is done perfectly or not as GPT-2 model uses BPE tokenizer. Am I doing it in a right way? Kindly share your thoughts, and correct me if I am doing something wrong in it.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3886/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3885/comments | https://api.github.com/repos/huggingface/transformers/issues/3885/events | https://github.com/huggingface/transformers/issues/3885 | 604,173,463 | MDU6SXNzdWU2MDQxNzM0NjM= | 3,885 | Pretrain From Scratch using Google TPU | {
"login": "muhammadfahid51",
"id": 57350797,
"node_id": "MDQ6VXNlcjU3MzUwNzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/57350797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muhammadfahid51",
"html_url": "https://github.com/muhammadfahid51",
"followers_url": "https://api.github.com/users/muhammadfahid51/followers",
"following_url": "https://api.github.com/users/muhammadfahid51/following{/other_user}",
"gists_url": "https://api.github.com/users/muhammadfahid51/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muhammadfahid51/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muhammadfahid51/subscriptions",
"organizations_url": "https://api.github.com/users/muhammadfahid51/orgs",
"repos_url": "https://api.github.com/users/muhammadfahid51/repos",
"events_url": "https://api.github.com/users/muhammadfahid51/events{/privacy}",
"received_events_url": "https://api.github.com/users/muhammadfahid51/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | @julien-c @patrickvonplaten
I want to pretrain a model from scratch by utilizing Google Cloud TPU offered in kaggle. I can train the model without TPU but I want to train it on a TPU. Any help will be much appreciated.
Also what options do I have, if there is no straight forward approach ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3884/comments | https://api.github.com/repos/huggingface/transformers/issues/3884/events | https://github.com/huggingface/transformers/issues/3884 | 604,143,345 | MDU6SXNzdWU2MDQxNDMzNDU= | 3,884 | Problem trying to run AlbertForMaskedLM on Colab TPU: TypeError: can't pickle torch._C.ScriptFunction objects when calling xm.send_cpu_data_to_device(model, dev) | {
"login": "ayanyuegupta",
"id": 43908663,
"node_id": "MDQ6VXNlcjQzOTA4NjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/43908663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayanyuegupta",
"html_url": "https://github.com/ayanyuegupta",
"followers_url": "https://api.github.com/users/ayanyuegupta/followers",
"following_url": "https://api.github.com/users/ayanyuegupta/following{/other_user}",
"gists_url": "https://api.github.com/users/ayanyuegupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayanyuegupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayanyuegupta/subscriptions",
"organizations_url": "https://api.github.com/users/ayanyuegupta/orgs",
"repos_url": "https://api.github.com/users/ayanyuegupta/repos",
"events_url": "https://api.github.com/users/ayanyuegupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayanyuegupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems fixed now--delete transformers installed via pip and install by cloning this repo."
] | 1,587 | 1,588 | 1,588 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): AlbertForMaskedLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Related issues: https://github.com/pytorch/xla/issues/1909
The following has already been talked about here (https://github.com/huggingface/transformers/pull/3743) but I couldn't find a solution? Apologies if I'm posting about something that's already been dealt with, pretty new to all of this.
I am running the following code on colab on a TPU session taken from the example here: https://huggingface.co/transformers/model_doc/albert.html#albertformaskedlm
```
import os
import torch
import torch_xla
import torch_xla.core.xla_model as xm
assert os.environ['COLAB_TPU_ADDR']
dev = xm.xla_device()
from transformers import AlbertTokenizer, AlbertForMaskedLM
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
model = xm.send_cpu_data_to_device(model, dev)
model = model.to(dev)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
data = input_ids.to(dev)
outputs = model(data, masked_lm_labels=data)
loss, prediction_scores = outputs[:2]
```
I haven't done anything to the example code except move ```input_ids``` and ```model``` onto the TPU device using ```.to(dev)``` and ```xm.send_cpu_data_to_device(model, dev)```. It seems everything is moved to the TPU no problem as when I input ```data``` I get the following output: ```tensor([[ 2, 10975, 15, 51, 1952, 25, 10901, 3]], device='xla:1')```
However when I run this code I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-b7b68efc9620> in <module>()
11 tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
12 model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
---> 13 model = xm.send_cpu_data_to_device(model, dev)
14 model = model.to(dev)
15 input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
18 frames
/usr/lib/python3.6/copy.py in copy(x)
94 reductor = getattr(x, "__reduce_ex__", None)
95 if reductor:
---> 96 rv = reductor(4)
97 else:
98 reductor = getattr(x, "__reduce__", None)
TypeError: can't pickle torch._C.ScriptFunction objects
```
Anyone know what's going on?
## Expected behavior
I expected the AlbertForMaskedLM model to work on colab TPU without any errors.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+ab660ae (False)
- Tensorflow version (GPU?): 2.2.0-rc3 (False)
- Using GPU in script?: no, attempting to use TPU
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3884/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3883/comments | https://api.github.com/repos/huggingface/transformers/issues/3883/events | https://github.com/huggingface/transformers/issues/3883 | 604,112,109 | MDU6SXNzdWU2MDQxMTIxMDk= | 3,883 | No longer able to fine-tune GPT2 using provided examples | {
"login": "texturejc",
"id": 24894080,
"node_id": "MDQ6VXNlcjI0ODk0MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/24894080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/texturejc",
"html_url": "https://github.com/texturejc",
"followers_url": "https://api.github.com/users/texturejc/followers",
"following_url": "https://api.github.com/users/texturejc/following{/other_user}",
"gists_url": "https://api.github.com/users/texturejc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/texturejc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/texturejc/subscriptions",
"organizations_url": "https://api.github.com/users/texturejc/orgs",
"repos_url": "https://api.github.com/users/texturejc/repos",
"events_url": "https://api.github.com/users/texturejc/events{/privacy}",
"received_events_url": "https://api.github.com/users/texturejc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Never mind; this was an issue with the colab. It's sorted now."
] | 1,587 | 1,587 | 1,587 | NONE | null | A few months ago, I was able to run GPT2 on a Google Colab notebook. This was using the following script, which is based on the provided docs:
```
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r ./examples/requirements.txt
!python /content/transformers/examples/run_lm_finetuning.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=/content/train.txt \
--do_eval \
--eval_data_file=/content/test.txt \
--per_gpu_train_batch_size=2
!python /content/transformers/examples/run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--length 500
```
Coming back to it after a little while, it no longer works. I realise that `run_lm_finetuning.py` has been replaced by `run_language_modeling.py`. However, running this file instead either produces a `command not found error`, or it asks me to provide details that I've already provided: `the following arguments are required: --train_data_file, --output_dir, --model_type`.
I appreciate that you guys perform a great service to the community by making these models available, and I thank you for doing so. I also understand that it's my responsibility to keep up with changes. All the same, any help in getting this functionality back on track would be appreciated!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3883/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3882/comments | https://api.github.com/repos/huggingface/transformers/issues/3882/events | https://github.com/huggingface/transformers/pull/3882 | 604,099,727 | MDExOlB1bGxSZXF1ZXN0NDA2NzU2ODM1 | 3,882 | Create model card for RoBERTa large fine-tuned on wsc | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"I there any problem with this card?"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3882",
"html_url": "https://github.com/huggingface/transformers/pull/3882",
"diff_url": "https://github.com/huggingface/transformers/pull/3882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3882.patch",
"merged_at": 1587758221000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3881/comments | https://api.github.com/repos/huggingface/transformers/issues/3881/events | https://github.com/huggingface/transformers/pull/3881 | 604,065,612 | MDExOlB1bGxSZXF1ZXN0NDA2NzI5NTMz | 3,881 | Fix Torch.hub + Integration test | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(Hmm, GitHub was failing earlier today, and now it seems to have posted my comment multiple times. Sorry about that.)",
"See cbbb3c43c55d2d93a156fc80bd12f31ecbac8520"
] | 1,587 | 1,587 | 1,587 | MEMBER | null | - Torch.hub doesn't use pip-installed versions of modules, but uses a [custom importer instead](https://github.com/pytorch/pytorch/blob/master/torch/hub.py#L70-L83) (it imports `hubconf.py`) which means that:
- all imports from hubconf.py refer to src.transformers instead of transformers
- all imports inside the lib's code **must** be relative, i.e. shouldn't assume that the transformers module is installed (it's not)
- Added a GitHub action workflow to ensure that `hub.list` and `hub.help` always work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3881",
"html_url": "https://github.com/huggingface/transformers/pull/3881",
"diff_url": "https://github.com/huggingface/transformers/pull/3881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3881.patch",
"merged_at": 1587492811000
} |
https://api.github.com/repos/huggingface/transformers/issues/3880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3880/comments | https://api.github.com/repos/huggingface/transformers/issues/3880/events | https://github.com/huggingface/transformers/issues/3880 | 604,022,389 | MDU6SXNzdWU2MDQwMjIzODk= | 3,880 | Replace `config.output_attentions` parameter with function argument `output_attentions` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi, I would like to work on this issue.",
"That's great :-) Do you want to open a PR and do the changes analogous to PR: #3734 ? ",
"Is this still be worked on? If not, I'd be happy to make a first contribution here",
"First PR first serve ;-) Still an open issue",
"Any tips on how I should proceed? I was thinking of following the changes made for `config.output_past` (01c37dc), but for `config.output_attentions` instead.",
"Oh sorry @drjosephliu didn't notice the comment as I was working on it earlier today, my apologies π ",
"Hey @patrickvonplaten, i noticed this issue has been closed. Any updates on what changes were made and any updates to the PR i still need to make?",
"Hey @drjosephliu, I will take a closer look at your PR tomorrow :-) "
] | 1,587 | 1,591 | 1,591 | MEMBER | null | # π Feature request
Currently the user has to decide whether the model should output the attentions when she/he creates the config of a model: config.output_attentions = True/False. It would be nice if the user can decide this when calling the models `forward()` / `call()` with a flag `output_attentions`. This should be done for all TF and PT models that can output attentions.
A very similar recent change was done for the variable `config.output_past` -> see PR:#3734
## Motivation
The user has more flexibility when the hidden states should be output or not.
## Your contribution
If someone feels like contributing to the library, this would be a great first PR. I'm very happy to guide the contributor through the PR!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3880/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3879/comments | https://api.github.com/repos/huggingface/transformers/issues/3879/events | https://github.com/huggingface/transformers/issues/3879 | 604,020,219 | MDU6SXNzdWU2MDQwMjAyMTk= | 3,879 | Replace `config.output_hidden_states` parameter with function argument `output_hidden_states` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi, @patrickvonplaten I want to take up this issue. Can I move forward with it? ",
"I think this could have side effects for libraries that use `config.output_hidden_states`, so I'm cc'ing @Timoeller and @brandenchan, because this parameter is used in [FARM](https://github.com/deepset-ai/FARM).",
"> Hi, @patrickvonplaten I want to take up this issue. Can I move forward with it?\r\n\r\nThat would be great, feel free to open a PR and do a first model. The PR should be very similar to what was done in PR #3734",
"Hi, @patrickvonplaten as I am new here I might take some time to get acquainted with the codebase and come up with a PR. Is it okay?",
"Sorry @gaurav-singh1998, I saw this issue was still open so I made a PR for it.",
"Okay, no issues @drjosephliu I'll find some other good first issues to solve. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,595 | 1,595 | MEMBER | null | # π Feature request
Currently the user has to decide whether the model should output the hidden states when she/he creates the config of a model: `config.output_hidden_states = True/False`. It would be nice if the user can decide this when calling the models `forward()` / `call()` with a flag `output_hidden_states`. This should be done for all TF and PT models that can output hidden states.
A very similar recent change was done for the variable `config.output_past` -> see PR:https://github.com/huggingface/transformers/pull/3734
## Motivation
The user has more flexibility when the hidden states should be output or not.
## Your contribution
If someone feels like contributing to the library, this would be a great first PR. I'm very happy to guide the contributor through the PR! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3879/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3879/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3878/comments | https://api.github.com/repos/huggingface/transformers/issues/3878/events | https://github.com/huggingface/transformers/issues/3878 | 603,976,902 | MDU6SXNzdWU2MDM5NzY5MDI= | 3,878 | When will ELECTRA pretraining from scratch will be available? | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Working on it as we speak :).\r\n\r\nI'd say it will be out in a few weeks at most.",
"Is Albert pretraining from scratch is available? @LysandreJik \r\n",
"@LysandreJik do you think the update will be available by the end of the month ? Maybe it has been postponed due to the recent addition of the Trainer and the refactor of the language_modeling script ?",
"It has been postponed a bit due to the recent addition of the Trainer and the TPU work on it, but I definitely aim to have it out earlier than by the end of the month :)",
"Was looking for Albert pre-training from scratch but I think there is support for Bert, Roberta and distillbert only as of now. \r\n@LysandreJik can you guide how can I do Albert pretraining from scratch?",
"@LysandreJik Is the code for pretraining Electra from scratch available now?\r\n",
"> @LysandreJik Is the code for pretraining Electra from scratch available now?\r\n\r\nNot yet. There's a PR about it;\r\nhttps://github.com/huggingface/transformers/pull/4656",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Are there any updates to this, or plans to release the ELECTRA pre-training from scratch feature soon?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello. \r\nAre there any updates?",
"Curious as well.",
"The development of the ELECTRA pretraining from scratch is in a stale state with no plans to work on it further, see https://github.com/huggingface/transformers/pull/4656#issuecomment-711082850\r\n\r\nSee https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004 by @richarddwang for a PyTorch implementation of the ELECTRA pretraining."
] | 1,587 | 1,620 | 1,605 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
Is pre training from scratch for electra available? couldnt find it.
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3877/comments | https://api.github.com/repos/huggingface/transformers/issues/3877/events | https://github.com/huggingface/transformers/issues/3877 | 603,951,572 | MDU6SXNzdWU2MDM5NTE1NzI= | 3,877 | ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"Should be fixed on master, please share if it does/your results"
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug
## Information
I try to run the latest versions of the examples and got the error message(I have installed the main README mentioned procedure from source):
Traceback (most recent call last):
File "run_bertology.py", line 33, in <module>
from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' (/home/stud-yantao/Transformer/transformers/examples/run_glue.py)
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
MNLI
## To reproduce
Steps to reproduce the behavior:
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME
--model_name bert-base-uncased
--task_name $TASK_NAME
--output_dir ./tmp/$TASK_NAME/
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Traceback (most recent call last):
File "run_bertology.py", line 33, in <module>
from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
ImportError: cannot import name 'MODEL_CLASSES' from 'run_glue' (/home/stud-yantao/Transformer/transformers/examples/run_glue.py)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3876/comments | https://api.github.com/repos/huggingface/transformers/issues/3876/events | https://github.com/huggingface/transformers/issues/3876 | 603,868,520 | MDU6SXNzdWU2MDM4Njg1MjA= | 3,876 | How to reduce random summary generation of BART Summarization models? | {
"login": "aliasneo1",
"id": 3222258,
"node_id": "MDQ6VXNlcjMyMjIyNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3222258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliasneo1",
"html_url": "https://github.com/aliasneo1",
"followers_url": "https://api.github.com/users/aliasneo1/followers",
"following_url": "https://api.github.com/users/aliasneo1/following{/other_user}",
"gists_url": "https://api.github.com/users/aliasneo1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliasneo1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliasneo1/subscriptions",
"organizations_url": "https://api.github.com/users/aliasneo1/orgs",
"repos_url": "https://api.github.com/users/aliasneo1/repos",
"events_url": "https://api.github.com/users/aliasneo1/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliasneo1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"BART is used as a model for abstractive summarization so it can use different words than those used in the original text. But it should not go *off-topic*. You could use an extractive summarization model instead which does not generate new nouns. Also you might be interested in the methods from [*Controlling the Amount of Verbatim Copying in Abstractive Summarization*](https://arxiv.org/pdf/1911.10390.pdf) to control the degree of change in the summaries.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Currently BART model trained on CNN dataset is generating summaries which consist of new nouns which are not present in the input text.
How to control the randomness of these summaries. Is there any parameter like temperature in GPT-2 model which can control the degree to which the model can go off-topic?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3876/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3875/comments | https://api.github.com/repos/huggingface/transformers/issues/3875/events | https://github.com/huggingface/transformers/issues/3875 | 603,862,682 | MDU6SXNzdWU2MDM4NjI2ODI= | 3,875 | T5 Translation Error | {
"login": "liesun1994",
"id": 16813308,
"node_id": "MDQ6VXNlcjE2ODEzMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16813308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liesun1994",
"html_url": "https://github.com/liesun1994",
"followers_url": "https://api.github.com/users/liesun1994/followers",
"following_url": "https://api.github.com/users/liesun1994/following{/other_user}",
"gists_url": "https://api.github.com/users/liesun1994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liesun1994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liesun1994/subscriptions",
"organizations_url": "https://api.github.com/users/liesun1994/orgs",
"repos_url": "https://api.github.com/users/liesun1994/repos",
"events_url": "https://api.github.com/users/liesun1994/events{/privacy}",
"received_events_url": "https://api.github.com/users/liesun1994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Why do you use `decoder_start_token_id = tokenizer.eos_token_id` ? Is that stated in the examples somewhere? \r\n\r\nIf you do:\r\n```\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\ntokenizer = T5Tokenizer.from_pretrained('t5-base')\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-base')\r\ndata=\"translate English to German: Hello, my dog is cute\"\r\ninput_ids = tokenizer.encode(data, return_tensors=\"pt\") # Batch size 1\r\noutputs = model.generate(input_ids, decoder_start_token_id = tokenizer.eos_token_id)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nThe translation is correct. Also I recommend using the translation pipeline. This way T5 uses better generation paramaters.",
"@patrickvonplaten Thanks a lot ! I used previous configuration file from https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json ; After using the new configuration file, translation error is gone ! \r\nMy code is : \r\n```from transformers import T5Tokenizer, T5ForConditionalGeneration\r\ntokenizer = T5Tokenizer.from_pretrained('t5-base')\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-base')\r\ndata=\"translate English to German: Hello, my dog is cute\"\r\ninput_ids = tokenizer.encode(data, return_tensors=\"pt\") # Batch size 1\r\noutputs = model.generate(input_ids)\r\nprint(tokenizer.decode(outputs[0]))"
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug
## Information
Model I am using (T5-base):
Language I am using the model on (T5-base):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
data="translate English to German: Hello, my dog is cute"
input_ids = tokenizer.encode(data, return_tensors="pt") # Batch size 1
outputs = model.generate(input_ids, decoder_start_token_id = tokenizer.eos_token_id)
print(outputs)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers
2. run the code mentioned above, which always produces tensor([[1, 1]])
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: linux
- Python version: python3.6
- PyTorch version (GPU?): torch 1.2.0 , with GPU
- Tensorflow version (GPU?): p40
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3875/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3874/comments | https://api.github.com/repos/huggingface/transformers/issues/3874/events | https://github.com/huggingface/transformers/pull/3874 | 603,644,379 | MDExOlB1bGxSZXF1ZXN0NDA2MzkxMDE1 | 3,874 | create readme for spentaur/yelp model | {
"login": "spentaur",
"id": 2055801,
"node_id": "MDQ6VXNlcjIwNTU4MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2055801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spentaur",
"html_url": "https://github.com/spentaur",
"followers_url": "https://api.github.com/users/spentaur/followers",
"following_url": "https://api.github.com/users/spentaur/following{/other_user}",
"gists_url": "https://api.github.com/users/spentaur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spentaur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spentaur/subscriptions",
"organizations_url": "https://api.github.com/users/spentaur/orgs",
"repos_url": "https://api.github.com/users/spentaur/repos",
"events_url": "https://api.github.com/users/spentaur/events{/privacy}",
"received_events_url": "https://api.github.com/users/spentaur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"It is! (though ideally you would add an example of use + details about training)\r\n\r\nWill merge this unless you add more in the next 24 hours.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=h1) Report\n> Merging [#3874](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3874 +/- ##\n=======================================\n Coverage 78.57% 78.57% \n=======================================\n Files 106 106 \n Lines 17962 17962 \n=======================================\n Hits 14114 14114 \n Misses 3848 3848 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=footer). Last update [b1ff0b2...8a40fb1](https://codecov.io/gh/huggingface/transformers/pull/3874?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"that makes a lot of sense. i'll update that. thank you",
"[model page](https://huggingface.co/spentaur/yelp)"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | sorry, not sure if this is the right way to do this | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3874/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3874/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3874",
"html_url": "https://github.com/huggingface/transformers/pull/3874",
"diff_url": "https://github.com/huggingface/transformers/pull/3874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3874.patch",
"merged_at": 1587497497000
} |
https://api.github.com/repos/huggingface/transformers/issues/3873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3873/comments | https://api.github.com/repos/huggingface/transformers/issues/3873/events | https://github.com/huggingface/transformers/issues/3873 | 603,602,658 | MDU6SXNzdWU2MDM2MDI2NTg= | 3,873 | Call to torch.pow() passing integer as exponent isn't per PyTorch docs | {
"login": "mneilly-et",
"id": 55827703,
"node_id": "MDQ6VXNlcjU1ODI3NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/55827703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mneilly-et",
"html_url": "https://github.com/mneilly-et",
"followers_url": "https://api.github.com/users/mneilly-et/followers",
"following_url": "https://api.github.com/users/mneilly-et/following{/other_user}",
"gists_url": "https://api.github.com/users/mneilly-et/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mneilly-et/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mneilly-et/subscriptions",
"organizations_url": "https://api.github.com/users/mneilly-et/orgs",
"repos_url": "https://api.github.com/users/mneilly-et/repos",
"events_url": "https://api.github.com/users/mneilly-et/events{/privacy}",
"received_events_url": "https://api.github.com/users/mneilly-et/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you are correct. Any way you'd open a PR? Otherwise we'll get on it in the next few days/weeks"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | # π Bug
## Information
I am running the GPT2 pytest and it fails to load in [PyTorch Glow](https://github.com/pytorch/glow) because the model calls _torch.pow()_ with an integer for the _exponent_ parameter.
Per the PyTorch documentation (https://pytorch.org/docs/master/torch.html?highlight=torch%20pow#torch.pow):
> exponent can be either a single float number or a Tensor with the same number of elements as input.
and
>exponent (float or tensor) β the exponent value
The test was run with the following modifications to enable Glow:
```
diff --git a/src/transformers/modeling_gpt2.py b/src/transformers/modeling_gpt2.py
index 12013996..d6f39007 100644
--- a/src/transformers/modeling_gpt2.py
+++ b/src/transformers/modeling_gpt2.py
@@ -265,7 +265,7 @@ class GPT2PreTrainedModel(PreTrainedModel):
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ module.weight.data.normal_(mean=0.0, std=0.02) #self.config.initializer_range)
if isinstance(module, (nn.Linear, Conv1D)) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
index 1d11ef8c..9df209f7 100644
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -27,6 +27,7 @@ from .utils import require_torch, slow, torch_device
if is_torch_available():
import torch
+ import torch_glow
import numpy as np
from transformers import (
@@ -209,6 +210,8 @@ class ModelTesterMixin:
inputs = inputs_dict["input_ids"] # Let's keep only input_ids
try:
+ torch_glow.enableFusionPass()
+ torch_glow.setGlowBackend('Interpreter')
traced_gpt2 = torch.jit.trace(model, inputs)
except RuntimeError:
self.fail("Couldn't trace module.")
```
## To reproduce
Steps to reproduce the behavior:
1. python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
## Expected behavior
Expect exponent to be passed as a float per the documentation so that model loaders adhering to the docs will be able to load the model.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.5.0a0+8eaafbd (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3873/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3872/comments | https://api.github.com/repos/huggingface/transformers/issues/3872/events | https://github.com/huggingface/transformers/issues/3872 | 603,593,929 | MDU6SXNzdWU2MDM1OTM5Mjk= | 3,872 | torchscript tests fail with RuntimeError: normal_ expects std > 0.0, but found std=0 | {
"login": "mneilly-et",
"id": 55827703,
"node_id": "MDQ6VXNlcjU1ODI3NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/55827703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mneilly-et",
"html_url": "https://github.com/mneilly-et",
"followers_url": "https://api.github.com/users/mneilly-et/followers",
"following_url": "https://api.github.com/users/mneilly-et/following{/other_user}",
"gists_url": "https://api.github.com/users/mneilly-et/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mneilly-et/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mneilly-et/subscriptions",
"organizations_url": "https://api.github.com/users/mneilly-et/orgs",
"repos_url": "https://api.github.com/users/mneilly-et/repos",
"events_url": "https://api.github.com/users/mneilly-et/events{/privacy}",
"received_events_url": "https://api.github.com/users/mneilly-et/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | # π Bug
## Information
I am running the gpt2 torchscript test from master and a call to _normal_()_ fails because the _std_ parameter is zero. The error is not limited to the GPT2 model.
## To reproduce
Steps to reproduce the behavior:
1. python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
The test fails with the following errors:
```
$ python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_m
odeling_gpt2.py
========================================================================================================= test session starts ==========================================================================================================
platform linux -- Python 3.6.8, pytest-5.2.0, py-1.8.1, pluggy-0.13.1 -- /local/mneilly/sw-platform-cawg/build/install-staging/sw-platform-sysroot/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.6.8', 'Platform': 'Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core', 'Packages': {'pytest': '5.2.0', 'py': '1.8.1', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.1.3', 'html': '2.0.0', 'metadata'
: '1.8.0', 'xdist': '1.30.0'}}
rootdir: /local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers
plugins: forked-1.1.3, html-2.0.0, metadata-1.8.0, xdist-1.30.0
collected 29 items / 28 deselected / 1 selected
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript FAILED [100%]
=============================================================================================================== FAILURES ===============================================================================================================
____________________________________________________________________________________________________ GPT2ModelTest.test_torchscript ____________________________________________________________________________________________________
self = <tests.test_modeling_gpt2.GPT2ModelTest testMethod=test_torchscript>
def test_torchscript(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
> self._create_and_check_torchscript(config, inputs_dict)
tests/test_modeling_common.py:186:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_common.py:207: in _create_and_check_torchscript
model = model_class(config=configs_no_init)
src/transformers/modeling_gpt2.py:353: in __init__
self.init_weights()
src/transformers/modeling_utils.py:392: in init_weights
self.apply(self._init_weights)
../../../../../../install-staging/sw-platform-sysroot/lib/python3.6/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
../../../../../../install-staging/sw-platform-sysroot/lib/python3.6/site-packages/torch/nn/modules/module.py:290: in apply
fn(self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GPT2Model(
(wte): Embedding(99, 32)
(wpe): Embedding(512, 32)
(drop): Dropout(p=0.1, inplace=False)
(h): Modul...pout): Dropout(p=0.1, inplace=False)
)
)
)
(ln_f): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
module = Embedding(99, 32)
def _init_weights(self, module):
""" Initialize the weights.
"""
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
E RuntimeError: normal_ expects std > 0.0, but found std=0
src/transformers/modeling_gpt2.py:268: RuntimeError
=================================================================================================== 1 failed, 28 deselected in 1.97s ===================================================================================================
```
The test passes with the following modification:
```
diff --git a/src/transformers/modeling_gpt2.py b/src/transformers/modeling_gpt2.py
index 12013996..d6f39007 100644
--- a/src/transformers/modeling_gpt2.py
+++ b/src/transformers/modeling_gpt2.py
@@ -265,7 +265,7 @@ class GPT2PreTrainedModel(PreTrainedModel):
if isinstance(module, (nn.Linear, nn.Embedding, Conv1D)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+ module.weight.data.normal_(mean=0.0, std=0.02) #self.config.initializer_range)
if isinstance(module, (nn.Linear, Conv1D)) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
```
Producing the following output:
```
$ python -m pytest -v -k 'test_torchscript and not test_torchscript_' ./tests/test_modeling_gpt2.py
========================================================================================================= test session starts ==========================================================================================================
platform linux -- Python 3.6.8, pytest-5.2.0, py-1.8.1, pluggy-0.13.1 -- /local/mneilly/sw-platform-cawg/build/install-staging/sw-platform-sysroot/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.6.8', 'Platform': 'Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core', 'Packages': {'pytest': '5.2.0', 'py': '1.8.1', 'pluggy': '0.13.1'}, 'Plugins': {'forked': '1.1.3', 'html': '2.0.0', 'metadata': '1.8.0', 'xdist': '1.30.0'}}
rootdir: /local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers
plugins: forked-1.1.3, html-2.0.0, metadata-1.8.0, xdist-1.30.0
collected 29 items / 28 deselected / 1 selected
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript PASSED [100%]
=========================================================================================================== warnings summary ===========================================================================================================
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
/local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers/src/transformers/modeling_gpt2.py:146: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / math.sqrt(v.size(-1))
tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
/local/mneilly/sw-platform-cawg/build/cawg-regression/pytorch-models/transformers/transformers/src/transformers/src/transformers/modeling_gpt2.py:148: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
## Expected behavior
Expected test to pass
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.5.0a0+8eaafbd (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3872/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3871/comments | https://api.github.com/repos/huggingface/transformers/issues/3871/events | https://github.com/huggingface/transformers/issues/3871 | 603,581,467 | MDU6SXNzdWU2MDM1ODE0Njc= | 3,871 | Tokenizer could accept a string tensor | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Currently, the [`batch_encode_plus`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.batch_encode_plus) method from tokenizer, can return a Tensorflow tensor.
It can be done assigning `tf` to the optional parameter `return_tensors`.
It would be great if this method also accepted a Tensorflow string tensor in parameter ` batch_text_or_text_pairs`.
For instance, if someone has the following `sample_string_tensor`:
```python
import tensorflow as tf
batch_size = 4
sample_string_tensor = tf.convert_to_tensor(
["sΓ£mple utf-8 strΓng - " + str(i) for i in range(n_strings)]
)
# <tf.Tensor: shape=(4,), dtype=string, numpy=
# array([b's\xc3\xa3mple utf-8 str\xc3\xadng - 0',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 1',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 2',
# b's\xc3\xa3mple utf-8 str\xc3\xadng - 3'], dtype=object)>
```
the tokenization would be as simple as:
```python
tokenized_sample = tokenizer.batch_encode_plus(
sample_string_tensor,
max_length=max_length,
pad_to_max_length=True,
return_tensors="tf"
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3871/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3870/comments | https://api.github.com/repos/huggingface/transformers/issues/3870/events | https://github.com/huggingface/transformers/issues/3870 | 603,579,780 | MDU6SXNzdWU2MDM1Nzk3ODA= | 3,870 | bert summarizer module import error | {
"login": "eeegnu",
"id": 36356950,
"node_id": "MDQ6VXNlcjM2MzU2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/36356950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eeegnu",
"html_url": "https://github.com/eeegnu",
"followers_url": "https://api.github.com/users/eeegnu/followers",
"following_url": "https://api.github.com/users/eeegnu/following{/other_user}",
"gists_url": "https://api.github.com/users/eeegnu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eeegnu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eeegnu/subscriptions",
"organizations_url": "https://api.github.com/users/eeegnu/orgs",
"repos_url": "https://api.github.com/users/eeegnu/repos",
"events_url": "https://api.github.com/users/eeegnu/events{/privacy}",
"received_events_url": "https://api.github.com/users/eeegnu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"HELLO eeegnu, I'm also facing the same issue.\r\n\r\n**Environment Info :**\r\nPlatform: Windows 10 64bit\r\nPython version: 3.6.10\r\nPyTorch version (GPU?): 1.4.0 \r\nTensorflow version (GPU?): not installed (NA)\r\n\r\n\r\n",
"Hi!\r\nChange: `from .utils_summarization import (\r\n CNNDMDataset,\r\n build_mask,\r\n compute_token_type_ids,\r\n encode_for_summarization,\r\n truncate_or_pad,\r\n)` \r\nTo: `from utils_summarization import (\r\n CNNDMDataset,\r\n build_mask,\r\n compute_token_type_ids,\r\n encode_for_summarization,\r\n truncate_or_pad,\r\n)` ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,597 | 1,597 | NONE | null | # π
Running bert summarizer ([run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/summarization/bertabs/run_summarization.py)) gives the following error
```
Traceback (most recent call last):
File "run_summarization.py", line 15, in <module>
from .utils_summarization import (
ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package
```
## Information
I've managed to fix this issue personally by changing "from .utils_summarization import" to "from utils_summarization import", though I don't know if this is due to a convention change in python module imports.
The problem arises when using:
* [x] the official example scripts: (give details below)
Running the following command yields the error
python3 run_summarization.py --documents_dir ".../bertabs/dataset/input" --summaries_output_dir ".../bertabs/dataset/output" --no_cuda false --batch_size 4 --min_length 50 --max_length 200 --beam_size 5 --alpha 0.95 --block_trigram true
## To reproduce
Steps to reproduce the behavior:
1. Followed the steps here https://github.com/huggingface/transformers/blob/5b396457e5035a8b16ddee14b205c098598fe6bb/examples/summarization/bertabs/README.md
Though I skipped the 'Reproduce the authors' ROUGE score' section, that should not have any effect on usage for different inputs.
2. Created custom data paths for testing a single input article.
3. Run the command given above.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "run_summarization.py", line 15, in <module>
from .utils_summarization import (
ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package
```
## Expected behavior
It should import the module utils_summarization
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3870/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3869/comments | https://api.github.com/repos/huggingface/transformers/issues/3869/events | https://github.com/huggingface/transformers/issues/3869 | 603,479,766 | MDU6SXNzdWU2MDM0Nzk3NjY= | 3,869 | ImportError: cannot import name 'HfArgumentParser' from 'transformers' | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, \r\n\r\nAs mentioned [here](https://github.com/huggingface/transformers/tree/master/examples#examples) and in the main README you need to install from source in order to run the latest versions of the examples"
] | 1,587 | 1,587 | 1,587 | NONE | null | Hi,
I installed tokenizers-0.5.2 transformers-2.8.0.
When I try to run run_bertology.py in the example dir calling it with
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME
--model_name bert-base-uncased
--task_name $TASK_NAME
--output_dir ./tmp/$TASK_NAME/
--try_masking
But it fails with
Traceback (most recent call last):
File "run_bertology.py", line 33, in <module>
from run_glue import ALL_MODELS, MODEL_CLASSES, load_and_cache_examples, set_seed
File "/Users/thomas/PycharmProjects/transformers/examples/run_glue.py", line 34, in <module>
from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
Please help, thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3868/comments | https://api.github.com/repos/huggingface/transformers/issues/3868/events | https://github.com/huggingface/transformers/issues/3868 | 603,410,942 | MDU6SXNzdWU2MDM0MTA5NDI= | 3,868 | unable to load model 'bert', tensor 'input_ids': the model expects 1 dimensions but the model configuration specified 2 dimensions | {
"login": "laohur",
"id": 5047256,
"node_id": "MDQ6VXNlcjUwNDcyNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5047256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laohur",
"html_url": "https://github.com/laohur",
"followers_url": "https://api.github.com/users/laohur/followers",
"following_url": "https://api.github.com/users/laohur/following{/other_user}",
"gists_url": "https://api.github.com/users/laohur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laohur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laohur/subscriptions",
"organizations_url": "https://api.github.com/users/laohur/orgs",
"repos_url": "https://api.github.com/users/laohur/repos",
"events_url": "https://api.github.com/users/laohur/events{/privacy}",
"received_events_url": "https://api.github.com/users/laohur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"I'm assuming you read https://blog.einstein.ai/benchmarking-tensorrt-inference-server/ ?",
"> \r\n> \r\n> I'm assuming you read https://blog.einstein.ai/benchmarking-tensorrt-inference-server/ ?\r\nok: torch.jit.trace(bert)\r\nFailed :torch.jit.script\r\nFailed:torch.jit.trace(bert+after)\r\nFailed: to onnx: \r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 686, in forward\r\n extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility\r\nUnboundLocalError: local variable 'extended_attention_mask' referenced before assignment \r\nοΌpip transformers==2.2.0οΌ fixed latested sourceοΌ\r\n\r\nsolved by remove all none default option argments. minize uncertainty\r\n\r\n"
] | 1,587 | 1,587 | 1,587 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
[https://github.com/NVIDIA/triton-inference-server/issues/1338](url)
how to support batchsize for bert onnx in triton-inference-server?
use bert from https://github.com/huggingface/transformers,
fellow https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert#export-a-bert-model-from-pytorch.
gotten "model.onnx" with batchsize as first dim (netron).
i set dynamic_batching { preferred_batch_size: [ 4, 8, 32 ] max_queue_delay_microseconds: 100 } in "config.pbtxt"
wrong
max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [-1, 128] } ]
wrong max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [1, 128] reshape: { shape: [-1,128 ] } } ]
ok max_batch_size : 1024 input [ { name: "input_ids" data_type: TYPE_INT64 dims: [128] } ]
all latest two version.
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3868/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3867/comments | https://api.github.com/repos/huggingface/transformers/issues/3867/events | https://github.com/huggingface/transformers/issues/3867 | 603,382,590 | MDU6SXNzdWU2MDMzODI1OTA= | 3,867 | Tokenization issue with RoBERTa and DistilRoBERTa. | {
"login": "vincentwen1995",
"id": 29601049,
"node_id": "MDQ6VXNlcjI5NjAxMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/29601049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vincentwen1995",
"html_url": "https://github.com/vincentwen1995",
"followers_url": "https://api.github.com/users/vincentwen1995/followers",
"following_url": "https://api.github.com/users/vincentwen1995/following{/other_user}",
"gists_url": "https://api.github.com/users/vincentwen1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vincentwen1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vincentwen1995/subscriptions",
"organizations_url": "https://api.github.com/users/vincentwen1995/orgs",
"repos_url": "https://api.github.com/users/vincentwen1995/repos",
"events_url": "https://api.github.com/users/vincentwen1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/vincentwen1995/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
">Furthermore, I am also curious about what these 'Δ ' characters are in the RoBERTa encoding?\r\n\r\nIt's a feature of byte-level BPE (an encoded space character)\r\n[Ref-bart-fairseq](https://github.com/pytorch/fairseq/issues/1716), [Ref-openai-gpt](https://github.com/openai/gpt-2/issues/80)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...):
RoBERTa (roberta-base), DistilRoBERTa (distilroberta-base)
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am trying to encode the embeddings for the sentences, and I found a tokenization issue with a certain (type of) sentence which ends with ").". I noticed that the tokenizer cannot tokenize ')' from '.' and further causes issues with the sentence length.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
**Dataset: SemEval 2016 Task 5, SB1 EN-REST**
## To reproduce
Steps to reproduce the behavior:
See in the following codes:
```python
import torch
import numpy as np
from transformers import AutoModel, AutoTokenizer
text = '(Besides that there should be more restaurants like it around the city).'
for model_name in ['roberta-base', 'distilroberta-base']:
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
token_dict = tokenizer.encode_plus(text, None, return_tensors='pt')
print('model_name: {}'.format(model_name))
print("Token (str): {}".format(
tokenizer.convert_ids_to_tokens(token_dict['input_ids'][0])))
print("Token (int): {}".format(token_dict['input_ids']))
print("Type: {}".format(
token_dict['token_type_ids']))
print('Output Embeddings: {}\n'.format(
model(token_dict['input_ids'])[0].shape))
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expected output:
```
model_name: roberta-base
Token (str): ['<s>', 'Δ (', 'Besides', 'Δ that', 'Δ there', 'Δ should', 'Δ be', 'Δ more', 'Δ restaurants', 'Δ like', 'Δ it', 'Δ around', 'Δ the', 'Δ city', ')', 'Δ .', '</s>']
Token (int): tensor([[ 0, 36, 41107, 14, 89, 197, 28, 55, 4329, 101,
24, 198, 5, 343, 43, 479, 2]])
Type: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Output Embeddings: torch.Size([1, 17, 768])
model_name: distilroberta-base
Token (str): ['<s>', 'Δ (', 'Besides', 'Δ that', 'Δ there', 'Δ should', 'Δ be', 'Δ more', 'Δ restaurants', 'Δ like', 'Δ it', 'Δ around', 'Δ the', 'Δ city', ')', 'Δ .', '</s>']
Token (int): tensor([[ 0, 36, 41107, 14, 89, 197, 28, 55, 4329, 101,
24, 198, 5, 343, 43, 479, 2]])
Type: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
Output Embeddings: torch.Size([1, 17, 768])
```
<!-- A clear and concise description of what you would expect to happen. -->
Basically, the expected behavior is to tokenize ')' and '.' separately. ~~Furthermore, I am also curious about what these 'Δ ' characters are in the RoBERTa encoding? I checked the vocabulary and I found both the normal words and the words starting with this 'Δ ' character so I am a bit confused.~~
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3867/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3867/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3866/comments | https://api.github.com/repos/huggingface/transformers/issues/3866/events | https://github.com/huggingface/transformers/pull/3866 | 603,287,310 | MDExOlB1bGxSZXF1ZXN0NDA2MTAzMjAw | 3,866 | [examples] fix summarization do_predict | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=h1) Report\n> Merging [#3866](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3866 +/- ##\n=======================================\n Coverage 78.61% 78.62% \n=======================================\n Files 106 106 \n Lines 17953 17953 \n=======================================\n+ Hits 14114 14115 +1 \n+ Misses 3839 3838 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=footer). Last update [a21d4fa...c84984c](https://codecov.io/gh/huggingface/transformers/pull/3866?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,590 | 1,587 | CONTRIBUTOR | null | - by copying NER
- add type hints | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3866/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3866",
"html_url": "https://github.com/huggingface/transformers/pull/3866",
"diff_url": "https://github.com/huggingface/transformers/pull/3866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3866.patch",
"merged_at": 1587394197000
} |
https://api.github.com/repos/huggingface/transformers/issues/3865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3865/comments | https://api.github.com/repos/huggingface/transformers/issues/3865/events | https://github.com/huggingface/transformers/issues/3865 | 603,223,777 | MDU6SXNzdWU2MDMyMjM3Nzc= | 3,865 | Summarisation tuning | {
"login": "dimagalat",
"id": 15843978,
"node_id": "MDQ6VXNlcjE1ODQzOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15843978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimagalat",
"html_url": "https://github.com/dimagalat",
"followers_url": "https://api.github.com/users/dimagalat/followers",
"following_url": "https://api.github.com/users/dimagalat/following{/other_user}",
"gists_url": "https://api.github.com/users/dimagalat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimagalat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimagalat/subscriptions",
"organizations_url": "https://api.github.com/users/dimagalat/orgs",
"repos_url": "https://api.github.com/users/dimagalat/repos",
"events_url": "https://api.github.com/users/dimagalat/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimagalat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Great, thanks @sshleifer ",
"(Duplicate of https://github.com/huggingface/transformers/issues/3853)\r\n",
"sorry, mentioned wrong issue"
] | 1,587 | 1,588 | 1,587 | CONTRIBUTOR | null | Hi everybody,
Iβve tried using BART summarisation code, and I had a question about finetune.py
Can SummarisationTrainer checkpoint be loaded as a BartForConditionalGeneration model from the evaluation script? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3865/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3864/comments | https://api.github.com/repos/huggingface/transformers/issues/3864/events | https://github.com/huggingface/transformers/pull/3864 | 603,131,626 | MDExOlB1bGxSZXF1ZXN0NDA1OTc3NDY0 | 3,864 | Add language and license information to model cards | {
"login": "alexcombessie",
"id": 4739848,
"node_id": "MDQ6VXNlcjQ3Mzk4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4739848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcombessie",
"html_url": "https://github.com/alexcombessie",
"followers_url": "https://api.github.com/users/alexcombessie/followers",
"following_url": "https://api.github.com/users/alexcombessie/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcombessie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcombessie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcombessie/subscriptions",
"organizations_url": "https://api.github.com/users/alexcombessie/orgs",
"repos_url": "https://api.github.com/users/alexcombessie/repos",
"events_url": "https://api.github.com/users/alexcombessie/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcombessie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=h1) Report\n> Merging [#3864](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3864 +/- ##\n=======================================\n Coverage 78.61% 78.62% \n=======================================\n Files 106 106 \n Lines 17953 17953 \n=======================================\n+ Hits 14114 14115 +1 \n+ Misses 3839 3838 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=footer). Last update [a21d4fa...7bbf47b](https://codecov.io/gh/huggingface/transformers/pull/3864?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @julien-c !\r\n\r\nI hope you are well. My pull request is ready for review.\r\n\r\nI have tried my best to add license and language information to all model cards. I have added a few model cards as well.\r\n\r\nNote that my changes may have some downstream consequences:\r\n- the addition of a \"license\" key, the value being an atomic list)\r\n- the normalization of the \"languages\" key (I added the \"s\"), the value being a list of [ISO 639-1 codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). For multilingual BERT, I had to simplify some rare languages by merging it with its ISO \"macro code\" (example: \"South Azerbaijani\" -> \"az\", \"Bavarian\" -> \"de\")\r\n \r\nOn the website, you may want to read those values and render them back in human-readable form.\r\n\r\nCheers,\r\n\r\nAlex\r\n",
"Hi Alex,\r\n\r\nas mentioned in the previous issue, I'd rather use identifiers listed in https://help.github.com/en/github/creating-cloning-and-archiving-repositories/licensing-a-repository\r\n\r\nI can probably search and replace though.\r\n\r\nAlso, you pushed whitespace changes which make reviewing the actual changes slightly tedious.",
"[EDIT] Sure, I have replaced licenses with these identifiers! \r\nFor whitespaces, I have autoformatting activated on sublime, that's why. Sorry for the inconvenience. ",
"Good morning @julien-c,\r\n\r\nI hope all is well. What do you think of this PR?\r\n\r\nCheers,\r\n\r\nAlex",
"I'm doing a partial merge (retaining your authorship information, @alexcombessie) of the licenses, as the languages will require some backend changes.\r\n\r\n(I'll do a search and replace at a later point)\r\n\r\nThank you for your contribution"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Should fix issues #3397 and #3357 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3864/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3864",
"html_url": "https://github.com/huggingface/transformers/pull/3864",
"diff_url": "https://github.com/huggingface/transformers/pull/3864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3864.patch",
"merged_at": 1588106421000
} |
https://api.github.com/repos/huggingface/transformers/issues/3863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3863/comments | https://api.github.com/repos/huggingface/transformers/issues/3863/events | https://github.com/huggingface/transformers/issues/3863 | 602,979,654 | MDU6SXNzdWU2MDI5Nzk2NTQ= | 3,863 | Cannot convert RoBERTa to tflite model | {
"login": "kubux1",
"id": 18151912,
"node_id": "MDQ6VXNlcjE4MTUxOTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18151912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kubux1",
"html_url": "https://github.com/kubux1",
"followers_url": "https://api.github.com/users/kubux1/followers",
"following_url": "https://api.github.com/users/kubux1/following{/other_user}",
"gists_url": "https://api.github.com/users/kubux1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kubux1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kubux1/subscriptions",
"organizations_url": "https://api.github.com/users/kubux1/orgs",
"repos_url": "https://api.github.com/users/kubux1/repos",
"events_url": "https://api.github.com/users/kubux1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kubux1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe this is because `tf.Cumsum` is not a supported operation and not an issue relating to this repo. Here is a link to the tensorflow documentation on supported ops. [https://www.tensorflow.org/lite/guide/ops_compatibility]\r\nIn the past, I've been able to get around unsupported ops by reimplementing the operator with supported ops or replacing the unsupported portion with another op. ie. `relu` in place of `gelu`.",
"Hey @will-rice, thank you for giving me an idea how to handle this issue. I managed to overcome this problem, by using custom _cumsum_ function implemented in pure python by @ibab in here https://github.com/tensorflow/tensorflow/issues/813.\r\nI just changed it to sum over rows not columns, as the way it is done in the Roberta model.\r\n\r\nHere is a cumsum function:\r\n```python\r\ndef cumsum(xs):\r\n values = tf.unstack(xs, axis=1)\r\n out = []\r\n prev = tf.zeros_like(values[0])\r\n for val in values:\r\n s = prev + val\r\n out.append(s)\r\n prev = s\r\n result = tf.stack(out, axis=1)\r\n return result\r\n```\r\nand it is used in the _modeling_tf_roberta.py_ file in line 69:\r\n```python\r\n # Original code / non tflite compatible way\r\n incremental_indicies = tf.math.cumsum(mask, axis=1) * mask)\r\n\r\n # My custom code / tflite compatible way\r\n incremental_indicies = cumsum(mask) * mask\r\n```\r\n\r\nHope it will help anyone aswell!",
"Also cc'ing @Pierrci ",
"@julien-c any updates on this feature? Was browsing through the later releases but could not find any reference.\r\n\r\nThanks!",
"@dshahrokhian As mentioned by @will-rice, the issue is due to the lack of support for the `tf.Cumsum` operator by TFLite and thus not related to `transformers`. If you encounter the same problem you can implement the workaround posted by @kubux1 earlier, or implement a similar one if you're having this issue with a different operator.",
"@Pierrci thanks! It also seems to be have been solved in the latest release of `tf-nightly`: https://github.com/tensorflow/tensorflow/issues/42382#issuecomment-675000451"
] | 1,587 | 1,606 | 1,587 | NONE | null | # π Bug
## Information
Model I am using:
RoBERTa (roberta-base)
Language I am using the model on:
English
The problem arises when using:
Conversion based on https://github.com/huggingface/tflite-android-transformers/blob/master/models_generation/distilbert.py
The tasks I am working on is:
It is irrelevant on this step.
## To reproduce
1. Build python conversion script.
2. Run it.
**Conversion script**
```python
import tensorflow as tf
from transformers import TFRobertaForSequenceClassification
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
input_spec = tf.TensorSpec([1, 384], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For conversion with hybrid quantization:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("test.tflite", "wb").write(tflite_model)
```
Error: **tf.Cumsum op is neither a custom op nor a flex op and needs a custom implementation**
## Expected behavior
No errors.
## Environment info
- Transformers version: 2.8.0
- Platform: Windows 10
- Python version: 3.7.0
- Tensorflow version: 2.1.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3863/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3862/comments | https://api.github.com/repos/huggingface/transformers/issues/3862/events | https://github.com/huggingface/transformers/pull/3862 | 602,888,730 | MDExOlB1bGxSZXF1ZXN0NDA1NzgzOTQ0 | 3,862 | New model added | {
"login": "punyajoy",
"id": 22558222,
"node_id": "MDQ6VXNlcjIyNTU4MjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22558222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/punyajoy",
"html_url": "https://github.com/punyajoy",
"followers_url": "https://api.github.com/users/punyajoy/followers",
"following_url": "https://api.github.com/users/punyajoy/following{/other_user}",
"gists_url": "https://api.github.com/users/punyajoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/punyajoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/punyajoy/subscriptions",
"organizations_url": "https://api.github.com/users/punyajoy/orgs",
"repos_url": "https://api.github.com/users/punyajoy/repos",
"events_url": "https://api.github.com/users/punyajoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/punyajoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks! [model page](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-english)",
"A few things to add to the model card if you can (happy to help!)\r\n\r\n- which language(s) is it trained on?\r\n- How can one use it, i.e. is this a sequence classifier? "
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | The first model added to the repo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3862/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3862",
"html_url": "https://github.com/huggingface/transformers/pull/3862",
"diff_url": "https://github.com/huggingface/transformers/pull/3862.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3862.patch",
"merged_at": 1587417002000
} |
https://api.github.com/repos/huggingface/transformers/issues/3861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3861/comments | https://api.github.com/repos/huggingface/transformers/issues/3861/events | https://github.com/huggingface/transformers/issues/3861 | 602,881,156 | MDU6SXNzdWU2MDI4ODExNTY= | 3,861 | How to do parameter sharing between two BERT models | {
"login": "xiepuzhao",
"id": 30302487,
"node_id": "MDQ6VXNlcjMwMzAyNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/30302487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiepuzhao",
"html_url": "https://github.com/xiepuzhao",
"followers_url": "https://api.github.com/users/xiepuzhao/followers",
"following_url": "https://api.github.com/users/xiepuzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/xiepuzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiepuzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiepuzhao/subscriptions",
"organizations_url": "https://api.github.com/users/xiepuzhao/orgs",
"repos_url": "https://api.github.com/users/xiepuzhao/repos",
"events_url": "https://api.github.com/users/xiepuzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiepuzhao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could do it very simply by passing the reference around:\r\n\r\n```py\r\nfrom transformers import BertModel\r\n\r\nmodel = BertModel.from_pretrained(\"bert-base-cased\")\r\nmodel2 = BertModel.from_pretrained(\"bert-base-cased\")\r\n\r\nmodel2.embeddings = model.embeddings\r\n\r\nprint(model2.embeddings.word_embeddings.weight)\r\n\r\nmodel.embeddings.word_embeddings.weight = torch.nn.Parameter(torch.zeros_like(model.embeddings.word_embeddings.weight))\r\n\r\nprint(model2.embeddings.word_embeddings.weight)\r\n```\r\n\r\nwhich outputs the result (note that I'm updating the `model.embeddings` and printing the `model2.embeddings`):\r\n\r\n```py\r\nParameter containing:\r\ntensor([[-0.0005, -0.0416, 0.0131, ..., -0.0039, -0.0335, 0.0150],\r\n [ 0.0169, -0.0311, 0.0042, ..., -0.0147, -0.0356, -0.0036],\r\n [-0.0006, -0.0267, 0.0080, ..., -0.0100, -0.0331, -0.0165],\r\n ...,\r\n [-0.0064, 0.0166, -0.0204, ..., -0.0418, -0.0492, 0.0042],\r\n [-0.0048, -0.0027, -0.0290, ..., -0.0512, 0.0045, -0.0118],\r\n [ 0.0313, -0.0297, -0.0230, ..., -0.0145, -0.0525, 0.0284]],\r\n requires_grad=True)\r\nParameter containing:\r\ntensor([[0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n ...,\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.]], requires_grad=True)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3860/comments | https://api.github.com/repos/huggingface/transformers/issues/3860/events | https://github.com/huggingface/transformers/pull/3860 | 602,779,094 | MDExOlB1bGxSZXF1ZXN0NDA1NzA1NjMz | 3,860 | Update README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=h1) Report\n> Merging [#3860](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3860 +/- ##\n=======================================\n Coverage 78.61% 78.61% \n=======================================\n Files 106 106 \n Lines 17953 17953 \n=======================================\n Hits 14114 14114 \n Misses 3839 3839 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=footer). Last update [a21d4fa...9e4fe33](https://codecov.io/gh/huggingface/transformers/pull/3860?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | Improved results from new hardware | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3860",
"html_url": "https://github.com/huggingface/transformers/pull/3860",
"diff_url": "https://github.com/huggingface/transformers/pull/3860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3860.patch",
"merged_at": 1587391857000
} |
https://api.github.com/repos/huggingface/transformers/issues/3859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3859/comments | https://api.github.com/repos/huggingface/transformers/issues/3859/events | https://github.com/huggingface/transformers/issues/3859 | 602,762,083 | MDU6SXNzdWU2MDI3NjIwODM= | 3,859 | ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. | {
"login": "banunitte",
"id": 6847024,
"node_id": "MDQ6VXNlcjY4NDcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/banunitte",
"html_url": "https://github.com/banunitte",
"followers_url": "https://api.github.com/users/banunitte/followers",
"following_url": "https://api.github.com/users/banunitte/following{/other_user}",
"gists_url": "https://api.github.com/users/banunitte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/banunitte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/banunitte/subscriptions",
"organizations_url": "https://api.github.com/users/banunitte/orgs",
"repos_url": "https://api.github.com/users/banunitte/repos",
"events_url": "https://api.github.com/users/banunitte/events{/privacy}",
"received_events_url": "https://api.github.com/users/banunitte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"\r\n",
"tokenizer.pad_token = 0",
"You have to set the pad_token_id yourself as it's stated in the error message ;-). I would recommend using the `eos_token_id` as the `pad_token_id` for GPT2:\r\n```python \r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\ntokenizer.pad_token = tokenizer.eos_token\r\n```\r\nas it's written in the error message ;-)",
"I hit the same issue after using [add_special_tokens](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.add_special_tokens) with `{\"pad_token\": \"PAD\"}` dictionary.\r\n\r\nI understand based on the error and the documentation it should not raise the error, right? @patrickvonplaten should the issue be reopened?",
"For completeness: \r\n- patrickvonplaten tip `tokenizer.pad_token = tokenizer.eos_token` solved it\r\n- the whole error message for me was\r\n\r\n```\r\nValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` \r\nor add a new pad token via the function add_special_tokens if you want to use a padding strategy\r\n```",
"You didn't set pad_token. You can set it like this:\r\n```python\r\ntokenizer.pad_token = \"[PAD]\"\r\n```"
] | 1,587 | 1,603 | 1,588 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
Using pad_token, but it is not set yet.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokens = tokenizer.batch_encode_plus(
["This is a sample", "This is another longer sample text"],
pad_to_max_length=True ,max_length=10 ,return_attention_mask = True# First sentence will have some PADDED tokens to match second sequence length
)
for i in range(2):
print("Tokens (int) : {}".format(tokens['input_ids'][i]))
print("Tokens (str) : {}".format([tokenizer.convert_ids_to_tokens(s) for s in tokens['input_ids'][i]]))
print("Tokens (attn_mask): {}".format(tokens['attention_mask'][i]))
print()
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3858/comments | https://api.github.com/repos/huggingface/transformers/issues/3858/events | https://github.com/huggingface/transformers/issues/3858 | 602,706,271 | MDU6SXNzdWU2MDI3MDYyNzE= | 3,858 | Write with transformers demo hardware | {
"login": "pelegb",
"id": 6249508,
"node_id": "MDQ6VXNlcjYyNDk1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6249508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pelegb",
"html_url": "https://github.com/pelegb",
"followers_url": "https://api.github.com/users/pelegb/followers",
"following_url": "https://api.github.com/users/pelegb/following{/other_user}",
"gists_url": "https://api.github.com/users/pelegb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pelegb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pelegb/subscriptions",
"organizations_url": "https://api.github.com/users/pelegb/orgs",
"repos_url": "https://api.github.com/users/pelegb/repos",
"events_url": "https://api.github.com/users/pelegb/events{/privacy}",
"received_events_url": "https://api.github.com/users/pelegb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! We run on K80 and V100 GPUs. Running the 1.5B params model was very costly and hard to maintain, so we're not running it anymore, however. It should run fine on a single V100/Titan RTX however.\r\n\r\nDid you have issues with the T4 GPU because of memory?",
"Thanks for you response. This is really beneficial. I think that it is due to memory. \r\nAre you serving your models with a flask server, TF-serving, or a different serving framework?\r\nWere you serving it using your PyTorch or Tensorflow implementation?\r\n\r\nThanks again",
"We're serving our models in PyTorch, using a mix of gunicorn/falcon to handle requests. You can see the detailers [here](https://medium.com/huggingface/scaling-a-massive-state-of-the-art-deep-learning-model-in-production-8277c5652d5f)!",
"Really clear blog post.\r\nThanks"
] | 1,587 | 1,589 | 1,589 | NONE | null | Hi,
Your package is great and your demo is cool too. I was wondering what kind of hardware does it take to generate text with the large language models. I've had a difficult time running GPT-2 with 1.5B params on a Tesla T4 GPU.
Any pointers will be greatly appreciated.
Thanks in advance,
Barak | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3858/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3857/comments | https://api.github.com/repos/huggingface/transformers/issues/3857/events | https://github.com/huggingface/transformers/pull/3857 | 602,704,576 | MDExOlB1bGxSZXF1ZXN0NDA1NjU3Njcx | 3,857 | [Pipelines] Encode to max length of input not max length of tokenizer for batch input | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Update: rm'ed inaccurate comment"
] | 1,587 | 1,588 | 1,587 | MEMBER | null | I don't see a reason why we have to pad to `tokenizer.max_length` when encoding. Tokenizers automatically encode until the longest `input_ids` which is much more efficient IMO. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3857",
"html_url": "https://github.com/huggingface/transformers/pull/3857",
"diff_url": "https://github.com/huggingface/transformers/pull/3857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3857.patch",
"merged_at": 1587407957000
} |
https://api.github.com/repos/huggingface/transformers/issues/3856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3856/comments | https://api.github.com/repos/huggingface/transformers/issues/3856/events | https://github.com/huggingface/transformers/issues/3856 | 602,686,416 | MDU6SXNzdWU2MDI2ODY0MTY= | 3,856 | Bug in optimization_tf create_optimizer | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have raised https://github.com/huggingface/transformers/pull/4940, waiting for approval.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,597 | 1,597 | CONTRIBUTOR | null | # π Bug
## Information
WHEN I am using optimization_tf(create_optimizer):
Problems with learning rate schedule
## To reproduce
```
from transformers.optimization_tf import create_optimizer
import matplotlib.pyplot as plt
%matplotlib inline
opt = create_optimizer(init_lr=5e-5, num_train_steps=100, num_warmup_steps=50)
lr = opt.learning_rate
results = [lr(i).numpy() for i in range(101)]
print(results[49:51])
plt.plot(results)
plt.show()
```
output [4.9e-05, 2.5e-05, 2.45e-05]
## Expected output
the max lr should be 5e-05
expected output [4.9e-05, 5e-05, 4.9e-05]
```
class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
def __call__(self, step):
with tf.name_scope(self.name or "WarmUp") as name:
# Implements polynomial warmup. i.e., if global_step < warmup_steps, the
# learning rate will be `global_step/num_warmup_steps * init_lr`.
global_step_float = tf.cast(step, tf.float32)
warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
warmup_percent_done = global_step_float / warmup_steps_float
warmup_learning_rate = self.initial_learning_rate * tf.math.pow(warmup_percent_done, self.power)
return tf.cond(
global_step_float < warmup_steps_float,
lambda: warmup_learning_rate,
lambda: self.decay_schedule_fn(step),
name=name,
)
```
change:
lambda: self.decay_schedule_fn(step) =>lambda: self.decay_schedule_fn(step-warmup_steps_float)
```
def create_optimizer(init_lr, num_train_steps, num_warmup_steps):
"""Creates an optimizer with learning rate schedule."""
# Implements linear decay of the learning rate.
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=init_lr, decay_steps=num_train_steps, end_learning_rate=0.0
)
if num_warmup_steps :
learning_rate_fn = WarmUp(
initial_learning_rate=init_lr, decay_schedule_fn=learning_rate_fn, warmup_steps=num_warmup_steps
)
```
change:
PolynomialDecay decay_steps should be num_train_steps-num_warmup_steps
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3856/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3856/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3855/comments | https://api.github.com/repos/huggingface/transformers/issues/3855/events | https://github.com/huggingface/transformers/pull/3855 | 602,681,067 | MDExOlB1bGxSZXF1ZXN0NDA1NjQxNzgz | 3,855 | Fix Documentation issue in BertForMaskedLM forward | {
"login": "bharatr21",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharatr21",
"html_url": "https://github.com/bharatr21",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=h1) Report\n> Merging [#3855](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3855 +/- ##\n=======================================\n Coverage 78.61% 78.62% \n=======================================\n Files 106 106 \n Lines 17953 17953 \n=======================================\n+ Hits 14114 14115 +1 \n+ Misses 3839 3838 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=footer). Last update [a21d4fa...06fb495](https://codecov.io/gh/huggingface/transformers/pull/3855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good catch @Bharat123rox - thanks for the PR :-) "
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | Fix #3066 by interchanging positions of `ltr_lm_loss` and `masked_lm_loss` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3855",
"html_url": "https://github.com/huggingface/transformers/pull/3855",
"diff_url": "https://github.com/huggingface/transformers/pull/3855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3855.patch",
"merged_at": 1587452901000
} |
https://api.github.com/repos/huggingface/transformers/issues/3854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3854/comments | https://api.github.com/repos/huggingface/transformers/issues/3854/events | https://github.com/huggingface/transformers/pull/3854 | 602,661,527 | MDExOlB1bGxSZXF1ZXN0NDA1NjI4NjMx | 3,854 | Added electra-bahasa README | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=h1) Report\n> Merging [#3854](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a21d4fa410dc3b4c62f93aa0e6bbe4b75a101ee9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3854 +/- ##\n=======================================\n Coverage 78.61% 78.62% \n=======================================\n Files 106 106 \n Lines 17953 17953 \n=======================================\n+ Hits 14114 14115 +1 \n+ Misses 3839 3838 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=footer). Last update [a21d4fa...b5f2dc5](https://codecov.io/gh/huggingface/transformers/pull/3854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Cherry-picked in 7f23af16840113fe137f42415a9daa7ce7f7f15f\r\n\r\nThank you, that looks great! cc @LysandreJik and @clarkkev "
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3854/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3854",
"html_url": "https://github.com/huggingface/transformers/pull/3854",
"diff_url": "https://github.com/huggingface/transformers/pull/3854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3854.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3853/comments | https://api.github.com/repos/huggingface/transformers/issues/3853/events | https://github.com/huggingface/transformers/issues/3853 | 602,563,213 | MDU6SXNzdWU2MDI1NjMyMTM= | 3,853 | How to use fine-tuned BART for prediction? | {
"login": "riacheruvu",
"id": 22090501,
"node_id": "MDQ6VXNlcjIyMDkwNTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/22090501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riacheruvu",
"html_url": "https://github.com/riacheruvu",
"followers_url": "https://api.github.com/users/riacheruvu/followers",
"following_url": "https://api.github.com/users/riacheruvu/following{/other_user}",
"gists_url": "https://api.github.com/users/riacheruvu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riacheruvu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riacheruvu/subscriptions",
"organizations_url": "https://api.github.com/users/riacheruvu/orgs",
"repos_url": "https://api.github.com/users/riacheruvu/repos",
"events_url": "https://api.github.com/users/riacheruvu/events{/privacy}",
"received_events_url": "https://api.github.com/users/riacheruvu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Facing a similar type of issue for T5. @sshleifer ",
"The last ckpt file should be loaded into a `pl.LightningModule` if the --do_predict flag is specified.\r\n\r\nThere is a bug on master that messes up the loading, but it's fixed in #3866\r\n\r\nTo use that code immediately, you can run:\r\n```\r\ngit fetch\r\ngit checkout examples-summ-do-predict\r\n```\r\nthen your same `finetune.py` command\r\n with `--do_predict` (and not --do_train) and the proper `--output_dir`.\r\n\r\nWould love to know if that works!\r\n\r\ncc: @ethanjperez.",
"Change is on master, let me know if this solves the problem!",
"Config.json is still not generated while training.",
"```python\r\n def log_hyperparams(model: pl.LightningModule):\r\n model.config.save_pretrained(model.hparams.output_dir)\r\n with open(os.path.join(model.hparams.output_dir, \"hparam.json\")) as f:\r\n json.dump(model.hparams, f)\r\n```\r\nYou can call this somewhere in your code, if that's helpful.",
"@sshleifer, thank you - I can run ./run_train.sh with the --predict() option successfully. \r\n\r\nRegarding my original question, could you please specify how to load the checkpoint into the LighteningModule?\r\n\r\nAfter inspecting [transformer_base.py](https://github.com/huggingface/transformers/blob/master/examples/transformer_base.py), I think hparams is equivalent to the arguments provided in run_train.sh, so a separate hparams.json file does not need to be generated. Please correct me if I'm wrong.\r\n\r\nI am receiving the following error with my current code:\r\n\r\n`pytorch_lightning.utilities.exceptions.MisconfigurationException: Checkpoint contains hyperparameters but LightningModule's __init__ is missing the argument 'hparams'. Are you loading the correct checkpoint?`\r\n\r\nI've been using the following code, based on the discussion in https://github.com/PyTorchLightning/pytorch-lightning/issues/525 and https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html:\r\n```\r\n\r\n# load model\r\nimport pytorch_lightning as pl\r\n\r\nfrom argparse import Namespace\r\n\r\n# usually these come from command line args\r\nargs = Namespace(data_dir='CE_data/',\r\nmodel_type='bart',\r\nmodel_name_or_path='bart-large',\r\nlearning_rate='3e-5',\r\ntrain_batch_size=4,\r\neval_batch_size=4,\r\noutput_dir='transformers/examples/summarization/bart/bart_sum',\r\ndo_predict='do_predict')\r\n\r\npretrained_model = pl.LightningModule.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt', hparams=args)\r\npretrained_model.eval()\r\n\r\n# or for prediction\r\nout = model(inputs['input_ids'])\r\nprint(out)\r\n``'\r\n\r\nThank you for your time.",
"Seems close to correct.\r\n\r\nhttps://github.com/huggingface/transformers/blob/7d40901ce3ad9e1c79fd9bb117f5b84bff42c33f/examples/summarization/bart/finetune.py#L164-L175\r\n\r\nis how we do it @riacheruvu",
"@sshleifer \r\n1. Originally config.json is not created which is a requirement for prediction using fine-tuned model.\r\n*As shown in the screenshot, I add this code in transformer_base.py in end, config and hparam files are created.\r\n* Then try to predict with --do_predict, then it gives, \"\"We assumed '/content/t5' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\"\"\r\nWhat are the requirements to use fine-tuned model?\r\n<img width=\"696\" alt=\"Screenshot 2020-04-21 at 5 50 10 PM\" src=\"https://user-images.githubusercontent.com/30004110/79886728-c1d0bf80-83f9-11ea-90e5-400afc575da1.png\">\r\n\r\n----------------------------------------------------------------\r\n2. To predict for a single instance using the fine-tuned model, do I need to specify the test.target file also. I want to predict unknown instance without calculating the loss value.\r\n",
"@sshleifer, thank you. I've got to the point where I can load the model and generate \"outputs\" using the forward() function, but I can't decode the outputs - using tokenizer.decoder() results in an error. Should I be using model.generate() instead of model.forward()? If so, it seems SummarizationTrainer does not support model.generate?\r\n\r\n\r\nRevised code:\r\n\r\n```\r\n tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')\r\n ARTICLE_TO_SUMMARIZE = \"My friends are cool but they eat too many carbs.\"\r\n inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']\r\n checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, \"checkpointepoch=*.ckpt\"), recursive=True)))\r\n model = model.load_from_checkpoint(checkpoints[-1])\r\n model.eval()\r\n model.freeze()\r\n outputs = model(inputs)\r\n print(outputs) #Successfully prints two 3D tensors in a tuple\r\n #print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs]) #Results in ValueError: only one element tensors can be converted to Python scalars\r\n print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[0][0]])\r\n```\r\nThe error I'm encountering\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 194, in <module>\r\n main(args)\r\n File \"finetune.py\", line 184, in main\r\n print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[1][0]])\r\n File \"finetune.py\", line 184, in <listcomp>\r\n print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs[1][0]])\r\n File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 2141, in decode\r\n sub_texts.append(self.convert_tokens_to_string(current_sub_text))\r\n File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_gpt2.py\", line 235, in convert_tokens_to_string\r\n text = \"\".join(tokens)\r\nTypeError: sequence item 0: expected str instance, NoneType found\r\n```",
"I found a solution. The model.generate() function is necessary to extract the predictions. I defined a separate function in the SummarizationTrainer() class to use self.model.generate(), and was able to use tokenizer.decoder() on the outputs.\r\n\r\nI was encountering issues when using self.tokenizer, so I assume using 'bart-large-cnn' tokenizer for similar custom summarization datasets is okay.\r\n\r\n@prabalbansal, I'm not sure if the same method will apply to T5, but it could work for predicting for a single instance, per one of your questions.\r\n\r\nMy code is below: \r\n\r\n```\r\n def text_predictions(self, input_ids):\r\n generated_ids = self.model.generate(\r\n input_ids=input_ids,\r\n num_beams=1,\r\n max_length=80,\r\n repetition_penalty=2.5,\r\n length_penalty=1.0,\r\n early_stopping=True,\r\n )\r\n preds = [\r\n self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)\r\n for g in generated_ids\r\n ]\r\n return preds\r\n...\r\n # Optionally, predict on dev set and write to output_dir\r\n if args.do_predict:\r\n # See https://github.com/huggingface/transformers/issues/3159\r\n # pl use this format to create a checkpoint:\r\n # https://github.com/PyTorchLightning/pytorch-lightning/blob/master\\\r\n # /pytorch_lightning/callbacks/model_checkpoint.py#L169\r\n tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')\r\n ARTICLE_TO_SUMMARIZE = \"My friends are cool but they eat too many carbs.\"\r\n inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')['input_ids']\r\n checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, \"checkpointepoch=*.ckpt\"), recursive=True)))\r\n model = model.load_from_checkpoint(checkpoints[-1])\r\n model.eval()\r\n model.freeze()\r\n outputs = model.text_predictions(inputs)\r\n print(outputs)\r\n```\r\n\r\nThank you for the help, @sshleifer !",
"@riacheruvu Thank You. It works for T5 also.",
"I followed the steps given in this thread and am still facing an issue. I get an error saying the below when I try to use my fine-tuned model for prediction.\r\n\r\nOSError: Can't load '/home/bart/bart_1/checkpointepoch=3.ckpt'. Make sure that:\r\n\r\n- '/home/bart/bart_1/checkpointepoch=3.ckpt' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or '/home/bart/bart_1/checkpointepoch=3.ckpt' is the correct path to a directory containing a 'config.json' file\r\n",
"@sangeethabal15, with my model, files were only generated up till the 2nd epoch. Just to confirm, do you have a checkpointepoch=3.ckpt file?\n\nAre you using the load_from_checkpoint() function?\n",
"@riacheruvu yes I do have checkpoint=3.ckpt file. I gave my own number of epochs instead of the default 3. \r\n\r\nYes I am using the load_from_checkpoint() function",
"Ok. Could you share your code here, @sangeethabal15? It might be easier to help debug. ",
"@riacheruvu This is my modified code -\r\n\r\n\r\n\r\n # Optionally, predict on dev set and write to output_dir\r\n if args.do_predict:\r\n # See https://github.com/huggingface/transformers/issues/3159\r\n # pl use this format to create a checkpoint:\r\n # https://github.com/PyTorchLightning/pytorch-lightning/blob/master\\\r\n # /pytorch_lightning/callbacks/model_checkpoint.py#L169\r\n examples = [\" \" + x.rstrip() for x in open(\"/home/bart/input/test.source\").readlines()]\r\n fout = Path(\"output.txt\").open(\"w\")\r\n checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, \"checkpointepoch=*.ckpt\"), recursive=True)))\r\n model = model.load_from_checkpoint(checkpoints[-1])\r\n tokenizer = BartTokenizer.from_pretrained(\"bart-large\")\r\n\r\n max_length = 80\r\n min_length = 5\r\n\r\n for batch in tqdm(list(chunks(examples, 8))):\r\n dct = tokenizer.batch_encode_plus(batch, max_length=1024, return_tensors=\"pt\", pad_to_max_length=True)\r\n summaries = model.generate(\r\n input_ids=dct[\"input_ids\"].to(device),\r\n attention_mask=dct[\"attention_mask\"],\r\n num_beams=4,\r\n length_penalty=2.0,\r\n max_length=max_length + 2, # +2 from original because we start at step=1 and stop before max_length\r\n min_length=min_length + 1, # +1 from original because we start at step=1\r\n no_repeat_ngram_size=3,\r\n early_stopping=True,\r\n decoder_start_token_id=model.config.eos_token_id,\r\n )\r\n dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries]\r\n for hypothesis in dec:\r\n fout.write(hypothesis + \"\\n\")\r\n fout.flush()",
"Thank you, @sangeethabal15. From the error message you posted earlier, it seems load_from_checkpoint() is expecting a config.json file in the specified directory. \n\nI have a few more debug questions:\n\n- Do you have the latest version of the code?\n\n- Does load_from_checkpoint() work with the checkpoint file for the 2nd epoch? \n\n- If that fails, does your code run successfully if you use the default number of epochs?",
"@riacheruvu \r\n\r\n- I do have the latest version of the code though I have not trained the model on the latest version of it.\r\n\r\n- load_from_checkpoint doesn't work with the 2nd either and expects a config.json file\r\n\r\n- and yes the code runs successfully on the default number of epochs as well.",
" import json\r\n def log_hyperparams(model: pl.LightningModule):\r\n model.config.save_pretrained(model.hparams.output_dir)\r\n with open(os.path.join(model.hparams.output_dir, \"hparam.json\"),'w') as f:\r\n json.dump(model.hparams.__dict__, f)\r\n if args.do_train:\r\n trainer.fit(model)\r\n log_hyperparams(model)\r\n\r\n@sangeethabal15 Could you add this at the end of transformer_base.py. This works for me.\r\n",
"@prabalbansal this is for when I am training my model. Since I have already fine-tuned my model, is there any workaround for test time when I am trying to predict my outputs?",
"@riacheruvu I am currently working on a Text Summarization problem. I have collected a small dataset of my own. Implementing BART is very easy. I can generate a great summary. But I want to know how to how to use BART model for training my own custom dataset. Can you please kindly help me with this?\r\n\r\nI have browsed through internet. But I cannot any find any helpful resources as it is relatively new compared to other Transfer learning models.",
"@murugeshmanthiramoorthi you can just use run_train.sh in the bart folder where you give in your parameters to run the fiinetune.py file",
"@sangeethabal15 Thank you so much for your reply mam. I am completely new to transfer learning mam. I can't get what you are upto. Can you kindly explain more elaborately or share a resource so that I can follow up?\r\nThanks in advance mam. \r\n",
"@sangeethabal15 I somehow managed to load the dataset. I run the run_train.sh file. But it is showing me error \"python3: can't open file 'finetune.py': [Errno 2] No such file or directory\". I even tried changing the data set from my custom dataset to default CNN/daily news dataset. Still, I am getting the same error. Can anyone help me out?",
"@riacheruvu @prabalbansal did y'all finetune Bart on your own dataset?",
"@sangeethabal15, I fine-tuned BART on my own custom dataset. It's strange that your code runs successfully on the default number of epochs, but load_from_checkpoint() does not work with the 2nd epoch .ckpt file with the original configuration. Where did you modify the default number of epochs?\r\n\r\n@murugeshmanthiramoorthi, \r\n\r\nPer the instructions given in https://github.com/huggingface/transformers/tree/master/examples/summarization/bart:\r\n\r\nThe steps I followed are cloning the transformers repo, navigating to the examples/summarization/bart directory, copying over a folder containing the data files (train.target, train.source, val.target, val.source, test.target, and test.source files), and then modifying run_train.sh to use this folder for the data_dir and filling in the other parameters.\r\n\r\nFor your .source and .target files, you need to structure them similar to the CNN/DM dataset: The .source files should have an article on each line, and the .target files should have a target summary on each line (corresponding to the article in the .source file).",
"@riacheruvu I noticed that I get this warning for both training and while testing\r\n\r\nINFO:transformers.modeling_utils:Weights from pretrained model not used in BartForConditionalGeneration: ['encoder.version', 'decoder.version']\r\n\r\nSeems like my model hasn't been trained properly. Any idea how to go about this?\r\n\r\nAlso, I have the number of epochs in my run_train.sh. It is defined in the add_specific_args in the transformer_base.py",
"that warning doesn't matter.",
"@sangeethabal15, I agree that the warning does not matter as I saw that warning as well. It seems the issue might be when training the model with a different number of epochs compared to the default. @sshleifer, has the HuggingFace team tested the code with a different number of epochs before?",
"@riacheruvu Thank you so much for your help. But when I proceeded with those steps, I get the error \r\n\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 10, in <module>\r\n from transformer_base import BaseTransformer, add_generic_args, generic_train, get_linear_schedule_with_warmup\r\nModuleNotFoundError: No module named 'transformer_base'\r\n\r\nDo you have any idea solving this issue."
] | 1,587 | 1,612 | 1,597 | NONE | null | # β Questions & Help
## Details
I fine-tuned the BART model on a custom summarization dataset using the **transformers/examples/summarization/bart/finetune.py** and **transformers/examples/summarization/bart/run_train.sh** files in the repository for training (which generated three _checkpointepoch=*.ckpt_ files) and prediction (which generated a _.txt_ file with the test loss scores).
I have two questions on using this model for prediction:
- How can I modify _finetune.py_ to generate predictions for the test set, in addition to the loss scores? I see some test functions in _finetune.py_, but I'm not sure how to use these for generating a _.txt_ file with the predictions.
- How can I load the generated _.ckpt_ files into BartForConditionalGeneration()? A _config.json_ file was not generated along with the checkpoint files; there doesn't seem to be a TFBartForConditionalGeneration; and the _convert_tf_checkpoint_to_pytorch.py_ script in the repo doesn't seem to support BART yet.
Thank you for your time! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3853/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3852/comments | https://api.github.com/repos/huggingface/transformers/issues/3852/events | https://github.com/huggingface/transformers/issues/3852 | 602,547,217 | MDU6SXNzdWU2MDI1NDcyMTc= | 3,852 | TFT5: get_input_embeddings() and get_output_embeddings() | {
"login": "parthe",
"id": 5085600,
"node_id": "MDQ6VXNlcjUwODU2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5085600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthe",
"html_url": "https://github.com/parthe",
"followers_url": "https://api.github.com/users/parthe/followers",
"following_url": "https://api.github.com/users/parthe/following{/other_user}",
"gists_url": "https://api.github.com/users/parthe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthe/subscriptions",
"organizations_url": "https://api.github.com/users/parthe/orgs",
"repos_url": "https://api.github.com/users/parthe/repos",
"events_url": "https://api.github.com/users/parthe/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"From what I'm seeing, the `TFT5Model` **does** have [documentation](https://huggingface.co/transformers/model_doc/t5.html#transformers.TFT5Model.get_input_embeddings) for `get_input_embeddings` and `get_output_embeddings`.\r\n\r\nI believe the output embeddings and input embeddings should actually be the same. The embeddings are shared between input and output. Wdyt @patrickvonplaten? ",
"Agree that the documentation is not the greatest, could definitely be improved :-). \r\n\r\nThe idea is that both `get_input_embeddings()` and `get_output_embeddings` return the **same** (this should be made clearer in the docs) embeddings matrix of dimension Vocab_size x Hidden_size. \r\n\r\nNow, to make the embeddings matrix work for both input and output, we need to be able to get a Vocab_size -> Hidden_size mapping (for the input embeddings) and Hidden_size -> Vocab_size mapping (for the output embeddings). In TF, we use a trick here, by wrapping the embedding in this layer: https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_utils.py#L1521\r\n\r\nAnd then by calling the embedding with different modes (\"linear\" and \"embedding\"), we get the correct mapping. See https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_t5.py#L1074 for example. \r\n\r\nSo IMO, the code is fine, I agree with you @parthe, that the documention should be cleaner and explain the logic I explained here a bit. \r\n\r\nIf you feel like it @parthe, it would be amazing if you could open a PR to straighten up the documentation here (the docstring). ",
"Hi - I think I have related questions since I couldn't find answers in the documentation for get_input_embeddings().... \r\n\r\nI've been using the approach [here](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) to access the hidden states to obtain embeddings (I've also updated it to be based on `transformers` in my own notebook instead of `pytorch-pretrained-bert`) . I was wondering how the output of get_input_embeddings maps to the output of the hidden states there? I've not been able to figure that out. Also, what would be the advantage of using one over the other?\r\n\r\nThanks!",
"So I would recommend that you take the `hidden_states` by setting `config.output_hidden_states=True` (Note: this API will be changed soon, see PR: #4538). \r\nThen you can map the `hidden_states` to `lm_logits` (non normalized scores for each word in the vocab) using:\r\n\r\n```python \r\nembed_tokens = self.get_output_embeddings()\r\nlm_logits = embed_tokens(<your_hidden_states>, mode=\"linear\")\r\n```\r\n\r\nLet me know if this isn't clear :-) ",
"Hi @patrickvonplaten, referring to the quote below (from this [comment](https://github.com/huggingface/transformers/issues/3852#issuecomment-618852195)):\r\n> The idea is that both `get_input_embeddings()` and `get_output_embeddings` return the **same** (this should be made clearer in the docs) embeddings matrix of dimension Vocab_size x Hidden_size.\r\n> \r\n> Now, to make the embeddings matrix work for both input and output, we need to be able to get a Vocab_size -> Hidden_size mapping (for the input embeddings) and Hidden_size -> Vocab_size mapping (for the output embeddings). In TF, we use a trick here, by wrapping the embedding in this layer:\r\n> \r\n> https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_utils.py#L1521\r\n> \r\n> And then by calling the embedding with different modes (\"linear\" and \"embedding\"), we get the correct mapping. See\r\n> \r\n> https://github.com/huggingface/transformers/blob/c53cc018de70436196858ca91c1a34f1b8947028/src/transformers/modeling_tf_t5.py#L1074\r\n> \r\n> for example.\r\n\r\nDoes this only apply to TFT5Model or is the same across all models which has `get_input_embeddings` and `get_output_embeddings` method?",
"It should be the same across models that share input and output embeddings :-) "
] | 1,587 | 1,661 | 1,591 | CONTRIBUTOR | null | In class TFT5Model
1. The get_input_embeddings() and get_output_embeddings() methods do not have any documentation provided in them
2. Furthermore, the get_output_embeddings provides the same output as the get_input embeddings. This needs to be resolved. Or flagged with a NotImplementedError | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3852/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3851/comments | https://api.github.com/repos/huggingface/transformers/issues/3851/events | https://github.com/huggingface/transformers/issues/3851 | 602,529,109 | MDU6SXNzdWU2MDI1MjkxMDk= | 3,851 | How properly apply a tokenizer map function to a Tensorflow batched dataset? | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This seems like more of a TF-related question rather than a Transformers-related question. The issue seems to stem from your code trying to get the value of a tensor which is not eager, using numpy. I believe the `tf.data.Dataset.map` method must trace inputs, resulting in the Tensors not being eager.\r\n\r\nCouldn't you build the `tf.data.Dataset` with already tokenized inputs instead?",
"The ideal would be to follow the pipeline (read from the file >> generate batches >> tokenize >> train >> evaluate). It is the most efficient approach as pointed in the [TensorFlow tutorial](https://www.tensorflow.org/tutorials/customization/performance).\r\n\r\nTensorflow when dealing with texts generates string tensors that are stored as byte string:\r\n\r\n```python\r\n<tf.Tensor: shape=(2,), dtype=string, numpy=array(\r\n [b'ThΓͺ first utf-8 string of the batΓ§h.',\r\n b'ThΓͺ secΓ΄nd utf-8 string of the batΓ§h.'], dtype=object)>\r\n```\r\n\r\nHowever, I didn't find an efficient way to decode this kind of tensor as a list of strings. It's even worse if the byte string containing a non-ascii character.\r\n\r\nWhat I really need is one of these two options:\r\n\r\n1. a tokenizer which is able to accept aforementioned byte string tensor as input to tokenize; or\r\n2. a vectorized approach to transforming a byte string tensor into a string list.\r\n\r\nThank you very much for all your help.",
"@Ceceu I am running into this exact issue as well, and am wondering if you had found a good solution?",
"@oja,\r\nThe best solution I could find was adapting an example from the Tensorflow tutorial: [Load Text](https://www.tensorflow.org/tutorials/load_data/text#encode_text_lines_as_numbers) which uses `tf.py_function`.\r\nLet me know if I can help more.",
"@Ceceu got it, thank you!",
"Tokenizers can now output `numpy` arrays with `return_tensors='np'` so I think this should work now.",
"Thanks @thomwolf, I will check it out and if it works on TPU then it solves https://github.com/huggingface/transformers/issues/5066",
"> Thanks @thomwolf, I will check it out and if it works on TPU then it solves #5066\r\n\r\nDid you check if it works on TPU?",
"It does not work on TPU",
"@oja, @Santosh-Gupta, @celsofranssa I too am facing this problem. Did you guys find any solution?",
"cc @Rocketknight1 ",
"Bump, I'm still having this issue (on a CPU)."
] | 1,587 | 1,639 | 1,591 | NONE | null | Considering the following `batched_dataset`:
```python3
samples = ([{"query": "this is a query 1", "doc": "this is one relevant document regarding query 1"},
{"query": "this is a query 2", "doc": "this is one relevant document regarding query 2"},
{"query": "this is a query 3", "doc": "this is one relevant document regarding query 3"},
{"query": "this is a query 4", "doc": "this is one relevant document regarding query 4"},
])
dataset = tf.data.Dataset.from_generator(
lambda: samples, {"query": tf.string, "doc": tf.string})
batched_dataset = dataset.batch(2)
#{
#'doc': <tf.Tensor: shape=(2,), dtype=string, numpy=array(
# [b'this is one relevant document regarding query 1',
# b'this is one relevant document regarding query 2'], dtype=object)>,
#
#'query': <tf.Tensor: shape=(2,), dtype=string, numpy=array(
# [b'this is a query 1',
# b'this is a query 2'], dtype=object)>
#}
```
and a map function to tokenize this `batched_dataset`:
```python3
def tokenize(sample):
tokenized_query = tokenizer.batch_encode_plus(sample["query"].numpy().astype('str'), ...)
tokenized_doc = tokenizer.batch_encode_plus(sample["doc"].numpy().astype('str'), ...)
return (tokenized_query, tokenized_doc)
```
I could tokenize the entire batched_dataset using a for-loop:
```python3
for batch in batched_dataset:
tokenize(batch)
# (
# {'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[ 101, 2023, 2003, 1037, 23032, 1015, 102, 0],
# [ 101, 2023, 2003, 1037, 23032, 1016, 102, 0]],
# dtype=int32)>,
# 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[1, 1, 1, 1, 1, 1, 1, 0],
# [1, 1, 1, 1, 1, 1, 1, 0]], dtype=int32)>},
# {'input_ids': <tf.Tensor: shape=(2, 8), #dtype=int32, numpy=
# array([[ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102],
# [ 101, 2023, 2003, 2028, 7882, 6254, 4953, 102]], dtype=int32)>,
# 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
# array([[1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>})
# ...
```
However, when using [`tf.data.Dataset.map`][1] the following error arises:
```python3
tokenized_dataset = batched_dataset.map(tokenize)
AttributeError: 'Tensor' object has no attribute 'numpy'
```
Then, how properly apply a tokenizer map function to a batched dataset?
**Note**: I published a working example on [`Google Colab`][2].
[1]: https://www.tensorflow.org/api_docs/python/tf/data/Dataset?version=nightly#map
[2]: https://colab.research.google.com/drive/1TUbWwEgbgPHwY1QjgRLIqLpjin310pdh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3851/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3850/comments | https://api.github.com/repos/huggingface/transformers/issues/3850/events | https://github.com/huggingface/transformers/issues/3850 | 602,484,854 | MDU6SXNzdWU2MDI0ODQ4NTQ= | 3,850 | 'pad_to_max_length' in Pipeline should be set to True by default | {
"login": "Akashdesarda",
"id": 26931751,
"node_id": "MDQ6VXNlcjI2OTMxNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/26931751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Akashdesarda",
"html_url": "https://github.com/Akashdesarda",
"followers_url": "https://api.github.com/users/Akashdesarda/followers",
"following_url": "https://api.github.com/users/Akashdesarda/following{/other_user}",
"gists_url": "https://api.github.com/users/Akashdesarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Akashdesarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akashdesarda/subscriptions",
"organizations_url": "https://api.github.com/users/Akashdesarda/orgs",
"repos_url": "https://api.github.com/users/Akashdesarda/repos",
"events_url": "https://api.github.com/users/Akashdesarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/Akashdesarda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Akashdesarda, thanks for reporting this.\r\nYou should be able to pass `pad_to_max_length=True` when calling your pipeline: \r\n\r\n`pipe(train_data['comment_text'][:100].values.tolist(), pad_to_max_length=True)`\r\n\r\nCan you let us know if it works in your case ?\r\n",
"Yes it worked, thanks for the solution."
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug
pad_to_max_length is set by default False in Piplene class' _parse_and_tokenize() function
## Information
Model I am using (Bert):
Language I am using the model on (English):
The problem arises when using: my own modified scripts:
```
import numpy as np
from transformers import AutoTokenizer, pipeline, TFDistilBertModel
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
# model = AutoModel.from_pretrained('distilbert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased', pad_to_max_length=True)
pipe = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
features = pipe(train_data['comment_text'][:100].values.tolist())
features = np.squeeze(features)
print(features.shape)
```
As there are about 100 input of variable length, tokenizer should perform padding. But even after giving ```pad_to_max_length=True``` , padding operation is not perform.
I get the following error
```
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in predict(self, X)
392 Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
393 """
--> 394 return self(X=X)
395
396 @contextmanager
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
551
552 def __call__(self, *args, **kwargs):
--> 553 return super().__call__(*args, **kwargs).tolist()
554
555
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
465
466 def __call__(self, *texts, **kwargs):
--> 467 inputs = self._parse_and_tokenize(*texts, **kwargs)
468 return self._forward(inputs)
469
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/pipelines.py in _parse_and_tokenize(self, pad_to_max_length, *texts, **kwargs)
456 return_tensors=self.framework,
457 max_length=self.tokenizer.max_len,
--> 458 pad_to_max_length=pad_to_max_length,
459 )
460
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, return_tensors, return_token_type_ids, return_attention_masks, return_overflowing_tokens, return_special_tokens_masks, return_offsets_mapping, return_input_lengths, **kwargs)
1260 raise ValueError(self.NO_PAD_TOKEN_FOR_BATCH_MSG)
1261 else:
-> 1262 raise ValueError(self.UNEVEN_SEQUENCES_FOR_BATCH_MSG)
1263 elif return_tensors == "pt" and is_torch_available():
1264 try:
ValueError: The sequences building the batch are not of the same size, no tensor can be built. Set `pad_to_max_length=True` to pad the smaller sequencesup to the larger sequence's length.
```
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
kaggle toxic tweet dataset.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Pipeline should perform padding operation.
<!-- A clear and concise description of what you would expect to happen. -->
When I made pad_to_max_length=True inside function _parse_and_tokenize() of class Pipeline,
I got expected result. Pipeline perform its task perfectly (padding operation was also done) and feature extraction was executed on all inputs (in my case 100)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid
- Python version:3.8
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: Yes | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3849/comments | https://api.github.com/repos/huggingface/transformers/issues/3849/events | https://github.com/huggingface/transformers/issues/3849 | 602,454,721 | MDU6SXNzdWU2MDI0NTQ3MjE= | 3,849 | Bug in run_glue | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to install from source as specified in the README."
] | 1,587 | 1,587 | 1,587 | NONE | null | Hi
I am getting this error when running run_glue.py
ImportError: cannot import name 'TrainingArguments' from 'transformers' (/idiap/user/rkarimi/libs/anaconda3/envs/iborn/lib/python3.7/site-packages/transformers/__init__.py)
Traceback (most recent call last):
File "run_glue.py", line 34, in <module>
from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/idiap/user/rkarimi/libs/anaconda3/envs/iborn/lib/python3.7/site-packages/transformers/__init__.py)
To fix I searched in the repo on how you use the hf_argparser, I modified it as below:
from transformers.hf_argparser import HfArgumentParser
again, getting the error:
Traceback (most recent call last):
File "run_glue.py", line 44, in <module>
from transformers.hf_argparser import HfArgumentParser
ModuleNotFoundError: No module named 'transformers.hf_argparser'
however, this is how you call it in the tests but this does not work. Seems to me that things have been changed but the codes are not updated. thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3849/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3848/comments | https://api.github.com/repos/huggingface/transformers/issues/3848/events | https://github.com/huggingface/transformers/issues/3848 | 602,359,819 | MDU6SXNzdWU2MDIzNTk4MTk= | 3,848 | Electra for question answering | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can basically copy+paste the code from BertForQuestionAnswering and just change it for ELECTRA. However, the original ELECTRA implementation to fine-tune on squad looks a bit different (it's more like in XLNet). \r\n\r\nIf you want to reproduce the official implementation it's probably best you take a look at the published code: https://github.com/google-research/electra/blob/master/finetune/qa/qa_tasks.py#L419",
"Any updates about this? I managed the creation of ElectraForQuestionAnswering on my own and the code works. If the progress of this thread is in stand-by I can proceed submitting my pull request",
"@volker42maru I implemented the same model as described in the official Electra Repository by google. I am still unable to reproduce the original paper results which is 75% EM on the squad v1 and 82% F1. The maximum I could get was 70% EM and 78% F1. @mmuffo94 Please let me know if you have successfully reproduced the results on the squad v1 dataset. It would be great help. I am using the Electra Small Model for now.",
"@ankur19030 you are refering to ELECTRA-Small I assume? \r\n\r\nI actually finetuned and evaluated only on squad 2.0, but the score on squad 1.1 even with the Small model should be significantly higher. The different QA Head that is used for ELECTRA might not play such a big role in squad 1.1, because it's mostly used to get better predictions on answerability. You can try using a simple QA head first to check the performance on squad 1.1 with ELECTRA, e.g.:\r\n```\r\nclass ElectraForQuestionAnswering(ElectraPreTrainedModel):\r\n def __init__(self, config):\r\n super(ElectraForQuestionAnswering, self).__init__(config)\r\n self.num_labels = config.num_labels\r\n\r\n self.electra = ElectraModel(config)\r\n self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)\r\n\r\n self.init_weights()\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n start_positions=None,\r\n end_positions=None,\r\n ):\r\n outputs = self.electra(\r\n input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds\r\n )\r\n\r\n sequence_output = outputs[0]\r\n\r\n logits = self.qa_outputs(sequence_output)\r\n start_logits, end_logits = logits.split(1, dim=-1)\r\n start_logits = start_logits.squeeze(-1)\r\n end_logits = end_logits.squeeze(-1)\r\n\r\n outputs = (start_logits, end_logits,) + outputs[2:]\r\n if start_positions is not None and end_positions is not None:\r\n # If we are on multi-GPU, split add a dimension\r\n if len(start_positions.size()) > 1:\r\n start_positions = start_positions.squeeze(-1)\r\n if len(end_positions.size()) > 1:\r\n end_positions = end_positions.squeeze(-1)\r\n # sometimes the start/end positions are outside our model inputs, we ignore these terms\r\n ignored_index = start_logits.size(1)\r\n start_positions.clamp_(0, ignored_index)\r\n end_positions.clamp_(0, ignored_index)\r\n\r\n loss_fct = torch.nn.CrossEntropyLoss(ignore_index=ignored_index)\r\n start_loss = loss_fct(start_logits, start_positions)\r\n end_loss = loss_fct(end_logits, end_positions)\r\n total_loss = (start_loss + end_loss) / 2\r\n outputs = (total_loss,) + outputs\r\n\r\n return outputs # (loss), start_logits, end_logits, (hidden_states), (attentions)\r\n```\r\n\r\nHowever, if you try to use ELECTRA-Base or Large you also want to use `layerwise_lr_decay`, as used in the official implementation (https://github.com/google-research/electra/blob/master/model/optimization.py#L48). For me that made quite a big difference in score.\r\n\r\nBTW, be sure to use `google/electra-small-discriminator` weights, not the generator.",
"@volker42maru I initially tried with the simple QA head only, the same as you described but could not reproduce the results, not even near them even though I am using the same hyper-parameters as the ones used in the official implementation except layer wise LR decay. My EM is <70 on squad v1 which is around 5% less than the results from official repo. But I can say for sure that simply Layerwise LR decay in small model should not cause this much difference, as I have already tried removing Layerwise LR decay from official implementation and it still give the same results. So I have no idea where is the gap ?",
"Are you sure that's for squad v1 and not v2?\r\n\r\nI just trained ELECTRA Small for 1 epoch using the simple QA head from above and I get the following score on squad v1 dev: `'exact': 73.46263008514664, 'f1': 82.46777637449017`\r\n\r\nI used mostly default parameters and no `layerwise_lr_decay`:\r\n```\r\n--num_train_epochs 1\r\n--per_gpu_train_batch_size 24\r\n--learning_rate 5e-5\r\n```",
"@volker42maru I will cross check again then, Thanks ",
"It's been added https://github.com/huggingface/transformers/pull/4913"
] | 1,587 | 1,591 | 1,591 | CONTRIBUTOR | null | # π Feature request
Electra for question answering
## Motivation
Electra is the highest rated single model (non essemble) on the Squad leaderboard
## Your contribution
I am not sure if I have the skills, but I'm willing to take a crack at it! Looking the other QA architectures, it seems that I'll need to put a single linear layer (two outputs) on top of the Electra discriminator ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3847/comments | https://api.github.com/repos/huggingface/transformers/issues/3847/events | https://github.com/huggingface/transformers/issues/3847 | 602,246,881 | MDU6SXNzdWU2MDIyNDY4ODE= | 3,847 | Share more details on fine-tuning GPT-2 on WikiText-2 ? | {
"login": "xihui-wu",
"id": 58450438,
"node_id": "MDQ6VXNlcjU4NDUwNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/58450438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xihui-wu",
"html_url": "https://github.com/xihui-wu",
"followers_url": "https://api.github.com/users/xihui-wu/followers",
"following_url": "https://api.github.com/users/xihui-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/xihui-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xihui-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xihui-wu/subscriptions",
"organizations_url": "https://api.github.com/users/xihui-wu/orgs",
"repos_url": "https://api.github.com/users/xihui-wu/repos",
"events_url": "https://api.github.com/users/xihui-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xihui-wu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@xihui-wu To get the hyperparameters specific to the model (in this case `gpt2`), you can check the config file of `gpt2` with the code below:\r\n\r\n```\r\nfrom transformers import GPT2Config\r\nprint(GPT2Config())\r\n```\r\n\r\nSome higher level hyperparameters are still not included here (e.g. \"epochs\"). These can be set explicitly as arguments when running the CLI `run_language_modeling.py`; otherwise, the default values are used.\r\n\r\nYou can find the hyperparameters and their default values at the beginning of the `main` function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (`num_train_epochs`) is 1.\r\n\r\nHope this helps!",
"> @xihui-wu To get the hyperparameters specific to the model (in this case `gpt2`), you can check the config file of `gpt2` with the code below:\r\n> \r\n> ```\r\n> from transformers import GPT2Config\r\n> print(GPT2Config())\r\n> ```\r\n> \r\n> Some higher level hyperparameters are still not included here (e.g. \"epochs\"). These can be set explicitly as arguments when running the CLI `run_language_modeling.py`; otherwise, the default values are used.\r\n> \r\n> You can find the hyperparameters and their default values at the beginning of the `main` function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (`num_train_epochs`) is 1.\r\n> \r\n> Hope this helps!\r\n\r\nThanks a lot @enzoampil! Do you know what hyper-parameters to get the result: \"This takes about half an hour to train on a single K80 GPU and about one minute for the evaluation to run. It reaches a score of ~20 perplexity once fine-tuned on the dataset.\" ?",
"@xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply. \r\n\r\n>You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.\r\n\r\nFor reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.\r\n```\r\nexport TRAIN_FILE=/path/to/dataset/wiki.train.raw\r\nexport TEST_FILE=/path/to/dataset/wiki.test.raw\r\n\r\npython run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE\r\n```",
"> @xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply.\r\n> \r\n> > You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.\r\n> \r\n> For reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.\r\n> \r\n> ```\r\n> export TRAIN_FILE=/path/to/dataset/wiki.train.raw\r\n> export TEST_FILE=/path/to/dataset/wiki.test.raw\r\n> \r\n> python run_language_modeling.py \\\r\n> --output_dir=output \\\r\n> --model_type=gpt2 \\\r\n> --model_name_or_path=gpt2 \\\r\n> --do_train \\\r\n> --train_data_file=$TRAIN_FILE \\\r\n> --do_eval \\\r\n> --eval_data_file=$TEST_FILE\r\n> ```\r\n\r\nI got GPU memory error with k80 on this, what's the batch_size and how can I configure?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> > @xihui-wu This result comes from running the default training script with no explicitly specified hyperparameters; therefore, the **default** hyperparameters will apply.\r\n> > > You can find the hyperparameters and their default values at the beginning of the main function in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). For example, the default epoch count (num_train_epochs) is 1.\r\n> > \r\n> > \r\n> > For reference, this is the code snippet that fine-tunes `gpt2` with the default hyperparameters.\r\n> > ```\r\n> > export TRAIN_FILE=/path/to/dataset/wiki.train.raw\r\n> > export TEST_FILE=/path/to/dataset/wiki.test.raw\r\n> > \r\n> > python run_language_modeling.py \\\r\n> > --output_dir=output \\\r\n> > --model_type=gpt2 \\\r\n> > --model_name_or_path=gpt2 \\\r\n> > --do_train \\\r\n> > --train_data_file=$TRAIN_FILE \\\r\n> > --do_eval \\\r\n> > --eval_data_file=$TEST_FILE\r\n> > ```\r\n> \r\n> I got GPU memory error with k80 on this, what's the batch_size and how can I configure?\r\n\r\nYou can use a per_device_train_batch_size=1, worked for me on a K80"
] | 1,587 | 1,605 | 1,598 | NONE | null | Hello! Regarding https://github.com/huggingface/transformers/tree/master/examples#gpt-2gpt-and-causal-language-modeling, would you mind sharing what hyper-parameters you use to get this result ? How many epochs, what's the batch size? etc... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3847/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3846/comments | https://api.github.com/repos/huggingface/transformers/issues/3846/events | https://github.com/huggingface/transformers/issues/3846 | 602,221,827 | MDU6SXNzdWU2MDIyMjE4Mjc= | 3,846 | Roberta (and BERT) tokenization converts "do not" to "don't" | {
"login": "davisyoshida",
"id": 1377776,
"node_id": "MDQ6VXNlcjEzNzc3NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1377776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davisyoshida",
"html_url": "https://github.com/davisyoshida",
"followers_url": "https://api.github.com/users/davisyoshida/followers",
"following_url": "https://api.github.com/users/davisyoshida/following{/other_user}",
"gists_url": "https://api.github.com/users/davisyoshida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davisyoshida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davisyoshida/subscriptions",
"organizations_url": "https://api.github.com/users/davisyoshida/orgs",
"repos_url": "https://api.github.com/users/davisyoshida/repos",
"events_url": "https://api.github.com/users/davisyoshida/events{/privacy}",
"received_events_url": "https://api.github.com/users/davisyoshida/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"For anyone coming across this issue, you can disable all such transformations in `decode` by passing `clean_up_tokenization_spaces=False`. However, I maintain that this decoding behavior is not a sensible default.",
"fixed with #4024 "
] | 1,587 | 1,591 | 1,591 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Execute the following:
```python
import transformers
tokenizer = transformers.RobertaTokenizer.from_pretrained('roberta-base')
print(tokenizer.decode(tokenizer.encode('is not')))
print(tokenizer.decode(tokenizer.encode('do not')))
```
The output is
```
<s> don't</s>
<s> is not</s>
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The detokenization should not incorrectly introduce a contraction.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-5.5.3-arch1-1-x86_64-with-glibc2.2.5
- Python version: 3.8.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3845/comments | https://api.github.com/repos/huggingface/transformers/issues/3845/events | https://github.com/huggingface/transformers/issues/3845 | 602,074,288 | MDU6SXNzdWU2MDIwNzQyODg= | 3,845 | list index out of range error when I execute a command with examples/run_glue.py | {
"login": "HiroshigeAoki",
"id": 58395317,
"node_id": "MDQ6VXNlcjU4Mzk1MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/58395317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HiroshigeAoki",
"html_url": "https://github.com/HiroshigeAoki",
"followers_url": "https://api.github.com/users/HiroshigeAoki/followers",
"following_url": "https://api.github.com/users/HiroshigeAoki/following{/other_user}",
"gists_url": "https://api.github.com/users/HiroshigeAoki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HiroshigeAoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HiroshigeAoki/subscriptions",
"organizations_url": "https://api.github.com/users/HiroshigeAoki/orgs",
"repos_url": "https://api.github.com/users/HiroshigeAoki/repos",
"events_url": "https://api.github.com/users/HiroshigeAoki/events{/privacy}",
"received_events_url": "https://api.github.com/users/HiroshigeAoki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, have you fixed this error? I just got the same error. Any help will be grateful!",
"Hi, I have already fixed it. I made a mistake on tsv files. After converting them to correct format by using LibreOffice, glue.py ran correctly. \r\nI hope this could help you!",
"Thanks! I fixed this error. There were some errors in my input file."
] | 1,587 | 1,590 | 1,587 | NONE | null | Hi, I am new to transformers, and I got a list index out of range error when I execute a command for examples/run_glue.py.
I want to do Fine-Turning to classify Japanese words, and I modified some files following a web site.
**The process was:**
**1. I used these commands below to install transformers and to use examples following github's instructions.**
```
$ pip install transformers
$ git clone https://github.com/huggingface/transformers
$ cd transformers
$ pip install .
$ pip install -r ./examples/requirements.txt
```
**2. I changed two files(transformers/data/processors/glue.py, transformers/data/metrics/__init__.py)**
I will show you them last of this question.
**3. I made train.tsv and dev.tsv under the data/original/ directory after making this directory.**
I will show you them last of this question.
**4. I executed a command below.**
```
$ python ./examples/run_glue.py
--data_dir=./src/transformers/data/original/
--model_type=bert
--model_name_or_path=bert-base-japanese-whole-word-masking
--task_name=original
--do_train
--do_eval
--output_dir=output/original
```
**5. list index out of range error occurred.**
```
Traceback (most recent call last):
File "./examples/run_glue.py", line 562, in <module>
main()
File "./examples/run_glue.py", line 510, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "./examples/run_glue.py", line 358, in load_and_cache_examples
processor.get_dev_examples(args.data_dir) if evaluate else processor.get_train_examples(args.data_dir)
File "/home/haoki/Bert1/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 519, in get_train_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
File "/home/haoki/Bert1/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 538, in _create_examples
label = line[1]
IndexError: list index out of range
```
**My environment:**
OS: linux
IDE: pycharm
python: python 3.6
I stacked almost for a day with problem... Pleeease help meeee.
--------------------------------------------------------------------------
**Codes of procedure 2 and 3:**
**transformers/data/processors/glue.pyγ(Transformers are installed with pip):**
```
~~~
#added this class
class OriginalProcessor(DataProcessor):
"""Processor for the original data set."""
def get_example_from_tensor_dict(self, tensor_dict):
"""See base class."""
return InputExample(
tensor_dict["idx"].numpy(),
tensor_dict["sentence"].numpy().decode("utf-8"),
None,
str(tensor_dict["label"].numpy()),
)
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# If TSV file has a header, it will take off it.
# if i == 0:
# continue
guid = "%s-%s" % (set_type, i)
text_a = line[0]
label = line[1]
examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
glue_tasks_num_labels = {
"cola": 2,
"mnli": 3,
"mrpc": 2,
"sst-2": 2,
"sts-b": 1,
"qqp": 2,
"qnli": 2,
"rte": 2,
"wnli": 2,
"original": 2, # added
}
glue_processors = {
"cola": ColaProcessor,
"mnli": MnliProcessor,
"mnli-mm": MnliMismatchedProcessor,
"mrpc": MrpcProcessor,
"sst-2": Sst2Processor,
"sts-b": StsbProcessor,
"qqp": QqpProcessor,
"qnli": QnliProcessor,
"rte": RteProcessor,
"wnli": WnliProcessor,
"original": OriginalProcessor, # added
}
glue_output_modes = {
"cola": "classification",
"mnli": "classification",
"mnli-mm": "classification",
"mrpc": "classification",
"sst-2": "classification",
"sts-b": "regression",
"qqp": "classification",
"qnli": "classification",
"rte": "classification",
"wnli": "classification",
"original": "classification", # added
}
```
**transformers/data/metrics/__init__.py (Transformers are installed with pip)**
```
def glue_compute_metrics(task_name, preds, labels):
assert len(preds) == len(labels)
if task_name == "cola":
return {"mcc": matthews_corrcoef(labels, preds)}
elif task_name == "sst-2":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "mrpc":
return acc_and_f1(preds, labels)
elif task_name == "sts-b":
return pearson_and_spearman(preds, labels)
elif task_name == "qqp":
return acc_and_f1(preds, labels)
elif task_name == "mnli":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "mnli-mm":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "qnli":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "rte":
return {"acc": simple_accuracy(preds, labels)}
elif task_name == "wnli":
return {"acc": simple_accuracy(preds, labels)}
# added
elif task_name == "original":
return {"acc": simple_accuracy(preds, labels)}
else:
raise KeyError(task_name)
````
**train.tsv**
```
ι’η½γγ£γ 0 #interesting
ζ₯½γγγ£γ 0 #fun
ιε±γ γ£γ 1 #boring
ζ²γγγ£γ 1 #sad
```
**dev.tsv**
```
ζΊε«γγ 0 #satisfied
θΎγγ£γ 1 #hard
````
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3845/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3844/comments | https://api.github.com/repos/huggingface/transformers/issues/3844/events | https://github.com/huggingface/transformers/pull/3844 | 602,051,732 | MDExOlB1bGxSZXF1ZXN0NDA1MTk0NjEx | 3,844 | [TF T5] Higher tolerance for past testing in TF T5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | Higher tolerance to be certain that tests pass | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3844",
"html_url": "https://github.com/huggingface/transformers/pull/3844",
"diff_url": "https://github.com/huggingface/transformers/pull/3844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3844.patch",
"merged_at": 1587137177000
} |
https://api.github.com/repos/huggingface/transformers/issues/3843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3843/comments | https://api.github.com/repos/huggingface/transformers/issues/3843/events | https://github.com/huggingface/transformers/pull/3843 | 602,051,460 | MDExOlB1bGxSZXF1ZXN0NDA1MTk0Mzg4 | 3,843 | [T5] Higher tolerance for past testing in T5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | Higher tolerance to be certain that tests pass | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3843/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3843",
"html_url": "https://github.com/huggingface/transformers/pull/3843",
"diff_url": "https://github.com/huggingface/transformers/pull/3843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3843.patch",
"merged_at": 1587137115000
} |
https://api.github.com/repos/huggingface/transformers/issues/3842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3842/comments | https://api.github.com/repos/huggingface/transformers/issues/3842/events | https://github.com/huggingface/transformers/pull/3842 | 602,034,292 | MDExOlB1bGxSZXF1ZXN0NDA1MTgwNDMx | 3,842 | Fix bug in run_*.py scripts: double wrap into DataParallel during eval | {
"login": "and-kul",
"id": 15240922,
"node_id": "MDQ6VXNlcjE1MjQwOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/15240922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/and-kul",
"html_url": "https://github.com/and-kul",
"followers_url": "https://api.github.com/users/and-kul/followers",
"following_url": "https://api.github.com/users/and-kul/following{/other_user}",
"gists_url": "https://api.github.com/users/and-kul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/and-kul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/and-kul/subscriptions",
"organizations_url": "https://api.github.com/users/and-kul/orgs",
"repos_url": "https://api.github.com/users/and-kul/repos",
"events_url": "https://api.github.com/users/and-kul/events{/privacy}",
"received_events_url": "https://api.github.com/users/and-kul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging this, though it will be rendered obsolete (for a subset of the script initially) by #3800 "
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | This bug is present in several scripts in `examples`:
* `examples/run_language_modeling.py`
* `examples/run_multiple_choice.py`
* `examples/run_xnli.py`
* `examples/ner/run_ner.py`
* `examples/mm-imdb/run_mmimdb.py`
* `examples/hans/test_hans.py`
The problem is exactly the same as it was in #1801 and in #1504:
During the evaluation, we are trying to wrap the `model` into `DataParallel` second time (we did it already during training). As a result we have:
> "RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1" (ids of devices may differ)
The fix is straightforward:
Before:
```python
# multi-gpu eval
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
```
After:
```python
# multi-gpu eval
if args.n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
model = torch.nn.DataParallel(model)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3842",
"html_url": "https://github.com/huggingface/transformers/pull/3842",
"diff_url": "https://github.com/huggingface/transformers/pull/3842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3842.patch",
"merged_at": 1587425865000
} |
https://api.github.com/repos/huggingface/transformers/issues/3841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3841/comments | https://api.github.com/repos/huggingface/transformers/issues/3841/events | https://github.com/huggingface/transformers/issues/3841 | 601,948,472 | MDU6SXNzdWU2MDE5NDg0NzI= | 3,841 | Reproducing squad score with TFXLMRoberta? | {
"login": "nchocho",
"id": 3957900,
"node_id": "MDQ6VXNlcjM5NTc5MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3957900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nchocho",
"html_url": "https://github.com/nchocho",
"followers_url": "https://api.github.com/users/nchocho/followers",
"following_url": "https://api.github.com/users/nchocho/following{/other_user}",
"gists_url": "https://api.github.com/users/nchocho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nchocho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nchocho/subscriptions",
"organizations_url": "https://api.github.com/users/nchocho/orgs",
"repos_url": "https://api.github.com/users/nchocho/repos",
"events_url": "https://api.github.com/users/nchocho/events{/privacy}",
"received_events_url": "https://api.github.com/users/nchocho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,592 | 1,592 | NONE | null | # β Questions & Help
Hello all,
first of all, thanks for the library that is very helpful. There have been several discussions regarding XLM-Roberta and questions answering (#3732 #3694 ). On my side, I added a TFXLMRobertaForQuestionAnswering but never reproduced a decent squad score (I was always below 50% f1). The base LM I was using was xlm-roberta-base converted to tf, or the community ones (jplu), I tried with type_vocab_size=1 or type_vocab_size=2 (in order to use segment_ids as for Bert, I did it by overwriting the create_token_type_ids_from_sequences in the tokenizer). This did not really change anything. I am using a AdamWeightDecay as for Bert finetuning and I start to believe that there is no problem with my code since I saw yesterday the PR #3812 which is basically the same code as mine. For info, I am not only training on squad, I am training on squad + mlqa. Using the exact same approach, I got very good score with Bert multilingual. Since I am a bit stuck I was wondering if someone here (from huggingface or not) managed to properly train XLMRoberta with tensorflow and get good squad results. If so, it would be super helpful for me if you can share the parameters that was used (learning rate, number of epochs, etc).
** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3840/comments | https://api.github.com/repos/huggingface/transformers/issues/3840/events | https://github.com/huggingface/transformers/issues/3840 | 601,912,906 | MDU6SXNzdWU2MDE5MTI5MDY= | 3,840 | Decoding predictions for masked language modeling task using custom BPE | {
"login": "singhay",
"id": 14236438,
"node_id": "MDQ6VXNlcjE0MjM2NDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14236438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singhay",
"html_url": "https://github.com/singhay",
"followers_url": "https://api.github.com/users/singhay/followers",
"following_url": "https://api.github.com/users/singhay/following{/other_user}",
"gists_url": "https://api.github.com/users/singhay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singhay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singhay/subscriptions",
"organizations_url": "https://api.github.com/users/singhay/orgs",
"repos_url": "https://api.github.com/users/singhay/repos",
"events_url": "https://api.github.com/users/singhay/events{/privacy}",
"received_events_url": "https://api.github.com/users/singhay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe @mfuntowicz ? :-) ",
"Bump",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/61232399/decoding-predictions-for-masked-language-modeling-task-using-custom-bpe | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3839/comments | https://api.github.com/repos/huggingface/transformers/issues/3839/events | https://github.com/huggingface/transformers/issues/3839 | 601,746,519 | MDU6SXNzdWU2MDE3NDY1MTk= | 3,839 | Different output encode and encode_plus | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I made a mistake in my code. The output is the same."
] | 1,587 | 1,587 | 1,587 | NONE | null | Hi everyone,
I'm struggling with the following scenario:
I have the following input:
```
sentence = "Sheldon : If a photon is directed through a plane with two slits in it and either is observed . Sheldon : . it will not go through both . If unobserved , it will . UNKNAME : If it 's observed after it left the plane , before it hits its target . Sheldon : . it will not have gone through both slits . Agreed . Leonard : What 's your point ? Sheldon : There 's no point , I just think it 's a good idea for a T shirt . UNKNAME : Excuse me . Hang on . Leonard : One across is Aegean , eight down is Nabokov . Leonard : Twenty six across is MCM . Leonard : Fourteen down is . Leonard : Move your finger . UNKNAME : . phylum , which makes 14 across Port Au Prince . Leonard : See , Papa Doc 's capital idea , that 's Port Au Prince . Leonard : Haiti . UNKNAME : Can I help you ? Yes . UNKNAME : Um , is this the high IQ sperm bank ?"
```
I am using bert-base-uncased and I get different length of tokens depending if I use `tokenizer.encode` or `tokenizer.encode_plus`.
Below is an example:
```
test = tokenizer.encode_plus(sentence, add_special_tokens=True, max_length=512)["input_ids"]
print(test)
214
```
```
test2 = test2 = tokenizer.encode_plus(sentence, add_special_tokens=True, max_length=512)["input_ids"]
print(test2)
189
```
In above scenario I expect the amount of tokens to be the same length. I looked at the documentation but I cannot find an explanation for the difference. It is problematic for me because I need to use tokenizer.batch_encode_plus but for my model I expect and need the length of 189 instead of 214.
Can someone please explain why the output is different and how to make encode.plus output the same as encode?
Thanks in advance ;)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3839/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3838/comments | https://api.github.com/repos/huggingface/transformers/issues/3838/events | https://github.com/huggingface/transformers/issues/3838 | 601,711,222 | MDU6SXNzdWU2MDE3MTEyMjI= | 3,838 | Cutom tokenizer not loaded in AutoTokenizer | {
"login": "muhammadfahid51",
"id": 57350797,
"node_id": "MDQ6VXNlcjU3MzUwNzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/57350797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muhammadfahid51",
"html_url": "https://github.com/muhammadfahid51",
"followers_url": "https://api.github.com/users/muhammadfahid51/followers",
"following_url": "https://api.github.com/users/muhammadfahid51/following{/other_user}",
"gists_url": "https://api.github.com/users/muhammadfahid51/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muhammadfahid51/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muhammadfahid51/subscriptions",
"organizations_url": "https://api.github.com/users/muhammadfahid51/orgs",
"repos_url": "https://api.github.com/users/muhammadfahid51/repos",
"events_url": "https://api.github.com/users/muhammadfahid51/events{/privacy}",
"received_events_url": "https://api.github.com/users/muhammadfahid51/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If it's a BPETokenizer, you can load it with `RobertaTokenizer.from_pretrained(\"vocab.json\", \"merges.json\")`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,592 | 1,592 | NONE | null | I am training a language model from scratch. I trained **ByteLevelBPETokenizer** and then when I am trying to load this tokenizer using **AutoTokenizer**, it is giving me the following error.
OSError: Model name './MyRobertaConfig/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed './MyRobertaConfig/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
I have not added tokenizer_config.json file in config directory, I don't think it should be an issue. Now do I need to migrate or transform my custom tokenizer to make it compatible with transformers tokenizers or what. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3837/comments | https://api.github.com/repos/huggingface/transformers/issues/3837/events | https://github.com/huggingface/transformers/pull/3837 | 601,694,215 | MDExOlB1bGxSZXF1ZXN0NDA0OTE1NTMx | 3,837 | PretrainedTokenizer cleanup: Typehints, decode_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=h1) Report\n> Merging [#3837](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f399c00610506325bc1690f0e68c6885e73395ec&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `84.21%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3837 +/- ##\n==========================================\n- Coverage 78.48% 78.48% -0.01% \n==========================================\n Files 106 106 \n Lines 17930 17934 +4 \n==========================================\n+ Hits 14072 14075 +3 \n- Misses 3858 3859 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.50% <82.35%> (-0.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.46% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=footer). Last update [f399c00...41502fa](https://codecov.io/gh/huggingface/transformers/pull/3837?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | This is very very minor cleanup
- adds `decode_batch` which calls `.decode` on every entry in a list.
- A few cosmetic changes to tokenization_utils.py (type hints using defaultdict)
- adds type hints in files I touched. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3837",
"html_url": "https://github.com/huggingface/transformers/pull/3837",
"diff_url": "https://github.com/huggingface/transformers/pull/3837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3837.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3836/comments | https://api.github.com/repos/huggingface/transformers/issues/3836/events | https://github.com/huggingface/transformers/pull/3836 | 601,656,875 | MDExOlB1bGxSZXF1ZXN0NDA0ODg2MjAw | 3,836 | Update camembert-base-README.md | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=h1) Report\n> Merging [#3836](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0c96fafd16d206b22a74fe76b251414f7314703&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3836 +/- ##\n==========================================\n+ Coverage 78.47% 78.48% +0.01% \n==========================================\n Files 106 106 \n Lines 17930 17930 \n==========================================\n+ Hits 14071 14073 +2 \n+ Misses 3859 3857 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=footer). Last update [f0c96fa...2142ca8](https://codecov.io/gh/huggingface/transformers/pull/3836?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Markdown for the table was broken so I fixed it in 60a42ef1c04591e0709429276ccbc02608b7d47d\r\n\r\nThank you @benjamin-mlr !",
"Thank you Julien !\n\nOn Sat, Apr 18, 2020 at 8:22 AM Julien Chaumond <[email protected]>\nwrote:\n\n> Markdown for the table was broken so I fixed it in 60a42ef\n> <https://github.com/huggingface/transformers/commit/60a42ef1c04591e0709429276ccbc02608b7d47d>\n>\n> Thank you @benjamin-mlr <https://github.com/benjamin-mlr> !\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/3836#issuecomment-615519941>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEHOJY2XTLDRZ255RTUWYSDRNDXDZANCNFSM4MKMNEYQ>\n> .\n>\n\n\n-- \nBenjamin Muller\n*Ms in Data Science, specialised in Deep Learning applied to NLP*\n*www.linkedin.com/in/ <http://www.linkedin.com/in/>benjamin-muller-19796191*\n",
"Hi @julien-c , \r\n\r\nA quick question regarding Pipeline and the new camembert checkpoints. \r\n\r\nThe pipeline \"fill-mask\" is not currently working for the new camembert checkpoints \r\ne.g : camembert_fill_mask = pipeline(\"fill-mask\",model=\"camembert/camembert-base-ccnet-4gb\",tokenizer=\"camembert-base\")\r\nError : \"Model name 'camembert/camembert-base-ccnet-4gb' was not found in model name list...\"\r\n\r\nShould we do something to make it work ? \r\n \r\nThanks ! ",
"@benjamin-mlr Works successfully for me. What's your version of transformers?",
"Hi Julien,\n\nI got confused by the warning and the \"not that accurate prediction\" on the\nmasked sentence I tried out. It works, I can confirm now.\n\nThanks,\nBenjamin\n\n\nOn Tue, Apr 21, 2020 at 9:28 AM Julien Chaumond <[email protected]>\nwrote:\n\n> @benjamin-mlr <https://github.com/benjamin-mlr> Works successfully for\n> me. What's your version of transformers?\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/3836#issuecomment-616895868>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEHOJY2K4DP2WI4XUM2QQKLRNTZDDANCNFSM4MKMNEYQ>\n> .\n>\n\n\n-- \nBenjamin Muller\n*Ms in Data Science, specialised in Deep Learning applied to NLP*\n*www.linkedin.com/in/ <http://www.linkedin.com/in/>benjamin-muller-19796191*\n",
"Yes I indeed noticed that that model was outputting weird predictions."
] | 1,587 | 1,588 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3836",
"html_url": "https://github.com/huggingface/transformers/pull/3836",
"diff_url": "https://github.com/huggingface/transformers/pull/3836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3836.patch",
"merged_at": 1587168493000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3835/comments | https://api.github.com/repos/huggingface/transformers/issues/3835/events | https://github.com/huggingface/transformers/issues/3835 | 601,435,600 | MDU6SXNzdWU2MDE0MzU2MDA= | 3,835 | Transfo-XL cannot generate long texts. Using run_generation.py to generate texts | {
"login": "AdaUchendu",
"id": 32556160,
"node_id": "MDQ6VXNlcjMyNTU2MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/32556160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdaUchendu",
"html_url": "https://github.com/AdaUchendu",
"followers_url": "https://api.github.com/users/AdaUchendu/followers",
"following_url": "https://api.github.com/users/AdaUchendu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdaUchendu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdaUchendu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdaUchendu/subscriptions",
"organizations_url": "https://api.github.com/users/AdaUchendu/orgs",
"repos_url": "https://api.github.com/users/AdaUchendu/repos",
"events_url": "https://api.github.com/users/AdaUchendu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdaUchendu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"What is the error?",
"So, first, it does not generate long texts. It usually generates texts with\nabout 10 tokens. And I think it is because of the warning:\nWARNING - transformers.modeling_utils - Setting `pad_token_id` to 0 (first\n`eos_token_id`) to generate sequence\n\nOther generators in run_generation.py are able to generate the length of\ntexts, specified in the command.\n\nTo recreate this error/warning. I loaded the git repo of huggingface,\ninstalled transformers and then ran the following on the command line:\ncd transformers/\npython examples/run_generation.\npy --model_type transfo-xl --model_name_or_path transfo-xl-wt103 \\\n --prompt\n\"China wants to take a victory lap over its handling of the\ncoronavirus outbreak\"\n --repetition 2.2 \\\n --length 500 \\\n --temperature 0.8 --k 8\n\nWhat do you think I am missing?\nI tried adding this argument to the command: -- pretrained_init_configuration\n{\"pad_token\":0}\nbecause in the other files transformers.modeling_utils, it has this command\nand the commands where imported into the run_generation.py file. But as I\nsuspected, this resulted in a Non-recognition error.\n\n\n\n\nOn Fri, Apr 17, 2020 at 7:39 AM singhay <[email protected]> wrote:\n\n> What is the error?\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3835#issuecomment-615197924>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYMJAGIR2TKDRIEEZJC63DRNA5WTANCNFSM4MKD6TAA>\n> .\n>\n\n\n-- \n*Adaku Uchendu*\n\n*McNair Scholar*\n*Mathematics major*\n*Statistic minor *\n*Math Lab Tutor*\n*Pre-Calculus LA*\n*University of Maryland, Baltimore County *\n*Class of 2018*\n",
"I mentioned this problem in a previous issue #3769. \r\n\r\nIf you read what Colanim said, basically, text generation stops at the eos token, and you can prevent that by specifying a min_length which forces the model to not generate a eos token until the min_length is reached.\r\n\r\nHowever, the generate() method (the method run_generation uses to generate text) has another argument called max_length, which specifies the maximum length generated. If you look in the code, the length argument is equivalent to the max_length argument in the generate method, meaning length only specifies the maximum length not the minimum. For other models like XLNet, this is not a problem as it doesn't generate eos tokens (only eop and eod). But, for Transformer-XL this causes it to stop short. \r\n\r\nYou could fix this problem by editing the script changing the min_length argument equal to the length argument and max_length equal to length+1 (Otherwise it uses the default max_length of 20 and will still stop short). \r\n\r\nHowever, right now both Transformer-XL and XLNet have an exponential time complexity, meaning if you want to generate a lot tokens it will take a long time, e.g., generating 4000 tokens will take 850 hours on a P100.\r\n\r\nSo, if you really need to generate long text check out [this](https://github.com/rusiaaman/XLnet-gen) repository, which uses XLNet and is able to generate a max of around 4000 tokens in 3 hours with 16 GB ram. If you are generating less than 1024 tokens you should use GPT-2 instead as it is faster, more coherent, and fine-tunable using the language modeling script. \r\n\r\n\r\n\r\n\r\n",
"Thank you but how did you get the Transformer-XL to generate long\ncoherent texts. Now it generates long texts but the article that is not\ncoherent.\n\nOn Fri, Apr 17, 2020 at 10:21 PM urlocal12 <[email protected]> wrote:\n\n> I mentioned this problem in a previous issue #3769\n> <https://github.com/huggingface/transformers/issues/3769>.\n>\n> If you read what Colanim said, basically, text generation stops at the eos\n> token, and you can prevent that by specifying a min_length which forces the\n> model to not generate a eos token until the min_length is reached.\n>\n> However, the generate() method (the method run_generation uses to generate\n> text) has another argument called max_length, which specifies the maximum\n> length generated. If you look in the code, the length argument is\n> equivalent to the max_length argument in the generate method, meaning\n> length only specifies the maximum length not the minimum. For other models\n> like XLNet, this is not a problem as it doesn't generate eos tokens (only\n> eop and eod). But, for Transformer-XL this causes it to stop short.\n>\n> You could fix this problem by editing the script changing the min_length\n> argument equal to the length argument and max_length equal to length+1\n> (Otherwise it uses the default max_length of 20 and will still stop short).\n>\n> However, right now both Transformer-XL and XLNet have an exponential time\n> complexity, meaning if you want to generate a lot tokens it will take a\n> long time, e.g., generating 4000 tokens will take 850 hours on a P100.\n>\n> So, if you really need to generate long text check out this\n> <https://github.com/rusiaaman/XLnet-gen> repository, which uses XLNet and\n> is able to generate a max of around 4000 tokens in 3 hours with 16 GB ram.\n> If you are generating less than 1024 tokens you should use GPT-2 instead as\n> it is faster, more coherent, and fine-tunable using the language modeling\n> script.\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3835#issuecomment-615540583>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYMJAAUVFARGLTRN2BN6FDRNEFCLANCNFSM4MKD6TAA>\n> .\n>\n\n\n-- \n*Adaku Uchendu*\n\n*McNair Scholar*\n*Mathematics major*\n*Statistic minor *\n*Math Lab Tutor*\n*Pre-Calculus LA*\n*University of Maryland, Baltimore County *\n*Class of 2018*\n",
"Closing for now since #3769 seems to be resolved.\r\n\r\nAlso note that the `text-generation` pipeline as shown here: https://huggingface.co/transformers/usage.html#text-generation should be used :-) "
] | 1,587 | 1,591 | 1,591 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Trying to generate long text with Transfo-XL Text-generator but I continuously get a
warning which may be the reason I am unable to generate the long text.
Here is the Warning: WARNING - transformers.modeling_utils - Setting `pad_token_id` to 0 (first `eos_token_id`) to generate sequence -->
This is how I run the code:
cd transformers/
python examples/run_generation.py --model_type transfo-xl --model_name_or_path transfo-xl-wt103 \
--prompt "China wants to take a victory lap over its handling of the coronavirus outbreak" --repetition 2.2 \
--length 500 \
--temperature 0.8 --k 8
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3834/comments | https://api.github.com/repos/huggingface/transformers/issues/3834/events | https://github.com/huggingface/transformers/issues/3834 | 601,381,666 | MDU6SXNzdWU2MDEzODE2NjY= | 3,834 | i want help to create saved model(.pth) from Pytorch Dump(pytorch_model.bin) if possible! | {
"login": "pumpkinband",
"id": 39296817,
"node_id": "MDQ6VXNlcjM5Mjk2ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/39296817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pumpkinband",
"html_url": "https://github.com/pumpkinband",
"followers_url": "https://api.github.com/users/pumpkinband/followers",
"following_url": "https://api.github.com/users/pumpkinband/following{/other_user}",
"gists_url": "https://api.github.com/users/pumpkinband/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pumpkinband/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pumpkinband/subscriptions",
"organizations_url": "https://api.github.com/users/pumpkinband/orgs",
"repos_url": "https://api.github.com/users/pumpkinband/repos",
"events_url": "https://api.github.com/users/pumpkinband/events{/privacy}",
"received_events_url": "https://api.github.com/users/pumpkinband/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"i used `transformers-cli convert` to make `python_model.bin` from checkpoints",
"Did you try to put in in quotes? If you have a model you should do `torch.save(model.state_dict(), PATH)`.\r\n\r\nPlease take a look at the [PyTorch documentation](https://pytorch.org/tutorials/beginner/saving_loading_models.html). We prefer using `model.save_pretrained(PATH)`, however, as it saves the configuration object alongside it which is necessary when loading the model afterwards."
] | 1,587 | 1,587 | 1,587 | NONE | null | i have, PROJECT(folder)
βββ pytorch_model.bin
βββ bert_config.json
βββ vocab.txt
i tried saving it with
`torch.save( pytorch_model.bin , PATH)`
but came with error
`-bash: syntax error near unexpected token `pytorch_model.bin,'`
what am i doing wrong ?
and please help me convert pretrained model to saved model( .pth)!!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3833/comments | https://api.github.com/repos/huggingface/transformers/issues/3833/events | https://github.com/huggingface/transformers/pull/3833 | 601,245,894 | MDExOlB1bGxSZXF1ZXN0NDA0NTM0NDY3 | 3,833 | Remove tqdm logging when using pipelines. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | Attempt to fix #3744
Introduce `tqdm_enabled` parameter on `squad_convert_examples_to_features()` default to `True` and set to `False` in QA pipelines. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3833/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3833",
"html_url": "https://github.com/huggingface/transformers/pull/3833",
"diff_url": "https://github.com/huggingface/transformers/pull/3833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3833.patch",
"merged_at": 1587416333000
} |
https://api.github.com/repos/huggingface/transformers/issues/3832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3832/comments | https://api.github.com/repos/huggingface/transformers/issues/3832/events | https://github.com/huggingface/transformers/issues/3832 | 601,244,776 | MDU6SXNzdWU2MDEyNDQ3NzY= | 3,832 | The issue I met when do the NER task for Universal language by using XLM-R | {
"login": "zongbingwang",
"id": 6621467,
"node_id": "MDQ6VXNlcjY2MjE0Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6621467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zongbingwang",
"html_url": "https://github.com/zongbingwang",
"followers_url": "https://api.github.com/users/zongbingwang/followers",
"following_url": "https://api.github.com/users/zongbingwang/following{/other_user}",
"gists_url": "https://api.github.com/users/zongbingwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zongbingwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zongbingwang/subscriptions",
"organizations_url": "https://api.github.com/users/zongbingwang/orgs",
"repos_url": "https://api.github.com/users/zongbingwang/repos",
"events_url": "https://api.github.com/users/zongbingwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zongbingwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,592 | 1,592 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
Firstly, go through the roughly logic when we do slot tagging task for English query, take query βwhatβs the weather in Beijingβ and detect location slot tagging as example:
1. We are using space to split the query and label it as βO O O O B-locationβ when generate the training data
2. By using sentence piece to tokenize the query to ['_what', "'", 's', '_the', '_weather', '_in', '_bei', 'jing'], and corresponding tagging mask [1, 0, 0, 1, 1, 1, 1, 0] (take first token in the word as 1, other tokens in the word as 0)
3. By using NER task to do the predict, and then extract the tokenβs prediction result as the word prediction result if the tokenβs mask is 1
When we do the NER task as above logic for CJK language, take βδ»ε€©ε京倩ζ°ζδΉζ ·οΌβ as example, after sentence piece tokenize, the tokens is ['β', 'εδΊ¬', 'δ»ε€©', '倩ζ°', 'ζδΉζ ·', '?'], since there are no space in the query, so it will take the whole query as one slot and the corresponding tagging mask is [1, 0, 0, 0, 0, 0], so we canβt get the location slot βεδΊ¬β
The cause is that the split methods for English and CJK are different, In French, βJe tβaimeβ is means βI love youβ, βJeβ means βIβ, βtββ means βyouβ, βaimeβ means βloveβ, it is using βββ as splitter.
I think we can solve it by using below solutions:
1. Detect what language it is firstly for the query
2. For the language likes English, we donβt change the logic as above, for the language likes CJK, after split by using space, we can use tokens generated by sentence piece as word,
a. For example βεδΊ¬δ»ε€©ε€©ζ°ζδΉζ ·οΌθ₯Ώι
εΎε’?β, split it as [βεδΊ¬δ»ε€©ε€©ζ°ζδΉζ ·β, βθ₯Ώι
εΎε’?β] as space
b. For βεδΊ¬δ»ε€©ε€©ζ°ζδΉζ ·β, after tokenized is: ['β', 'εδΊ¬', 'δ»ε€©', '倩ζ°', 'ζδΉζ ·', ',β], we mark each token as word we need to predict, the correspond tagging mask is [0, 1, 1, 1, 1, 0]
By using NER task to do the predict, and then extract the tokenβs prediction result as the word prediction result if the tokenβs mask is 1, so we can predict βεδΊ¬β as location slot
But for this, it needs to have specific language knowledge and language classification detector, any better ideas do you have? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3831/comments | https://api.github.com/repos/huggingface/transformers/issues/3831/events | https://github.com/huggingface/transformers/issues/3831 | 601,224,342 | MDU6SXNzdWU2MDEyMjQzNDI= | 3,831 | AlbertModel output is not HalfTensor when using apex fp16 | {
"login": "rasoolims",
"id": 1611317,
"node_id": "MDQ6VXNlcjE2MTEzMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1611317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasoolims",
"html_url": "https://github.com/rasoolims",
"followers_url": "https://api.github.com/users/rasoolims/followers",
"following_url": "https://api.github.com/users/rasoolims/following{/other_user}",
"gists_url": "https://api.github.com/users/rasoolims/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasoolims/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasoolims/subscriptions",
"organizations_url": "https://api.github.com/users/rasoolims/orgs",
"repos_url": "https://api.github.com/users/rasoolims/repos",
"events_url": "https://api.github.com/users/rasoolims/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasoolims/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @rasoolims, \r\n\r\nThis seems correct. Basically, using `opt_level=\"O1\"` means apex will add some downcasting to `float16` from a set of whitelisted operations, such as GEMM or Conv. This way, these operations will benefit from using Tensor Cores on latest devices, achieving higher throughput.. \r\n\r\nIn the other hand, for some operations you want the whole spectrum of representable values to keep a very high accuracy in the output, this is true for many activation functions such as `Softmax` or `GeLU`.\r\n\r\nWhat you observe here with your ouputs (`f1, m1`) is directly related to the dynamic downcasting of apex: \r\n\r\n- `f1` : Comes from a GeLU operation from AlbertaLayer, which is not downcasted to `float16`\r\n- `m1`: Comes from a Linear layer operation, which is implemented through `gemm` and would greatly benefits from using Tensor Cores.\r\n\r\nIn addition, you should not be doing any input type conversion / downcasting when using `opt_level=\"O1\"`.",
"Hi @mfuntowicz \r\nThanks for the response.\r\nDo you mean with the current setting, it is better to just use fp32? Or do you recommend changing the opt_level or activation function?\r\n",
"It really depends what you want to do and which level of Mixed-Precision training you want to achieve. \r\n\r\nWith O1, only the operations which can expected improvements by running specialised CUDA/CuDNN fp16 kernels (on Tensor Cores) will be patched to have fp16 weights and input/output conversions.\r\n\r\nWith 02, all the weights of the model are going to be converted to fp16 with the exception of some layers like Batch Norm, so you have a quasi complete fp16 training. \r\n\r\nWith O3, everything is run through fp16.",
"#thanks @mfuntowicz "
] | 1,587 | 1,593 | 1,587 | NONE | null | # π Bug
Despite the fact that I turned the model to fp16 with apex, the hidden representation output is not half tensor (see code snippet for details) while class heads are half tensors.
## Information
Model I am using AlbertModel.
Language I am using the model on arbitrary:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```py
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AlbertModel, AlbertConfig
from transformers.modeling_albert import AlbertMLMHead
import apex
pad_token_id = 0
bos_token_id = 2
eos_token_id = 3
vocab_size = 20
config = {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu_new",
"hidden_dropout_prob": 0.1,
"embedding_size": 64,
"hidden_size": 256,
"initializer_range": 0.02,
"intermediate_size": 1024,
"max_position_embeddings": 512,
"num_attention_heads": 2, # smaller than usual
"num_hidden_layers": 2, # smaller than usual
"num_hidden_groups": 1,
"net_structure_type": 0,
"gap_size": 0,
"num_memory_blocks": 0,
"inner_group_num": 1,
"down_scale_factor": 1,
"type_vocab_size": 2,
"vocab_size": vocab_size,
"pad_token_id": pad_token_id,
"bos_token_id": bos_token_id,
"eos_token_id": eos_token_id,
}
albert_config = AlbertConfig(**config)
encoder: AlbertModel = AlbertModel(albert_config)
masked_lm = AlbertMLMHead(albert_config)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.0001)
encoder = encoder.cuda()
model, optimizer = apex.amp.initialize(encoder, optimizer, opt_level="O1", )
# When giving LongTensor as input, the class heads are half tensors,
# but hidden representations are not half!
long_input = torch.randint(1, 10, (10,5)).cuda()
f1, m1= encoder(long_input)
f1.type()
"""
'torch.cuda.FloatTensor'
"""
m1.type()
"""
'torch.cuda.HalfTensor'
"""
# When giving HalfTensor as input, it crashes
half_input = long_input.half()
f2, m2= encoder(half_input)
"""
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/transformers/modeling_albert.py", line 570, in forward
input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/transformers/modeling_bert.py", line 173, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/mnt/castor/seas_home/r/rasooli/torch_env/lib64/python3.6/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError:
"""
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform:
- Python version: Python 3.6.10
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, NVIDIA-SMI 418.67; GeForce RTX 208
- Using distributed or parallel set-up in script?: Optional parallel
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3831/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3830/comments | https://api.github.com/repos/huggingface/transformers/issues/3830/events | https://github.com/huggingface/transformers/issues/3830 | 601,196,542 | MDU6SXNzdWU2MDExOTY1NDI= | 3,830 | Faster mask computation | {
"login": "rasoolims",
"id": 1611317,
"node_id": "MDQ6VXNlcjE2MTEzMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1611317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasoolims",
"html_url": "https://github.com/rasoolims",
"followers_url": "https://api.github.com/users/rasoolims/followers",
"following_url": "https://api.github.com/users/rasoolims/following{/other_user}",
"gists_url": "https://api.github.com/users/rasoolims/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasoolims/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasoolims/subscriptions",
"organizations_url": "https://api.github.com/users/rasoolims/orgs",
"repos_url": "https://api.github.com/users/rasoolims/repos",
"events_url": "https://api.github.com/users/rasoolims/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasoolims/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @rasoolims ,\r\n\r\nThat sounds reasonable what you are saying! I wanted to take a look into masking optimization in a couple of weeks. If you feel like it, you could also open a PR and we take a look together :-) ",
"Feel free to open a PR for this :-) Closing for now"
] | 1,587 | 1,591 | 1,591 | NONE | null | # π Feature request
Currently, the masking is done using the full prediction matrix which is both memory and computation inefficient. One example is in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L675). I think with Pytorch indexing, first and foremost, we don't need to do full mask matrix construction. Second, it can be much faster. Fairseq does a similar thing in [here](https://github.com/pytorch/fairseq/blob/cce6dcb1cca85955a82879ea5064fe8202e8f412/fairseq/models/roberta/model.py#L217)
## Motivation
If input is [n,m], currently the code creates a clone of [n,m] mask where non-masked inputs are -100. Then it does a full output projection: if input in [n, m] and hidden representation is [n, m, h], the final output will be a huge [n, m, v] if v is the vocabulary size. Instead we can think of mask indices of size k<< n* m, and thus the we can extract a [k, h] submatrix from [n, m, h], then can have a smaller output result [k, v].
## Your contribution
This a sample code (I extracted from a bigger code of mine; sorry if some variables are not defined in the snippet):
```py
mask_prob = 0.15
mask = torch.empty(input_ids.size()).uniform_(0, 1) < mask_prob
mask[pads] = False # We should not mask pads.
masked_ids = input_ids[mask]
replacements = masked_ids.clone()
for i in range(len(replacements)):
r = random.random()
if r < 0.8:
replacements[i] = mask_id
elif r < 0.9:
# Replace with another random word.
random_index = random.randint(0, vocab_size - 1)
replacements[i] = random_index
else:
# keep the word
pass
input_ids[mask] = replacements
text_hidden, text_cls_head = albert_model(texts, attention_mask=pads)
masked_hidden_state = text_hidden[mask]
output_predictions = F.log_softmax(albertMLMHead(masked_hidden_state), dim=1)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3830/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3829/comments | https://api.github.com/repos/huggingface/transformers/issues/3829/events | https://github.com/huggingface/transformers/issues/3829 | 601,180,126 | MDU6SXNzdWU2MDExODAxMjY= | 3,829 | Can't install transformers in conda environment | {
"login": "chen-bowen",
"id": 18410378,
"node_id": "MDQ6VXNlcjE4NDEwMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/18410378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chen-bowen",
"html_url": "https://github.com/chen-bowen",
"followers_url": "https://api.github.com/users/chen-bowen/followers",
"following_url": "https://api.github.com/users/chen-bowen/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-bowen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chen-bowen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-bowen/subscriptions",
"organizations_url": "https://api.github.com/users/chen-bowen/orgs",
"repos_url": "https://api.github.com/users/chen-bowen/repos",
"events_url": "https://api.github.com/users/chen-bowen/events{/privacy}",
"received_events_url": "https://api.github.com/users/chen-bowen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Looking at the error message, you seem to be running into an error with the sentencepiece package, not transformers.\r\n\r\nI looked at the sentencepiece GitHub repo and there is an open issue on this here:\r\nhttps://github.com/google/sentencepiece/issues/452",
"Looks like this issue can be closed now. @chen-bowen can you confirm?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This happened to me while installing Transformers. The issue is with sentnecepiece as stated above. I did the following steps: \r\n- To install sentencepiece: `conda install -c powerai sentencepiece`\r\nAfter, I did the usual _pip install transformers_. \r\nWas able to get it set and running. \r\n"
] | 1,587 | 1,603 | 1,596 | NONE | null | # π Bug
I tried to install transformers into a conda environment
```
pip install transformers
Collecting transformers
Using cached transformers-2.8.0-py3-none-any.whl (563 kB)
Collecting tokenizers==0.5.2
Downloading tokenizers-0.5.2-cp38-cp38-macosx_10_15_x86_64.whl (1.1 MB)
|ββββββββββββββββββββββββββββββββ| 1.1 MB 1.6 MB/s
Collecting tqdm>=4.27
Downloading tqdm-4.45.0-py2.py3-none-any.whl (60 kB)
|ββββββββββββββββββββββββββββββββ| 60 kB 23.6 MB/s
Collecting filelock
Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)
Collecting requests
Using cached requests-2.23.0-py2.py3-none-any.whl (58 kB)
Collecting regex!=2019.12.17
Using cached regex-2020.4.4.tar.gz (695 kB)
Collecting boto3
Using cached boto3-1.12.39-py2.py3-none-any.whl (128 kB)
Requirement already satisfied: numpy in ./Anaconda/anaconda3/envs/nlp/lib/python3.8/site-packages (from transformers) (1.18.2)
Collecting sentencepiece
Using cached sentencepiece-0.1.83.tar.gz (497 kB)
ERROR: Command errored out with exit status 1:
command: /Users/chen_bowen/Anaconda/anaconda3/envs/nlp/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py'"'"'; __file__='"'"'/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/pip-egg-info
cwd: /private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/lz/65hfhw790_z85w_09kvvm37r0000gn/T/pip-install-lezphia4/sentencepiece/setup.py", line 29, in <module>
with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
File "/Users/chen_bowen/Anaconda/anaconda3/envs/nlp/lib/python3.8/codecs.py", line 905, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '../VERSION'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.```
python version: Python 3.8.2
OS: Mac OSX 10.15.3
Anaconda version: conda 4.8.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3828/comments | https://api.github.com/repos/huggingface/transformers/issues/3828/events | https://github.com/huggingface/transformers/pull/3828 | 601,164,041 | MDExOlB1bGxSZXF1ZXN0NDA0NDYzOTg5 | 3,828 | Tanh torch warnings | {
"login": "aryanshomray",
"id": 50213704,
"node_id": "MDQ6VXNlcjUwMjEzNzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/50213704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aryanshomray",
"html_url": "https://github.com/aryanshomray",
"followers_url": "https://api.github.com/users/aryanshomray/followers",
"following_url": "https://api.github.com/users/aryanshomray/following{/other_user}",
"gists_url": "https://api.github.com/users/aryanshomray/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aryanshomray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aryanshomray/subscriptions",
"organizations_url": "https://api.github.com/users/aryanshomray/orgs",
"repos_url": "https://api.github.com/users/aryanshomray/repos",
"events_url": "https://api.github.com/users/aryanshomray/events{/privacy}",
"received_events_url": "https://api.github.com/users/aryanshomray/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=h1) Report\n> Merging [#3828](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1e2368b32f3af88a920dac47cfc02a869409b20&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3828 +/- ##\n=======================================\n Coverage 78.47% 78.47% \n=======================================\n Files 106 106 \n Lines 17924 17924 \n=======================================\n Hits 14066 14066 \n Misses 3858 3858 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/3828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `82.35% <ΓΈ> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=footer). Last update [b1e2368...06daacd](https://codecov.io/gh/huggingface/transformers/pull/3828?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | This pull request the warning generated by using torch.nn.functional.tanh (which is deprecated). This pull request changes it to torch.tanh to remove the warning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3828",
"html_url": "https://github.com/huggingface/transformers/pull/3828",
"diff_url": "https://github.com/huggingface/transformers/pull/3828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3828.patch",
"merged_at": 1587064236000
} |
https://api.github.com/repos/huggingface/transformers/issues/3827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3827/comments | https://api.github.com/repos/huggingface/transformers/issues/3827/events | https://github.com/huggingface/transformers/issues/3827 | 601,133,560 | MDU6SXNzdWU2MDExMzM1NjA= | 3,827 | ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package | {
"login": "stellaywu",
"id": 10345436,
"node_id": "MDQ6VXNlcjEwMzQ1NDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/10345436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stellaywu",
"html_url": "https://github.com/stellaywu",
"followers_url": "https://api.github.com/users/stellaywu/followers",
"following_url": "https://api.github.com/users/stellaywu/following{/other_user}",
"gists_url": "https://api.github.com/users/stellaywu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stellaywu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stellaywu/subscriptions",
"organizations_url": "https://api.github.com/users/stellaywu/orgs",
"repos_url": "https://api.github.com/users/stellaywu/repos",
"events_url": "https://api.github.com/users/stellaywu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stellaywu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990944155,
"node_id": "MDU6TGFiZWwxOTkwOTQ0MTU1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bertabs",
"name": "bertabs",
"color": "9ab22e",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"closing the issue \r\nchanged the from `.utils_summarization import ( `\r\nto ` from utils_summarization import ( `\r\nin run_summarization.py\r\nand solved the issue "
] | 1,587 | 1,587 | 1,587 | NONE | null | # π Bug
## Information
I am using ./examples/summarization/bertabs/
`python run_summarization.py \
--documents_dir $data_dir\
--summaries_output_dir$output_dir \
--no_cuda true \
--batch_size 4 \
--min_length 50 \
--max_length 200 \
--beam_size 5 \
--alpha 0.95 \
--block_trigram true
`
returns
`ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package`
My environment
- `transformers` version: 2.8.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): 1.1.0 (False)
- Tensorflow version (GPU?): 2.0.0-beta1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3826/comments | https://api.github.com/repos/huggingface/transformers/issues/3826/events | https://github.com/huggingface/transformers/issues/3826 | 601,129,719 | MDU6SXNzdWU2MDExMjk3MTk= | 3,826 | [readability] consolidate examples/summarization/bart and examples/summarization/t5 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | null | [] | [
"Hey @sshleifer !\r\nI would like to work on the issue. It's my first issue so I appreciate any help!",
"Awesome, helpful command to get quick feedback on whether your change is working:\r\n\r\n```bash\r\npytest --tb=short -p no:warnings examples/summarization/bart\r\n```\r\n\r\nMake sure you tag @sshleifer when you send a PR and I will give a careful review :)"
] | 1,587 | 1,591 | 1,591 | CONTRIBUTOR | null | This involves
- [ ] consolidating README.md
- [ ] consolidating evaluate_cnn.py scripts.
- [ ] evaluate_wmt.py should also work
- [ ] consolidating unittests
- [ ] updating bash scripts
- [ ] checking that consolidated stuff works, adding appropriate test coverage.
the resulting code should probably all be in `summarization/` with only a `bertabs/` subdirectory.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3826/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3825/comments | https://api.github.com/repos/huggingface/transformers/issues/3825/events | https://github.com/huggingface/transformers/issues/3825 | 601,125,823 | MDU6SXNzdWU2MDExMjU4MjM= | 3,825 | [readability] Consolidate prune_heads logic to PretrainedModel. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @sshleifer , \r\nI am new to opensource and would like to help out with this issue. Could you please point me to some guide to setting up the project locally.\r\nThanks.",
"I got the contributing.md file. Thanks anyways. :)",
"Hi @sshleifer \r\nI am looking for a contribution. Is the issue is still open?\r\nThanks and Regards",
"@Shandilya21 It looks open. I was trying to work on it and got stuck at a point. Do let me know if you would like to discuss",
"Hi @noelmat Okay, can you tell me where you got stuck? I am happy to discuss this.",
"Is anybody working on this? I'm new to open source but I'd like to give it a shot",
"Go for it!",
"> Is anybody working on this? I'm new to open source but I'd like to give it a shot\r\n\r\nYeah, I am in.. I also wanna work on this issue.\r\nIssue is to implement `prune_heads` as a method in `PretrainedModel`",
"@yugaljain1999 made some progress on this?",
"I think this is done. Happy to find new bugs if anyone is on the hunt!"
] | 1,587 | 1,594 | 1,594 | CONTRIBUTOR | null | Many models have identical implementations of `prune_heads` it would be nice to store that implementation as a method on `PretrainedModel` and reduce the redundancy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3825/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3825/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3824/comments | https://api.github.com/repos/huggingface/transformers/issues/3824/events | https://github.com/huggingface/transformers/pull/3824 | 601,121,396 | MDExOlB1bGxSZXF1ZXN0NDA0NDI3Mjc2 | 3,824 | [examples] summarization/bart/finetune.py supports t5 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Super! LGTM",
"Thanks @sshleifer for the quick fix. Just a small query where it will save the output sentences. A file is generated in output_dir with only losses specified when --do_predict is passed in Argument.\r\nWhat if I want to generate for unknown inputs using fine-tuned model."
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | - we were passing attention_mask as an arg, not a kwarg, causing `test_step` to break.
- That case is now covered in the unittest, unittests also cover the t5 model.
- renamed run_bart_sum.py to finetune.py since it is model agnostic.
- The `bart/` and `t5` subdirectories should be consolidated in a future PR.
This took 15 mins because of underlying infrastructure: unittests for examples and tiny models on S3 :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3824/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3824/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3824",
"html_url": "https://github.com/huggingface/transformers/pull/3824",
"diff_url": "https://github.com/huggingface/transformers/pull/3824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3824.patch",
"merged_at": 1587064519000
} |
https://api.github.com/repos/huggingface/transformers/issues/3823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3823/comments | https://api.github.com/repos/huggingface/transformers/issues/3823/events | https://github.com/huggingface/transformers/issues/3823 | 601,070,523 | MDU6SXNzdWU2MDEwNzA1MjM= | 3,823 | lowercasing on LM with cased models | {
"login": "boxorange",
"id": 49205846,
"node_id": "MDQ6VXNlcjQ5MjA1ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/49205846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boxorange",
"html_url": "https://github.com/boxorange",
"followers_url": "https://api.github.com/users/boxorange/followers",
"following_url": "https://api.github.com/users/boxorange/following{/other_user}",
"gists_url": "https://api.github.com/users/boxorange/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boxorange/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boxorange/subscriptions",
"organizations_url": "https://api.github.com/users/boxorange/orgs",
"repos_url": "https://api.github.com/users/boxorange/repos",
"events_url": "https://api.github.com/users/boxorange/events{/privacy}",
"received_events_url": "https://api.github.com/users/boxorange/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Unfortunately, this is because these models didn't get uploaded with the `tokenizer_config.json` specifying they shouldn't be lowercased.\r\n\r\ncc @julien-c ",
"I see. Thanks for the explanation. I think it would be helpful to mention that in the instruction or somewhere for newbies like me though. They may overlook it unless they actually test a model before training it.",
"Yes, if those models really are lowercase, we should add a `tokenizer_config.json` (and let their authors/uploaders know about it). Also cc @patrickvonplaten ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Execute 'run_language_modeling.py' with one of the following cased models.
(example of cased models: sciBert cased, BioBert, ClinicalBert)
2. Check the tokenizer's do_lower_case option.
## Expected behavior
I'm language modeling with my own data on the top of a pre-trained model. I'm using cased models, but the tokenizer lower case input data since its default lowercase option is True. When I used 'bert-base-cased', the tokenizer didn't lower case, but it happened with other cased models mentioned above.
- tokens with 'bert-base-cased' model
['[CLS]', 'This', 'text', 'is', 'included', 'to', 'make', 'sure', 'Uni', '##code',...
- tokens with 'scibert_scivocab_cased' model
['[CLS]', 'this', 'text', 'is', 'included', 'to', 'make', 'sure', 'unic', '##ode',...
Is it a bug? or am I missing something?
As an alternative, I'm using a workaround code by adding additional command parameter.
```python
parser.add_argument("--do_lower_case", action="store_true", help="Should be added for uncased models.")
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir, do_lower_case=args.do_lower_case)
```
Thanks in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3822/comments | https://api.github.com/repos/huggingface/transformers/issues/3822/events | https://github.com/huggingface/transformers/issues/3822 | 601,065,394 | MDU6SXNzdWU2MDEwNjUzOTQ= | 3,822 | getting random results when running run_glue | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi\r\nAny comment on this? I also tested with these versions, run_glue with BERT gets fully random results. Like this I cannot run experiments, could you please have a look?\r\n \r\npython 3.6.9 h265db76_0 \r\npytorch 1.2.0 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch\r\ntorchvision 0.4.0 py36_cu100 pytorch\r\ntransformers 2.5.0 <pip>\r\n\r\nthanks\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,592 | 1,592 | NONE | null | Hi
I am running run_glue.py script on RTE dataset with BERT base model, on different gpus and I am getting very random results, like changing a lot by changing the gpu. I am using python 3.6 and transformer version 2.5.0. I tried with two gpu types like
Kepler , GTX1080ti and P40, ...
such randomness really affects the benchmarking and appreciate your help. thanks
I have a deadline and appreciate your prompt response.
thanks.
Best
Rabeeh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3822/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3821/comments | https://api.github.com/repos/huggingface/transformers/issues/3821/events | https://github.com/huggingface/transformers/pull/3821 | 601,027,248 | MDExOlB1bGxSZXF1ZXN0NDA0MzQ4MDI0 | 3,821 | Typo fix | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3821/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3821",
"html_url": "https://github.com/huggingface/transformers/pull/3821",
"diff_url": "https://github.com/huggingface/transformers/pull/3821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3821.patch",
"merged_at": 1587049473000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3820/comments | https://api.github.com/repos/huggingface/transformers/issues/3820/events | https://github.com/huggingface/transformers/pull/3820 | 600,995,719 | MDExOlB1bGxSZXF1ZXN0NDA0MzIxNTMx | 3,820 | #3787 Fixing the pip install issue by installing from git | {
"login": "JonathanSum",
"id": 21982975,
"node_id": "MDQ6VXNlcjIxOTgyOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/21982975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonathanSum",
"html_url": "https://github.com/JonathanSum",
"followers_url": "https://api.github.com/users/JonathanSum/followers",
"following_url": "https://api.github.com/users/JonathanSum/following{/other_user}",
"gists_url": "https://api.github.com/users/JonathanSum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonathanSum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonathanSum/subscriptions",
"organizations_url": "https://api.github.com/users/JonathanSum/orgs",
"repos_url": "https://api.github.com/users/JonathanSum/repos",
"events_url": "https://api.github.com/users/JonathanSum/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonathanSum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,594 | 1,594 | CONTRIBUTOR | null | #3787 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3820",
"html_url": "https://github.com/huggingface/transformers/pull/3820",
"diff_url": "https://github.com/huggingface/transformers/pull/3820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3820.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3819/comments | https://api.github.com/repos/huggingface/transformers/issues/3819/events | https://github.com/huggingface/transformers/issues/3819 | 600,871,020 | MDU6SXNzdWU2MDA4NzEwMjA= | 3,819 | Tokenizers Notebook Issue | {
"login": "uunal",
"id": 2520197,
"node_id": "MDQ6VXNlcjI1MjAxOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2520197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uunal",
"html_url": "https://github.com/uunal",
"followers_url": "https://api.github.com/users/uunal/followers",
"following_url": "https://api.github.com/users/uunal/following{/other_user}",
"gists_url": "https://api.github.com/users/uunal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uunal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uunal/subscriptions",
"organizations_url": "https://api.github.com/users/uunal/orgs",
"repos_url": "https://api.github.com/users/uunal/repos",
"events_url": "https://api.github.com/users/uunal/events{/privacy}",
"received_events_url": "https://api.github.com/users/uunal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"pip version of installation has a problem, dependencies are not defined so it installs tokenizers-0.5.2 with transformers 2.8.0. \r\nDownload from source and don't care warning of dependency, works fine :) "
] | 1,587 | 1,587 | 1,587 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
Hello everyone,
While playing around with tokenizers notebook : https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb
`---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
----> 1 tokenizer = Tokenizer(BPE()) # byte-pair encoding model
2 # now using normalizers
3 tokenizer.normalizer = Sequence([
4 NFKC(),
5 Lowercase()
TypeError: cannot create 'BPE' instances`
Could not find a resolution for this. thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3819/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3818/comments | https://api.github.com/repos/huggingface/transformers/issues/3818/events | https://github.com/huggingface/transformers/issues/3818 | 600,849,737 | MDU6SXNzdWU2MDA4NDk3Mzc= | 3,818 | What are the GPU RAM requirements of popular models? | {
"login": "r0levrai",
"id": 22660388,
"node_id": "MDQ6VXNlcjIyNjYwMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22660388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r0levrai",
"html_url": "https://github.com/r0levrai",
"followers_url": "https://api.github.com/users/r0levrai/followers",
"following_url": "https://api.github.com/users/r0levrai/following{/other_user}",
"gists_url": "https://api.github.com/users/r0levrai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r0levrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r0levrai/subscriptions",
"organizations_url": "https://api.github.com/users/r0levrai/orgs",
"repos_url": "https://api.github.com/users/r0levrai/repos",
"events_url": "https://api.github.com/users/r0levrai/events{/privacy}",
"received_events_url": "https://api.github.com/users/r0levrai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @r0levrai,\r\n\r\nGood question! We are actually thinking about a good visualization for exactly that. Maybe in 2,3 weeks :-) \r\n\r\nWe already have a very useful script to test RAM requirements which you can find here: \r\n`https://github.com/huggingface/transformers/blob/master/examples/benchmarks.py`",
"It should work with any model for a given `batch_size` and `sequence_length`. Let me know if you encounter problems with the script!",
"any updates on that visualization? π \r\n\r\nAlso that link 404's now.",
"Hey @thesofakillers - note that we don't support benchmarking utils in Transformers anymore"
] | 1,587 | 1,661 | 1,587 | NONE | null | # β Questions & Help
What are the GPU RAM requirement of `gpt2`, `gpt2-medium`, `distilgpt2`, `bert-base-uncased` and/or `distilroberta-base`
* for training?
* for inference?
Additionally, how do you calculate or find this information for other models?
original StackOverflow question: https://stackoverflow.com/questions/61226569/what-are-the-gpu-ram-requirements-of-popular-huggingface-transformers-models
related: #1750 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3818/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3817/comments | https://api.github.com/repos/huggingface/transformers/issues/3817/events | https://github.com/huggingface/transformers/pull/3817 | 600,815,036 | MDExOlB1bGxSZXF1ZXN0NDA0MTcyMzA1 | 3,817 | [Examples, T5] Change newstest2013 to newstest2014 and clean up | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | MEMBER | null | This PR just adds a small change to #3802 to make the code quality happy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3817/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3817",
"html_url": "https://github.com/huggingface/transformers/pull/3817",
"diff_url": "https://github.com/huggingface/transformers/pull/3817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3817.patch",
"merged_at": 1587060042000
} |
https://api.github.com/repos/huggingface/transformers/issues/3816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3816/comments | https://api.github.com/repos/huggingface/transformers/issues/3816/events | https://github.com/huggingface/transformers/issues/3816 | 600,788,894 | MDU6SXNzdWU2MDA3ODg4OTQ= | 3,816 | Aborted (core dumped) or Kernel dies | {
"login": "yashwatwani",
"id": 22069499,
"node_id": "MDQ6VXNlcjIyMDY5NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22069499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashwatwani",
"html_url": "https://github.com/yashwatwani",
"followers_url": "https://api.github.com/users/yashwatwani/followers",
"following_url": "https://api.github.com/users/yashwatwani/following{/other_user}",
"gists_url": "https://api.github.com/users/yashwatwani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashwatwani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashwatwani/subscriptions",
"organizations_url": "https://api.github.com/users/yashwatwani/orgs",
"repos_url": "https://api.github.com/users/yashwatwani/repos",
"events_url": "https://api.github.com/users/yashwatwani/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashwatwani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you mind showing us the lines that crash? Do you have a reproducible example?",
"+1. @yashwatwani do you have resolved it? I have the same problem too.\r\n\r\nIf I install transformers 2.8.0, it will produce error:\r\n```\r\n[1] 11267 segmentation fault (core dumped) PYTHONPATH=. python apps/absa/main.py\r\n```\r\n\r\nIf I upgrade to the latest version 2.11.0, no error happens.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,597 | 1,597 | NONE | null | Whenever I am trying to import tranfomers my kernel dies off in jupyter notebook .
tranformer version - 2.8.0
python version -3.7.7
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3816/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3816/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3815/comments | https://api.github.com/repos/huggingface/transformers/issues/3815/events | https://github.com/huggingface/transformers/issues/3815 | 600,764,359 | MDU6SXNzdWU2MDA3NjQzNTk= | 3,815 | How to speed up getting answers? | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,592 | 1,592 | NONE | null | Hi ,
I'm facing the issues using BertForQuestionAnswering ,can you please help me fixing these:
1. I'm trying to use BertForQuestionAnswering pretrained model to get answers from news,
answer_question is the function where in you pass the context and the question and get the relevant answers,but if I have 100 contexts its taking around 100 seconds to get the answer,may I please know by any chance I can get the answers in much lesser time.
2. Some times I get the answer where in start index and end index are pointing to [SEP],that means the whole context,can I avoid this.
```
BERT_SQUAD = 'bert-large-uncased-whole-word-masking-finetuned-squad'
model = BertForQuestionAnswering.from_pretrained(BERT_SQUAD)
tokenizer = BertTokenizer.from_pretrained(BERT_SQUAD)
def answer_question(question, context):
"""
Answer questions
"""
try:
print("Type:",type(context))
print(context)
encoded_dict = tokenizer.encode_plus(
question, context, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 256, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt' # Return pytorch tensors.
)
print(encoded_dict)
input_ids = encoded_dict['input_ids'].to(torch_device)
token_type_ids = encoded_dict['token_type_ids'].to(torch_device) # segments
start_scores, end_scores = model(input_ids, token_type_ids=token_type_ids)
print('Start Scores:',start_scores)
print('End Scores:',end_scores)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
print(all_tokens)
answer = tokenizer.convert_tokens_to_string(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer = answer.replace('[CLS]', '')
#answer= answer.replace('[SEP]','')
print("Start index:",all_tokens[torch.argmax(start_scores)])
print("End index:",all_tokens[torch.argmax(end_scores)])
print(answer)
except ValueError:
print("Error in fetching answer")
answer=''
return answer
```
Thanks in advance!!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3815/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3814/comments | https://api.github.com/repos/huggingface/transformers/issues/3814/events | https://github.com/huggingface/transformers/issues/3814 | 600,642,093 | MDU6SXNzdWU2MDA2NDIwOTM= | 3,814 | A bug in the padding of input examples in the NER fine-tuning example | {
"login": "AMR-KELEG",
"id": 8365743,
"node_id": "MDQ6VXNlcjgzNjU3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8365743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AMR-KELEG",
"html_url": "https://github.com/AMR-KELEG",
"followers_url": "https://api.github.com/users/AMR-KELEG/followers",
"following_url": "https://api.github.com/users/AMR-KELEG/following{/other_user}",
"gists_url": "https://api.github.com/users/AMR-KELEG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AMR-KELEG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AMR-KELEG/subscriptions",
"organizations_url": "https://api.github.com/users/AMR-KELEG/orgs",
"repos_url": "https://api.github.com/users/AMR-KELEG/repos",
"events_url": "https://api.github.com/users/AMR-KELEG/events{/privacy}",
"received_events_url": "https://api.github.com/users/AMR-KELEG/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This PR https://github.com/huggingface/transformers/pull/3803 might be related to the bug but my initial thought is that it will not fix it.",
"Hello! Why do you think it should return 3 for RoBERTa models?\r\n\r\nIt should return 2, single sequences for RoBERTa are built like: \r\n\r\n`<s> tok_0 ... tok_n </s>`, \r\n\r\nwith only two special tokens added.\r\n\r\nFor sequence pairs however, 4 tokens are added: \r\n\r\n`<s> tok_0 ... tok_n </s></s> tok_(n + 1) ... tok2n </s>`",
"> Hello! Why do you think it should return 3 for RoBERTa models?\r\n> \r\n> It should return 2, single sequences for RoBERTa are built like:\r\n> \r\n> `<s> tok_0 ... tok_n </s>`,\r\n> \r\n> with only two special tokens added.\r\n> \r\n> For sequence pairs however, 4 tokens are added:\r\n> \r\n> `<s> tok_0 ... tok_n </s></s> tok_(n + 1) ... tok2n </s>`\r\n\r\nWell, this line suggested this:\r\nhttps://github.com/huggingface/transformers/blob/c59b1e682d6ebaf7295c63418d4570228904e690/examples/ner/utils_ner.py#L122\r\n\r\nAdditionally, the current code produced lists of length > `max_seq_length` so for sure there is a problem there.",
"I had an issue with the running the NER model. In this commit https://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd `num_added_tokens` got changed into `num_special_tokens_to_add`. Just changing the name of the variable in the `utils_ner.py` fixed the issue for me. However, I had an issue with variable name not being found. Let me know if this fixes you problem.",
"> I had an issue with the running the NER model. In this commit [96ab75b](https://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd) `num_added_tokens` got changed into `num_special_tokens_to_add`. Just changing the name of the variable in the `utils_ner.py` fixed the issue for me. However, I had an issue with variable name not being found. Let me know if this fixes you problem.\r\n\r\nHi @TarasPriadka \r\nYes, the edit you have suggested solved the problem.\r\nI have found that you have already reported the issue before (https://github.com/huggingface/transformers/issues/3686).\r\nDon't you think that we should open a simple Pull Request to fix this problem?",
"@AMR-KELEG I think it got fixed just now with this huggingface/transformers#3800 PR",
"@TarasPriadka, @AMR-KELEG \r\n\r\nI had a similar issue using `preprocess.py` on an NER dataset.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"preprocess.py\", line 12, in <module>\r\n max_len -= tokenizer.num_special_tokens_to_add()\r\nAttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add'\r\n```\r\n\r\nI think the PyPi file hasn't been updated, so `pip install transformers` won't have the files you need. I built from source and the errors went away. If you try building from source, I think your problem might go away too. ",
"> @TarasPriadka, @AMR-KELEG\r\n> \r\n> I had a similar issue using `preprocess.py` on an NER dataset.\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"preprocess.py\", line 12, in <module>\r\n> max_len -= tokenizer.num_special_tokens_to_add()\r\n> AttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add'\r\n> ```\r\n> \r\n> I think the PyPi file hasn't been updated, so `pip install transformers` won't have the files you need. I built from source and the errors went away. If you try building from source, I think your problem might go away too.\r\n\r\nWell, I was using the source version but as said before, seems like the bug was there and got fixed in later commits.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,594 | 1,594 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. TODO
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
https://github.com/huggingface/transformers/blob/c59b1e682d6ebaf7295c63418d4570228904e690/examples/ner/utils_ner.py#L123
This line is supposed to return 3 for Roberta models but it's just returning 2 causing the length of the input_ids to be more than the max_seq_len.
This might be the reason for that: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_roberta.py#L288
TODO: Share the notebook.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0-rc2 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3813/comments | https://api.github.com/repos/huggingface/transformers/issues/3813/events | https://github.com/huggingface/transformers/issues/3813 | 600,622,906 | MDU6SXNzdWU2MDA2MjI5MDY= | 3,813 | T5 prediction using fine-tuned model | {
"login": "prabalbansal",
"id": 30004110,
"node_id": "MDQ6VXNlcjMwMDA0MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30004110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabalbansal",
"html_url": "https://github.com/prabalbansal",
"followers_url": "https://api.github.com/users/prabalbansal/followers",
"following_url": "https://api.github.com/users/prabalbansal/following{/other_user}",
"gists_url": "https://api.github.com/users/prabalbansal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabalbansal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabalbansal/subscriptions",
"organizations_url": "https://api.github.com/users/prabalbansal/orgs",
"repos_url": "https://api.github.com/users/prabalbansal/repos",
"events_url": "https://api.github.com/users/prabalbansal/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabalbansal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer @patrickvonplaten ",
"Hi, I am curious about how you fine-tined T5. Did you used the run_bart_sum.py script by changing the model type from bart to T5? Thanks!",
"@sshleifer - could you take a look at this if you find some time? `T5` should more or less work out-of-the-box with the `run_bart_sum` script no? ",
"@MichaelZhouwang yes. Please look at this. #3576 "
] | 1,586 | 1,587 | 1,587 | NONE | null | After fine-tuning the T5 model on my own dataset, when I use the fine-tuned model to predict for test set using the following command:
python '/content/transformers-master/examples/summarization/bart/run_bart_sum.py' --data_dir='/content/drive/My Drive/two_keywords/' --model_type=t5 --output_dir=/content/t5 --do_predict --model_name_or_path=t5-small
Error generated:
<img width="1010" alt="Screenshot 2020-04-13 at 6 18 47 PM" src="https://user-images.githubusercontent.com/30004110/79394604-31692980-7f78-11ea-8c87-3c04e542e962.png">
# β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3813/timeline | completed | null | null |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.