url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4920/comments | https://api.github.com/repos/huggingface/transformers/issues/4920/events | https://github.com/huggingface/transformers/pull/4920 | 636,521,373 | MDExOlB1bGxSZXF1ZXN0NDMyNjc4NDM0 | 4,920 | Support multiple choice in tf common model tests | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=h1) Report\n> Merging [#4920](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d63ca6c38cc0f583cdec4c3efcfce13c0a41fdc&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4920 +/- ##\n==========================================\n- Coverage 77.10% 77.05% -0.05% \n==========================================\n Files 128 128 \n Lines 21617 21618 +1 \n==========================================\n- Hits 16667 16657 -10 \n- Misses 4950 4961 +11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `95.16% <100.00%> (+0.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.31% <0.00%> (-2.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=footer). Last update [5d63ca6...05e5aa7](https://codecov.io/gh/huggingface/transformers/pull/4920?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Very clean. LGTM!"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | This is the same as #4886, but for TensorFlow (my first time ever coding in TF!) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4920/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4920",
"html_url": "https://github.com/huggingface/transformers/pull/4920",
"diff_url": "https://github.com/huggingface/transformers/pull/4920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4920.patch",
"merged_at": 1591885886000
} |
https://api.github.com/repos/huggingface/transformers/issues/4919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4919/comments | https://api.github.com/repos/huggingface/transformers/issues/4919/events | https://github.com/huggingface/transformers/issues/4919 | 636,520,368 | MDU6SXNzdWU2MzY1MjAzNjg= | 4,919 | File is not found due to extension | {
"login": "halidziya",
"id": 9038065,
"node_id": "MDQ6VXNlcjkwMzgwNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9038065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/halidziya",
"html_url": "https://github.com/halidziya",
"followers_url": "https://api.github.com/users/halidziya/followers",
"following_url": "https://api.github.com/users/halidziya/following{/other_user}",
"gists_url": "https://api.github.com/users/halidziya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/halidziya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/halidziya/subscriptions",
"organizations_url": "https://api.github.com/users/halidziya/orgs",
"repos_url": "https://api.github.com/users/halidziya/repos",
"events_url": "https://api.github.com/users/halidziya/events{/privacy}",
"received_events_url": "https://api.github.com/users/halidziya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | Hi,
In configurations where the internet is not available, the system searches the cache directory. However, `url_to_filename(url, etag)` returns the filename with an extension on Windows, so this line is not able to find the cached file. One remedy could be using `filename.split('.')[0]` instead of `filename`:
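A hypothetical illustration of the suggested remedy (made-up helper name; the real code is at the link below):
```python
import os

# Sketch only, not the actual file_utils.py logic: match cache entries by the
# extension-less stem so a Windows-added extension no longer breaks the
# offline lookup.
def matching_cache_files(cache_dir, filename):
    stem = filename.split('.')[0]
    return [f for f in os.listdir(cache_dir) if f.split('.')[0] == stem]
```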
https://github.com/huggingface/transformers/blob/466aa57a45bfb9fc47d4b75d22c02c34b4b4b0fc/src/transformers/file_utils.py#L404 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4918/comments | https://api.github.com/repos/huggingface/transformers/issues/4918/events | https://github.com/huggingface/transformers/issues/4918 | 636,516,876 | MDU6SXNzdWU2MzY1MTY4NzY= | 4,918 | Pegasus for summarization ! | {
"login": "jpcorb20",
"id": 17169406,
"node_id": "MDQ6VXNlcjE3MTY5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/17169406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpcorb20",
"html_url": "https://github.com/jpcorb20",
"followers_url": "https://api.github.com/users/jpcorb20/followers",
"following_url": "https://api.github.com/users/jpcorb20/following{/other_user}",
"gists_url": "https://api.github.com/users/jpcorb20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpcorb20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpcorb20/subscriptions",
"organizations_url": "https://api.github.com/users/jpcorb20/orgs",
"repos_url": "https://api.github.com/users/jpcorb20/repos",
"events_url": "https://api.github.com/users/jpcorb20/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpcorb20/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks! The model checkpoints are available actually. [Check here](https://github.com/google-research/pegasus#install-library-and-dependencies) :)",
"Hope to provide a pytorch version code ",
"I might try the Huggingface's weight transfer code from tensorflow to pytorch in July if nobody's working on this post ",
"Work has started on this, but we are still a few weeks out. ",
"Just wanted to know when this model will be available",
"We're a little behind schedule. I'd say 60% by August 1, 90% by Sept 1.",
"this is awesome.",
"Very cool! Can it also be evaluated with Bert-Score?",
"Can't wait for this... ",
"Converted torch checkpoints are now available on master if you build from source.\r\n[Here](https://huggingface.co/models?search=pegasus) is a list of available checkpoints.\r\nPR: #6340 \r\n\r\nUsage:\r\n\r\n```python\r\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\nsrc_text = [\r\n \"\"\" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.\"\"\"\r\n]\r\n\r\nmodel_name = 'google/pegasus-xsum'\r\ntorch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\r\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)\r\nbatch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)\r\ntranslated = model.generate(**batch)\r\ntgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\nassert tgt_text[0] == \"California's largest electricity provider has turned off power to tens of thousands of customers.\"\r\n```\r\n\r\nPlease make a **new issue** if you encounter a bug with the torch checkpoints and assign @sshleifer .\r\nFor conceptual/how to questions, ask on discuss.huggingface.co, (you can also tag @sshleifer. )\r\n\r\nStill TODO:\r\n- Tensorflow 2.0 implementation.\r\n- ROUGE score is slightly worse than the original paper because we don't implement length penalty the same way. If anyone wants to try it, see #6420 .\r\n- fp16 doesn't work for generation or finetuning\r\n- I have not tried finetuning yet, no guarantees on that working well or replicating the paper.",
"I assume these checkpoints are based on Mixed & Stochastic models, as opposed to models trained exclusively on either C4 or HugeNews?",
"Yes!",
"@sshleifer I am trying this code on Colab but running into below error. Can you let me know what is the issue?\r\n\r\n`ImportError: cannot import name 'PegasusForConditionalGeneration'`",
"I'm having the same issue as @chetanambi \r\n",
"I think you need to install from source, it's not part of the latest release. (will be in the next release).",
"@sshleifer :\r\n\r\nfor the following model:\r\nmodel_name = 'google/pegasus-cnn_dailymail';\r\n\r\nI encountered this error when running:\r\n`translated = model.generate(**batch)`\r\n'---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-42-635894de22cc> in <module>\r\n 1 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)\r\n----> 2 translated = model.generate(**batch)\r\n 3 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n 13 def decorate_context(*args, **kwargs):\r\n 14 with self:\r\n---> 15 return func(*args, **kwargs)\r\n 16 return decorate_context\r\n 17 \r\n\r\n~/projects/transformers/src/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs)\r\n 394 encoder = self.get_encoder()\r\n 395 \r\n--> 396 encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)\r\n 397 \r\n 398 # Expand input ids if num_beams > 1 or num_return_sequences > 1\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict)\r\n 328 \r\n 329 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale\r\n--> 330 embed_pos = self.embed_positions(input_ids)\r\n 331 x = inputs_embeds + embed_pos\r\n 332 x = self.layernorm_embedding(x)\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n 13 def decorate_context(*args, **kwargs):\r\n 14 with self:\r\n---> 15 return func(*args, **kwargs)\r\n 16 return decorate_context\r\n 17 \r\n\r\n~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, use_cache)\r\n 1337 # starts at 0, ends at 1-seq_len\r\n 1338 positions = torch.arange(seq_len, dtype=torch.long, device=self.weight.device)\r\n-> 1339 return super().forward(positions)\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 122 \r\n 123 def forward(self, input: Tensor) -> Tensor:\r\n--> 124 return F.embedding(\r\n 125 input, self.weight, self.padding_idx, self.max_norm,\r\n 126 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n\r\n~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/functional.py in 
embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1812 # remove once script supports set_grad_enabled\r\n 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1815 \r\n 1816 \r\n\r\nIndexError: index out of range in self'",
"@yxyzzz can you make a new issue and follow the bug-report template. I can't reproduce based on what you've provided. Thanks!",
"> I think you need to install from source, it's not part of the latest release. (will be in the next release).\r\n\r\nCould you please let me know how to do this. Thanks!!",
"@chetanambi The instructions are provided [here](https://github.com/huggingface/transformers#from-source)",
"@sshleifer\r\nI installed transformers from the source using the current `master` branch. \r\n\r\n```\r\nI experience the following issue. \r\n\r\n>>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/ubuntu/env5/lib/python3.6/site-packages/transformers/__init__.py\", line 21, in <module>\r\n from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig\r\n File \"/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_albert.py\", line 18, in <module>\r\n from .configuration_utils import PretrainedConfig\r\n File \"/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_utils.py\", line 24, in <module>\r\n from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url\r\n File \"/home/ubuntu/env5/lib/python3.6/site-packages/transformers/file_utils.py\", line 32, in <module>\r\n from .utils import logging\r\nModuleNotFoundError: No module named 'transformers.utils'\r\n```\r\n\r\n**question** It is the problem with the current `master`. How many commits do I need to rollback to sucsessuly run PEGASUS before September release? \r\n\r\nThank you in advance for the info!\r\n",
"master fixed by #6754 .",
"> master fixed by #6754 .\r\n\r\n@sshleifer \r\n\r\n**(1)** I confirm that `master` is working now. So I was able to successfully run PEGASUS.\r\n\r\n**(2)** Is there any way to control a length of a resulting summary made by PEGASUS? I would like to generate longer summaries.",
"> **(2)** Is there any way to control a length of a resulting summary made by PEGASUS? I would like to generate longer summaries.\r\n\r\n@andrei-volkau \r\n\r\nYou can (1) fine-tune PEGASUS on a customised dataset which has longer summaries (2) tune the hyper-parameter `beam_alpha` which can lead to slightly longer/shorter summaries.\r\n",
"`beam_alpha` is called \"length penalty\" in this repo.\r\n\r\nBe that `length_penalty` is named confusingly: (#4915)\r\n\r\n- Increasing `length_penalty` will result in longer generations. \r\n- Decreasing `length_penalty` will result in shorter generations.\r\n- the formula differs slightly from the pegasus paper (#6420)",
"Is there a short finetuning example somewhere?",
"Nothing short. Finetuning with `examples/seq2seq/finetune.py` https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh is almost ready (will be ready after #6654). To use that you should read the README.MD which covers how to format your data.",
"> @chetanambi The instructions are provided [here](https://github.com/huggingface/transformers#from-source)\r\n\r\nI was able to run the models successfully. During the summarization I would like to run with different beam size. How can I do this?\r\n\r\nThanks!!",
"Interesting, when I ran the example in the documentation (copied below). \r\n\r\nI got the output: `California's largest electricity provider has turned off power to hundreds of thousands of customers.`\r\n\r\nWhereas the assertion output was: `California's largest electricity provider has turned off power to tens of thousands of customers.`\r\n\r\nCould someone shine a light on why this might be the case and which one is the 'correct' output? I'm certain I didn't change anything. \r\n\r\n```\r\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\nimport torch\r\nsrc_text = [\r\n \"\"\" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.\"\"\"\r\n]\r\n\r\nmodel_name = 'google/pegasus-xsum'\r\ntorch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\r\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)\r\nbatch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)\r\ntranslated = model.generate(**batch)\r\ntgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\nassert tgt_text[0] == \"California's largest electricity provider has turned off power to tens of thousands of customers.\"\r\n```\r\n\r\n",
"The docs are wrong, the code is right:\r\n#6526 (merged since documentation was written) affected output (in a good way).\r\n**Update**: I fixed the docs.",
"@sshleifer I am trying to implement this in a machine that is not connected to internet. So, I will have to download the model (ex: reddit-tifu) and pass the location to from_pretrained. Could you please suggest what all the files I need to download. Apperciate your help. \r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/pegasus-reddit_tifu\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"google/pegasus-reddit_tifu\")\r\n```"
] | 1,591 | 1,604 | 1,604 | NONE | null | # 🌟 New model addition
## Model description
https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html?m=1
https://arxiv.org/abs/1912.08777
Abstract
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
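For intuition, here is a toy sketch of the gap-sentence generation objective described in the abstract, using a deliberately naive importance heuristic (sentence length); this is illustrative only, not the paper's implementation:
```python
# Toy gap-sentence generation: mask the "most important" sentences of a
# document and use them, concatenated, as the generation target.
def make_gsg_example(sentences, n_masked=1):
    ranked = sorted(range(len(sentences)), key=lambda i: -len(sentences[i]))
    masked = set(ranked[:n_masked])
    source = " ".join("<mask>" if i in masked else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(masked))
    return source, target

src, tgt = make_gsg_example(["It rained.", "The dam overflowed and flooded the town.", "People fled."])
# src: "It rained. <mask> People fled."   tgt: "The dam overflowed and flooded the town."
```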
## Open source status
* [x] the model implementation is available: https://github.com/google-research/pegasus
* [x] the model weights are available: https://github.com/google-research/pegasus
* [x] who are the authors: Jingqing Zhang @JingqingZ, Yao Zhao @yaozhaogoogle, Mohammad Saleh and Peter J. Liu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4918/reactions",
"total_count": 17,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4918/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4917/comments | https://api.github.com/repos/huggingface/transformers/issues/4917/events | https://github.com/huggingface/transformers/pull/4917 | 636,506,387 | MDExOlB1bGxSZXF1ZXN0NDMyNjY2MTMy | 4,917 | enable invocation of run_ner.py and utils_ner.py in cython | {
"login": "mgoldey",
"id": 659477,
"node_id": "MDQ6VXNlcjY1OTQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/659477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mgoldey",
"html_url": "https://github.com/mgoldey",
"followers_url": "https://api.github.com/users/mgoldey/followers",
"following_url": "https://api.github.com/users/mgoldey/following{/other_user}",
"gists_url": "https://api.github.com/users/mgoldey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mgoldey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mgoldey/subscriptions",
"organizations_url": "https://api.github.com/users/mgoldey/orgs",
"repos_url": "https://api.github.com/users/mgoldey/repos",
"events_url": "https://api.github.com/users/mgoldey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mgoldey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=h1) Report\n> Merging [#4917](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef2dcdccaa9a115aca44d81f31c6dc4d32bebb3f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4917 +/- ##\n=======================================\n Coverage 77.12% 77.13% \n=======================================\n Files 128 128 \n Lines 21650 21650 \n=======================================\n+ Hits 16698 16700 +2 \n+ Misses 4952 4950 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4917/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=footer). Last update [ef2dcdc...74e1e66](https://codecov.io/gh/huggingface/transformers/pull/4917?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | CONTRIBUTOR | null | Due to extant issue https://github.com/cython/cython/issues/2903, `run_ner.py` and `utils_ner.py` (among others, I imagine) cannot be invoked inside Cython. By manually adding in annotations, these changes work around the missing features in Cython 3.7 re: PEP557.
https://github.com/cython/cython/pull/3400 ought to eventually fix the underlying issue.
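For illustration, a hypothetical sketch of the workaround style (not the actual diff in this PR): spell the dataclass out manually so Cython does not need PEP 557 support.
```python
from typing import List, Optional

# Sketch: a class shaped like utils_ner.InputExample, written as a plain class
# with an explicit __init__ instead of @dataclass, which Cython can compile.
class InputExample:
    def __init__(self, guid: str, words: List[str], labels: Optional[List[str]] = None):
        self.guid = guid
        self.words = words
        self.labels = labels
```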
I'm offering code here that works around this behavior in case it would be helpful to others. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4917",
"html_url": "https://github.com/huggingface/transformers/pull/4917",
"diff_url": "https://github.com/huggingface/transformers/pull/4917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4917.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4916/comments | https://api.github.com/repos/huggingface/transformers/issues/4916/events | https://github.com/huggingface/transformers/pull/4916 | 636,500,859 | MDExOlB1bGxSZXF1ZXN0NDMyNjYxNTE1 | 4,916 | Don't init TPU device twice | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=h1) Report\n> Merging [#4916](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef2dcdccaa9a115aca44d81f31c6dc4d32bebb3f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4916 +/- ##\n=======================================\n Coverage 77.12% 77.13% \n=======================================\n Files 128 128 \n Lines 21650 21649 -1 \n=======================================\n+ Hits 16698 16699 +1 \n+ Misses 4952 4950 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.19% <100.00%> (+0.69%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=footer). Last update [ef2dcdc...acb4cb4](https://codecov.io/gh/huggingface/transformers/pull/4916?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Oh, my bad! Not sure why I left it there",
"No worries!"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | closes #4893
The TPU device was initialized twice when using the `xla_spawn.py` script. Removing this initialization solves the issue.
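A hypothetical sketch of the failure pattern (assuming `torch_xla`; the real call sites are in `xla_spawn.py` and the training utilities):
```python
import torch_xla.core.xla_model as xm

# xla_spawn.py already acquires the TPU device in each spawned worker, so a
# second acquisition inside library code initializes the device a second time,
# producing the failure reported in #4893.
device = xm.xla_device()  # first initialization, in the spawned process
device = xm.xla_device()  # redundant second initialization
```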
@patrickvonplaten, is this necessary for the benchmarking script? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4916/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4916",
"html_url": "https://github.com/huggingface/transformers/pull/4916",
"diff_url": "https://github.com/huggingface/transformers/pull/4916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4916.patch",
"merged_at": 1591818796000
} |
https://api.github.com/repos/huggingface/transformers/issues/4915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4915/comments | https://api.github.com/repos/huggingface/transformers/issues/4915/events | https://github.com/huggingface/transformers/issues/4915 | 636,488,193 | MDU6SXNzdWU2MzY0ODgxOTM= | 4,915 | [generate] Increasing length_penalty makes generations longer | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Yes, I remember being confused about the name earlier as well...I would be in favor of keeping the code and renaming the variable, but I can't think of a good variable name (not a huge fan of `len_adjustment`, but can't really think of a better name - maybe `length_reward`?) ",
"Sorry just catching up :) \r\n\r\nI'd go for changing the name, but `length_reward` feels a little too connoted (makes me think of RL)\r\n\r\nHow about `length_normalization`?",
"I'm good with that. \r\nI propose:\r\n- rename the parameter from `length_penalty`-> `length_normalization_alpha`\r\n- if the user **OR** the config passes length_penalty, raise a `DeprecationWarning`.\r\n- Slowly update configs \r\n\r\nThis would eventually be a (very minor) breaking change @LysandreJik @thomwolf @julien-c .\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,598 | 1,598 | CONTRIBUTOR | null | In `generate`, we document
```python
length_penalty: Exponential penalty to the length. Default to 1.
```
Given the name and the docstring, you might expect that if you increase the `length_penalty` your model will, on average, produce shorter generations.
You would be wrong! (at least for `bart-large-xsum`)
When we decide the score of a hypothesis [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L1714), we calculate
```python
score = sum_logprobs / len(hyp) ** self.length_penalty
```
The issue is that the numerator, `sum_logprobs`, is negative (the result of `F.log_softmax`), and the denominator, `len(hyp) ** self.length_penalty`, is positive. If we increase `length_penalty` we increase the denominator (and the derivative of the denominator w.r.t length) and therefore make the score less negative, so greater.
Fairseq has the same [logic](https://github.com/pytorch/fairseq/blob/eb509f0c584ebae01834e773fb83584102a4f4da/fairseq/sequence_generator.py#L524).
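A quick numeric check of that claim, with illustrative numbers only:
```python
# score = sum_logprobs / len(hyp) ** length_penalty, for a short and a long
# hypothesis with proportional log-probs.
short_logprobs, short_len = -10.0, 10
long_logprobs, long_len = -20.0, 20

for lp in (1.0, 2.0):
    print(lp, short_logprobs / short_len ** lp, long_logprobs / long_len ** lp)
# lp=1.0 -> -1.0 vs -1.0 (tie)
# lp=2.0 -> -0.1 vs -0.05: the *longer* hypothesis now scores higher
```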
I can think of two groups of solutions:
1) keep the name and change the code so that length is actually penalized:
```python
denominator = len(hyp) ** self.length_penalty
if numerator < 0: denominator *= -1
```
2) Change the name/docstring to something like `len_adjustment` and explain that increasing it is likely to make generations shorter.
@yjernite @patrickvonplaten @LysandreJik @thomwolf, have you guys seen this/do you think it's worth fixing or redocumenting?
### Empirical Evidence
```python
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-xsum')
tok = BartTokenizer.from_pretrained("facebook/bart-large")
PGE_ARTICLE = """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
batch = tok.batch_encode_plus([PGE_ARTICLE], max_length=1024, pad_to_max_length=True, return_tensors="pt",)
ids_lp1 = model.generate(**batch, length_penalty=1.)
ids_lp2 = model.generate(**batch, length_penalty=2.)
text_a, text_b = [tok.batch_decode(x, skip_special_tokens=True,)[0] for x in [ids_lp1, ids_lp2]]
```
text_a:
> "California's largest power company, PG&E, has shut off power to tens of thousands of customers across the state."
text_b:
>"California's largest power company, PG&E, has shut off power to tens of thousands of **homes and businesses in the north-east of** the state."
I found similar results for `bart-large-cnn`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4914/comments | https://api.github.com/repos/huggingface/transformers/issues/4914/events | https://github.com/huggingface/transformers/issues/4914 | 636,467,273 | MDU6SXNzdWU2MzY0NjcyNzM= | 4,914 | Simple way to convert a Python tokenizer to a fast tokenizer | {
"login": "pommedeterresautee",
"id": 1029874,
"node_id": "MDQ6VXNlcjEwMjk4NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pommedeterresautee",
"html_url": "https://github.com/pommedeterresautee",
"followers_url": "https://api.github.com/users/pommedeterresautee/followers",
"following_url": "https://api.github.com/users/pommedeterresautee/following{/other_user}",
"gists_url": "https://api.github.com/users/pommedeterresautee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pommedeterresautee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pommedeterresautee/subscriptions",
"organizations_url": "https://api.github.com/users/pommedeterresautee/orgs",
"repos_url": "https://api.github.com/users/pommedeterresautee/repos",
"events_url": "https://api.github.com/users/pommedeterresautee/events{/privacy}",
"received_events_url": "https://api.github.com/users/pommedeterresautee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"requires unigram algo implemented on tokenizers\r\nhttps://github.com/huggingface/tokenizers/pull/292",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"PR merged, closing the issue",
"Hi @pommedeterresautee, could you please refer me to how I can convert an existing Python tokenizer to a Fast tokenizer?\r\n\r\nSorry if I missed something, and thanks so much for your help!",
"Maybe @SaulLu or @Narsil can comment and link to an example!",
"Hi @varun-tandon ,\r\n\r\nThe code to change from slow to fast is included here: https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py\r\n\r\nAs you can see there are many variables, depending on the actual model and what you want to achieve.\r\n\r\nUsually it involves understanding how a model actually does the tokenization (and all the bits like CLS, SEP etc..) and using the compponents of `tokenizers` to assemble them to make the output similar to what the python code does:\r\nhttps://huggingface.co/docs/tokenizers/components\r\n\r\nSometimes we're missing a brick and we simply add it (although it becomes rarer with time)\r\n"
] | 1,591 | 1,654 | 1,601 | CONTRIBUTOR | null | # 🚀 Feature request
Tokenizers are provided with each model; some have a fast (Rust-based) version of their tokenizer, while others, like CamemBERT, have only the slow version.
## Motivation
Fast tokenizers improve inference times drastically (for real-time inference, for instance).
Plus, there is no reason it should not be possible.
## Your contribution
If you provide me with basic guidelines on how to manually make a conversion, I can submit a PR to offer such a feature.
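For instance, a sketch of what such a conversion could look like using the converter module that the comments above point to (illustrative; `convert_slow_tokenizer` only exists in recent versions of the library):
```python
from transformers import CamembertTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

# Convert the slow (Python) tokenizer into a Rust-backed tokenizers.Tokenizer
# and serialize it for reuse.
slow = CamembertTokenizer.from_pretrained("camembert-base")
fast_backend = convert_slow_tokenizer(slow)
fast_backend.save("camembert-fast.json")
```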
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4914/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4913/comments | https://api.github.com/repos/huggingface/transformers/issues/4913/events | https://github.com/huggingface/transformers/pull/4913 | 636,441,695 | MDExOlB1bGxSZXF1ZXN0NDMyNjEzNDEy | 4,913 | ElectraForQuestionAnswering | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks for your help! \r\nCould you also add the new model to [`all_model_classes`](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_electra.py#L41) which would test the model a little bit more?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=h1) Report\n> Merging [#4913](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d63ca6c38cc0f583cdec4c3efcfce13c0a41fdc&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `93.93%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4913 +/- ##\n==========================================\n+ Coverage 77.10% 77.13% +0.03% \n==========================================\n Files 128 128 \n Lines 21617 21650 +33 \n==========================================\n+ Hits 16667 16699 +32 \n- Misses 4950 4951 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <93.93%> (+2.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=footer). Last update [5d63ca6...d8a5995](https://codecov.io/gh/huggingface/transformers/pull/4913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Great, thanks for your help!\r\n> Could you also add the new model to [`all_model_classes`](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_electra.py#L41) which would test the model a little bit more?\r\n\r\nSure",
"Oh and before I forget, if you don't mind, could you add the new model in the docs as well in [this file](https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/electra.rst) (between `ElectraForTokenClassification` and `TFElectraModel` ideally).",
"@sgugger \r\nsome tests are failing, but I'm not sure if they are related to this model\r\n\r\nThe test for this qa model is passed.",
"The failing tests are the test common for all models (that are applied to the one you're adding because of the change I made you do). One of the failure I see is linked to the `input_ids` not defaulting to `None` (for when you pass input embeddings instead). There is another one linked to the attentions, I pointed out the problems in comments.",
"> The failing tests are the test common for all models (that are applied to the one you're adding because of the change I made you do). One of the failure I see is linked to the `input_ids` not defaulting to `None` (for when you pass input embeddings instead). There is another one linked to the attentions, I pointed out the problems in comments.\r\n\r\nThanks @sgugger !\r\nThe tests are happy now :)",
"@sgugger this examples failure is related to `TestBartExamples.test_bart_summarization_dataset `",
"Cool! All green 🤗"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | This PR adds `ElectraForQuestionAnswering`, one of the missing models in this [project](https://github.com/huggingface/transformers/projects/17).
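A minimal usage sketch of the new head (checkpoint name is illustrative, so the QA head here would be freshly initialized; the tuple-style outputs follow the API of this era):
```python
import torch
from transformers import ElectraTokenizer, ElectraForQuestionAnswering

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForQuestionAnswering.from_pretrained("google/electra-small-discriminator")

inputs = tokenizer.encode_plus("Who wrote it?", "It was written by Jane.", return_tensors="pt")
start_logits, end_logits = model(**inputs)[:2]
answer_span = (torch.argmax(start_logits), torch.argmax(end_logits))
```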
@LysandreJik , @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4913",
"html_url": "https://github.com/huggingface/transformers/pull/4913",
"diff_url": "https://github.com/huggingface/transformers/pull/4913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4913.patch",
"merged_at": 1591816672000
} |
https://api.github.com/repos/huggingface/transformers/issues/4912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4912/comments | https://api.github.com/repos/huggingface/transformers/issues/4912/events | https://github.com/huggingface/transformers/pull/4912 | 636,412,543 | MDExOlB1bGxSZXF1ZXN0NDMyNTg5NzUy | 4,912 | Benchmarks | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=h1) Report\n> Merging [#4912](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.94%`.\n> The diff coverage is `78.29%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4912 +/- ##\n==========================================\n- Coverage 77.28% 76.34% -0.95% \n==========================================\n Files 133 134 +1 \n Lines 22134 22369 +235 \n==========================================\n- Hits 17107 17078 -29 \n- Misses 5027 5291 +264 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.77% <68.51%> (-3.20%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <71.42%> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `79.13% <76.00%> (+9.70%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `82.69% <82.69%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.80% <86.66%> (+1.40%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <87.50%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.18% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <100.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <100.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `50.10% <0.00%> (-43.61%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4912/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=footer). Last update [355954f...8b71041](https://codecov.io/gh/huggingface/transformers/pull/4912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> This is great. I really like that you can import the benchmark if you want to use them during runtime, rather than the only option being to run a script.\r\n> \r\n> Some remarks after playing with it:\r\n> \r\n> * Maybe you should raise an error when no `model_names` are specified. Right now it crashes with `UnboundLocalError: local variable 'inference_summary' referenced before assignment` (pytorch version at least)\r\n> * There seems to be an error in the way the runtimes are computed. PyTorch using GPU, is slower than TensorFlow on CPU (10x times slower), while PyTorch on CPU is 150x slower than TensorFlow on CPU.\r\n> \r\n> Here are the results from my runs so far. The following is on CPU with TensorFlow (2ms per inference with `bert-base-cased`, seq len 8 and batch size 512 on a CPU??) I didn't test the memory usage so they're not in the results:\r\n> \r\n> ```\r\n> ==================== INFERENCE - SPEED - RESULT ====================\r\n> --------------------------------------------------------------------------------\r\n> Model Name Batch Size Seq Length Time in s \r\n> --------------------------------------------------------------------------------\r\n> bert-base-cased 8 8 0.001 \r\n> bert-base-cased 8 32 0.001 \r\n> bert-base-cased 8 128 0.001 \r\n> bert-base-cased 8 512 0.002 \r\n> --------------------------------------------------------------------------------\r\n> \r\n> ==================== ENVIRONMENT INFORMATION ====================\r\n> - transformers_version: 2.11.0\r\n> - framework: Tensorflow\r\n> - eager_mode: False\r\n> - use_xla: False\r\n> - framework_version: 2.2.0\r\n> - python_version: 3.6.10\r\n> - system: Linux\r\n> - cpu: \r\n> - architecture: 64bit\r\n> - date: 2020-06-18\r\n> - time: 11:57:18.595804\r\n> - fp16: False\r\n> - use_multiprocessing: True\r\n> - cpu_ram_mb: 64333\r\n> - use_gpu: False\r\n> - use_tpu: False\r\n> ```\r\n> \r\n> Here's the test with PyTorch on GPU:\r\n> \r\n> ```\r\n> ==================== INFERENCE - SPEED - RESULT ====================\r\n> --------------------------------------------------------------------------------\r\n> Model Name Batch Size Seq Length Time in s \r\n> --------------------------------------------------------------------------------\r\n> bert-base-cased 8 8 0.007 \r\n> bert-base-cased 8 32 0.007 \r\n> bert-base-cased 8 128 0.019 \r\n> bert-base-cased 8 512 0.074 \r\n> --------------------------------------------------------------------------------\r\n> \r\n> ==================== ENVIRONMENT INFORMATION ====================\r\n> - transformers_version: 2.11.0\r\n> - framework: PyTorch\r\n> - use_torchscript: False\r\n> - framework_version: 1.5.0\r\n> - python_version: 3.6.10\r\n> - system: Linux\r\n> - cpu: \r\n> - architecture: 64bit\r\n> - date: 2020-06-18\r\n> - time: 11:56:31.041360\r\n> - fp16: False\r\n> - use_multiprocessing: True\r\n> - cpu_ram_mb: 64333\r\n> - use_gpu: True\r\n> - num_gpus: 1\r\n> - gpu: N/A\r\n> - gpu_ram_mb: N/A\r\n> - gpu_power_watts: N/A\r\n> - gpu_performance_state: N/A\r\n> - use_tpu: False\r\n> ```\r\n> \r\n> I'm not sure that PyTorch on GPU is ~37x slower than TensorFlow on CPU I tried to debug but it's not easy to debug tf functions unfortunately\r\n\r\nThanks a lot for checking everything! Found the error :-) One just has to return a tensor out of the tf.function context so that it is actually computed. 
I guess TF's compilation optimizes the function so that variables that are not used outside of the @tf.function scope are not computed.\r\n\r\nWill update the notebooks and should then get more reasonable results :-) ",
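For concreteness, a minimal sketch of the pitfall described above (`model` and `inputs` are placeholders, not code from this PR):

```python
import tensorflow as tf

@tf.function
def forward_no_return(model, inputs):
    model(inputs)  # output never leaves the traced graph, so graph pruning may drop the call

@tf.function
def forward_with_return(model, inputs):
    return model(inputs)  # returning the tensor forces the forward pass to actually run
```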
"And will definitely add a better error message",
"The speed tests seem much more reasonable now, if you check the notebooks :-) @LysandreJik \r\nThere seems to be a problem with GPU memory in TF now :-/ Will check tomorrow again",
"## GPU locally gives reasonable results of TF vs. PT.\r\n\r\nAll tests were run in this environment:\r\n\r\n```\r\n- transformers_version: 2.11.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-06-19\r\n- time: 13:49:57.455208\r\n- use_multiprocessing: True\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 2\r\n```\r\n\r\nfor TF 2.2 and Pytorch 1.4.0\r\n\r\n### PyTorch\r\n`python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_memory` gives:\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n gpt2 8 8 0.006 \r\n gpt2 8 32 0.007 \r\n gpt2 8 128 0.026 \r\n gpt2 8 512 0.104 \r\n bert-base-cased 8 8 0.006 \r\n bert-base-cased 8 32 0.006 \r\n bert-base-cased 8 128 0.021 \r\n bert-base-cased 8 512 0.094 \r\n--------------------------------------------------------------------------------\r\n\r\n",
"### PyTorch FP16\r\n`python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_memory --fp16`\r\n\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n gpt2 8 8 0.006 \r\n gpt2 8 32 0.007 \r\n gpt2 8 128 0.009 \r\n gpt2 8 512 0.043 \r\n bert-base-cased 8 8 0.006 \r\n bert-base-cased 8 32 0.006 \r\n bert-base-cased 8 128 0.006 \r\n bert-base-cased 8 512 0.03 ",
"### TF no eager modus\r\n\r\n```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_memory```\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n gpt2 8 8 0.005 \r\n gpt2 8 32 0.007 \r\n gpt2 8 128 0.029 \r\n gpt2 8 512 0.125 \r\n bert-base-cased 8 8 0.005 \r\n bert-base-cased 8 32 0.006 \r\n bert-base-cased 8 128 0.024 \r\n bert-base-cased 8 512 0.114 \r\n--------------------------------------------------------------------------------",
"### TF XLA\r\n\r\n```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_memory --use_xla```\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n gpt2 8 8 0.002 \r\n gpt2 8 32 0.006 \r\n gpt2 8 128 0.021 \r\n gpt2 8 512 0.095 \r\n bert-base-cased 8 8 0.003 \r\n bert-base-cased 8 32 0.005 \r\n bert-base-cased 8 128 0.019 \r\n bert-base-cased 8 512 0.087 \r\n--------------------------------------------------------------------------------\r\n",
"## Memory measurements \r\n\r\nThey also seem reasonable for forward pass:.\r\n\r\n### TF no eager mode (keeping in mind that nvidia-smi is not accurate here and TF always allocates more than it needs):\r\n\r\n```python run_benchmark_tf.py --models gpt2 bert-base-cased --no_env_print --no_speed```\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n--------------------------------------------------------------------------------\r\n gpt2 64 8 1704 \r\n gpt2 64 32 1704 \r\n gpt2 64 128 2728 \r\n gpt2 64 512 8872 \r\n bert-base-cased 64 8 1192 \r\n bert-base-cased 64 32 1192 \r\n bert-base-cased 64 128 1704 \r\n bert-base-cased 64 512 4776 \r\n--------------------------------------------------------------------------------\r\n\r\n### PyTorch \r\n\r\n```python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_speed```\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n--------------------------------------------------------------------------------\r\n gpt2 64 8 1150 \r\n gpt2 64 32 1384 \r\n gpt2 64 128 2290 \r\n gpt2 64 512 5890 \r\n bert-base-cased 64 8 1016 \r\n bert-base-cased 64 32 1104 \r\n bert-base-cased 64 128 1448 \r\n bert-base-cased 64 512 3224 \r\n--------------------------------------------------------------------------------\r\n\r\n### PyTorch FP16\r\n\r\n```python run_benchmark.py --models gpt2 bert-base-cased --no_env_print --no_speed --fp16```\r\n\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n--------------------------------------------------------------------------------\r\n gpt2 64 8 1170 \r\n gpt2 64 32 1164 \r\n gpt2 64 128 1596 \r\n gpt2 64 512 3420 \r\n bert-base-cased 64 8 1066 \r\n bert-base-cased 64 32 1060 \r\n bert-base-cased 64 128 1108 \r\n bert-base-cased 64 512 2118 \r\n--------------------------------------------------------------------------------\r\n"
] | 1,591 | 1,592 | 1,592 | MEMBER | null | # Benchmarks
This PR adds the functionality to measure the following for TF and PT (a usage sketch follows the lists):
**Tensorflow:**
- Inference: CPU, GPU, GPU + XLA, GPU + eager mode, CPU + eager mode, TPU
**PyTorch:**
- Inference: CPU, CPU + torchscript, GPU, GPU + torchscript, GPU + mixed precision, Torch/XLA TPU
- Training: CPU, GPU, GPU + mixed precision, Torch/XLA TPU
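For reference, expected usage looks roughly like this (a hedged sketch based on this PR's scripts; argument names may still change before merge):

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints speed/memory tables like the ones in the comments above
```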
## How is memory measured?
**CPU**
We are always interested in the peak memory usage of the process. For CPU, the `psutil` library in combination with multiprocessing is leveraged.
**GPU**
It is difficult to have an exact memory measurement on GPU. Tensorflow allocates the full GPU memory by default. This is disabled with `tf.config.experimental.set_memory_growth(device, True)`, but Tensorflow still allocates more memory than it needs for efficiency, as far as I know.
=> Memory is therefore always measured to give the same maximal result as shown by `nvidia-smi`. This means that the memory for loading PyTorch / Tensorflow is also taken into account, which is, for example, not done when measuring via `torch.cuda.max_memory_allocated`.
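A rough sketch of an `nvidia-smi`-style readout (assuming the `py3nvml` bindings; this is illustrative, not the PR's exact code):

```python
from py3nvml import py3nvml as nvml

nvml.nvmlInit()
handle = nvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = nvml.nvmlDeviceGetMemoryInfo(handle)  # same number that nvidia-smi reports
print(f"used: {info.used / 1024 ** 2:.0f} MB")
nvml.nvmlShutdown()
```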
Tensorflow also does not release GPU memory before the process is finished, and it does not provide an official memory-monitoring function. Therefore, all measurement functions are wrapped into their own spawned process via Python's multiprocessing tools, and the reported value matches what `nvidia-smi` would show for TF.
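The isolation pattern, as a minimal runnable sketch (the actual measurement is a placeholder):

```python
import multiprocessing as mp

def measure(queue):
    peak_mem_mb = 0  # stand-in for the real GPU measurement
    queue.put(peak_mem_mb)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # fresh interpreter, so TF memory dies with the child
    queue = ctx.Queue()
    proc = ctx.Process(target=measure, args=(queue,))
    proc.start()
    result = queue.get()
    proc.join()
```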
**TPU**
Memory measurement is currently not supported
## How is speed measured?
For all functionality that requires compilation (TPU, XLA, Torchscript), 5 warmup calls of the function are done beforehand.
Afterwards, the reported runtime is the minimum over `self.args.repeat` measurements, where each measurement is the time averaged over 10 function calls.
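In code, that scheme is essentially the following (a sketch; `forward` is a placeholder for one model call and `repeat=3` mirrors `self.args.repeat`):

```python
import timeit

def forward():
    pass  # stand-in for a single model forward pass

for _ in range(5):  # warmup for TPU / XLA / Torchscript compilation
    forward()

runtimes = timeit.repeat(forward, repeat=3, number=10)  # 3 totals, each over 10 calls
best_time_per_call = min(runtimes) / 10.0
```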
## Example Colabs:
The colabs give quick examples for each functionality with little explanation for the moment:
Pytorch TPU: https://colab.research.google.com/drive/1GJFOdcBe1pW_FKWpA0jK_AOsIQ5epcvE?usp=sharing
Tensorflow TPU:
https://colab.research.google.com/drive/1t8DW1NxA4b1BsWSZ1ehFG9oT69l0h7os?usp=sharing
GPU: https://colab.research.google.com/drive/15XTPT_GPp42Zj7_f1W9X_T3NNXE9_1Te?usp=sharing
CPU: https://colab.research.google.com/drive/1OG2rZgo18KvliS-ratybld9pHD06-v5S?usp=sharing
## Future PR:
- [ ] Make nicer examples and explanations
- [ ] Update docs and think about automatic measuring on website
- [ ] Training in TF. Because the LM Head models currently do not accept a `labels` parameter as an input, adding measurement for training is left for a future PR
- [ ] GPU fp16 in TF. We currently have a bug in the lib that does not allow to run TF models in fp16 on GPU: https://github.com/huggingface/transformers/issues/3320
- [ ] PyTorch's amp package has memory leaks, so we simply do `model.half()` to measure fp16 in PyTorch. See issue here: https://github.com/NVIDIA/apex/issues/439 . Wait until amp is supported in upstream torch 1.6
- [ ] Currently memory is not measured on TPU. Wait for more functionality for TPU
- [ ] Allow multi-GPU measurements
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4912/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4912",
"html_url": "https://github.com/huggingface/transformers/pull/4912",
"diff_url": "https://github.com/huggingface/transformers/pull/4912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4912.patch",
"merged_at": 1592820417000
} |
https://api.github.com/repos/huggingface/transformers/issues/4911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4911/comments | https://api.github.com/repos/huggingface/transformers/issues/4911/events | https://github.com/huggingface/transformers/pull/4911 | 636,395,559 | MDExOlB1bGxSZXF1ZXN0NDMyNTc1Nzky | 4,911 | enable pickling for TF Bert models | {
"login": "btel",
"id": 41565,
"node_id": "MDQ6VXNlcjQxNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/41565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/btel",
"html_url": "https://github.com/btel",
"followers_url": "https://api.github.com/users/btel/followers",
"following_url": "https://api.github.com/users/btel/following{/other_user}",
"gists_url": "https://api.github.com/users/btel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/btel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/btel/subscriptions",
"organizations_url": "https://api.github.com/users/btel/orgs",
"repos_url": "https://api.github.com/users/btel/repos",
"events_url": "https://api.github.com/users/btel/events{/privacy}",
"received_events_url": "https://api.github.com/users/btel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Why would you prefer using pickle rather than `save_pretrained`/`from_pretrained` or `torch.save`/`torch.save`?",
"HI @LysandreJik, pickle is mainly useful for parallel processing frameworks like `joblib` or `dask`. The use case is to parallelize some (embarassingly parallel) computation on multiple CPUs/GPUs. Usually, they use pickled objects to send to workers.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=h1) Report\n> Merging [#4911](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4911 +/- ##\n=======================================\n Coverage 76.99% 77.00% \n=======================================\n Files 128 128 \n Lines 21602 21607 +5 \n=======================================\n+ Hits 16633 16639 +6 \n+ Misses 4969 4968 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.51% <100.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=footer). Last update [ac99217...dd7120b](https://codecov.io/gh/huggingface/transformers/pull/4911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is cool! Could we also add it to the PyTorch models?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,598 | 1,598 | CONTRIBUTOR | null | This implements `__getstate__` for BERT models to enable pickling (without this PR, the pickle attempt fails due to `weakref` errors). It also adds a unit test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4911",
"html_url": "https://github.com/huggingface/transformers/pull/4911",
"diff_url": "https://github.com/huggingface/transformers/pull/4911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4911.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4910/comments | https://api.github.com/repos/huggingface/transformers/issues/4910/events | https://github.com/huggingface/transformers/pull/4910 | 636,386,793 | MDExOlB1bGxSZXF1ZXN0NDMyNTY4NjQy | 4,910 | Add more models to common tests | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=h1) Report\n> Merging [#4910](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4910 +/- ##\n==========================================\n+ Coverage 76.99% 77.08% +0.09% \n==========================================\n Files 128 128 \n Lines 21602 21604 +2 \n==========================================\n+ Hits 16633 16654 +21 \n+ Misses 4969 4950 -19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.09% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.01% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.76% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4910/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.61% <0.00%> (+2.96%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=footer). Last update [ac99217...9e50a38](https://codecov.io/gh/huggingface/transformers/pull/4910?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM!"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | Follow-up to #4886, adds all existing PyTorch models to common tests (with the exception of the longformer task-specific ones, because of some problem with output attentions).
Most of them required some fixes in the model files which are also added.
For longformer, a few of the needed fixes are present, but there is still one remaining failing test. @patrickvonplaten will look into it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4910/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4910",
"html_url": "https://github.com/huggingface/transformers/pull/4910",
"diff_url": "https://github.com/huggingface/transformers/pull/4910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4910.patch",
"merged_at": 1591809594000
} |
https://api.github.com/repos/huggingface/transformers/issues/4909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4909/comments | https://api.github.com/repos/huggingface/transformers/issues/4909/events | https://github.com/huggingface/transformers/pull/4909 | 636,333,555 | MDExOlB1bGxSZXF1ZXN0NDMyNTI1Mzg2 | 4,909 | [All models] fix docs after adding output attentions to all forward functions | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=h1) Report\n> Merging [#4909](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4909 +/- ##\n==========================================\n+ Coverage 76.99% 77.40% +0.40% \n==========================================\n Files 128 128 \n Lines 21602 21602 \n==========================================\n+ Hits 16633 16720 +87 \n+ Misses 4969 4882 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.22% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.40% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <ø> (ø)` | |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.63% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.09% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |\n| ... and [27 more](https://codecov.io/gh/huggingface/transformers/pull/4909/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=footer). Last update [ac99217...58379b0](https://codecov.io/gh/huggingface/transformers/pull/4909?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is great! Thanks @patrickvonplaten "
] | 1,591 | 1,591 | 1,591 | MEMBER | null | Added the same docs to all models for `output_attentions`, following PR: https://github.com/huggingface/transformers/pull/4538.
This PR only touches the docs.
Pinging @LysandreJik @Bharat123rox for notification. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4909",
"html_url": "https://github.com/huggingface/transformers/pull/4909",
"diff_url": "https://github.com/huggingface/transformers/pull/4909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4909.patch",
"merged_at": 1591805460000
} |
https://api.github.com/repos/huggingface/transformers/issues/4908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4908/comments | https://api.github.com/repos/huggingface/transformers/issues/4908/events | https://github.com/huggingface/transformers/pull/4908 | 636,324,098 | MDExOlB1bGxSZXF1ZXN0NDMyNTE3NjYy | 4,908 | BartForQuestionAnswering | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=h1) Report\n> Merging [#4908](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `94.11%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4908 +/- ##\n==========================================\n+ Coverage 76.99% 77.02% +0.03% \n==========================================\n Files 128 128 \n Lines 21602 21635 +33 \n==========================================\n+ Hits 16633 16665 +32 \n- Misses 4969 4970 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <93.93%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=footer). Last update [ac99217...63eb191](https://codecov.io/gh/huggingface/transformers/pull/4908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the contribution @patil-suraj !",
"> Hi! Very cool @patil-suraj.\r\n> \r\n> Could you also add `BartForQuestionAnswering` to the `all_model_classes` in `test_modeling_bart.py`?\r\n\r\nHi, @LysandreJik \r\nAfter adding `BartForQuestionAnswering` in `all_model_classes` I also had to add `output_attention` parameter to `forward`.\r\n\r\nNow for some reason `test_attention_outputs` is failing, I am not sure why, could you help me fix it ?\r\nThanks !",
"Awesome work @patil-suraj - I can help you with this test :-) ",
"I see what the problem is...it's actually not related to your PR at all. Can we you for now just remove `BartForQuestionAnswering` from the all_models tuples in the tests. @LysandreJik @sshleifer I will open a new PR after this one to fix it :-) ",
"> I see what the problem is...it's actually not related to your PR at all. Can we you for now just remove `BartForQuestionAnswering` from the all_models tuples in the tests. @LysandreJik @sshleifer I will open a new PR after this one to fix it :-)\r\n\r\nThank you @patrickvonplaten . I've removed it from `all_models` tuple for now"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | This PR adds `BartForQuestionAnswering`.
Decided to add this model, as `BART` is intended for both NLU and NLG tasks and also achieves performance comparable to `ROBERTa` on SQuAD.
Also fine-tuned the model [here](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing). The metrics are slightly worse than those given in the paper. Got the following metrics on SQuAD v1:
`{'exact_match': 86.80227057710502, 'f1': 92.73424907872341}`
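For context, the new head is used like the other `*ForQuestionAnswering` models. A hedged usage sketch (the checkpoint name and texts are illustrative only):

```python
from transformers import BartForQuestionAnswering, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForQuestionAnswering.from_pretrained("facebook/bart-large")
inputs = tokenizer.encode_plus(
    "Who proposed BART?", "BART was proposed by Lewis et al.", return_tensors="pt"
)
start_logits, end_logits = model(
    input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
)[:2]
start, end = start_logits.argmax(-1).item(), end_logits.argmax(-1).item()  # answer span
```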
@sshleifer , @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4908/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4908/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4908",
"html_url": "https://github.com/huggingface/transformers/pull/4908",
"diff_url": "https://github.com/huggingface/transformers/pull/4908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4908.patch",
"merged_at": 1591991277000
} |
https://api.github.com/repos/huggingface/transformers/issues/4907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4907/comments | https://api.github.com/repos/huggingface/transformers/issues/4907/events | https://github.com/huggingface/transformers/issues/4907 | 636,318,717 | MDU6SXNzdWU2MzYzMTg3MTc= | 4,907 | ModuleNotFoundError: No module named 'xml.sax'; 'xml' is not a package | {
"login": "telnasser",
"id": 795814,
"node_id": "MDQ6VXNlcjc5NTgxNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/795814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/telnasser",
"html_url": "https://github.com/telnasser",
"followers_url": "https://api.github.com/users/telnasser/followers",
"following_url": "https://api.github.com/users/telnasser/following{/other_user}",
"gists_url": "https://api.github.com/users/telnasser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/telnasser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/telnasser/subscriptions",
"organizations_url": "https://api.github.com/users/telnasser/orgs",
"repos_url": "https://api.github.com/users/telnasser/repos",
"events_url": "https://api.github.com/users/telnasser/events{/privacy}",
"received_events_url": "https://api.github.com/users/telnasser/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Is `sacremoses` installed in your environment? Do you mind pasting the result of `pip list` in your environment?",
"Here you go:\r\n\r\n`Package Version \r\n---------------------- -----------\r\nabsl-py 0.9.0 \r\nastunparse 1.6.3 \r\nbeautifulsoup4 4.9.1 \r\nbs4 0.0.1 \r\ncachetools 4.1.0 \r\ncertifi 2020.4.5.1 \r\nchardet 3.0.4 \r\nclick 7.1.2 \r\nfilelock 3.0.12 \r\nfuture 0.18.2 \r\ngast 0.3.3 \r\ngoogle-auth 1.16.1 \r\ngoogle-auth-oauthlib 0.4.1 \r\ngoogle-pasta 0.2.0 \r\ngrpcio 1.29.0 \r\nh5py 2.10.0 \r\nidna 2.9 \r\njoblib 0.15.1 \r\nKeras-Preprocessing 1.1.2 \r\nMarkdown 3.2.2 \r\nnumpy 1.18.5 \r\noauthlib 3.1.0 \r\nopt-einsum 3.2.1 \r\npackaging 20.4 \r\nPillow 7.1.2 \r\npip 20.0.2 \r\nprotobuf 3.12.2 \r\npyasn1 0.4.8 \r\npyasn1-modules 0.2.8 \r\npyparsing 2.4.7 \r\nregex 2020.6.8 \r\nrequests 2.23.0 \r\nrequests-oauthlib 1.3.0 \r\nrsa 4.0 \r\nsacremoses 0.0.43 \r\nscipy 1.4.1 \r\nsentencepiece 0.1.91 \r\nsetuptools 46.0.0 \r\nsix 1.15.0 \r\nsoupsieve 2.0.1 \r\ntensorboard 2.2.2 \r\ntensorboard-plugin-wit 1.6.0.post3\r\ntensorflow 2.2.0 \r\ntensorflow-estimator 2.2.0 \r\ntermcolor 1.1.0 \r\ntokenizers 0.7.0 \r\ntorch 1.5.0 \r\ntorchvision 0.6.0 \r\ntqdm 4.46.1 \r\ntransformers 2.11.0 \r\nurllib3 1.25.9 \r\nWerkzeug 1.0.1 \r\nwheel 0.34.2 \r\nwrapt 1.12.1 `",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | I'm running this example:
```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis")
print(nlp("I hate you"))
print(nlp("I love you"))
```
I get this error:
```
Traceback (most recent call last):
File "ttt.py", line 1, in <module>
from transformers import pipeline as ppp
File "/usr/local/lib/python3.8/site-packages/transformers/__init__.py", line 99, in <module>
from .pipelines import (
File "/usr/local/lib/python3.8/site-packages/transformers/pipelines.py", line 36, in <module>
from .tokenization_auto import AutoTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 52, in <module>
from .tokenization_flaubert import FlaubertTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_flaubert.py", line 23, in <module>
from .tokenization_xlm import XLMTokenizer
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_xlm.py", line 26, in <module>
import sacremoses as sm
File "/usr/local/lib/python3.8/site-packages/sacremoses/__init__.py", line 2, in <module>
from sacremoses.tokenize import *
File "/usr/local/lib/python3.8/site-packages/sacremoses/tokenize.py", line 10, in <module>
from sacremoses.util import is_cjk
File "/usr/local/lib/python3.8/site-packages/sacremoses/util.py", line 9, in <module>
from xml.sax.saxutils import escape, unescape
ModuleNotFoundError: No module named 'xml.sax'; 'xml' is not a package
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4907/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4906/comments | https://api.github.com/repos/huggingface/transformers/issues/4906/events | https://github.com/huggingface/transformers/issues/4906 | 636,296,487 | MDU6SXNzdWU2MzYyOTY0ODc= | 4,906 | TypeError: export() got an unexpected keyword argument 'use_external_data_format' | {
"login": "ZLKong",
"id": 28882362,
"node_id": "MDQ6VXNlcjI4ODgyMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/28882362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZLKong",
"html_url": "https://github.com/ZLKong",
"followers_url": "https://api.github.com/users/ZLKong/followers",
"following_url": "https://api.github.com/users/ZLKong/following{/other_user}",
"gists_url": "https://api.github.com/users/ZLKong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZLKong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZLKong/subscriptions",
"organizations_url": "https://api.github.com/users/ZLKong/orgs",
"repos_url": "https://api.github.com/users/ZLKong/repos",
"events_url": "https://api.github.com/users/ZLKong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZLKong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just did this too and it seems to work :S ",
"I looked into this and I believe the issue is that use_external_data_format is a recent [addition to PyTorch from the onnx team](https://github.com/pytorch/pytorch/commit/96989a2a114de9b77e7dd9495d62c4a8a549b40d). If you upgrade to torch>=1.5.0 it should work. Also I added a PR #5687 to make this issue more straightforward.",
"i have torch==1.13.1+cu116 installed and got this error, but i'm using in conjunction with this: https://github.com/quic/aimet"
] | 1,591 | 1,702 | 1,596 | NONE | null | Hi,
I tried to run `convert_graph_to_onnx.py` using
`convert(framework="pt", model="bert-base-uncased", output="onnx/bert-base-uncased.onnx", opset=11)`
But I get an error in the following call:
```
export(
nlp.model,
model_args,
f=output,
input_names=ordered_input_names,
output_names=output_names,
dynamic_axes=dynamic_axes,
do_constant_folding=True,
use_external_data_format=use_external_format,
enable_onnx_checker=True,
opset_version=opset,
)
```
The error is:
```
TypeError: export() got an unexpected keyword argument 'use_external_data_format'
TypeError: export() got an unexpected keyword argument 'enable_onnx_checker'
```
When I delete these two lines, it no longer reports errors. Is it OK to remove these two lines?
Thanks,
ZLK | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4906/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/4906/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4905/comments | https://api.github.com/repos/huggingface/transformers/issues/4905/events | https://github.com/huggingface/transformers/issues/4905 | 636,258,243 | MDU6SXNzdWU2MzYyNTgyNDM= | 4,905 | [How to] Carefully designing the head of a Transformer model? | {
"login": "innat",
"id": 17668390,
"node_id": "MDQ6VXNlcjE3NjY4Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/innat",
"html_url": "https://github.com/innat",
"followers_url": "https://api.github.com/users/innat/followers",
"following_url": "https://api.github.com/users/innat/following{/other_user}",
"gists_url": "https://api.github.com/users/innat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/innat/subscriptions",
"organizations_url": "https://api.github.com/users/innat/orgs",
"repos_url": "https://api.github.com/users/innat/repos",
"events_url": "https://api.github.com/users/innat/events{/privacy}",
"received_events_url": "https://api.github.com/users/innat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,592 | 1,592 | NONE | null | # ❓ Questions & Help
While using any pre-trained transformer model, what are the main things we normally should consider while designing the head? Something like:
```python
distill = transformer.distilbert....  # <----- slicing the first position
x = Dense(n, activation=' ')(distill)  # <-------- simple classifier head
```
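For concreteness, a fuller version of that pattern might look like this (a minimal sketch assuming TF2 Keras and a DistilBERT encoder; the sequence length and number of classes are arbitrary):

```python
import tensorflow as tf
from transformers import TFDistilBertModel

encoder = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32)
hidden = encoder(input_ids)[0]            # (batch, seq_len, hidden_size)
cls_token = hidden[:, 0]                  # first position, as in the snippet above
outputs = tf.keras.layers.Dense(2, activation="softmax")(cls_token)

model = tf.keras.Model(inputs=input_ids, outputs=outputs)
```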
Is it really necessary to design an additional head? (I'm using `tensorflow` backend.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4904/comments | https://api.github.com/repos/huggingface/transformers/issues/4904/events | https://github.com/huggingface/transformers/pull/4904 | 636,221,952 | MDExOlB1bGxSZXF1ZXN0NDMyNDM1MDQx | 4,904 | [ctrl] fix pruning of MultiHeadAttention | {
"login": "aretius",
"id": 18247856,
"node_id": "MDQ6VXNlcjE4MjQ3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18247856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aretius",
"html_url": "https://github.com/aretius",
"followers_url": "https://api.github.com/users/aretius/followers",
"following_url": "https://api.github.com/users/aretius/following{/other_user}",
"gists_url": "https://api.github.com/users/aretius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aretius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aretius/subscriptions",
"organizations_url": "https://api.github.com/users/aretius/orgs",
"repos_url": "https://api.github.com/users/aretius/repos",
"events_url": "https://api.github.com/users/aretius/events{/privacy}",
"received_events_url": "https://api.github.com/users/aretius/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @aretius, some (if not all) the tests failing are unrelated to your PR, and should have been solved by the recently merged #4903. Do you mind rebasing on `master` and force pushing so that we may see if all the tests pass?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=h1) Report\n> Merging [#4904](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac99217e92c43066af7ec96554054d75532565d7&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `93.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4904 +/- ##\n==========================================\n+ Coverage 76.99% 77.01% +0.01% \n==========================================\n Files 128 128 \n Lines 21602 21615 +13 \n==========================================\n+ Hits 16633 16647 +14 \n+ Misses 4969 4968 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4904/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <93.33%> (+0.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=footer). Last update [ac99217...bf94b7e](https://codecov.io/gh/huggingface/transformers/pull/4904?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer do you want to take a look?"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | @sshleifer
Implemented the pruning logic. Fixes #4798.
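A hedged sketch of the user-facing API this enables for CTRL (layer/head indices are arbitrary):

```python
from transformers import CTRLModel

model = CTRLModel.from_pretrained("ctrl")
model.prune_heads({0: [0, 1], 2: [2]})  # {layer_index: [head_indices_to_prune]}
```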
After enabling `test_pruning`, all the previously failing tests pass. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4904/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4904",
"html_url": "https://github.com/huggingface/transformers/pull/4904",
"diff_url": "https://github.com/huggingface/transformers/pull/4904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4904.patch",
"merged_at": 1591812416000
} |
https://api.github.com/repos/huggingface/transformers/issues/4903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4903/comments | https://api.github.com/repos/huggingface/transformers/issues/4903/events | https://github.com/huggingface/transformers/pull/4903 | 636,212,566 | MDExOlB1bGxSZXF1ZXN0NDMyNDI3NDIx | 4,903 | Fix the CI | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=h1) Report\n> Merging [#4903](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a375f5abdefcde1424639f712cf40247135cd64&el=desc) will **increase** coverage by `36.43%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4903 +/- ##\n===========================================\n+ Coverage 40.56% 76.99% +36.43% \n===========================================\n Files 128 128 \n Lines 21602 21602 \n===========================================\n+ Hits 8762 16633 +7871 \n+ Misses 12840 4969 -7871 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (+0.63%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.69% <0.00%> (+0.93%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+1.70%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <0.00%> (+5.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <0.00%> (+6.30%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.41% <0.00%> (+11.82%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.28% <0.00%> (+14.33%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (+17.34%)` | :arrow_up: |\n| ... and [44 more](https://codecov.io/gh/huggingface/transformers/pull/4903/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=footer). Last update [0a375f5...0c9840c](https://codecov.io/gh/huggingface/transformers/pull/4903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | The CI was broken by the merge of #4886 since #4538 was merged between the moment #4886 was tested and the moment it was merged.
This PR fixes the tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4903",
"html_url": "https://github.com/huggingface/transformers/pull/4903",
"diff_url": "https://github.com/huggingface/transformers/pull/4903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4903.patch",
"merged_at": 1591795566000
} |
https://api.github.com/repos/huggingface/transformers/issues/4902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4902/comments | https://api.github.com/repos/huggingface/transformers/issues/4902/events | https://github.com/huggingface/transformers/issues/4902 | 636,182,046 | MDU6SXNzdWU2MzYxODIwNDY= | 4,902 | [cleanup] Hoist ModelTester objects to toplevel | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer It would be beneficial to provide more context, apologies for the beginner question!\r\nWould be happy to pick it up :)",
"Yeah sure. The high level goal is to reduce the amount of boilerplate code in the unittests.\r\n\r\nFor example, if you look at [T5ModelTester](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_t5.py#L43), there are a few code quality issues:\r\n1) The class is defined inside `T5ModelTest` class and indented. It should be defined outside. \r\n2) The class inherits from `object`. It should not.\r\n3) the class has 18 lines of keyword arguments that are never used. They should be hardcoded. For example, instead of lines 47 (`batch_size=13`) and line 68, (`self.batch_size=batch_size`), we could simply set `self.batch_size = 13` in one line.\r\n\r\n\r\nThese 3 problems occurr in nearly all of the following files:\r\n\r\n```bash\r\ngit grep \"ModelTester(object)\"\r\n```\r\nResults:\r\n```bash\r\n\r\ntests/test_modeling_albert.py: class AlbertModelTester(object):\r\ntests/test_modeling_ctrl.py: class CTRLModelTester(object):\r\ntests/test_modeling_distilbert.py: class DistilBertModelTester(object):\r\ntests/test_modeling_electra.py: class ElectraModelTester(object):\r\ntests/test_modeling_flaubert.py: class FlaubertModelTester(object):\r\ntests/test_modeling_gpt2.py: class GPT2ModelTester(object):\r\ntests/test_modeling_longformer.py:class LongformerModelTester(object):\r\ntests/test_modeling_openai.py: class OpenAIGPTModelTester(object):\r\ntests/test_modeling_roberta.py: class RobertaModelTester(object):\r\ntests/test_modeling_t5.py: class T5ModelTester(object):\r\ntests/test_modeling_tf_albert.py: class TFAlbertModelTester(object):\r\ntests/test_modeling_tf_bert.py: class TFBertModelTester(object):\r\ntests/test_modeling_tf_ctrl.py: class TFCTRLModelTester(object):\r\ntests/test_modeling_tf_distilbert.py: class TFDistilBertModelTester(object):\r\ntests/test_modeling_tf_electra.py: class TFElectraModelTester(object):\r\ntests/test_modeling_tf_gpt2.py: class TFGPT2ModelTester(object):\r\ntests/test_modeling_tf_openai_gpt.py: class TFOpenAIGPTModelTester(object):\r\ntests/test_modeling_tf_roberta.py: class TFRobertaModelTester(object):\r\ntests/test_modeling_tf_t5.py: class TFT5ModelTester(object):\r\ntests/test_modeling_tf_transfo_xl.py: class TFTransfoXLModelTester(object):\r\ntests/test_modeling_tf_xlm.py: class TFXLMModelTester(object):\r\ntests/test_modeling_tf_xlnet.py: class TFXLNetModelTester(object):\r\ntests/test_modeling_transfo_xl.py: class TransfoXLModelTester(object):\r\ntests/test_modeling_xlm.py: class XLMModelTester(object):\r\ntests/test_modeling_xlnet.py: class XLNetModelTester(object):\r\n```\r\n\r\nOnce this is done, we can update the instructions in\r\n```bash\r\ntemplates/adding_a_new_model/tests/test_modeling_tf_xxx.py\r\ntemplates/adding_a_new_model/tests/test_modeling_xxx.py\r\n```",
"Indeed, code quality could be improved here!"
] | 1,591 | 1,592 | 1,592 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/pull/4046#issuecomment-628236744
many `ModelTester` objects are defined within classes. If we move them to the top level of the module, we can share code where possible and also have less complexity.
The task here is to move `ModelTester` objects to the top level.
Bonus: if the kwargs are never used, replace
```python
def __init__(self, num_layers=2):
    self.num_layers = num_layers
```
with
```python
def __init__(self):
    self.num_layers = 2
```
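A minimal sketch of the target layout after hoisting (class and attribute names here are hypothetical, for illustration only):
```python
import unittest


class XxxModelTester:  # defined at module top level, no explicit `object` base
    def __init__(self, parent):
        self.parent = parent
        self.batch_size = 13  # hardcoded: the old kwarg was never overridden
        self.num_layers = 2


class XxxModelTest(unittest.TestCase):
    def setUp(self):
        # the test case still owns a tester instance, as the current files do
        self.model_tester = XxxModelTester(self)

    def test_defaults(self):
        self.assertEqual(self.model_tester.num_layers, 2)
```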
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4902/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4901/comments | https://api.github.com/repos/huggingface/transformers/issues/4901/events | https://github.com/huggingface/transformers/pull/4901 | 636,134,892 | MDExOlB1bGxSZXF1ZXN0NDMyMzYzMDQ1 | 4,901 | Add MobileBert | {
"login": "vshampor",
"id": 31695470,
"node_id": "MDQ6VXNlcjMxNjk1NDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/31695470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vshampor",
"html_url": "https://github.com/vshampor",
"followers_url": "https://api.github.com/users/vshampor/followers",
"following_url": "https://api.github.com/users/vshampor/following{/other_user}",
"gists_url": "https://api.github.com/users/vshampor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vshampor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vshampor/subscriptions",
"organizations_url": "https://api.github.com/users/vshampor/orgs",
"repos_url": "https://api.github.com/users/vshampor/repos",
"events_url": "https://api.github.com/users/vshampor/events{/privacy}",
"received_events_url": "https://api.github.com/users/vshampor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @vshampor, can you let me know when this is ready for review? Thanks!",
"@LysandreJik it is quite impossible to make the CI pass for the check_code_quality stage since locally the `black` and the `isort` commands seem to produce opposite changes when applied at once with `make style` and therefore cancel out; meanwhile neither check is satisfied on the CI side since the checks for `black` and `isort` are done separately. \r\n\r\nOtherwise it's ok to review this now, I believe.",
"Ah yes, this happens when you have conflicting versions of black/isort. It's painful because isort should be installed from a specific commit.\r\n\r\nDon't worry about it though, we'll fix that later on!",
"Credit goes to @lonePatient, I am merely integrating this to transformers because we at [nncf_pytorch](https://github.com/openvinotoolkit/nncf_pytorch) leverage this excellent repo for compression experiments with NLP models and would like to try out MobileBERT as well.\r\n\r\nWill address the remarks and update the PR.\r\n",
"> * Upload the checkpoints to S3. Seeing as there's a single checkpoint released by google, I guess it would be under the name `google/mobilebert-uncased` @julien-c ?\r\n\r\nYes! We can ping the authors and check that they're ok.",
"@LysandreJik @julien-c so may I upload the model to S3 already or should we wait for @saberkun's approval for this?",
"I think the conversion script lacks\r\n```py\r\nname = name.replace(\"bert\", \"mobilebert\")\r\n```\r\nin order to work\r\n\r\nI'd like to update this as well as the code quality, do you mind if I push directly on your fork?",
"> I think the conversion script lacks\r\n> \r\n> ```python\r\n> name = name.replace(\"bert\", \"mobilebert\")\r\n> ```\r\n> \r\n> in order to work\r\n> \r\n> I'd like to update this as well as the code quality, do you mind if I push directly on your fork?\r\n\r\nSure.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=h1) Report\n> Merging [#4901](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **increase** coverage by `0.72%`.\n> The diff coverage is `91.26%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4901 +/- ##\n==========================================\n+ Coverage 77.19% 77.91% +0.72% \n==========================================\n Files 133 137 +4 \n Lines 22233 23470 +1237 \n==========================================\n+ Hits 17163 18287 +1124 \n- Misses 5070 5183 +113 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.74% <88.74%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.32% <93.32%> (ø)` | |\n| [src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <97.05%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.19% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <100.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <100.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.50% <100.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbW9iaWxlYmVydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/4901/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=footer). Last update [f45e873...e73fde7](https://codecov.io/gh/huggingface/transformers/pull/4901?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Will add the documentation in the following commit and ping you for review then @patrickvonplaten @sgugger ",
"I just pushed the TensorFlow implementation, and added several models: `MobileBertFor{MaskedLM, NextSentencePrediction, MultipleChoice, TokenClassification}` alongside their tests and documentation.\r\n\r\nWill solve the two remaining tests on Monday, put the TensorFlow checkpoints on S3 and we'll be good to merge!",
"Thanks for your reviews @patrickvonplaten @sgugger!"
] | 1,591 | 1,592 | 1,592 | CONTRIBUTOR | null | Grabbed the code from https://github.com/lonePatient/MobileBert_PyTorch and added the question answering downstream task.
Should address #4185. Also got the backbone weights' representation in the Pytorch/transformers format (i.e. the `pytorch_model.bin`, `config.json` and `vocab.txt` files) via converting the original TF [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) checkpoint - need guidance on how to upload these if necessary.
The converted backbone weights, loaded in transformers via the PyTorch checkpoint loading method, roughly reproduce the original paper's SST-2 results: the paper claims 92.8% accuracy, while I got 91.7% using the hyperparameters in https://github.com/lonePatient/MobileBert_PyTorch. Not sure if there is another fast way to confirm that these weights indeed correspond to the original pretrained MobileBert backbone. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4901/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4901",
"html_url": "https://github.com/huggingface/transformers/pull/4901",
"diff_url": "https://github.com/huggingface/transformers/pull/4901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4901.patch",
"merged_at": 1592599117000
} |
https://api.github.com/repos/huggingface/transformers/issues/4900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4900/comments | https://api.github.com/repos/huggingface/transformers/issues/4900/events | https://github.com/huggingface/transformers/issues/4900 | 636,133,342 | MDU6SXNzdWU2MzYxMzMzNDI= | 4,900 | Latest version of transformers available via conda-forge? | {
"login": "brik3012",
"id": 30080120,
"node_id": "MDQ6VXNlcjMwMDgwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/30080120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brik3012",
"html_url": "https://github.com/brik3012",
"followers_url": "https://api.github.com/users/brik3012/followers",
"following_url": "https://api.github.com/users/brik3012/following{/other_user}",
"gists_url": "https://api.github.com/users/brik3012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brik3012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brik3012/subscriptions",
"organizations_url": "https://api.github.com/users/brik3012/orgs",
"repos_url": "https://api.github.com/users/brik3012/repos",
"events_url": "https://api.github.com/users/brik3012/events{/privacy}",
"received_events_url": "https://api.github.com/users/brik3012/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | On conda-forge, the latest available version is 2.1.1.
Why has it not been updated to the latest release? Are there plans to update it?
We work in a restricted environment and are forced to use the conda-forge channel.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4900/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4899/comments | https://api.github.com/repos/huggingface/transformers/issues/4899/events | https://github.com/huggingface/transformers/issues/4899 | 636,129,917 | MDU6SXNzdWU2MzYxMjk5MTc= | 4,899 | Error using inputs_embeds argument in TFXLNetModel | {
"login": "rojagtap",
"id": 42299342,
"node_id": "MDQ6VXNlcjQyMjk5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rojagtap",
"html_url": "https://github.com/rojagtap",
"followers_url": "https://api.github.com/users/rojagtap/followers",
"following_url": "https://api.github.com/users/rojagtap/following{/other_user}",
"gists_url": "https://api.github.com/users/rojagtap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rojagtap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rojagtap/subscriptions",
"organizations_url": "https://api.github.com/users/rojagtap/orgs",
"repos_url": "https://api.github.com/users/rojagtap/repos",
"events_url": "https://api.github.com/users/rojagtap/events{/privacy}",
"received_events_url": "https://api.github.com/users/rojagtap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey, @patrickvonplaten I observed the same with TFBertModel. So pretty much evident that the issue must be with the parent (if at all that helps)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> \r\n> \r\n> While using the TFXLNetModel:\r\n> `xlnet = TFXLNetModel.from_pretrained('xlnet-base-cased')`\r\n> according to the docs, `input_ids` and `inputs_embeds` can be optionally used. However, when I tried using:\r\n> \r\n> `xlnet(inputs_embeds=embeddings, attention_mask=attn_masks)[0]`\r\n> it throws: `ValueError: The first argument to Layer.call must always be passed.`\r\n\r\ntry something like this:\r\nxlnet({'attention_mask':attention_mask, 'token_type_ids':token_type_ids},inputs_embeds=embeddings, training=training)\r\n\r\nworked for me when getting the same error for a bert model\r\n",
"```python\r\nmodel_outputs = self.transformer(\r\n input_ids=None, # add this line\r\n inputs_embeds=dense_feature,\r\n attention_mask=attention_mask\r\n)\r\n```",
"Is this still relevant? Gently pinging @gante @Rocketknight1 here ",
"I got around the issue by changing the code from this\r\n```\r\nconfig = MobileBertConfig()\r\nmbert = TFMobileBertModel(config)\r\nmbert(inputs={\"input_ids\":input_ids, \"attention_mask\":attention_mask})\r\n```\r\nto \r\n```\r\nconfig = MobileBertConfig()\r\nmbert = TFMobileBertModel(config)\r\nmbert(input_ids=input_ids, attention_mask=attention_mask)\r\n```\r\n\r\ntransformers version : 4.22.1"
] | 1,591 | 1,664 | 1,604 | NONE | null | While using the TFXLNetModel:
`xlnet = TFXLNetModel.from_pretrained('xlnet-base-cased')`
according to the docs, either `input_ids` or `inputs_embeds` can optionally be used. However, when I tried using:
`xlnet(inputs_embeds=embeddings, attention_mask=attn_masks)[0]`
it throws: `ValueError: The first argument to Layer.call must always be passed.`
which I took to be an issue with the `inputs` argument, which must be passed positionally:
`xlnet(inputs=None, inputs_embeds=embeddings, attention_mask=attn_masks)[0]` using this gave me:
`RuntimeError: Attempting to capture an EagerTensor without building a function.`
And finally passing both `inputs` and `inputs_embeds` gave: `ValueError: You cannot specify both input_ids and inputs_embeds at the same time`
Can someone suggest a workaround for this?
P.S. The `embeddings` variable is the `last_hidden_state` from another BERT, which matches the shape the config expects for `inputs_embeds`.
Note that passing the `input_ids` parameter doesn't help either: it gave the same error whenever I didn't use the `inputs` argument. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4898/comments | https://api.github.com/repos/huggingface/transformers/issues/4898/events | https://github.com/huggingface/transformers/pull/4898 | 636,129,902 | MDExOlB1bGxSZXF1ZXN0NDMyMzU4ODgz | 4,898 | update via web | {
"login": "alberduris",
"id": 7073086,
"node_id": "MDQ6VXNlcjcwNzMwODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7073086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alberduris",
"html_url": "https://github.com/alberduris",
"followers_url": "https://api.github.com/users/alberduris/followers",
"following_url": "https://api.github.com/users/alberduris/following{/other_user}",
"gists_url": "https://api.github.com/users/alberduris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alberduris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alberduris/subscriptions",
"organizations_url": "https://api.github.com/users/alberduris/orgs",
"repos_url": "https://api.github.com/users/alberduris/repos",
"events_url": "https://api.github.com/users/alberduris/events{/privacy}",
"received_events_url": "https://api.github.com/users/alberduris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4898/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4898",
"html_url": "https://github.com/huggingface/transformers/pull/4898",
"diff_url": "https://github.com/huggingface/transformers/pull/4898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4898.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4897/comments | https://api.github.com/repos/huggingface/transformers/issues/4897/events | https://github.com/huggingface/transformers/issues/4897 | 636,120,883 | MDU6SXNzdWU2MzYxMjA4ODM= | 4,897 | KeyError when using non-default models in Huggingface transformers pipeline | {
"login": "hesamuel",
"id": 60181534,
"node_id": "MDQ6VXNlcjYwMTgxNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/60181534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hesamuel",
"html_url": "https://github.com/hesamuel",
"followers_url": "https://api.github.com/users/hesamuel/followers",
"following_url": "https://api.github.com/users/hesamuel/following{/other_user}",
"gists_url": "https://api.github.com/users/hesamuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hesamuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hesamuel/subscriptions",
"organizations_url": "https://api.github.com/users/hesamuel/orgs",
"repos_url": "https://api.github.com/users/hesamuel/repos",
"events_url": "https://api.github.com/users/hesamuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/hesamuel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The \"sentiment-analysis\" pipeline is only compatible with text classification models, i.e. those that can be loaded without error with `AutoModelForSequenceClassification`"
] | 1,591 | 1,591 | 1,591 | NONE | null |
I have no problems using the default model in the sentiment analysis pipeline.
```python
# Allocate a pipeline for sentiment-analysis
nlp = pipeline('sentiment-analysis')
nlp('I am a black man.')
>>>[{'label': 'NEGATIVE', 'score': 0.5723695158958435}]
```
But when I try to customise the pipeline a little by specifying a particular model, it throws a KeyError.
```
nlp = pipeline('sentiment-analysis',
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"),
model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational"))
nlp('I am a black man.')
>>>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-55-af7e46d6c6c9> in <module>
3 tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational"),
4 model = AutoModelWithLMHead.from_pretrained("DeepPavlov/bert-base-cased-conversational"))
----> 5 nlp('I am a black man.')
6
7
~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
721 outputs = super().__call__(*args, **kwargs)
722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
--> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores]
724
725
~/opt/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)
721 outputs = super().__call__(*args, **kwargs)
722 scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
--> 723 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max().item()} for item in scores]
724
725
KeyError: 58129
```
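For reference, a sketch of a variant that avoids the `KeyError`, based on the note above that this pipeline requires a sequence-classification model; the checkpoint name below is only an illustrative assumption:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# "sentiment-analysis" expects a sequence-classification head, not an LM head
name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
nlp = pipeline(
    "sentiment-analysis",
    model=AutoModelForSequenceClassification.from_pretrained(name),
    tokenizer=AutoTokenizer.from_pretrained(name),
)
print(nlp("I am a black man."))
```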
Question on Stack Overflow: https://stackoverflow.com/questions/62300836/keyerror-when-using-non-default-models-in-huggingface-transformers-pipeline
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4896/comments | https://api.github.com/repos/huggingface/transformers/issues/4896/events | https://github.com/huggingface/transformers/pull/4896 | 636,108,996 | MDExOlB1bGxSZXF1ZXN0NDMyMzQyMTc5 | 4,896 | [WIP] Add early stopping to the trainer | {
"login": "primaprashant",
"id": 18608293,
"node_id": "MDQ6VXNlcjE4NjA4Mjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/18608293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/primaprashant",
"html_url": "https://github.com/primaprashant",
"followers_url": "https://api.github.com/users/primaprashant/followers",
"following_url": "https://api.github.com/users/primaprashant/following{/other_user}",
"gists_url": "https://api.github.com/users/primaprashant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/primaprashant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/primaprashant/subscriptions",
"organizations_url": "https://api.github.com/users/primaprashant/orgs",
"repos_url": "https://api.github.com/users/primaprashant/repos",
"events_url": "https://api.github.com/users/primaprashant/events{/privacy}",
"received_events_url": "https://api.github.com/users/primaprashant/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Well that was quick. Awesome! Let me know if you need some help (for the pytorch part) or code review.",
"Looks like a duplicate of https://github.com/huggingface/transformers/pull/4186",
"> Looks like a duplicate of #4186\r\n\r\nYou are absolutely right. Closing this in favour of https://github.com/huggingface/transformers/pull/4186"
] | 1,591 | 1,592 | 1,592 | NONE | null | closes #4894 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4896",
"html_url": "https://github.com/huggingface/transformers/pull/4896",
"diff_url": "https://github.com/huggingface/transformers/pull/4896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4896.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4895/comments | https://api.github.com/repos/huggingface/transformers/issues/4895/events | https://github.com/huggingface/transformers/issues/4895 | 636,099,377 | MDU6SXNzdWU2MzYwOTkzNzc= | 4,895 | How do I fine-tune hyperparameters for a model from Huggingface library | {
"login": "thak123",
"id": 3891859,
"node_id": "MDQ6VXNlcjM4OTE4NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3891859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thak123",
"html_url": "https://github.com/thak123",
"followers_url": "https://api.github.com/users/thak123/followers",
"following_url": "https://api.github.com/users/thak123/following{/other_user}",
"gists_url": "https://api.github.com/users/thak123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thak123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thak123/subscriptions",
"organizations_url": "https://api.github.com/users/thak123/orgs",
"repos_url": "https://api.github.com/users/thak123/repos",
"events_url": "https://api.github.com/users/thak123/events{/privacy}",
"received_events_url": "https://api.github.com/users/thak123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can take a look at this blog post https://mccormickml.com/2019/07/22/BERT-fine-tuning/",
"Thank You"
] | 1,591 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
## Details
Hi, I am new to the Hugging Face library and want to fine-tune the hyperparameters of the mBERT model. I have a simple classification head on top of the CLS token and am getting around 76% accuracy.
Is there a README or notebook available for doing this?
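For reference, the search grid recommended in the original BERT paper (Appendix A.3) is a common starting point; treat the values below as a sketch, not tuned settings:
```python
# fine-tuning hyperparameter grid suggested in the BERT paper
search_space = {
    "learning_rate": [5e-5, 3e-5, 2e-5],
    "batch_size": [16, 32],
    "num_epochs": [2, 3, 4],
}
```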
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4894/comments | https://api.github.com/repos/huggingface/transformers/issues/4894/events | https://github.com/huggingface/transformers/issues/4894 | 636,041,737 | MDU6SXNzdWU2MzYwNDE3Mzc= | 4,894 | 🚀 Add early stopping to the trainer | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Looking at the interest this topic has, I am bumping it to re-open it.",
"Hi,\r\n\r\nSo when #4186 is closed, this will close as well? Or is there any more changes expected. on this issue, apart from what #4186 adds?\r\n\r\nThanks\r\n",
"If I've understood things correctly, I think #4186 only addresses the Pytorch implementation of the trainer. @BramVanroy if that's the case I'm happy to work on implementing this feature in Tensorflow (trainer_tf.py).",
"@san7988 @KMFODA This issue should not directly be closed when that PR is merged because as @KMFODA mentions, it only seems to address PyTorch. A PR for Tensorflow is also welcome!",
"Thanks for clarifying @BramVanroy. Apologies I was out for the past month due to a personal issue. I'll submit a PR for Tensorflow early stopping now.",
"An early stopping callback has now been introduced in the PyTorch trainer by @cbrochtrup! 👏 \r\n\r\nAFAIK the implementation the TF Trainer is still under way (https://github.com/huggingface/transformers/pull/7533) so I'll keep this topic open for now.",
"I gather from the conversation on #7533 that this issue should now be closed; is that correct, @BramVanroy ?"
] | 1,591 | 1,646 | 1,646 | COLLABORATOR | null | # 🚀 Feature request
The trainer ([pt](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py), [tf](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py)) is an easy access point for users who would rather not spend too much time building their own trainer class but prefer an out-of-the-box solution. Even though `transformers` was never meant to be a fully fledged training library, it might please users to add an additional feature: early stopping.
## Motivation
Early stopping ensures that the trainer does not needlessly keep training when the loss does not improve. This saves time, money, and let's not forget the trees. 😉 Performance-wise this should not lead to different results.
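A minimal sketch of the patience logic this implies (class name, method names, and defaults below are assumptions for illustration, not an existing `transformers` API):
```python
class EarlyStopper:
    """Signals a stop when the eval loss hasn't improved for `patience` evaluations."""

    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience    # evaluations to wait before stopping
        self.min_delta = min_delta  # minimal improvement that resets the counter
        self.best_loss = float("inf")
        self.counter = 0

    def should_stop(self, eval_loss: float) -> bool:
        if eval_loss < self.best_loss - self.min_delta:
            self.best_loss = eval_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience
```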
## Your contribution
At the moment I cannot work on this, but here are my thoughts:
- a training argument should be added ([pt](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py), [tf](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py)). This would only work when `evaluate_during_training` is enabled.
- for PyTorch: at every evaluation step, an early stopper (it can even be a separate class) checks if the loss has improved in the last n steps, potentially with a minimal threshold by which the loss should have improved (a minimal sketch of this logic is given above). If not, the trainer should stop
- for Tensorflow: I don't have experience with TF myself, but I assume one could use [`tf.keras.callbacks.EarlyStopping`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4894/reactions",
"total_count": 34,
"+1": 34,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4893/comments | https://api.github.com/repos/huggingface/transformers/issues/4893/events | https://github.com/huggingface/transformers/issues/4893 | 636,015,473 | MDU6SXNzdWU2MzYwMTU0NzM= | 4,893 | 🐛 TPU Training broken due to recent changes | {
"login": "misrasaurabh1",
"id": 1271289,
"node_id": "MDQ6VXNlcjEyNzEyODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/misrasaurabh1",
"html_url": "https://github.com/misrasaurabh1",
"followers_url": "https://api.github.com/users/misrasaurabh1/followers",
"following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}",
"gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions",
"organizations_url": "https://api.github.com/users/misrasaurabh1/orgs",
"repos_url": "https://api.github.com/users/misrasaurabh1/repos",
"events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}",
"received_events_url": "https://api.github.com/users/misrasaurabh1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"See https://github.com/huggingface/transformers/issues/4814 as well. Over there the TPU evaluation is broken.\r\nTo make the TPU pipeline reliable, a end-to-end test could really help.",
"Hi! Thank you for raising this issue, I'll take a look.\r\n\r\nOf course, having an end-to-end test would really help. Unfortunately, such suites don't exist with TPU right now.",
"Thank you for fixing this so quickly!"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
It looks like recent changes in file_utils.py have broken TPU training. Reverting transformers to a version before https://github.com/huggingface/transformers/commit/2cfb947f59861d5d910f84eba3be57da200b5599 fixes the problem.
## Information
It seems that file_utils.py tries to reinitialize the TPU system as soon as it is imported. This fails because xla_spawn.py has already initialized the TPU.
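The import-time probe that appears to trigger the re-initialization, paraphrased from `file_utils.py` line 76 as shown in the trace below (running it requires torch_xla on a TPU host):
```python
import torch_xla.core.xla_model as xm

# executed at module import time, so it races with xla_spawn's own TPU setup
tpu_device = xm.xla_device()
```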
Model I am using (Bert, XLNet ...): roberta (but doesn't matter)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
With a setup capable of training on TPU, replicating the official language modeling example
```
/transformers/examples$ python xla_spawn.py --num_cores 8 language-modeling/run_language_modeling.py --output_dir=output --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The failure stack trace:
```
File "/home/saurabh/chat-ai/vendor/transformers/examples/language-modeling/run_language_modeling.py", line 29, in <module>
self = reduction.pickle.load(from_parent)
from transformers import (
File "/home/saurabh/chat-ai/vendor/transformers/examples/language-modeling/run_language_modeling.py", line 29, in <module>
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module>
from transformers import (
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module>
from transformers import (
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/__init__.py", line 23, in <module>
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul
e>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul
e>
from .configuration_utils import PretrainedConfig
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module
> from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <modul
e>
from .configuration_utils import PretrainedConfig
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module
>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in <module>
from .configuration_utils import PretrainedConfig
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/configuration_utils.py", line 25, in <module
>
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in <module>
tpu_device = xm.xla_device()
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/file_utils.py", line 76, in <module>
tpu_device = xm.xla_device()
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device
tpu_device = xm.xla_device()
devkind=[devkind] if devkind is not None else None)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 146, in xla_device
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 50, in get_xla_support
ed_devices
devkind=[devkind] if devkind is not None else None)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 50, in get_xla_support
ed_devices
xla_devices = torch_xla._XLAC._xla_get_devices()
devkind=[devkind] if devkind is not None else None)
RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:1245 : Check failed: session.Run({tensorflow::Output
(result, 0)}, &outputs) == ::tensorflow::Status::OK() (Already exists: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
(0) Already exists: Resource localhost/tpu_mesh_common_state/N10tensorflow3tpu21TpuMeshStateInterfaceE
[[{{node configure_distributed_tpu/_0}}]]
(1) Already exists: Resource localhost/tpu_mesh_common_state/N10tensorflow3tpu21TpuMeshStateInterfaceE
[[{{node configure_distributed_tpu/_0}}]]
0 successful operations.
0 derived errors ignored. vs. OK)
```
## Expected behavior
Model trains
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0 (master)
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0a0+af05158 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes, 8 way parallelism with xla_spawn.py
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4892/comments | https://api.github.com/repos/huggingface/transformers/issues/4892/events | https://github.com/huggingface/transformers/issues/4892 | 635,991,463 | MDU6SXNzdWU2MzU5OTE0NjM= | 4,892 | Training RoBerta using transformers on masked language task giving weird results | {
"login": "cabhijith",
"id": 45108441,
"node_id": "MDQ6VXNlcjQ1MTA4NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/45108441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cabhijith",
"html_url": "https://github.com/cabhijith",
"followers_url": "https://api.github.com/users/cabhijith/followers",
"following_url": "https://api.github.com/users/cabhijith/following{/other_user}",
"gists_url": "https://api.github.com/users/cabhijith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cabhijith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cabhijith/subscriptions",
"organizations_url": "https://api.github.com/users/cabhijith/orgs",
"repos_url": "https://api.github.com/users/cabhijith/repos",
"events_url": "https://api.github.com/users/cabhijith/events{/privacy}",
"received_events_url": "https://api.github.com/users/cabhijith/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Still facing this problem. Has anyone else encountered something similar? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I also face this problem. How to solve it?"
] | 1,591 | 1,628 | 1,600 | NONE | null | # ❓ Questions & Help
## Details
I trained a RoBERTa model following this colab - https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=XaFAsB_fnU3K
Here is how my data looked:
```
Merkel bemoans lack of rain as Germany fears for its forests .\n
Germany’s forests, covering a third of its territory and as much a part of its cultural landscape as its physical one, are in danger.\n
An aerial view shows a forest near Gummersbach, Germany, April 24, 2020, following an unusually warm, dry winter after a summer of record temperatures leaving forests dried out.\n
Picture taken with a drone.\n
The last two exceptionally hot and dry summers have weakened millions of trees, undermining their defences against the bark beetle, which can be fatal to ancient woodlands.\n
And after an exceptionally dry April, with summer still two months away, a forest fire has already had to be put out near the town of Gummersbach in western Germany this week.\n
“We’re already noticing these days that it’s not raining enough in many areas.
```
After training the model, I used `pipeline` from the transformers library for the fill-mask task:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="./output",
    tokenizer="./output",
)
fill_mask("Merkel bemoans lack of rain as <mask> fears for its forests")
```
These are the results:
```
[{'sequence': '<s> Merkel bemoans lack of rain as. fears for its forests</s>',
'score': 0.040456026792526245,
'token': 18},
{'sequence': '<s> Merkel bemoans lack of rain as, fears for its forests</s>',
'score': 0.03502459451556206,
'token': 16},
{'sequence': '<s> Merkel bemoans lack of rain as the fears for its forests</s>',
'score': 0.03497963398694992,
'token': 269},
{'sequence': '<s> Merkel bemoans lack of rain as\n fears for its forests</s>',
'score': 0.03180328756570816,
'token': 203},
{'sequence': '<s> Merkel bemoans lack of rain as to fears for its forests</s>',
'score': 0.020796578377485275,
'token': 288}]
```
As you can see, no meaningful words are returned, only punctuation and one other word ("to") that doesn't make sense. What am I doing wrong here? Do I have to remove all punctuation?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/62276011/training-roberta-using-transformers-on-masked-language-task-giving-weird-results
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4891/comments | https://api.github.com/repos/huggingface/transformers/issues/4891/events | https://github.com/huggingface/transformers/issues/4891 | 635,938,166 | MDU6SXNzdWU2MzU5MzgxNjY= | 4,891 | 🐛 [TFTrainer] `dataloader_drop_last` unused | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for bringing this up, @Colanim! I worked on #4757, and didn't realize the same could automatically extend to TFTrainer as well.\r\n\r\nPlease take a look at #4925 to see if it'd solve this for you."
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
The argument `dataloader_drop_last` appears to be unused in `TFTrainer`. This is a problem when we need a static batch size.
https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/trainer_tf.py#L68-L73
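For illustration, a minimal sketch of how the flag could be forwarded when batching (given a `tf.data.Dataset` named `train_dataset` and the parsed `args`; these names are assumptions, not the actual `TFTrainer` code):
```
# drop_remainder=True discards the final short batch; that is what
# `dataloader_drop_last` should control, and it keeps batch shapes static.
train_dataset = train_dataset.shuffle(128).batch(
    args.train_batch_size,
    drop_remainder=args.dataloader_drop_last,
)
```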
## Expected behavior
The argument `dataloader_drop_last` is used when batching the dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4890/comments | https://api.github.com/repos/huggingface/transformers/issues/4890/events | https://github.com/huggingface/transformers/issues/4890 | 635,851,233 | MDU6SXNzdWU2MzU4NTEyMzM= | 4,890 | encode_plus( ) function for the GPT-2 Tokenizer | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure, and you can use the `add_prefix_space` flag to do that."
] | 1,591 | 1,591 | 1,591 | NONE | null | Hello,
The GPT-2 Tokenizer section of the Hugging Face Transformers documentation says:
```
GPT-2 BPE tokenizer.
Peculiarities:
Byte-level Byte-Pair-Encoding
Requires a space to start the input string => the encoding methods should be called with the add_prefix_space flag set to True. Otherwise, this tokenizer encode and decode method will not conserve the absence of a space at the beginning of a string:
```
If I use the `encode_plus()` function (not `encode()`) to encode my sentences (doing something like `encode_plus("Hi there")['input_ids']` instead of calling `encode()` directly), do I still need to place a space at the start of every input string?
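For concreteness, here is a minimal sketch of what I mean (the model name and the assumption that `encode_plus` forwards the flag are mine):
```
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Passing the flag instead of manually prepending a space to the string:
encoding = tokenizer.encode_plus("Hi there", add_prefix_space=True)
print(encoding['input_ids'])
```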
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4890/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4889/comments | https://api.github.com/repos/huggingface/transformers/issues/4889/events | https://github.com/huggingface/transformers/pull/4889 | 635,850,122 | MDExOlB1bGxSZXF1ZXN0NDMyMTM5NjAy | 4,889 | [RFC] Tokenizer.prepare_seq2seq_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=h1) Report\n> Merging [#4889](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `16.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4889 +/- ##\n==========================================\n- Coverage 77.00% 76.96% -0.04% \n==========================================\n Files 128 128 \n Lines 21602 21614 +12 \n==========================================\n+ Hits 16634 16636 +2 \n- Misses 4968 4978 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `88.68% <16.66%> (-1.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.49% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=footer). Last update [6e603cb...560abea](https://codecov.io/gh/huggingface/transformers/pull/4889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I don't think this method is really necessary. Also, it doesn't allow to prepare decider_inputs in batches since kwargs is only given to then encoder_inputs",
"I agree with @patrickvonplaten .\r\n\r\nAlso I'm currently removing `trim_batch` from `tokenization_utils` since only the BART summarization example uses it.\r\n\r\nI think it's better to keep all these small helpers in examples scripts unless you find other members of the team interested in using them as well or unless you can propose a larger modification of the general API to incorporate them seamlessly in the general abstractions we use (tokenizers, etc.).",
"I think @joeddav used `trim_batch`, but fine to keep it in examples/"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | This introduces a method, `Tokenizer.prepare_seq2seq_batch`, that calls `batch_encode_plus` twice and prepares inputs for seq2seq models.
The seq2seq finetuning example and some seq2seq unit tests call `batch_encode_plus` twice. This seems like it should be the work of the tokenizer, and `MarianTokenizer`, `BartTokenizer`, and `T5Tokenizer` can expose/override this method; a rough sketch follows.
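To make the proposal concrete (the exact signature and padding kwargs are assumptions, not a final API):
```
def prepare_seq2seq_batch(self, src_texts, tgt_texts=None, max_length=None, **kwargs):
    # Encode the source side once...
    model_inputs = self.batch_encode_plus(
        src_texts, max_length=max_length, pad_to_max_length=True, return_tensors="pt", **kwargs
    )
    if tgt_texts is None:
        return model_inputs
    # ...and the target side once, instead of every caller doing both.
    decoder_inputs = self.batch_encode_plus(
        tgt_texts, max_length=max_length, pad_to_max_length=True, return_tensors="pt", **kwargs
    )
    model_inputs["decoder_input_ids"] = decoder_inputs["input_ids"]
    return model_inputs
```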
Wondering what others think before I add tests/fix callers.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4889",
"html_url": "https://github.com/huggingface/transformers/pull/4889",
"diff_url": "https://github.com/huggingface/transformers/pull/4889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4889.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4888/comments | https://api.github.com/repos/huggingface/transformers/issues/4888/events | https://github.com/huggingface/transformers/issues/4888 | 635,839,787 | MDU6SXNzdWU2MzU4Mzk3ODc= | 4,888 | Previous commit introduces bug in `convert_pytorch_checkpoint_to_tf2.py` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I encountered the same problem.\r\nIf you were converting a local `pytorch_model.bin` model, you can try this\r\n\r\nsomewhere around line 258 in `convert_pytorch_checkpoint_to_tf2.py`\r\n\r\n```\r\naws_model_maps = {}\r\nconfig_class, model_class, pt_model_class, aws_config_map = MODEL_CLASSES[model_type]\r\n# config_class, model_class, pt_model_class, aws_model_maps, aws_config_map = MODEL_CLASSES[model_type]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,599 | 1,599 | MEMBER | null | This issue concerns the conversion tool `convert_pytorch_checkpoint_to_tf2.py`.
Commit d4c2cb402d6674211726fd5f4803d1090664e438 removed the `*PRETRAINED_MODEL_ARCHIVE_MAP` imports, so that each key in `MODEL_CLASSES` is now associated with a set of 4 values.
For instance: https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L109
However, in `convert_all_pt_checkpoints_to_tf`, `MODEL_CLASSES` is still expected to unpack 5 values (among which the model maps), which raises an error:
https://github.com/huggingface/transformers/blob/e8db8b845a971b0cf63a0896b9deb5b316028a8b/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L259
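A minimal illustration of the mismatch and one possible local fix (a sketch, not a tested patch):
```
# MODEL_CLASSES values now hold 4 items each, so the old 5-tuple unpack fails
# with "ValueError: not enough values to unpack". Dropping the removed map:
config_class, model_class, pt_model_class, aws_config_map = MODEL_CLASSES[model_type]
```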
A typical command is:
```bash
python src/transformers/convert_pytorch_checkpoint_to_tf2.py \
--tf_dump_path serialization_dir/weights_release/1st_weight_release/prunebert-base-uncased-6-finepruned-w-distil-squad/ \
--model_type bert-large-uncased-whole-word-masking-finetuned-squad \
--pytorch_checkpoint_path /serialization_dir/weights_release/1st_weight_release/prunebert-base-uncased-6-finepruned-w-distil-squad/ \
--compare_with_pt_model
```
I didn't really follow why the `*PRETRAINED_MODEL_ARCHIVE_MAP` maps were removed, so I'm not sure what the best course of action is here.
Victor | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4888/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4887/comments | https://api.github.com/repos/huggingface/transformers/issues/4887/events | https://github.com/huggingface/transformers/pull/4887 | 635,812,913 | MDExOlB1bGxSZXF1ZXN0NDMyMTEwMDkz | 4,887 | warn with FutureWarning when using `output_attentions` in the configu… | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=h1) Report\n> Merging [#4887](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13aa174112f0c2ee794c44188ecf13b241694db0&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `80.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4887 +/- ##\n==========================================\n+ Coverage 76.97% 76.99% +0.01% \n==========================================\n Files 128 128 \n Lines 21602 21607 +5 \n==========================================\n+ Hits 16629 16636 +7 \n+ Misses 4973 4971 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.80% <80.00%> (-0.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4887/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=footer). Last update [13aa174...513ba3b](https://codecov.io/gh/huggingface/transformers/pull/4887?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Are we actually planning on completely removing `output_attentions` from the config? I changed my mind a bit in that I think we can keep the hierarchy 1. use function argument 2. if nothing is provided => use config parameter as is done for the generation arguments. Also another small advantage of keeping it in the config would be that attentions can easily be outputted when using torchscript. \r\n\r\nWhat is your opinion on that @thomwolf ?",
"Ah, I wasn't aware of that, I thought we were deprecating them to be later removed :sweat_smile: \r\n\r\nIn that case, we should add back the documentation regarding `output_attentions` that [was removed](https://github.com/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6#diff-0f9b535706b4f09eb22f7189c6c9039cL46-L49) in #4538.",
"I wanted to do a PR about this and also add `use_cache` correctly back to the configs",
"Cool, sounds good @patrickvonplaten "
] | 1,591 | 1,651 | 1,591 | MEMBER | null | …ration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4887/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4887",
"html_url": "https://github.com/huggingface/transformers/pull/4887",
"diff_url": "https://github.com/huggingface/transformers/pull/4887.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4887.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4886/comments | https://api.github.com/repos/huggingface/transformers/issues/4886/events | https://github.com/huggingface/transformers/pull/4886 | 635,778,420 | MDExOlB1bGxSZXF1ZXN0NDMyMDgxODc4 | 4,886 | Deal with multiple choice in common tests | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=h1) Report\n> Merging [#4886](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4886 +/- ##\n==========================================\n+ Coverage 76.46% 76.56% +0.09% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n+ Hits 16442 16463 +21 \n+ Misses 5060 5039 -21 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=footer). Last update [02e5f79...b78ed3a](https://codecov.io/gh/huggingface/transformers/pull/4886?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,595 | 1,591 | COLLABORATOR | null | It's a bit heavy, but I didn't find another way to reshape the inputs when needed for the multiple choice model. With this, and by skipping the `input_embeds` test when the model is a multiple choice one (the current implementation requires `input_ids`), I managed to get the common tests passing for `BertForMultipleChoice`.
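As an illustration of the reshaping involved (shapes and names are assumptions about the test setup, not the merged code):
```
# Common tests build inputs of shape (batch_size, seq_length); multiple-choice
# heads expect (batch_size, num_choices, seq_length), so tile a new choice axis:
input_ids = input_ids.unsqueeze(1).expand(-1, num_choices, -1).contiguous()
```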
Let me know if you have other ideas! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4886/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/4886/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4886",
"html_url": "https://github.com/huggingface/transformers/pull/4886",
"diff_url": "https://github.com/huggingface/transformers/pull/4886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4886.patch",
"merged_at": 1591791021000
} |
https://api.github.com/repos/huggingface/transformers/issues/4885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4885/comments | https://api.github.com/repos/huggingface/transformers/issues/4885/events | https://github.com/huggingface/transformers/pull/4885 | 635,731,376 | MDExOlB1bGxSZXF1ZXN0NDMyMDQyOTE3 | 4,885 | Add AlbertForMultipleChoice | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=h1) Report\n> Merging [#4885](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0340b30310cb78555c6f78bed7262101f251940&el=desc) will **increase** coverage by `0.10%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4885 +/- ##\n==========================================\n+ Coverage 76.47% 76.57% +0.10% \n==========================================\n Files 128 128 \n Lines 21502 21528 +26 \n==========================================\n+ Hits 16443 16486 +43 \n+ Misses 5059 5042 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.27% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.68% <100.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=footer). Last update [f0340b3...09b0fd5](https://codecov.io/gh/huggingface/transformers/pull/4885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This needs rework now that #4921 has been merged, so do not merge just yet.",
"Ugh, rebase went wrong, closing..."
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | Another [model missing](https://github.com/huggingface/transformers/projects/17).
While implementing it I noticed two things:
- the example in `BertForMultipleChoice` wasn't working, so I fixed it.
- some model classes were missing from `all_model_classes` in the tests; I fixed this for ALBERT and BERT and will look at the other tests in a separate, dedicated PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4885/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4885",
"html_url": "https://github.com/huggingface/transformers/pull/4885",
"diff_url": "https://github.com/huggingface/transformers/pull/4885.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4885.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4884/comments | https://api.github.com/repos/huggingface/transformers/issues/4884/events | https://github.com/huggingface/transformers/pull/4884 | 635,716,709 | MDExOlB1bGxSZXF1ZXN0NDMyMDMwODg4 | 4,884 | Fix a bug in the initialization and serialization of TFRobertaClassificationHead | {
"login": "harkous",
"id": 5602332,
"node_id": "MDQ6VXNlcjU2MDIzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5602332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harkous",
"html_url": "https://github.com/harkous",
"followers_url": "https://api.github.com/users/harkous/followers",
"following_url": "https://api.github.com/users/harkous/following{/other_user}",
"gists_url": "https://api.github.com/users/harkous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harkous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harkous/subscriptions",
"organizations_url": "https://api.github.com/users/harkous/orgs",
"repos_url": "https://api.github.com/users/harkous/repos",
"events_url": "https://api.github.com/users/harkous/events{/privacy}",
"received_events_url": "https://api.github.com/users/harkous/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=h1) Report\n> Merging [#4884](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4884 +/- ##\n==========================================\n+ Coverage 76.46% 76.49% +0.02% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n+ Hits 16442 16448 +6 \n+ Misses 5060 5054 -6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `55.55% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `86.48% <0.00%> (-6.31%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.87% <0.00%> (-0.57%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.46% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=footer). Last update [02e5f79...9693577](https://codecov.io/gh/huggingface/transformers/pull/4884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ahah @LysandreJik you have been too fast :smile: ",
"Not only this class should have been updated but all that inherit directly from `tf.keras.layers.Layer`. I will do the rest :) \r\n\r\nThanks a lot @harkous very nice catch!!!",
"Thanks!\r\n@jplu I tried to verify whether other classes that inherit directly from `tf.keras.layers.Layer` have the same issue but couldn't find any that directly passes `config`. Feel free to double check though.",
"Great, thanks a lot @jplu !",
"Oh I didn't know you have already checked that. It is more more than perfect then!! Sorry for my previous post, my bad."
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | For `TFRobertaClassificationHead`, `config` was being passed as the first parameter to the `__init__` of the parent class `tf.keras.layers.Layer`. The latter [expects](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) `trainable` as the first parameter.
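A minimal sketch of the bug and the fix (simplified; the real layer has more sub-layers):
```
import tensorflow as tf

class TFRobertaClassificationHead(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        # The buggy version called super().__init__(config), which bound `config`
        # to Layer's first positional parameter, `trainable`, and broke saving.
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.hidden_size, name="dense")
```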
This fixes #4709 and #3664, making the TFRoberta models serializable to `savedmodel` format too. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4884/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4884",
"html_url": "https://github.com/huggingface/transformers/pull/4884",
"diff_url": "https://github.com/huggingface/transformers/pull/4884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4884.patch",
"merged_at": 1591733642000
} |
https://api.github.com/repos/huggingface/transformers/issues/4883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4883/comments | https://api.github.com/repos/huggingface/transformers/issues/4883/events | https://github.com/huggingface/transformers/pull/4883 | 635,683,547 | MDExOlB1bGxSZXF1ZXN0NDMyMDAzNzQy | 4,883 | check type before logging in trainer to ensure values are scalars | {
"login": "mgoldey",
"id": 659477,
"node_id": "MDQ6VXNlcjY1OTQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/659477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mgoldey",
"html_url": "https://github.com/mgoldey",
"followers_url": "https://api.github.com/users/mgoldey/followers",
"following_url": "https://api.github.com/users/mgoldey/following{/other_user}",
"gists_url": "https://api.github.com/users/mgoldey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mgoldey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mgoldey/subscriptions",
"organizations_url": "https://api.github.com/users/mgoldey/orgs",
"repos_url": "https://api.github.com/users/mgoldey/repos",
"events_url": "https://api.github.com/users/mgoldey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mgoldey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=h1) Report\n> Merging [#4883](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4883 +/- ##\n=======================================\n Coverage 76.55% 76.56% \n=======================================\n Files 128 128 \n Lines 21502 21504 +2 \n=======================================\n+ Hits 16461 16464 +3 \n+ Misses 5041 5040 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.67% <66.66%> (+0.99%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=footer). Last update [9f5d5a5...9c9d9c3](https://codecov.io/gh/huggingface/transformers/pull/4883?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Wouldn't the more robust fix be to change these values to scalars if they're not?\r\n> \r\n> In the case were a string is passed (which seems to be your case), I think a warning would be better than a silently not registering anything. What do you think?\r\n\r\nI agree that logging the non-scalar values would be an improvement, and I'll update this PR to that effect. I haven't characterized what the exact error is, so I'm not sure that we can even cast the troublesome values to scalars. Thank you very much for the good feedback here @LysandreJik.",
"@LysandreJik please feel free to suggest a different log level or log message from that in a33e28b"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | This change was necessary to avoid https://github.com/lanpa/tensorboardX/issues/567, since a non-scalar value was being passed in. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4883",
"html_url": "https://github.com/huggingface/transformers/pull/4883",
"diff_url": "https://github.com/huggingface/transformers/pull/4883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4883.patch",
"merged_at": 1591827956000
} |
https://api.github.com/repos/huggingface/transformers/issues/4882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4882/comments | https://api.github.com/repos/huggingface/transformers/issues/4882/events | https://github.com/huggingface/transformers/pull/4882 | 635,664,337 | MDExOlB1bGxSZXF1ZXN0NDMxOTg3NTA2 | 4,882 | fix huggingface/tokenizers#297 in 0.8.0 | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=h1) Report\n> Merging [#4882](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4882 +/- ##\n==========================================\n+ Coverage 76.46% 76.56% +0.09% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n+ Hits 16442 16462 +20 \n+ Misses 5060 5040 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.69% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=footer). Last update [02e5f79...1ce1fd3](https://codecov.io/gh/huggingface/transformers/pull/4882?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4882",
"html_url": "https://github.com/huggingface/transformers/pull/4882",
"diff_url": "https://github.com/huggingface/transformers/pull/4882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4882.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4881/comments | https://api.github.com/repos/huggingface/transformers/issues/4881/events | https://github.com/huggingface/transformers/pull/4881 | 635,658,084 | MDExOlB1bGxSZXF1ZXN0NDMxOTgyMjY4 | 4,881 | Fix TensorFlow dataset generator | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice!! I didn't know that parameter ^^\r\n\r\nDoes it seems better now?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=h1) Report\n> Merging [#4881](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `7.69%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4881 +/- ##\n==========================================\n+ Coverage 76.46% 76.54% +0.07% \n==========================================\n Files 128 128 \n Lines 21502 21511 +9 \n==========================================\n+ Hits 16442 16465 +23 \n+ Misses 5060 5046 -14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.21% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <20.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=footer). Last update [02e5f79...2060038](https://codecov.io/gh/huggingface/transformers/pull/4881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok, if this one is ok I'm gonna update the other methods the same way.",
"Should be ok now.",
"@julien-c @thomwolf @LysandreJik any issue to merge this?",
"@LysandreJik anything else to merge?"
] | 1,591 | 1,594 | 1,593 | CONTRIBUTOR | null | Should fix #4856
The method `glue_convert_examples_to_features` returns a badly formatted TensorFlow dataset when the model in use, such as DistilBert, doesn't take `token_type_ids` as a feature.
The fix is to detect whether the feature `token_type_ids` should belong to the TensorFlow dataset or not. I'm not really happy with the fix; @julien-c and @LysandreJik, do you have a better way to handle this?
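A rough sketch of the detection idea (given the list of `InputFeatures` built earlier in the function; this is my sketch, not the merged implementation):
```
import tensorflow as tf

# Build the tf.data return signature dynamically: DistilBert features leave
# token_type_ids unset, so only declare the key when it is actually present.
return_types = {"input_ids": tf.int32, "attention_mask": tf.int32}
if features[0].token_type_ids is not None:
    return_types["token_type_ids"] = tf.int32
```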
Note: do not forget that the same fix should be applied to the other `xxx_examples_to_features` methods for the other dataset processors in `src/data/processors`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4881",
"html_url": "https://github.com/huggingface/transformers/pull/4881",
"diff_url": "https://github.com/huggingface/transformers/pull/4881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4881.patch",
"merged_at": 1593560952000
} |
https://api.github.com/repos/huggingface/transformers/issues/4880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4880/comments | https://api.github.com/repos/huggingface/transformers/issues/4880/events | https://github.com/huggingface/transformers/issues/4880 | 635,654,985 | MDU6SXNzdWU2MzU2NTQ5ODU= | 4,880 | AutoModelForSequenceClassification not working with prunebert model | {
"login": "atowey01",
"id": 50136999,
"node_id": "MDQ6VXNlcjUwMTM2OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/50136999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atowey01",
"html_url": "https://github.com/atowey01",
"followers_url": "https://api.github.com/users/atowey01/followers",
"following_url": "https://api.github.com/users/atowey01/following{/other_user}",
"gists_url": "https://api.github.com/users/atowey01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atowey01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atowey01/subscriptions",
"organizations_url": "https://api.github.com/users/atowey01/orgs",
"repos_url": "https://api.github.com/users/atowey01/repos",
"events_url": "https://api.github.com/users/atowey01/events{/privacy}",
"received_events_url": "https://api.github.com/users/atowey01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is because the `huggingface/prunebert-xxx` configurations have a model type `masked_bert`.\r\n\r\nSince these files should be loaded directly in `BertForXXX` classes, it would probably be best to update that field to `bert`, right @julien-c?",
"Yes. Can you confirm @VictorSanh?",
"Seems to be working fine now, thanks @LysandreJik and @julien-c "
] | 1,591 | 1,591 | 1,591 | NONE | null | I am having issues loading the new prunebert model for sequence classification using AutoModelForSequenceClassification.from_pretrained().
```
from transformers import AutoModelForSequenceClassification, BertForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli")
```
The above code produces the error `KeyError: 'masked_bert'`.
The model loads fine using `BertForSequenceClassification.from_pretrained`; the issue only seems to occur with `AutoModelForSequenceClassification`:
```
model = BertForSequenceClassification.from_pretrained('huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli')
```
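For reference, a hedged sketch of a local workaround, based on the explanation in the comments that these checkpoints ship with `model_type: masked_bert` in their config: rewriting that field to `bert` in a local copy lets the auto classes resolve the architecture. The `local-prunebert/` path is hypothetical, and the hosted config appears to have since been fixed upstream.
```
import json
from transformers import AutoModelForSequenceClassification

# Hypothetical local copy of the checkpoint (config.json + weights).
config_path = "local-prunebert/config.json"
with open(config_path) as f:
    cfg = json.load(f)
cfg["model_type"] = "bert"  # was "masked_bert", which has no entry in the auto mappings
with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)

model = AutoModelForSequenceClassification.from_pretrained("local-prunebert")
```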
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4879/comments | https://api.github.com/repos/huggingface/transformers/issues/4879/events | https://github.com/huggingface/transformers/pull/4879 | 635,618,011 | MDExOlB1bGxSZXF1ZXN0NDMxOTQ4OTk5 | 4,879 | [Draft] Prevent KeyError in QA pipeline | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,591 | 1,591 | MEMBER | null | closes #4873
With the question answering pipeline, the model sometimes selects an answer span that is out of bounds. This change ensures it never selects a token index beyond the last valid one, preventing `KeyError`s. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4879",
"html_url": "https://github.com/huggingface/transformers/pull/4879",
"diff_url": "https://github.com/huggingface/transformers/pull/4879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4879.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4878/comments | https://api.github.com/repos/huggingface/transformers/issues/4878/events | https://github.com/huggingface/transformers/pull/4878 | 635,607,332 | MDExOlB1bGxSZXF1ZXN0NDMxOTM5OTA3 | 4,878 | BartTokenizerFast | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=h1) Report\n> Merging [#4878](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86578bb04c9b34f9d8e35cd4fad42a85910dd9e9&el=desc) will **decrease** coverage by `0.38%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4878 +/- ##\n==========================================\n- Coverage 77.55% 77.16% -0.39% \n==========================================\n Files 128 128 \n Lines 21791 21794 +3 \n==========================================\n- Hits 16899 16818 -81 \n- Misses 4892 4976 +84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <100.00%> (+0.38%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.73% <0.00%> (+0.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=footer). Last update [86578bb...1752816](https://codecov.io/gh/huggingface/transformers/pull/4878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think we will need a test, there are no tests for `RobertaTokenizerFast` in `test_tokenization_roberta.py` file.",
"The test is in `test_tokenization_fast`, so I think we're OK on that front.\r\nGoing to wait for @n1t0 or @mfuntowicz to approve and merge, because they are working on this concurrently. "
] | 1,591 | 1,592 | 1,592 | MEMBER | null | This PR adds `BartTokenizerFast` by subclassing `RobertaTokenizerFast`.
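A minimal usage sketch (illustrative; the checkpoint name is an assumption, and the fast tokenizer reuses Roberta's byte-level BPE handling):
```
from transformers import BartTokenizerFast

# Same vocab/merges machinery as RobertaTokenizerFast; only the pretrained
# file maps and max lengths are Bart-specific.
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
ids = tokenizer.encode("Hello world")
print(tokenizer.decode(ids))
```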
@sshleifer @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4878/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4878",
"html_url": "https://github.com/huggingface/transformers/pull/4878",
"diff_url": "https://github.com/huggingface/transformers/pull/4878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4878.patch",
"merged_at": 1592154289000
} |
https://api.github.com/repos/huggingface/transformers/issues/4877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4877/comments | https://api.github.com/repos/huggingface/transformers/issues/4877/events | https://github.com/huggingface/transformers/issues/4877 | 635,574,329 | MDU6SXNzdWU2MzU1NzQzMjk= | 4,877 | ProphetNet | {
"login": "aretius",
"id": 18247856,
"node_id": "MDQ6VXNlcjE4MjQ3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18247856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aretius",
"html_url": "https://github.com/aretius",
"followers_url": "https://api.github.com/users/aretius/followers",
"following_url": "https://api.github.com/users/aretius/following{/other_user}",
"gists_url": "https://api.github.com/users/aretius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aretius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aretius/subscriptions",
"organizations_url": "https://api.github.com/users/aretius/orgs",
"repos_url": "https://api.github.com/users/aretius/repos",
"events_url": "https://api.github.com/users/aretius/events{/privacy}",
"received_events_url": "https://api.github.com/users/aretius/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@aretius Thank you for mentioning ProphetNet. ProphetNet for huggingface is sheduled as you suggested. ",
"@qiweizhen this sounds great, I would love to give it a go. Any planned date for delivering this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🌟 New model addition
ProphetNet
## Model description
ProphetNet introduces a novel self-supervised objective, future n-gram prediction, together with a proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction as in traditional sequence-to-sequence models, ProphetNet is optimized for n-step-ahead prediction, predicting the next n tokens simultaneously from the previous context tokens at each time step. Future n-gram prediction explicitly encourages the model to plan for future tokens and prevents overfitting on strong local correlations.
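To make the objective concrete, here is a toy sketch of a future n-gram loss (illustrative only, not the Microsoft implementation; the per-stream logits layout is an assumption):
```
import torch.nn.functional as F

def future_ngram_loss(stream_logits, tokens, n=2):
    # stream_logits[i]: (batch, seq, vocab) predictions for token t+1+i at each position t.
    loss = 0.0
    for i in range(n):
        gold = tokens[:, 1 + i:]                     # gold tokens shifted by 1+i
        preds = stream_logits[i][:, : gold.size(1)]  # align prediction positions
        loss = loss + F.cross_entropy(preds.transpose(1, 2), gold)
    return loss / n
```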
## Open source status
* [X] the model implementation is available:
https://github.com/microsoft/ProphetNet
* [X] the model weights are available:
Weights are available for models pre-trained on both the small and the large dataset:
https://github.com/microsoft/ProphetNet#pre-trained-models
* [X] who are the authors:
Yu Yan @yuyan2do, Weizhen Qi @weizhen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4877/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4876/comments | https://api.github.com/repos/huggingface/transformers/issues/4876/events | https://github.com/huggingface/transformers/pull/4876 | 635,529,757 | MDExOlB1bGxSZXF1ZXN0NDMxODc5MzY4 | 4,876 | [examples] Cleanup summarization docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=h1) Report\n> Merging [#4876](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **decrease** coverage by `0.59%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4876 +/- ##\n==========================================\n- Coverage 76.46% 75.86% -0.60% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n- Hits 16442 16313 -129 \n- Misses 5060 5189 +129 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `28.15% <0.00%> (-63.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (+10.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=footer). Last update [02e5f79...954557d](https://codecov.io/gh/huggingface/transformers/pull/4876?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | I don't think we need a `download_cnn_dailymail.py` script.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4876/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4876",
"html_url": "https://github.com/huggingface/transformers/pull/4876",
"diff_url": "https://github.com/huggingface/transformers/pull/4876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4876.patch",
"merged_at": 1591738708000
} |
https://api.github.com/repos/huggingface/transformers/issues/4875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4875/comments | https://api.github.com/repos/huggingface/transformers/issues/4875/events | https://github.com/huggingface/transformers/issues/4875 | 635,471,552 | MDU6SXNzdWU2MzU0NzE1NTI= | 4,875 | Inconsistent number of vocab from pretrained T5Tokenizer and T5ForConditionalGeneration | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @cstorm125, \r\n\r\nI think, those `28` leftover embeddings are simply not used. The reason why the embedding matrix is of length 32128 as far as I know is simply because 32128 is a more GPU friendly number `32128 = 128 * 251` than `32100 = 4 * 8025`. That means that the GPU is probably more efficient if it can directly deal with a power of 2 shape. \r\n\r\nAlso see: https://www.quora.com/Why-should-I-choose-a-mini-batch-size-of-32-64-128-256-etc-i-e-a-power-of-two-and-not-a-size-of-50-100-500-1000-Is-there-any-benefit-of-choosing-power-of-two-mini-batch-sizes\r\n\r\n",
"Hi all, I ran into this too. But I did find a bug as a result of this mismatch. I try to resize the embedding to be smaller and got a Cuda assert error. See bug report. \r\n\r\nhttps://github.com/huggingface/transformers/issues/8643\r\n",
"I found this mismatch recently and I think this may result in many bugs. Wish someone can fix it.",
"> Hey @cstorm125,\r\n> \r\n> I think, those `28` leftover embeddings are simply not used. The reason why the embedding matrix is of length 32128 as far as I know is simply because 32128 is a more GPU friendly number `32128 = 128 * 251` than `32100 = 4 * 8025`. That means that the GPU is probably more efficient if it can directly deal with a power of 2 shape.\r\n> \r\n> Also see: https://www.quora.com/Why-should-I-choose-a-mini-batch-size-of-32-64-128-256-etc-i-e-a-power-of-two-and-not-a-size-of-50-100-500-1000-Is-there-any-benefit-of-choosing-power-of-two-mini-batch-sizes\r\n\r\nThis is wrong. It shouldn't be this way. In case model predicts wong index and when you calculate loss, it will cause serious issues. Its hard to believe no one cares this. ",
"Hey @s4sarath, \r\n\r\nDuring training all input_ids and labels are defined by the tokenizer. If the tokenizer has a vocab_size of 32000 there is no way that it will tokenize to an id >= 32000 neither for `input_ids` nor for `labels`. Because no label ever has an id >= 32000 the model learns to never predict those ids. I don't really see a problem with this to be honest",
"Hi Patrick,\n\nThanks for the reply.\nIf the embedding matrix is 32128 x d , for an example if the predicted id\nis say 32099, if we are using Sentencepiece tokenizer ( not huggingface ),\nit will fail to decode that.\n\nAnd special tokens ( 100 tokens ) are added extra, right. Which are\nactually not a part of official sentecepice model. That's why I told, it\nshouldn't be that way.\n\nThanks anyway, I really appreciate your reply.:-)\n\nOn Fri, 10 Dec, 2021, 7:11 pm Patrick von Platen, ***@***.***>\nwrote:\n\n> Hey @s4sarath <https://github.com/s4sarath>,\n>\n> During training all input_ids and labels are defined by the tokenizer. If\n> the tokenizer has a vocab_size of 32000 there is no way that it will\n> tokenize to an id >= 32000 neither for input_ids nor for labels. Because\n> no label ever has an id >= 32000 the model learns to never predict those\n> ids. I don't really see a problem with this to be honest\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/4875#issuecomment-990983491>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACRE6KAZNTAF4DX5ZXOHUH3UQH7P5ANCNFSM4NZOPKUQ>\n> .\n> Triage notifications on the go with GitHub Mobile for iOS\n> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>\n> or Android\n> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.\n>\n>\n",
"Upvoting this. Another subtle bug this causes is when doing prompt tuning. The common way to do it is to call `add_tokens` to add some special prompt tokens, and also create a special embedding class that consists of two embedding matrics, the original one + one for the prompt tokens, and the forward call simply indexes into the two matrices concatenated together. Then all parameters but the prompt token embedding matrix are frozen. The expected behavior is that the IDs of the added tokens correspond to the prompt token embeddings when concatenated with the original. However, this mismatch causes the tokenizer to assign IDs starting from 32100, which are still a part of the original embedding matrix, which doesn't get gradients.",
"Temporary Solution : `model.resize_token_embeddings(len(tokenizer))`\r\n",
"I just found that it sometimes generates > 32100 input ids in generate function. Especially that happens if I evaluate a fine-tuned model in the very early step while training. Thanks, @Darshan2104 ! model.resize_token_embeddings(len(tokenizer)) temporally resolves my issue.",
"I am also facing the `IndexError: index out of range in self` issue due to this difference between the vocab size in the t5 tokenizer and the model for ConditionalGeneration. Should I resize model token_embeddings?",
"> model.resize_token_embeddings(len(tokenizer)\r\n\r\nI tried this but not helping.",
"> > model.resize_token_embeddings(len(tokenizer)\r\n> \r\n> I tried this but not helping.\r\n\r\n@kanak8278 , could you double-check that you are using the right tokenizer for the model?\r\n\r\nFor the model, could you show me what happens when you run this code?\r\n```python\r\n{n:p.shape for n, p in model.named_parameters() if \"embedding\" in n}\r\n```\r\n\r\nFor the tokenizer, could you do `len(tokenizer)` and report what it says?\r\n\r\nAnd then could you do this on your input ids? `torch.tensor(input_ids).max()`",
"This is a bit troubling, especially because I'm only interested in using a model for inference. I'm generating some sequences using multinomial sampling from `pythia-70M` model. When I attempt to obtain the corresponding scores to this sequence, I obtain a CUDA assertion (which, when running in the CPU, reveals itself as an indexing error). Upon checking the size of the model and the tokenizer, I find these are different, and although I understand @patrickvonplaten's justification, I am not sure how to proceed in terms of replacing these tokens, the fact is that they are being selected during the random sampling (even though they shouldn't since they were learned...). The other troubling problem of having a model head greater than the vocab size is that, by definition, these tokens will still contain some probability mass.",
"@PastelBelem8 \r\n\r\nThe model was never incentivized to predict those tokens so the weights for the tokens with ids > len(tokenizer) will have extraordinarily low scores. I did a quick test and the scores for those extra tokens summed together to be on the order of 1e-30 for each token. That is basically 0. \r\n\r\nCould you share your sampling approach?",
"Never mind, it was an error on my end! I apologize for the confusion! I thought I had tried everything and was desperate. "
] | 1,591 | 1,680 | 1,592 | NONE | null | # ❓ Questions & Help
Pretrained `T5Tokenizer` has a vocab size of 32100 (32000 tokens plus 100 extra_ids), but the shared embedding layer of `T5ForConditionalGeneration` has a size of (32128, 768). I checked the google-research implementation of T5 and found that it also uses a vocab size of 32100.
Where did the extra 28 embeddings come from, and how can we map them to the tokenizer?
## To reproduce
```
from transformers import (
T5Tokenizer,
T5ForConditionalGeneration,
)
tokenizer_pretrained = T5Tokenizer.from_pretrained('t5-base')
model_pretrained = T5ForConditionalGeneration.from_pretrained('t5-base')
len(tokenizer_pretrained.get_vocab()), model_pretrained.state_dict()['shared.weight'].shape
```
Output:
```
(32100, torch.Size([32128, 768]))
```
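For completeness, a sketch of one way to force the two sizes to agree by shrinking the embedding to the tokenizer's vocabulary (a workaround, not an official fix; whether discarding the 28 trailing rows is always safe is exactly the open question here):
```
model_pretrained.resize_token_embeddings(len(tokenizer_pretrained))
print(model_pretrained.get_input_embeddings().weight.shape)
# expected: torch.Size([32100, 768])
```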
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4875/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4875/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4874/comments | https://api.github.com/repos/huggingface/transformers/issues/4874/events | https://github.com/huggingface/transformers/pull/4874 | 635,457,878 | MDExOlB1bGxSZXF1ZXN0NDMxODE4MDAw | 4,874 | Split LMBert model in two | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=h1) Report\n> Merging [#4874](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc&el=desc) will **decrease** coverage by `0.68%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4874 +/- ##\n==========================================\n- Coverage 77.11% 76.43% -0.69% \n==========================================\n Files 128 128 \n Lines 21651 21671 +20 \n==========================================\n- Hits 16697 16564 -133 \n- Misses 4954 5107 +153 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.17% <88.88%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.31% <0.00%> (-2.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.26% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4874/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=footer). Last update [3ae2e86...56a698f](https://codecov.io/gh/huggingface/transformers/pull/4874?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"It's possible to do it in a non-breaking way with a deprecation warning if a not-None `labels_lm` is passed to `BertForMaskedLM`. I was following the discussion of #4711 that implied it was okay to have a breaking change for this.",
"I'm fine with this PR. IMO, `BertForMaskedLM` was never really used before for causal language modeling except when using Bert in an encoder-decoder setting and the encoder-decoder code is not really released yet. Also since we keep the same names for the submodules `self.bert` and `self.cls`, there won't be any errors or inconsistencies when loading pre-trained weights into the Bert2Bert encoder-decoder. \r\n\r\nIn my opinion, this change is necessary to have a clean separation between masked lm and causal lm (Reformer and Longformer will eventually run into the same issue).\r\n\r\nThe heavily used `BertForMaskedLM` for the normal masked encoder bert model does not change at all except for `lm_labels`, so that's good in terms of backward compatibility. \r\n\r\nOne thing which is problematic though is that the `MODEL_WITH_LM_HEAD_MAPPING` contains a mixture of causal models and masked encoder models at the moment: https://github.com/huggingface/transformers/blob/02e5f79662d72cccdca81a47e3001a5f6d36e5b1/src/transformers/modeling_auto.py#L187\r\n\r\nNow since Bert has both a causal model and a masked encoder model we need two mappings. \r\n\r\nI would suggest here to create 2 new mappings `MODEL_FOR_MASKED_LM_MAPPING` and `MODEL_FOR_CAUSAL_LM_MAPPING` and two new AutoModels: `AutoModelForMaksedLM` , `AutoModelForCausalLM` and for now keep `AutoModelWithLMHead` as it is and add a depreciated warning to it. \r\n\r\nWe can add `BertLMHeadModel` to `MODEL_FOR_CAUSAL_LM_MAPPING` and change to `AutoModelForCausalLM` in the encoder-decoder model. Also @thomwolf and @julien-c here",
"I agree with @patrickvonplaten on the need to split `AutoModelWithLMHead` in two. Note that if the name `AutoModelForCausalLM` is picked, we should then rename (with a deprecation first of course) all `ModeltypeLMHeadModel` to `ModeltypeForCausalLM` for consistency (and clarity since just saying it has an LM head doesn't tell us if it's intended to be masked or causal).",
"I agree that having two additional `AutoXXX` classes for the distinction between masked/causal would be nice. We should, however, keep the `AutoModelWithLMHead` class available for backwards compatibility.\r\n\r\nI don't agree with renaming all causal model with language modeling heads `XXXForCausalLM`. It would be more consistent, but is an aesthetic change with a very big breaking change. Even adding aliases to keep backwards compatibility would create a large overhead for the user, in my opinion, as all those classes would exist twice when importing from the library.",
"In that case I would advocate to keep `AutoModelWithLMHead` for causal language models and only add an `AutoModelForMaskedLM`. Consistency is cosmetic, I agree, but it also helps not confusing beginners.",
"1) For now, I think the best solution would be to keep `AutoModelForMaskedLM` as it is and add two new `AutoXXX` classes. The EncoderDecoderModel would be the first model to use `AutoModelForCausalLM` in its code. \r\n\r\n`AutoModelWithLMHead` is heavily used for all kinds of masked bert encoder models, so if we create an `AutoModelForMaskedLM` and move `BertForMaskedLM` there, we would have a lot of breaking change. I think we could add a depreciation warning to `AutoModelWithLMHead` though. \r\n\r\n2) I'm a bit indifferent to renaming all other model classes. While I'm also a big fan of consistency I agree with @LysandreJik in that I think it's a big user-facing API change that is not really urgent atm.",
"In the short term, I would advocate only exposing the classical \"masked-lm\" flavour of BERT through AutoModelWithLMHead (as is done in this PR), and not even documenting/adding BertLMHeadModel to the `__init__`, as it's only used as a building block to other models.\r\n\r\nIn the longer term, I'd be ok with creating `AutoModelFor{Masked,Causal}LM` (name TBD for the second one) and not even creating a deprecation for `AutoModelWithLMHead`, forcing users to explicitly choose one or the other. This would need to be a major release though.",
"@julien-c as long as we do a major release for the AutoModel renaming, I'm all for this!",
"> In the short term, I would advocate only exposing the classical \"masked-lm\" flavour of BERT through AutoModelWithLMHead (as is done in this PR), and not even documenting/adding BertLMHeadModel to the `__init__`, as it's only used as a building block to other models.\r\n> \r\n> In the longer term, I'd be ok with creating `AutoModelFor{Masked,Causal}LM` (name TBD for the second one) and not even creating a deprecation for `AutoModelWithLMHead`, forcing users to explicitly choose one or the other. This would need to be a major release though.\r\n\r\nFor the encoder decoder models, I think we need `BertLMHeadModel` in the `init` and we would also need a `AutoModelForCausalLM`. Here: https://github.com/huggingface/transformers/blob/29c36e9f3678702e5ffd3fe2f1c9f6c1d6672578/src/transformers/modeling_encoder_decoder.py#L160 we need to instantiate a `BertWithCausalLM`",
"I'm fine either way, I think you guys got all the important issues (backward compatibility versus cleanly building the future). I like what @patrickvonplaten and @julien-c are proposing.",
"Fixed conflicts and followed @julien-c advice. @LysandreJik or @patrickvonplaten, could you do one final review just to make sure everything is fine to merge?",
"This currently breaks the encoder-decoder framework `from_encoder_decoder_pretrained()` method. Will do a PR tomorrow to fix it."
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | As discussed in #4711, the `BertForMaskedLM` model should be split in two to avoid having two different label arguments: one model for causal LM, one for masked LM. This PR follows up on that discussion and performs the split.
It introduces a new `BertLMHeadModel` (also added to the `__init__` and the docs) with a test. As discussed, there is no deprecation warning if someone tries to use `lm_labels` in `BertForMaskedLM` (since it was experimental), only an error message telling the user to use `BertLMHeadModel` instead.
I did not add `BertLMHeadModel` to the automodel logic, since we probably want users to use causal models for this. Let me know if I should add it even though it's not the best model for that task.
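A hedged sketch of how the two classes are meant to be used after this split (class names per this PR; passing `is_decoder=True` through `from_pretrained` is an assumption about how the causal variant gets configured):
```
from transformers import BertForMaskedLM, BertLMHeadModel

# Masked LM: behavior unchanged, trained with masked-token labels as before.
masked = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Causal LM: the new class introduced here, typically run as a decoder.
causal = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)
```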
I also removed `lm_labels` in the `EncoderDecoderModel` since it was only there to support that argument in `BertForMaskedLM` (which then removes the corresponding test). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4874/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4874",
"html_url": "https://github.com/huggingface/transformers/pull/4874",
"diff_url": "https://github.com/huggingface/transformers/pull/4874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4874.patch",
"merged_at": 1591828003000
} |
https://api.github.com/repos/huggingface/transformers/issues/4873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4873/comments | https://api.github.com/repos/huggingface/transformers/issues/4873/events | https://github.com/huggingface/transformers/issues/4873 | 635,435,240 | MDU6SXNzdWU2MzU0MzUyNDA= | 4,873 | KeyError in Camembert in QuestionAnsweringPipeline | {
"login": "tuanardouin",
"id": 26484553,
"node_id": "MDQ6VXNlcjI2NDg0NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/26484553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuanardouin",
"html_url": "https://github.com/tuanardouin",
"followers_url": "https://api.github.com/users/tuanardouin/followers",
"following_url": "https://api.github.com/users/tuanardouin/following{/other_user}",
"gists_url": "https://api.github.com/users/tuanardouin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuanardouin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuanardouin/subscriptions",
"organizations_url": "https://api.github.com/users/tuanardouin/orgs",
"repos_url": "https://api.github.com/users/tuanardouin/repos",
"events_url": "https://api.github.com/users/tuanardouin/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuanardouin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thank you for your fast answer.\r\n\r\nYour patch seems to fix some KeyError but not all of them.\r\n\r\nHere is an example of a context and a question that raise it :\r\n\r\n[context2.txt](https://github.com/huggingface/transformers/files/4758844/context2.txt)\r\n\r\nQuestion : \r\nQuel est l'étage se situe les locaux ?",
"Indeed, this PR was not the correct fix so I closed it. Will open a new one soon.",
"Just for your information, your patch also return an empty response string, but with the right location in the context.\r\n\r\nExample :\r\n[context_empty_response.txt](https://github.com/huggingface/transformers/files/4764895/context_empty_response.txt)\r\n\r\n\r\nQuestion :\r\n```\r\nQuel est la taille en mètres carrés des locaux ?\r\n```\r\n\r\nThanks for your help",
"@LysandreJik Do you have an update on this ?\r\nCan I help you in any way ? ",
"After some research it appears that my problem came from the fact that I was using a model trained with a `max_seq_length` set to 512 but was using the pipeline with this variable set to the default : 384.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`Camembert ("illuin/camembert-large-fquad")`
Language I am using the model on (English, Chinese ...):
`French`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Maybe related to this issue:
https://github.com/huggingface/transformers/issues/4674
## To reproduce
Context file:
[context_mono.txt](https://github.com/huggingface/transformers/files/4752416/context_mono.txt)
```
import torch
from transformers import pipeline
def analayse():
if torch.cuda.is_available() == True:
print('GPU is available')
device = 0
else:
print('GPU is not available')
device = -1
nlp_camembert_gpu_f = pipeline("question-answering", model='illuin/camembert-large-fquad', tokenizer='illuin/camembert-large-fquad', device=device)
context = ''
with open('context_mono.txt') as file:
context_lines = [line for line in file]
for line in context_lines:
context += line
answer_C = nlp_camembert_gpu_f(question='Le loyer est-il révisé annuellement ou triennalemment ?', context=context)
def main_file():
analayse()
if __name__ == '__main__':
main_file()
```
## Error trace
```
Traceback (most recent call last):
File "qa_bug.py", line 26, in <module>
main_file()
File "qa_bug.py", line 23, in main_file
analayse()
File "qa_bug.py", line 20, in analayse
answer_C = nlp_camembert_gpu_f(question='Le loyer est-il révisé annuellement ou triennalemment ?', context=context)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 1229, in __call__
for s, e, score in zip(starts, ends, scores)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py", line 1229, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 377
```
## Expected behavior
Getting an answer.
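Hedged note: the resolution reported further down this thread was a `max_seq_length` mismatch (512 at fine-tuning time vs. the pipeline default of 384), so passing the matching value may avoid the out-of-range lookup. The `max_seq_len` kwarg name is assumed from `pipelines.py`.
```
answer_C = nlp_camembert_gpu_f(
    question='Le loyer est-il révisé annuellement ou triennalemment ?',
    context=context,
    max_seq_len=512,
)
```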
## Environment info
- `transformers` version: 2.11.0
- Platform: Kernel: 5.3.0-1019-aws x86_64 Distro: Ubuntu 18.04.4 LTS
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): (False)
- Using GPU in script?: Yes (same problem on CPU)
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4872/comments | https://api.github.com/repos/huggingface/transformers/issues/4872/events | https://github.com/huggingface/transformers/pull/4872 | 635,433,754 | MDExOlB1bGxSZXF1ZXN0NDMxNzk3NTky | 4,872 | Create README.md | {
"login": "ypapanik",
"id": 22024955,
"node_id": "MDQ6VXNlcjIyMDI0OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22024955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ypapanik",
"html_url": "https://github.com/ypapanik",
"followers_url": "https://api.github.com/users/ypapanik/followers",
"following_url": "https://api.github.com/users/ypapanik/following{/other_user}",
"gists_url": "https://api.github.com/users/ypapanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ypapanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ypapanik/subscriptions",
"organizations_url": "https://api.github.com/users/ypapanik/orgs",
"repos_url": "https://api.github.com/users/ypapanik/repos",
"events_url": "https://api.github.com/users/ypapanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ypapanik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=h1) Report\n> Merging [#4872](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4872 +/- ##\n==========================================\n+ Coverage 76.55% 76.56% +0.01% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n+ Hits 16461 16464 +3 \n+ Misses 5041 5038 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=footer). Last update [9f5d5a5...aad9cb1](https://codecov.io/gh/huggingface/transformers/pull/4872?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4872",
"html_url": "https://github.com/huggingface/transformers/pull/4872",
"diff_url": "https://github.com/huggingface/transformers/pull/4872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4872.patch",
"merged_at": 1591966993000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4871/comments | https://api.github.com/repos/huggingface/transformers/issues/4871/events | https://github.com/huggingface/transformers/pull/4871 | 635,428,579 | MDExOlB1bGxSZXF1ZXN0NDMxNzkyOTM4 | 4,871 | Create README.md for gpt-2-pubmed-medium | {
"login": "ypapanik",
"id": 22024955,
"node_id": "MDQ6VXNlcjIyMDI0OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22024955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ypapanik",
"html_url": "https://github.com/ypapanik",
"followers_url": "https://api.github.com/users/ypapanik/followers",
"following_url": "https://api.github.com/users/ypapanik/following{/other_user}",
"gists_url": "https://api.github.com/users/ypapanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ypapanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ypapanik/subscriptions",
"organizations_url": "https://api.github.com/users/ypapanik/orgs",
"repos_url": "https://api.github.com/users/ypapanik/repos",
"events_url": "https://api.github.com/users/ypapanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ypapanik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=h1) Report\n> Merging [#4871](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4871 +/- ##\n=======================================\n Coverage 76.55% 76.56% \n=======================================\n Files 128 128 \n Lines 21502 21502 \n=======================================\n+ Hits 16461 16462 +1 \n+ Misses 5041 5040 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=footer). Last update [9f5d5a5...40bc44a](https://codecov.io/gh/huggingface/transformers/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"cc @LysandreJik, you're going to like this:)"
] | 1,591 | 1,591 | 1,591 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4871/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4871",
"html_url": "https://github.com/huggingface/transformers/pull/4871",
"diff_url": "https://github.com/huggingface/transformers/pull/4871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4871.patch",
"merged_at": 1591824582000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4870/comments | https://api.github.com/repos/huggingface/transformers/issues/4870/events | https://github.com/huggingface/transformers/pull/4870 | 635,390,754 | MDExOlB1bGxSZXF1ZXN0NDMxNzYwODQ4 | 4,870 | readme change | {
"login": "alberduris",
"id": 7073086,
"node_id": "MDQ6VXNlcjcwNzMwODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7073086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alberduris",
"html_url": "https://github.com/alberduris",
"followers_url": "https://api.github.com/users/alberduris/followers",
"following_url": "https://api.github.com/users/alberduris/following{/other_user}",
"gists_url": "https://api.github.com/users/alberduris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alberduris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alberduris/subscriptions",
"organizations_url": "https://api.github.com/users/alberduris/orgs",
"repos_url": "https://api.github.com/users/alberduris/repos",
"events_url": "https://api.github.com/users/alberduris/events{/privacy}",
"received_events_url": "https://api.github.com/users/alberduris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4870/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4870",
"html_url": "https://github.com/huggingface/transformers/pull/4870",
"diff_url": "https://github.com/huggingface/transformers/pull/4870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4870.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4869/comments | https://api.github.com/repos/huggingface/transformers/issues/4869/events | https://github.com/huggingface/transformers/pull/4869 | 635,377,474 | MDExOlB1bGxSZXF1ZXN0NDMxNzQ5Njk5 | 4,869 | parse arguments from dict | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=h1) Report\n> Merging [#4869](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4869 +/- ##\n==========================================\n+ Coverage 76.55% 76.57% +0.01% \n==========================================\n Files 128 128 \n Lines 21502 21510 +8 \n==========================================\n+ Hits 16461 16471 +10 \n+ Misses 5041 5039 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `69.23% <100.00%> (+2.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4869/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=footer). Last update [9f5d5a5...4461f41](https://codecov.io/gh/huggingface/transformers/pull/4869?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @LysandreJik , what do you think about this ? If it's not really necessary, I will close the PR. Thanks!"
] | 1,591 | 1,596 | 1,596 | MEMBER | null | This PR adds a `parse_dict` method to `HfArgumentParser` to allow parsing arguments from a dict.
@julien-c
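A minimal sketch of the intended usage (the dataclass here is hypothetical, for illustration only; `parse_dict` returns one parsed instance per dataclass passed to the parser):

```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser

@dataclass
class TrainingArgs:  # hypothetical dataclass, for illustration only
    model_name: str = field(default="bert-base-cased")
    learning_rate: float = field(default=5e-5)

parser = HfArgumentParser(TrainingArgs)
# parse_dict mirrors parse_args_into_dataclasses, but reads values from a dict
# instead of sys.argv, and returns a tuple of populated dataclass instances.
(args,) = parser.parse_dict({"model_name": "gpt2", "learning_rate": 3e-5})
assert args.learning_rate == 3e-5
```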
As suggested by you here #4791, I've added a simple unit test to check that the dataclass returned by `parse_dict` is the same as a manually initialised one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4869/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4869",
"html_url": "https://github.com/huggingface/transformers/pull/4869",
"diff_url": "https://github.com/huggingface/transformers/pull/4869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4869.patch",
"merged_at": 1596185063000
} |
https://api.github.com/repos/huggingface/transformers/issues/4868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4868/comments | https://api.github.com/repos/huggingface/transformers/issues/4868/events | https://github.com/huggingface/transformers/issues/4868 | 635,368,183 | MDU6SXNzdWU2MzUzNjgxODM= | 4,868 | tokenizer.encode_plus stopped returning `attention_mask` and pad_to_max_length | {
"login": "shikharsingla",
"id": 23555486,
"node_id": "MDQ6VXNlcjIzNTU1NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/23555486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shikharsingla",
"html_url": "https://github.com/shikharsingla",
"followers_url": "https://api.github.com/users/shikharsingla/followers",
"following_url": "https://api.github.com/users/shikharsingla/following{/other_user}",
"gists_url": "https://api.github.com/users/shikharsingla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shikharsingla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shikharsingla/subscriptions",
"organizations_url": "https://api.github.com/users/shikharsingla/orgs",
"repos_url": "https://api.github.com/users/shikharsingla/repos",
"events_url": "https://api.github.com/users/shikharsingla/events{/privacy}",
"received_events_url": "https://api.github.com/users/shikharsingla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! You're using `transformers` version 2.1.1, which didn't have all of these features, as you can see in the [documentation of version 2.1.1](https://huggingface.co/transformers/v2.1.1/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus).\r\n\r\nI would recommend upgrading your `transformers` version to the latest one to have access to all features!"
] | 1,591 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
tokenizer.encode_plus stopped returning `attention_mask` and pad_to_max_length
## Information
Model I am using (Bert, XLNet ...):
Bert
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
my own modified scripts: (give details below)
The task I am working on is:
my own task or dataset: (give details below)
## To reproduce
```python
import torch
import pandas as pd

# If there's a GPU available...
if torch.cuda.is_available():
    # Tell PyTorch to use the GPU.
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
# If not...
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the dataset into a pandas dataframe.
df = pd.read_csv("/home/shikhar_singla/Downloads/cola_public/raw/in_domain_train.tsv", delimiter='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])

# Report the number of sentences.
print('Number of training sentences: {:,}\n'.format(df.shape[0]))

# Display 10 random rows from the data.
df.sample(10)

df.loc[df.label == 0].sample(5)[['sentence', 'label']]

# Get the lists of sentences and their labels.
sentences = df.sentence.values
labels = df.label.values

from transformers import BertTokenizer

# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

# Print the original sentence.
print(' Original: ', sentences[0])

# Print the sentence split into tokens.
print('Tokenized: ', tokenizer.tokenize(sentences[0]))

# Print the sentence mapped to token ids.
print('Token IDs: ', tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentences[0])))

max_len = 0

# For every sentence...
for sent in sentences:
    # Tokenize the text and add `[CLS]` and `[SEP]` tokens.
    input_ids = tokenizer.encode(sent, add_special_tokens=True)
    # Update the maximum sentence length.
    max_len = max(max_len, len(input_ids))

print('Max sentence length: ', max_len)

# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []

# For every sentence...
for sent in sentences:
    # `encode_plus` will:
    #   (1) Tokenize the sentence.
    #   (2) Prepend the `[CLS]` token to the start.
    #   (3) Append the `[SEP]` token to the end.
    #   (4) Map tokens to their IDs.
    #   (5) Pad or truncate the sentence to `max_length`.
    #   (6) Create attention masks for [PAD] tokens.
    encoded_dict = tokenizer.encode_plus(
        sent,                           # Sentence to encode.
        add_special_tokens=True,        # Add '[CLS]' and '[SEP]'
        max_length=64,                  # Pad & truncate all sentences.
        pad_to_max_length=True,
        return_attention_mask=True,     # Construct attn. masks.
        return_tensors='pt',            # Return pytorch tensors.
    )
    # Add the encoded sentence to the list.
    input_ids.append(encoded_dict['input_ids'])
    # And its attention mask (simply differentiates padding from non-padding).
    attention_masks.append(encoded_dict['attention_mask'])

# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)

# Print sentence 0, now as a list of IDs.
print('Original: ', sentences[0])
print('Token IDs:', input_ids[0])
```
```
There are 1 GPU(s) available.
We will use the GPU: GeForce RTX 2080 Ti
Number of training sentences: 8,551
To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html
Loading BERT tokenizer...
 Original:  Our friends won't buy this analysis, let alone the next one we propose.
Tokenized:  ['our', 'friends', 'won', "'", 't', 'buy', 'this', 'analysis', ',', 'let', 'alone', 'the', 'next', 'one', 'we', 'propose', '.']
Token IDs:  [2256, 2814, 2180, 1005, 1056, 4965, 2023, 4106, 1010, 2292, 2894, 1996, 2279, 2028, 2057, 16599, 1012]
Max sentence length:  47
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-e10b3c7561a8> in <module>
     72             # (5) Pad or truncate the sentence to `max_length`
     73             # (6) Create attention masks for [PAD] tokens.
---> 74 encoded_dict = tokenizer.encode_plus(
     75     sent,                      # Sentence to encode.
     76     add_special_tokens = True, # Add '[CLS]' and '[SEP]'

~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in encode_plus(self, text, text_pair, add_special_tokens, max_length, stride, truncation_strategy, return_tensors, **kwargs)
    784             raise ValueError("Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.")
    785 
--> 786         first_ids = get_input_ids(text)
    787         second_ids = get_input_ids(text_pair) if text_pair is not None else None
    788 

~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in get_input_ids(text)
    776         def get_input_ids(text):
    777             if isinstance(text, six.string_types):
--> 778                 return self.convert_tokens_to_ids(self.tokenize(text, **kwargs))
    779             elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], six.string_types):
    780                 return self.convert_tokens_to_ids(text)

~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs)
    647 
    648         added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
--> 649         tokenized_text = split_on_tokens(added_tokens, text)
    650         return tokenized_text
    651 

~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text)
    642             text_list = tokenized_text
    643 
--> 644         return sum((self._tokenize(token, **kwargs) if token not \
    645             in self.added_tokens_encoder and token not in self.all_special_tokens \
    646             else [token] for token in tokenized_text), [])

~/anaconda3/envs/bert_gpu_torch/lib/python3.8/site-packages/transformers/tokenization_utils.py in <genexpr>(.0)
    642             text_list = tokenized_text
    643 
--> 644         return sum((self._tokenize(token, **kwargs) if token not \
    645             in self.added_tokens_encoder and token not in self.all_special_tokens \
    646             else [token] for token in tokenized_text), [])

TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'
```
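For reference, the same call runs cleanly on a recent release; a minimal check (a sketch assuming `transformers >= 2.9`, in line with the upgrade suggested in the comments above):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
# On recent versions, encode_plus supports padding, truncation and attention
# masks directly, so the original call no longer raises a TypeError.
encoded = tokenizer.encode_plus(
    "Our friends won't buy this analysis.",
    add_special_tokens=True,
    max_length=64,
    pad_to_max_length=True,
    return_attention_mask=True,
    return_tensors='pt',
)
print(encoded['input_ids'].shape)       # torch.Size([1, 64])
print(encoded['attention_mask'].shape)  # torch.Size([1, 64])
```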
## Environment info
- `transformers` version: 2.1.1
- Platform: Ubuntu 20.04
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4868/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4867/comments | https://api.github.com/repos/huggingface/transformers/issues/4867/events | https://github.com/huggingface/transformers/pull/4867 | 635,367,659 | MDExOlB1bGxSZXF1ZXN0NDMxNzQxMjAx | 4,867 | run_pplm.py bug fix | {
"login": "songyouwei",
"id": 2573291,
"node_id": "MDQ6VXNlcjI1NzMyOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2573291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songyouwei",
"html_url": "https://github.com/songyouwei",
"followers_url": "https://api.github.com/users/songyouwei/followers",
"following_url": "https://api.github.com/users/songyouwei/following{/other_user}",
"gists_url": "https://api.github.com/users/songyouwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songyouwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songyouwei/subscriptions",
"organizations_url": "https://api.github.com/users/songyouwei/orgs",
"repos_url": "https://api.github.com/users/songyouwei/repos",
"events_url": "https://api.github.com/users/songyouwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/songyouwei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=h1) Report\n> Merging [#4867](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4867 +/- ##\n=======================================\n Coverage 76.55% 76.56% \n=======================================\n Files 128 128 \n Lines 21502 21502 \n=======================================\n+ Hits 16461 16462 +1 \n+ Misses 5041 5040 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <0.00%> (-0.48%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=footer). Last update [9f5d5a5...26de790](https://codecov.io/gh/huggingface/transformers/pull/4867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@songyouwei run_pplm.py Can you follow the Readme steps to success run it? And I report this error? Could you teach me? \r\n\r\n\r\n\r\n \r\n"
] | 1,591 | 1,592 | 1,591 | CONTRIBUTOR | null | `is_leaf` may become `False` after the `.to(device=device)` call. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4867",
"html_url": "https://github.com/huggingface/transformers/pull/4867",
"diff_url": "https://github.com/huggingface/transformers/pull/4867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4867.patch",
"merged_at": 1591744468000
} |
https://api.github.com/repos/huggingface/transformers/issues/4866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4866/comments | https://api.github.com/repos/huggingface/transformers/issues/4866/events | https://github.com/huggingface/transformers/issues/4866 | 635,367,503 | MDU6SXNzdWU2MzUzNjc1MDM= | 4,866 | Funnel Transformers | {
"login": "pchankh",
"id": 3161468,
"node_id": "MDQ6VXNlcjMxNjE0Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3161468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pchankh",
"html_url": "https://github.com/pchankh",
"followers_url": "https://api.github.com/users/pchankh/followers",
"following_url": "https://api.github.com/users/pchankh/following{/other_user}",
"gists_url": "https://api.github.com/users/pchankh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pchankh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pchankh/subscriptions",
"organizations_url": "https://api.github.com/users/pchankh/orgs",
"repos_url": "https://api.github.com/users/pchankh/repos",
"events_url": "https://api.github.com/users/pchankh/events{/privacy}",
"received_events_url": "https://api.github.com/users/pchankh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Duplicate of #4844?",
"my bad. yes"
] | 1,591 | 1,591 | 1,591 | NONE | null | # 🌟 New model addition
Funnel-Transformer
## Model description
Funnel-Transformer is a new self-attention model that gradually compresses the sequence of hidden states into a shorter one and hence reduces the computation cost. More importantly, by re-investing the FLOPs saved by the length reduction in a deeper or wider model, Funnel-Transformer usually has a higher capacity at the same FLOPs. In addition, with a decoder, Funnel-Transformer is able to recover a token-level deep representation for each token from the reduced hidden sequence, which enables standard pretraining.
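To make the compression idea concrete, here is a toy sketch of the length reduction (an illustration of the pooling idea, not the authors' code): mean-pooling neighbouring hidden states halves the sequence length that the next block attends over.

```python
import torch
import torch.nn.functional as F

# Toy illustration: mean-pool pairs of positions along the sequence axis,
# so each "funnel" block halves the length of the hidden-state sequence.
hidden = torch.randn(1, 512, 768)  # (batch, seq_len, d_model)
pooled = F.avg_pool1d(hidden.transpose(1, 2), kernel_size=2).transpose(1, 2)
print(pooled.shape)  # torch.Size([1, 256, 768])
```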
## Open source status
Released.
* [x] the model implementation is available: (give details)
https://github.com/laiguokun/Funnel-Transformer
* [x] the model weights are available: (give details)
https://github.com/laiguokun/Funnel-Transformer
* [x] who are the authors: (mention them, if possible by @gh-username)
Zihang Dai*, Guokun Lai*, Yiming Yang, Quoc V. Le
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4866/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4865/comments | https://api.github.com/repos/huggingface/transformers/issues/4865/events | https://github.com/huggingface/transformers/pull/4865 | 635,351,497 | MDExOlB1bGxSZXF1ZXN0NDMxNzI3NzQx | 4,865 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=h1) Report\n> Merging [#4865](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f5d5a531d769d07403f59661884e254f8420afe&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4865 +/- ##\n==========================================\n- Coverage 76.55% 76.54% -0.01% \n==========================================\n Files 128 128 \n Lines 21502 21502 \n==========================================\n- Hits 16461 16459 -2 \n- Misses 5041 5043 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (-0.32%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4865/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=footer). Last update [9f5d5a5...c1b9024](https://codecov.io/gh/huggingface/transformers/pull/4865?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4865/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4865/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4865",
"html_url": "https://github.com/huggingface/transformers/pull/4865",
"diff_url": "https://github.com/huggingface/transformers/pull/4865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4865.patch",
"merged_at": 1591967024000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4864/comments | https://api.github.com/repos/huggingface/transformers/issues/4864/events | https://github.com/huggingface/transformers/pull/4864 | 635,317,596 | MDExOlB1bGxSZXF1ZXN0NDMxNzAwMDQ2 | 4,864 | Adding 🤗nlp in the examples | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing in favor of #5240"
] | 1,591 | 1,651 | 1,593 | MEMBER | null | This PR examines how to best make use of all the features of 🤗nlp in the examples.
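As a rough sketch of the direction (a hypothetical illustration; the exact 🤗nlp calls and tokenizer arguments may change as the PR evolves):

```python
import nlp  # 🤗nlp
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
dataset = nlp.load_dataset("glue", "mrpc", split="train")

# Keep all preprocessing explicit and inside the example script itself.
def encode(example):
    return tokenizer.encode_plus(
        example["sentence1"], example["sentence2"],
        max_length=128, pad_to_max_length=True,
    )

dataset = dataset.map(encode)
```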
The first example studied is GLUE. The main goal is to make the data processing very explicit (target: no data processing happening inside `transformers`) and to add some efficiency features like dynamic batching. The second goal is to make this a lot more efficient, fast, and reproducible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4864/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4864/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4864",
"html_url": "https://github.com/huggingface/transformers/pull/4864",
"diff_url": "https://github.com/huggingface/transformers/pull/4864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4864.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4863/comments | https://api.github.com/repos/huggingface/transformers/issues/4863/events | https://github.com/huggingface/transformers/issues/4863 | 635,293,463 | MDU6SXNzdWU2MzUyOTM0NjM= | 4,863 | how to train a masked model, e.g. BERT, using a WordPiece tokenizer | {
"login": "Yangxiaojun1230",
"id": 59246446,
"node_id": "MDQ6VXNlcjU5MjQ2NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/59246446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangxiaojun1230",
"html_url": "https://github.com/Yangxiaojun1230",
"followers_url": "https://api.github.com/users/Yangxiaojun1230/followers",
"following_url": "https://api.github.com/users/Yangxiaojun1230/following{/other_user}",
"gists_url": "https://api.github.com/users/Yangxiaojun1230/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yangxiaojun1230/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yangxiaojun1230/subscriptions",
"organizations_url": "https://api.github.com/users/Yangxiaojun1230/orgs",
"repos_url": "https://api.github.com/users/Yangxiaojun1230/repos",
"events_url": "https://api.github.com/users/Yangxiaojun1230/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yangxiaojun1230/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Can you load your tokenizer using\r\n\r\n```py\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(directory_containing_vocab_txt)\r\n```\r\n?",
"> \r\n> \r\n> Hi! Can you load your tokenizer using\r\n> \r\n> ```python\r\n> from transformers import BertTokenizer\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained(directory_containing_vocab_txt)\r\n> ```\r\n> \r\n> ?\r\n\r\nHi LysandreJik,\r\n Great ! It works . May I ask another question-- Do you have any experience on the loss score. Generally what score should be appropriate , my loss around 1.12 is it ok? ",
"This really depends of your training set and what model you use, what checkpoint you use, etc. "
] | 1,591 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
When I trained a new tokenizer with WordPiece, it generated one vocab.txt file. That file couldn't be loaded in train-language-model.py, since the source code uses a byte-pair tokenizer. Is there an out-of-the-box module that could save me the time of doing this myself?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4863/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4862/comments | https://api.github.com/repos/huggingface/transformers/issues/4862/events | https://github.com/huggingface/transformers/issues/4862 | 635,263,454 | MDU6SXNzdWU2MzUyNjM0NTQ= | 4,862 | how to extract several layers of BERT or GPT as a new model? | {
"login": "willanxywc",
"id": 24306827,
"node_id": "MDQ6VXNlcjI0MzA2ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/24306827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willanxywc",
"html_url": "https://github.com/willanxywc",
"followers_url": "https://api.github.com/users/willanxywc/followers",
"following_url": "https://api.github.com/users/willanxywc/following{/other_user}",
"gists_url": "https://api.github.com/users/willanxywc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willanxywc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willanxywc/subscriptions",
"organizations_url": "https://api.github.com/users/willanxywc/orgs",
"repos_url": "https://api.github.com/users/willanxywc/repos",
"events_url": "https://api.github.com/users/willanxywc/events{/privacy}",
"received_events_url": "https://api.github.com/users/willanxywc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Interesting use-case!\r\n\r\nThe easiest way would be to simply load to models, one with the `bert-base-cased` checkpoint, the other randomly initialized, and to assign trained layers to the new model. Something like this:\r\n\r\n```py\r\nfrom transformers import BertModel, BertConfig\r\nimport torch\r\n\r\nbert_base_cased = BertModel.from_pretrained(\"bert-base-cased\") # Instantiate model using the trained weights\r\nmodel = BertModel(BertConfig.from_pretrained(\"bert-base-cased\")) # Randomly initialize model, with the same size as the trained model\r\n\r\nlayers_to_replace = [1, 2, 3, 8]\r\nfor layer in layers_to_replace:\r\n model.base_model.encoder.layer[layer] = bert_base_cased.base_model.encoder.layer[layer]\r\n\r\n# Let's compare the key values of the attention layers to make sure they're the same\r\ni = 0\r\nfor original_layer, new_layer in zip(model.base_model.encoder.layer, bert_base_cased.base_model.encoder.layer):\r\n original_attention_key = original_layer.attention.self.key.weight\r\n new_attention_key = new_layer.attention.self.key.weight\r\n difference = (torch.max(torch.abs(original_attention_key - new_attention_key)).item())\r\n\r\n print(f\"Layers {i} are {'not ' if difference else ''}the same.\")\r\n i += 1\r\n\r\n```\r\n\r\nThis outputs:\r\n\r\n```\r\nLayers 0 are not the same.\r\nLayers 1 are the same.\r\nLayers 2 are the same.\r\nLayers 3 are the same.\r\nLayers 4 are not the same.\r\nLayers 5 are not the same.\r\nLayers 6 are not the same.\r\nLayers 7 are not the same.\r\nLayers 8 are the same.\r\nLayers 9 are not the same.\r\nLayers 10 are not the same.\r\nLayers 11 are not the same.\r\n```",
"Thanks, but I find this does not work for GPT2LMHeadModel. How could I extract the hidden layers of a GPT2LMHeadModel please?",
"You can do the same, but the layers are under `model.base_model.h`:\r\n\r\n```py\r\n[...]\r\nfor layer in layers_to_replace:\r\n model.base_model.h[layer] = [...]\r\n[...]\r\n```",
"Thanks @LysandreJik !\r\n\r\nI am wondering how you would do this in the keras versions. From tinkering around, I think you access the layers with `model.layers[0].encoder.layer`, since the length of this is 12, so I'm guessing it's for the 12 layers in the Bert model. \r\n\r\nSo you would do something like \r\n\r\n```\r\nlayers_to_replace = [1, 2, 3, 8]\r\nfor layer in layers_to_replace:\r\n newModel.layers[0].encoder.layer[layer] = trainedModel.layers[0].encoder.layer[layer]\r\n```\r\n\r\nDoes that seem right to you?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik This solution randomly initialize Embedding layer's weight rather than load from bert pretrained Embedding,which leads to a huge performance decline and confuses me for a week. The correct solution is:\r\n```python\r\nfrom transformers import BertModel, BertConfig\r\nimport torch\r\n\r\nbert_version = \"bert-base-cased\"\r\nbert_base_cased = BertModel.from_pretrained(bert_version) # Instantiate model using the trained weights\r\nconfig = BertConfig.from_pretrained(bert_version)\r\nmodel = BertModel(config=config) # Randomly initialize model, with the same size as the trained model\r\n\r\n# add these two lines\r\nmodel.embeddings = bert_base_cased.embeddings\r\nmodel.pooler = bert_base_cased.pooler\r\n\r\nlayers_to_replace = [1, 2, 3, 8]\r\nfor layer in layers_to_replace:\r\n model.base_model.encoder.layer[layer] = bert_base_cased.base_model.encoder.layer[layer]\r\n```\r\n\r\nalso,if you just want the first 4 layers, the easier and safer way is:\r\n```python\r\nfrom transformers import BertModel, BertConfig\r\nimport torch\r\n\r\nbert_version = \"bert-base-cased\"\r\nbert_base_cased = BertModel.from_pretrained(bert_version) # Instantiate model using the trained weights\r\nconfig = BertConfig.from_pretrained(bert_version)\r\nconfig.num_hidden_layers = 4\r\nmodel = BertModel.from_pretrained(bert_version, config=config) # auto skip unused layers\r\n\r\nfor param_name in model.state_dict():\r\n sub_param, full_param = model.state_dict()[param_name], bert_base_cased.state_dict()[param_name] # type: torch.Tensor, torch.Tensor\r\n assert (sub_param.cpu().numpy() == full_param.cpu().numpy()).all(), param_name\r\n \r\n```",
"@dalek-who hey I tried to run your code before my other code to construct the model (and this is all in Sagemaker), however got this error: \r\n\r\nCUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\n\r\nHere's the code I ran after your code:\r\n\r\n```\r\nclass BERTClass(torch.nn.Module):\r\n def __init__(self):\r\n super(BERTClass, self).__init__()\r\n self.bert_model = model \r\n self.dropout = torch.nn.Dropout(0.5)\r\n self.linear = torch.nn.Linear(768, 9) \r\n def forward(self, input_ids, attn_mask, token_type_ids):\r\n output = self.bert_model(\r\n input_ids, \r\n attention_mask=attn_mask, \r\n token_type_ids=token_type_ids\r\n )\r\n output_dropout = self.dropout(output.pooler_output)\r\n output = self.linear(output_dropout)\r\n return output\r\nbert_model = BERTClass()\r\nbert_model.to(device)\r\n```\r\n\r\nAnyone has any idea why?",
"@Bambry When do you get this error? On construct the model, or on forward?\r\n`CUDA error: device-side assert triggered` often occurs when a layer receives illegal inputs, for example a `BCELoss` receives a illegal label `3`.\r\nMaybe you should check your tensor and parameter's shape, value or dtype."
] | 1,591 | 1,661 | 1,598 | NONE | null | How can I, for example, extract 8 layers from the 12 BertLayers of the _bert-base-uncased_ model to form a new model? I want to use the _embedding_ and _pooler_ layers of the original model, but only a portion of the _encoder_ layers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4861/comments | https://api.github.com/repos/huggingface/transformers/issues/4861/events | https://github.com/huggingface/transformers/issues/4861 | 635,233,480 | MDU6SXNzdWU2MzUyMzM0ODA= | 4,861 | can anyone tell me how to do the pretraining of Reformer model on my text data? | {
"login": "doppler21",
"id": 46111727,
"node_id": "MDQ6VXNlcjQ2MTExNzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/46111727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doppler21",
"html_url": "https://github.com/doppler21",
"followers_url": "https://api.github.com/users/doppler21/followers",
"following_url": "https://api.github.com/users/doppler21/following{/other_user}",
"gists_url": "https://api.github.com/users/doppler21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doppler21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doppler21/subscriptions",
"organizations_url": "https://api.github.com/users/doppler21/orgs",
"repos_url": "https://api.github.com/users/doppler21/repos",
"events_url": "https://api.github.com/users/doppler21/events{/privacy}",
"received_events_url": "https://api.github.com/users/doppler21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"```python\r\nfrom transformers import ReformerModelWithLMHead, ReformerConfig\r\nconfig = ReformerConfig() # define the config as you like\r\nmodel = ReformerModelWithLMHead(config)\r\nloss = model(input_ids, labels=input_ids) # input_ids are automatically shifted for labels\r\n=> train\r\n```\r\n\r\nAll this can also be done using the trainer. See this notebook for example:\r\nhttps://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb "
] | 1,591 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4860/comments | https://api.github.com/repos/huggingface/transformers/issues/4860/events | https://github.com/huggingface/transformers/issues/4860 | 635,176,864 | MDU6SXNzdWU2MzUxNzY4NjQ= | 4,860 | ROUGE_L score of summarization/t5 is much lower than that of the paper. | {
"login": "takahiro971",
"id": 65151988,
"node_id": "MDQ6VXNlcjY1MTUxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/65151988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/takahiro971",
"html_url": "https://github.com/takahiro971",
"followers_url": "https://api.github.com/users/takahiro971/followers",
"following_url": "https://api.github.com/users/takahiro971/following{/other_user}",
"gists_url": "https://api.github.com/users/takahiro971/gists{/gist_id}",
"starred_url": "https://api.github.com/users/takahiro971/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/takahiro971/subscriptions",
"organizations_url": "https://api.github.com/users/takahiro971/orgs",
"repos_url": "https://api.github.com/users/takahiro971/repos",
"events_url": "https://api.github.com/users/takahiro971/events{/privacy}",
"received_events_url": "https://api.github.com/users/takahiro971/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"So, I investigated the google's code, then I found:\r\n\r\nhttps://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L76\r\n\r\nI think that they uses not \"rougeL\" but \"rougeLsum\".\r\nAnd also, they says:\r\n \"# Add newlines between sentences so that rougeLsum is computed correctly.\"\r\n\r\nhttps://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L82\r\n\r\n\r\nSo, I tried the following hacks:\r\n\r\n```\r\n$ g log -p\r\ncommit 11bd4a086438b100c47e5e2b7e8696fcd67e94d1\r\nAuthor: Takahiro Ito <[email protected]>\r\nDate: Tue Jun 9 14:35:19 2020 +0900\r\n\r\n スコア計算の不具合を修正\r\n\r\ndiff --git a/examples/summarization/t5/evaluate_cnn.py b/examples/summarization/t5/evaluate_cnn.py\r\nindex d2d6ee9..e1db944 100644\r\n--- a/examples/summarization/t5/evaluate_cnn.py\r\n+++ b/examples/summarization/t5/evaluate_cnn.py\r\n@@ -44,17 +44,27 @@ def generate_summaries(lns, output_file_path, model_size, batch_size, device):\r\n\r\n def calculate_rouge(output_lns, reference_lns, score_path):\r\n score_file = Path(score_path).open(\"w\")\r\n- scorer = rouge_scorer.RougeScorer([\"rouge1\", \"rouge2\", \"rougeL\"], use_stemmer=True)\r\n+ scorer = rouge_scorer.RougeScorer([\"rouge1\", \"rouge2\", \"rougeL\", \"rougeLsum\"], use_stemmer=True)\r\n aggregator = scoring.BootstrapAggregator()\r\n\r\n+ # copy from\r\n+ # https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/evaluation/metrics.py#L80\r\n+ def _prepare_summary(summary):\r\n+ # Make sure the summary is not bytes-type\r\n+ # Add newlines between sentences so that rougeLsum is computed correctly.\r\n+ summary = summary.replace(\" . \", \" .\\n\")\r\n+ return summary\r\n+\r\n for reference_ln, output_ln in zip(reference_lns, output_lns):\r\n+ reference_ln = _prepare_summary(reference_ln)\r\n+ output_ln = _prepare_summary(output_ln)\r\n scores = scorer.score(reference_ln, output_ln)\r\n aggregator.add_scores(scores)\r\n\r\n result = aggregator.aggregate()\r\n score_file.write(\r\n- \"ROUGE_1: \\n{} \\n\\n ROUGE_2: \\n{} \\n\\n ROUGE_L: \\n{} \\n\\n\".format(\r\n- result[\"rouge1\"], result[\"rouge2\"], result[\"rougeL\"]\r\n+ \"ROUGE_1: \\n{} \\n\\n ROUGE_2: \\n{} \\n\\n ROUGE_L: \\n{} \\n\\n ROUGE_Lsum: \\n{} \\n\\n\".format(\r\n+ result[\"rouge1\"], result[\"rouge2\"], result[\"rougeL\"], result[\"rougeLsum\"]\r\n )\r\n )\r\n```\r\n\r\n, and I got a score (37.94), near paper score.\r\nNote that: the above my code shows both \"rougeL\" and \"rougeLsum\".\r\n\r\nQuestion:\r\nWhy don't your code use \"rougeLsum\" ?\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/summarization/t5/evaluate_cnn.py#L47\r\n\r\nI'm sorry, I'm not good at English.\r\nI hope some kind people fix this and create PR, thanks.\r\n\r\nBest,",
"P.S.\r\nthe above hack is based on 41a1d27cdefd6417c298518198f99e3b8431a5c0:\r\n\r\n```\r\n$ gglv\r\n* commit 11bd4a086438b100c47e5e2b7e8696fcd67e94d1 (HEAD, master)\r\n| Author: Takahiro Ito <[email protected]>\r\n| Date: Tue Jun 9 14:35:19 2020 +0900\r\n| \r\n| スコア計算の不具合を修正\r\n| \r\n* commit 41a1d27cdefd6417c298518198f99e3b8431a5c0 (origin/master, origin/HEAD)\r\n| Author: Sylvain Gugger <[email protected]>\r\n| Date: Mon Jun 8 21:22:37 2020 -0400\r\n```\r\n",
"Sorry, I accidentally closed issue ... ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | # 🐛 Bug
## Information
I tried to use summarization/t5 from the examples.
The ROUGE_1 and ROUGE_2 scores are almost equal to those in Google's paper.
But ROUGE_L alone is very low!
```
ROUGE_1: paper=41.12 | my result=40.48 (almost equal)
ROUGE_2: paper=19.56 | my result=18.59 (almost equal)
ROUGE_L: paper=38.35 | my result=28.22 (very low ?)
```
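For context, `rougeL` and `rougeLsum` are computed differently; a minimal illustration with the `rouge_score` package that the example script already uses (`rougeLsum` treats each line as a sentence, hence the newline preparation discussed in the comments above):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL", "rougeLsum"], use_stemmer=True)
# rougeLsum splits on newlines, so each sentence goes on its own line.
reference = "the cat sat on the mat .\nthen it purred loudly ."
prediction = "the cat sat on the mat .\nit purred loudly ."
print(scorer.score(reference, prediction))  # the two variants can differ
```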
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
## Environment info
- `transformers` version:
- Platform: Cent OS 7 (64bit)
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4860/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4859/comments | https://api.github.com/repos/huggingface/transformers/issues/4859/events | https://github.com/huggingface/transformers/issues/4859 | 635,107,710 | MDU6SXNzdWU2MzUxMDc3MTA= | 4,859 | Memory issues in Transformers | {
"login": "AishwaryaVerma",
"id": 53822388,
"node_id": "MDQ6VXNlcjUzODIyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/53822388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AishwaryaVerma",
"html_url": "https://github.com/AishwaryaVerma",
"followers_url": "https://api.github.com/users/AishwaryaVerma/followers",
"following_url": "https://api.github.com/users/AishwaryaVerma/following{/other_user}",
"gists_url": "https://api.github.com/users/AishwaryaVerma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AishwaryaVerma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AishwaryaVerma/subscriptions",
"organizations_url": "https://api.github.com/users/AishwaryaVerma/orgs",
"repos_url": "https://api.github.com/users/AishwaryaVerma/repos",
"events_url": "https://api.github.com/users/AishwaryaVerma/events{/privacy}",
"received_events_url": "https://api.github.com/users/AishwaryaVerma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4858/comments | https://api.github.com/repos/huggingface/transformers/issues/4858/events | https://github.com/huggingface/transformers/issues/4858 | 635,038,196 | MDU6SXNzdWU2MzUwMzgxOTY= | 4,858 | Add support for DeBERTa | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello, Our code has just been released at [DeBERTa]( https://github.com/microsoft/DeBERTa)\r\nPlease take a try and your feedback will be good for our improvements, we also welcome the community to work together with us to improve it.\r\nWe are also glad to integrate DeBERTa into transformers.\r\n",
"PR Add deberta model #5929 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Unstale - very close to merge!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,606 | 1,606 | COLLABORATOR | null | # 🌟 New model addition
## Model description
DeBERTa (Decoding-enhanced BERT with disentangled attention) is a new model architecture:
> In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pre-training and performance of downstream tasks.
The paper can be found [here](https://arxiv.org/abs/2006.03654).
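To make the disentangled-attention idea concrete, here is a minimal single-head sketch of the three score terms described above (content-to-content, content-to-position, position-to-content). The parameter names and the toy relative-position table are illustrative assumptions, not the actual DeBERTa implementation:
```python
import torch

def disentangled_scores(H, P_rel, W_qc, W_kc, W_qr, W_kr):
    """H: (seq, d) content states; P_rel: (seq, seq, d) relative-position embeddings."""
    Qc, Kc = H @ W_qc, H @ W_kc                          # content projections
    c2c = Qc @ Kc.T                                      # content-to-content term
    c2p = torch.einsum("id,ijd->ij", Qc, P_rel @ W_kr)   # content-to-position term
    p2c = torch.einsum("ijd,jd->ij", P_rel @ W_qr, Kc)   # position-to-content term
    return (c2c + c2p + p2c) / (3 * H.size(-1)) ** 0.5   # scaling used in the paper

seq_len, d = 4, 8
H = torch.randn(seq_len, d)
P_rel = torch.randn(seq_len, seq_len, d)  # P_rel[i, j]: toy embedding of the relative distance between i and j
weights = [torch.randn(d, d) for _ in range(4)]
attn = torch.softmax(disentangled_scores(H, P_rel, *weights), dim=-1)
print(attn.shape)  # torch.Size([4, 4])
```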
## Open source status
* [x] the model implementation is available: [GitHub](https://github.com/microsoft/DeBERTa)
* [x] the model weights are available: [GitHub release](https://github.com/microsoft/DeBERTa/releases/tag/v0.1)
* [ ] who are the authors: @BigBird01
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4858/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4858/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4857/comments | https://api.github.com/repos/huggingface/transformers/issues/4857/events | https://github.com/huggingface/transformers/issues/4857 | 635,016,815 | MDU6SXNzdWU2MzUwMTY4MTU= | 4,857 | sentencepiece==0.1.92 causing segmentation fault | {
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"@boy2000-007man Hi, folk. Just curious about how do you find this bug? It costs me almost the whole day... Anyway, thank you so much!",
"OMG!!! Awesome Advice!!!! ",
"I spent a whole night to address the dependencies problems and almost lost my mind. This answer saved my life. Appreciate!",
"Thanks for this! Also curious how you worked this out - I've spent a whole day trying to figure this out!",
"Thanks so much you saved my day.",
"I was dreading the thought of having to dive into this issue with faulthandler and meticulously cross referencing dependencies with a working version....but this post just saved my night. Thanks @boy2000-007man \r\n\r\nThis seems like a new pytorch v1.4.0 incompatibility issue with the latest huggingface releases. I'm assuming this may have been missed due to the focus on v1.5.0 support, but it seems like many people cannot make the jump to cuda 10.2/pytorch 1.5.0 currently, so this seems like a pretty big headache that should be addressed.",
"Closing this as solved by #5418",
"You are excellent!",
"same problem when use sentencepiece==0.1.94",
"Having the same problem with sentencepiece==0.1.94",
"Cf #8199 we will remove the hard dependency on sentencepiece (replaced by the `tokenizers` library) in a coming release, probably end of next week.",
"Thank you a lot! You saved my day!"
] | 1,591 | 1,605 | 1,596 | CONTRIBUTOR | null | # 🐛 Bug
## Information
`transformers==2.9.1`
`torch==1.4.0`
As of today,
I noticed that the newly released `sentencepiece==0.1.92` causes a segmentation fault when calling torch functions.
Downgrading to `sentencepiece==0.1.91` solves it.
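For anyone hitting this before a pin lands, a minimal way to localize such crashes (a sketch; it assumes the fault is triggered at import time or on the first torch call, which may differ per setup):
```python
import faulthandler

faulthandler.enable()  # print a Python traceback when a native segfault occurs

import sentencepiece  # 0.1.92 reportedly triggers the fault together with torch 1.4.0
import torch

print(sentencepiece.__version__)
print(torch.zeros(2, 2))  # with sentencepiece==0.1.91 this runs cleanly
```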
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4857/reactions",
"total_count": 97,
"+1": 52,
"-1": 0,
"laugh": 0,
"hooray": 15,
"confused": 0,
"heart": 17,
"rocket": 13,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4857/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4856/comments | https://api.github.com/repos/huggingface/transformers/issues/4856/events | https://github.com/huggingface/transformers/issues/4856 | 634,979,117 | MDU6SXNzdWU2MzQ5NzkxMTc= | 4,856 | Tensorflow Glue example script for finetuning not usable with DistilBert | {
"login": "maggy96",
"id": 3636686,
"node_id": "MDQ6VXNlcjM2MzY2ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3636686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maggy96",
"html_url": "https://github.com/maggy96",
"followers_url": "https://api.github.com/users/maggy96/followers",
"following_url": "https://api.github.com/users/maggy96/following{/other_user}",
"gists_url": "https://api.github.com/users/maggy96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maggy96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maggy96/subscriptions",
"organizations_url": "https://api.github.com/users/maggy96/orgs",
"repos_url": "https://api.github.com/users/maggy96/repos",
"events_url": "https://api.github.com/users/maggy96/events{/privacy}",
"received_events_url": "https://api.github.com/users/maggy96/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello!\r\n\r\nIndeed, it is a bug in the way the TensorFlow dataset is generated. A fix is on its way :)"
] | 1,591 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using: DistilBert
```
python run_glue.py \
--model_name_or_path distilbert-base-cased \
--task_name MRPC\
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/distilbert/
```
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Running the example as above, the script gives me the following error:
```
Traceback (most recent call last):
File "run_glue.py", line 229, in <module>
main()
File "run_glue.py", line 199, in main
compute_metrics=compute_metrics,
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 48, in __init__
self._setup_training()
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 58, in _setup_training
self._prepare_dataset()
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/transformers/trainer_tf.py", line 95, in _prepare_dataset
self.num_train_examples = self.train_dataset.reduce(tf.constant(0), lambda x, _: x + 1).numpy()
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 1934, in reduce
output_types=structure.get_flat_tensor_types(state_structure)))
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 4661, in reduce_dataset
_ops.raise_from_not_ok_status(e, name)
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 6606, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: TypeError: `generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None.
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 805, in generator_py_func
ret, dtype=dtype.as_numpy_dtype))
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 196, in _convert
result = np.asarray(value, dtype=dtype, order="C")
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/numpy/core/_asarray.py", line 85, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 236, in __call__
ret = func(*args)
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 810, in generator_py_func
"element was %s." % (dtype.name, ret)), sys.exc_info()[2])
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/six.py", line 702, in reraise
raise value.with_traceback(tb)
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 805, in generator_py_func
ret, dtype=dtype.as_numpy_dtype))
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/ops/script_ops.py", line 196, in _convert
result = np.asarray(value, dtype=dtype, order="C")
File "/home/ubuntu/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/numpy/core/_asarray.py", line 85, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: `generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None.
[[{{node PyFunc}}]] [Op:ReduceDataset]
```
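For context, a hedged guess at what trips the generator, consistent with the maintainer note above that the TensorFlow dataset generation is at fault (toy code for illustration, not the actual dataset-building pipeline): DistilBERT features carry no `token_type_ids`, so a generator that always yields that field emits `None`, which `tf.data` cannot convert to `int32`. Skipping absent fields avoids the crash:
```python
import tensorflow as tf

features = [{"input_ids": [101, 102], "attention_mask": [1, 1], "token_type_ids": None}]

def gen():
    for f in features:
        # drop fields the model does not produce instead of yielding None
        yield {k: v for k, v in f.items() if v is not None}, 0

dataset = tf.data.Dataset.from_generator(
    gen,
    ({"input_ids": tf.int32, "attention_mask": tf.int32}, tf.int64),
)
print(next(iter(dataset)))
```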
The task I am working on is:
* [x] an official GLUE/SQuAD task: MRPC
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. clone repo
2. run command from above using `examples/text-classification/run_tf_glue.py`
## Expected behavior
Fine-tuning works on DistilBert too, not only on BERT.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1017-aws-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4856/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4855/comments | https://api.github.com/repos/huggingface/transformers/issues/4855/events | https://github.com/huggingface/transformers/pull/4855 | 634,963,823 | MDExOlB1bGxSZXF1ZXN0NDMxNDIwNTA2 | 4,855 | Add XLMRobertaForQuestionAnswering | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=h1) Report\n> Merging [#4855](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a139d1a1602ee72ca98d5e0412efbd68f746d2c8&el=desc) will **increase** coverage by `2.61%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4855 +/- ##\n==========================================\n+ Coverage 73.93% 76.54% +2.61% \n==========================================\n Files 128 128 \n Lines 21498 21501 +3 \n==========================================\n+ Hits 15894 16458 +564 \n+ Misses 5604 5043 -561 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.58% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.28% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.26% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.19% <0.00%> (+72.36%)` | :arrow_up: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <0.00%> (+78.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=footer). Last update [a139d1a...b2b4f9c](https://codecov.io/gh/huggingface/transformers/pull/4855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | One of the missing [model tasks](https://github.com/huggingface/transformers/projects/17). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4855",
"html_url": "https://github.com/huggingface/transformers/pull/4855",
"diff_url": "https://github.com/huggingface/transformers/pull/4855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4855.patch",
"merged_at": 1591665757000
} |
https://api.github.com/repos/huggingface/transformers/issues/4854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4854/comments | https://api.github.com/repos/huggingface/transformers/issues/4854/events | https://github.com/huggingface/transformers/pull/4854 | 634,944,824 | MDExOlB1bGxSZXF1ZXN0NDMxNDA1MjQ4 | 4,854 | Hans data | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=h1) Report\n> Merging [#4854](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4854 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 128 128 \n Lines 21851 21851 \n=======================================\n Hits 16884 16884 \n Misses 4967 4967 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=footer). Last update [ca5e1cd...a58291b](https://codecov.io/gh/huggingface/transformers/pull/4854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,592 | 1,592 | COLLABORATOR | null | This is the first step toward solving #4742: to be able to use the Trainer API, we first need to remove the TensorDataset and have datasets with dict items. This PR addresses that and updates the training and evaluation script accordingly.
It takes the multiple-choice example as a reference implementation, using the same file structure (hence the removal of "hans_processor.py"), and implements in "utils_hans.py" (a sketch of the resulting dict items follows the list):
- a `HansDataset` and a `TFHansDataset` that implement the logic of the old method `load_and_cache_examples`
- a `HansProcessor` (copied from before)
- a `hans_convert_examples_to_features` with the same logic as before but using the tokenizer method for padding instead of re-implementing it.
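For concreteness, a minimal sketch of the dict-item shape these datasets return (the class and field handling are illustrative stand-ins assuming pre-tokenized features, not the real `HansDataset` code):
```python
import torch
from torch.utils.data import Dataset

class ToyDictDataset(Dataset):  # illustrative stand-in for HansDataset
    def __init__(self, features):
        self.features = features

    def __len__(self):
        return len(self.features)

    def __getitem__(self, i):
        f = self.features[i]
        # Trainer expects dict items whose keys match the model's forward arguments
        return {
            "input_ids": torch.tensor(f["input_ids"]),
            "attention_mask": torch.tensor(f["attention_mask"]),
            "labels": torch.tensor(f["label"]),
        }

ds = ToyDictDataset([{"input_ids": [101, 102], "attention_mask": [1, 1], "label": 0}])
print(ds[0]["input_ids"])
```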
Side question: it doesn't look like the `TFMultipleChoiceDataset` I use as a reference for this implementation uses caching; maybe that should be added in the future? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4854",
"html_url": "https://github.com/huggingface/transformers/pull/4854",
"diff_url": "https://github.com/huggingface/transformers/pull/4854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4854.patch",
"merged_at": 1592055314000
} |
https://api.github.com/repos/huggingface/transformers/issues/4853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4853/comments | https://api.github.com/repos/huggingface/transformers/issues/4853/events | https://github.com/huggingface/transformers/pull/4853 | 634,857,353 | MDExOlB1bGxSZXF1ZXN0NDMxMzMzODI4 | 4,853 | Remove unused arguments in Multiple Choice example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=h1) Report\n> Merging [#4853](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37be3786cf1de9d21233f543c231866e68954998&el=desc) will **increase** coverage by `0.14%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4853 +/- ##\n==========================================\n+ Coverage 76.40% 76.54% +0.14% \n==========================================\n Files 128 128 \n Lines 21533 21533 \n==========================================\n+ Hits 16452 16483 +31 \n+ Misses 5081 5050 -31 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (+10.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=footer). Last update [37be378...9e1b14a](https://codecov.io/gh/huggingface/transformers/pull/4853?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Also, should the `DataProcessor` in this file simply use the one in `transformers`?",
"LGTM"
] | 1,591 | 1,592 | 1,591 | COLLABORATOR | null | In the dataset preparation, the arguments `pad_token_segment_id`, `pad_on_left`, `pad_token`, and `mask_padding_with_zero` are inferred from the tokenizer to be sent to `convert_examples_to_features`, which then does not use them (since `tokenizer.encode_plus` does all of this using the tokenizer state).
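To illustrate why those arguments are redundant (a sketch, assuming a BERT checkpoint; `pad_to_max_length` is the padding flag in this library version): `encode_plus` pulls the pad token, padding side, and segment id from the tokenizer's own state, so none of them need to be threaded through:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer.encode_plus(
    "first sentence",
    "second sentence",
    max_length=16,
    pad_to_max_length=True,  # padding behavior comes from the tokenizer itself
)
print(enc["input_ids"])
print(enc["attention_mask"])  # padding positions are masked with zeros by default
```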
This PR cleans that up (and removes the TODO). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4853/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4853",
"html_url": "https://github.com/huggingface/transformers/pull/4853",
"diff_url": "https://github.com/huggingface/transformers/pull/4853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4853.patch",
"merged_at": 1591747510000
} |
https://api.github.com/repos/huggingface/transformers/issues/4852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4852/comments | https://api.github.com/repos/huggingface/transformers/issues/4852/events | https://github.com/huggingface/transformers/issues/4852 | 634,806,519 | MDU6SXNzdWU2MzQ4MDY1MTk= | 4,852 | issue in pretraining language model with checkpoint | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | # 🐛 Bug
## Information
I am pre-training ALBERT from scratch and it was going fine (8 V100s).
But when I resume training from a checkpoint, it uses only a single GPU, and only 1 GB of that GPU's 32 GB of RAM.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
`transformers` version: 2.10.0
Launching the script with:
```
python transformers/examples/language-modeling/run_language_modeling.py --train_data_file text.txt --output_dir albert_model --model_type albert --mlm --config_name test --tokenizer_name test --do_train --line_by_line --learning_rate 5e-5 --num_train_epochs 3 --save_total_limit 50 --save_steps 5000 --per_gpu_train_batch_size 150 --seed 42 --overwrite_output_dir --max_steps 200000 --fp16 --model_name_or_path albert_model/checkpoint-200000
```
It seems that it resumes properly at the saved global step, but it is not using the GPU properly, which is really weird:
```
/language_model/lm/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:218: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
06/08/2020 11:25:09 - INFO - transformers.trainer - ***** Running training *****
06/08/2020 11:25:09 - INFO - transformers.trainer - Num examples = 28236463
06/08/2020 11:25:09 - INFO - transformers.trainer - Num Epochs = 43
06/08/2020 11:25:09 - INFO - transformers.trainer - Instantaneous batch size per device = 150
06/08/2020 11:25:09 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 1200
06/08/2020 11:25:09 - INFO - transformers.trainer - Gradient Accumulation steps = 1
06/08/2020 11:25:09 - INFO - transformers.trainer - Total optimization steps = 1000000
06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from checkpoint, will skip to saved global_step
06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from epoch 8
06/08/2020 11:25:09 - INFO - transformers.trainer - Continuing training from global step 200000
06/08/2020 11:25:09 - INFO - transformers.trainer - Will skip the first 11752 steps in the first epoch
Epoch: 0%| | 0/35 [00:00<?, ?it/s]
Iteration: 42%|███████████████████████████████████████████▋ | 9781/23531 [1:20:59<1:53:51, 2.01it/s]
```
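A quick sanity check before digging deeper (a sketch; it assumes the resumed process should see the same 8 GPUs as the original run):
```python
import torch

print(torch.cuda.is_available())
print(torch.cuda.device_count())  # should report 8 if all V100s are visible
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```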
Can anyone suggest something here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4852/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4852/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4851/comments | https://api.github.com/repos/huggingface/transformers/issues/4851/events | https://github.com/huggingface/transformers/pull/4851 | 634,757,515 | MDExOlB1bGxSZXF1ZXN0NDMxMjUzOTk4 | 4,851 | Run a single wandb instance per TPU run | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Why is it specific to tpu only?\r\nWould the same logic apply in all cases?",
"Yes I'm guessing we only want the global master to log to wandb in DDP, like we do for Tensorboard.\r\n\r\nNot sure why we hadn't done it like that before, @borisdayma – thoughts?",
"I did not consider DP/DDP at the time. Actually Tensorboard logging does not consider world master either (only for logging config parameters but not metrics).\r\n\r\nI understand we should wrap the entire `wandb` and Tensorboard logics within a simple `if self.is_world_master`.\r\n\r\nMay I suggest the following:\r\n\r\n* refactor logging through PR #4756 \r\n* add an equivalent `TFTrainer.is_world_master`\r\n* wrap relevant Tensorboard & wandb sections of `log_metrics` by checking `is_world_master`\r\n* call `setup_wandb` only for world master (checked either within `Trainer` & `TFTrainer` or within `setup_wandb`)\r\n\r\nLet me know if you want me to add those changes.",
"I think those changes would be welcome. Do you agree @julien-c ?",
"Yes I agree. Should we already merge this PR though?",
"Sure, it won't do any harm."
] | 1,591 | 1,591 | 1,591 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4851",
"html_url": "https://github.com/huggingface/transformers/pull/4851",
"diff_url": "https://github.com/huggingface/transformers/pull/4851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4851.patch",
"merged_at": 1591820899000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4850/comments | https://api.github.com/repos/huggingface/transformers/issues/4850/events | https://github.com/huggingface/transformers/pull/4850 | 634,715,244 | MDExOlB1bGxSZXF1ZXN0NDMxMjIwMjYx | 4,850 | [Benchmark] add tpu and torchscript for benchmark | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=h1) Report\n> Merging [#4850](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42860e92a4a99a8be338644462cfc3f62d1379a3&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `78.62%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4850 +/- ##\n==========================================\n+ Coverage 76.97% 77.01% +0.03% \n==========================================\n Files 128 128 \n Lines 21533 21615 +82 \n==========================================\n+ Hits 16575 16646 +71 \n- Misses 4958 4969 +11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <46.66%> (-0.24%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.09% <50.00%> (-1.19%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.41% <66.66%> (-0.71%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `68.85% <69.76%> (+0.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `73.09% <96.49%> (+5.85%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `85.36% <100.00%> (-0.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=footer). Last update [42860e9...0267668](https://codecov.io/gh/huggingface/transformers/pull/4850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"**Baseline GPU results**:\r\n\r\n======= INFERENCE - SPEED - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 0.007s\r\n\t\tdistilbert-base-uncased/8/32: 0.009s\r\n\t\tdistilbert-base-uncased/8/128: 0.022s\r\n\t\tdistilbert-base-uncased/8/512: 0.1s\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 0.015s\r\n\t\tbert-base-cased/8/32: 0.025s\r\n\t\tbert-base-cased/8/128: 0.072s\r\n\t\tbert-base-cased/8/512: 0.332s\r\n======= INFERENCE - MEMORY - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 274 MB\r\n\t\tdistilbert-base-uncased/8/32: 298 MB\r\n\t\tdistilbert-base-uncased/8/128: 324 MB\r\n\t\tdistilbert-base-uncased/8/512: 552 MB\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 458 MB\r\n\t\tbert-base-cased/8/32: 462 MB\r\n\t\tbert-base-cased/8/128: 488 MB\r\n\t\tbert-base-cased/8/512: 728 MB\r\n",
"**Torchscript GPU results:**\r\n\r\n======= INFERENCE - SPEED - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 0.005s\r\n\t\tdistilbert-base-uncased/8/32: 0.009s\r\n\t\tdistilbert-base-uncased/8/128: 0.02s\r\n\t\tdistilbert-base-uncased/8/512: 0.096s\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 0.012s\r\n\t\tbert-base-cased/8/32: 0.025s\r\n\t\tbert-base-cased/8/128: 0.073s\r\n\t\tbert-base-cased/8/512: 0.328s\r\n======= INFERENCE - MEMORY - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 274 MB\r\n\t\tdistilbert-base-uncased/8/32: 296 MB\r\n\t\tdistilbert-base-uncased/8/128: 312 MB\r\n\t\tdistilbert-base-uncased/8/512: 552 MB\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 458 MB\r\n\t\tbert-base-cased/8/32: 460 MB\r\n\t\tbert-base-cased/8/128: 488 MB\r\n\t\tbert-base-cased/8/512: 716 MB\r\n\r\ncheck colab here: https://colab.research.google.com/drive/10KSu_6X6unsKXPOiwiGP6QDC1fLtADFJ?usp=sharing\r\n\r\nThe differences seem very small to me. What do you think @LysandreJik ?",
"**TPU memory and time usage**\r\n\r\n======= INFERENCE - SPEED - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 0.004s\r\n\t\tdistilbert-base-uncased/8/32: 0.005s\r\n\t\tdistilbert-base-uncased/8/128: 0.004s\r\n\t\tdistilbert-base-uncased/8/512: 0.005s\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 0.01s\r\n\t\tbert-base-cased/8/32: 0.008s\r\n\t\tbert-base-cased/8/128: 0.009s\r\n\t\tbert-base-cased/8/512: 0.009s\r\nTPU was used for inference. Note that the time after compilation stabilized (after ~10 inferences model.forward(..) calls) was measured.\r\n======= INFERENCE - MEMORY - RESULT =======\r\n\t======= MODEL CHECKPOINT: distilbert-base-uncased =======\r\n\t\tdistilbert-base-uncased/8/8: 1027 MB\r\n\t\tdistilbert-base-uncased/8/32: 1118 MB\r\n\t\tdistilbert-base-uncased/8/128: 1118 MB\r\n\t\tdistilbert-base-uncased/8/512: 1118 MB\r\n\t\tdistilbert-base-uncased/32/512: 1028 MB\r\n\t\tdistilbert-base-uncased/64/512: 1066 MB\r\n\t======= MODEL CHECKPOINT: bert-base-cased =======\r\n\t\tbert-base-cased/8/8: 1330 MB\r\n\t\tbert-base-cased/8/32: 1332 MB\r\n\t\tbert-base-cased/8/128: 1332 MB\r\n\t\tbert-base-cased/8/512: 1332 MB\r\n\t\tbert-base-cased/32/512: 1314 MB\r\n\t\tbert-base-cased/64/512: 1334 MB\r\n\r\n\r\nIn comparison to the GPU times - this seems reasonable to me. Kind of weird that for longer sequences it takes teh same amount of time or less...\r\n\r\nAt the moment I'm measuring CPU usage for TPU - not at all sure how to measure memory usage correctly for TPU...any ideas @LysandreJik ? UPDATE: Pretty sure that memory usage is wrong for TPU \r\n\r\nGoogle colab is here: \r\nhttps://colab.research.google.com/drive/1vp9y7R2bLYTrK8hWOIo8VFHGm6M7ft0B?usp=sharing",
"Requesting @julien-c's review for the move of `is_tpu_available` to the utils.",
"UPDATE:\r\n\r\nI'm fine with PyTorch for CPU, GPU with and without torchscript results:\r\nhttps://docs.google.com/spreadsheets/d/1vgAIG7P3AOdBp5X91rVVu8AqnZ_hAvFzKj_fTNolAlU/edit?usp=sharing\r\n\r\nTPU running times also seem to be fine. TPU memory is not yet implemented - will probably wait here until there is a PyTorch XLA API: https://github.com/pytorch/xla/issues/2180",
"Good to merge for me, waiting for @julien-c to check it out",
"Good for me",
"Okey changed it to `is_torch_tpu_available()`. Think that's fine. Pinging @julien-c @LysandreJik to notice the change.",
"Indeed, nice change!"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | This PR adds:
- TorchScript memory and time benchmarking
- TPU memory and time benchmarking | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4850/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4850",
"html_url": "https://github.com/huggingface/transformers/pull/4850",
"diff_url": "https://github.com/huggingface/transformers/pull/4850.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4850.patch",
"merged_at": 1591737163000
} |
https://api.github.com/repos/huggingface/transformers/issues/4849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4849/comments | https://api.github.com/repos/huggingface/transformers/issues/4849/events | https://github.com/huggingface/transformers/pull/4849 | 634,694,915 | MDExOlB1bGxSZXF1ZXN0NDMxMjAzODQw | 4,849 | Clean documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=h1) Report\n> Merging [#4849](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e817747941c75c8e14f0e93755ec648269f8a14d&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4849 +/- ##\n=======================================\n Coverage 76.57% 76.57% \n=======================================\n Files 128 128 \n Lines 21497 21497 \n=======================================\n Hits 16462 16462 \n Misses 5035 5035 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `27.27% <0.00%> (-64.94%)` | :arrow_down: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-64.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.60% <0.00%> (-4.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (-2.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (-0.16%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4849/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=footer). 
Last update [e817747...d94e884](https://codecov.io/gh/huggingface/transformers/pull/4849?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome!"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | This PR addresses several problems in the documentation:
- not all existing models were present, I added them
- made sure to always follow the same order of sections/classes as bert for consistency, added an Overview section when not present, moved tips at the end of that overview section if they were elsewhere
- fixed a few problems (links not appearing or badly formatted rst)
- one example was copy-pasted without adapting the model names, fixed that too
Made a list of models missing as I went by, they are tracked in [this project](https://github.com/huggingface/transformers/projects/17). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4849",
"html_url": "https://github.com/huggingface/transformers/pull/4849",
"diff_url": "https://github.com/huggingface/transformers/pull/4849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4849.patch",
"merged_at": 1591630100000
} |
https://api.github.com/repos/huggingface/transformers/issues/4848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4848/comments | https://api.github.com/repos/huggingface/transformers/issues/4848/events | https://github.com/huggingface/transformers/issues/4848 | 634,657,085 | MDU6SXNzdWU2MzQ2NTcwODU= | 4,848 | TFXLMRobertaForSequenceClassification: call() got an unexpected keyword argument 'labels' | {
"login": "QixinLi",
"id": 25460447,
"node_id": "MDQ6VXNlcjI1NDYwNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25460447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QixinLi",
"html_url": "https://github.com/QixinLi",
"followers_url": "https://api.github.com/users/QixinLi/followers",
"following_url": "https://api.github.com/users/QixinLi/following{/other_user}",
"gists_url": "https://api.github.com/users/QixinLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QixinLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QixinLi/subscriptions",
"organizations_url": "https://api.github.com/users/QixinLi/orgs",
"repos_url": "https://api.github.com/users/QixinLi/repos",
"events_url": "https://api.github.com/users/QixinLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/QixinLi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You're on `transformers` version `v2.5.1`, but the TensorFlow models can only accept labels since this PR https://github.com/huggingface/transformers/pull/4530 was merged, 4 days ago.\r\n\r\nThis currently isn't available in any release, so you will have to install from source to use that feature:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"@LysandreJik thanks for your reminder!"
] | 1,591 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using: TFXLMRoberta
Language I am using the model on: cross-lingual
The problem arises when using:
* [ ] the official example scripts:
* [√] my own modified scripts:
```python
import tensorflow as tf
from transformers import TFXLMRobertaForSequenceClassification, XLMRobertaTokenizer, XLMRobertaConfig
tokenizer = XLMRobertaTokenizer.from_pretrained('jplu/tf-xlm-roberta-base')
model = TFXLMRobertaForSequenceClassification.from_pretrained('jplu/tf-xlm-roberta-base')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
labels = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
and I got this error:
```
File "run_classifier.py", line 180, in train
outputs = self.model(input_ids,attention_mask = input_mask, token_type_ids = token_type_ids, labels=labels, training = True)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py", line 379, in call
outputs = self.roberta(inputs, **kwargs)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
TypeError: call() got an unexpected keyword argument 'labels'
```
I read the notes; they say the TFXLMRobertaForSequenceClassification class **overrides** TFRobertaForSequenceClassification.
And the [TFRobertaForSequenceClassification](https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification) class's call() method accepts a 'labels' argument.
Starting from the TFRobertaForSequenceClassification example code, I just changed the model and tokenizer to XLM, and it raised the error above.
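A possible workaround on v2.5.1 is to compute the loss outside the model. A minimal sketch (illustrative; the checkpoint name is taken from the snippet above, and the loss function is an assumption, not the one built into the library):
```python
import tensorflow as tf
from transformers import TFXLMRobertaForSequenceClassification, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained('jplu/tf-xlm-roberta-base')
model = TFXLMRobertaForSequenceClassification.from_pretrained('jplu/tf-xlm-roberta-base')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # batch size 1
labels = tf.constant([1])  # batch size 1

logits = model(input_ids)[0]  # the model returns a tuple; logits come first
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(labels, logits)
```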
## Environment info
- `transformers` version: 2.5.1
- Platform: MacOS Catalina
- Python version: 3.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4847/comments | https://api.github.com/repos/huggingface/transformers/issues/4847/events | https://github.com/huggingface/transformers/issues/4847 | 634,586,574 | MDU6SXNzdWU2MzQ1ODY1NzQ= | 4,847 | Add optimal model size and stopping time feature | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
] | [
"Great stuff, thank you! The energy estimates look 1000 worse than reality though, V100 running for 12 h should not consume 5432 kWh I think, else we'd be all dead. 5.4 kWh looks more reasonable.\r\n\r\n<img width=\"424\" alt=\"Screenshot 2020-06-09 at 00 26 45\" src=\"https://user-images.githubusercontent.com/424613/84082595-c9b7e380-a9e8-11ea-8c64-f221029aa60b.png\">\r\n",
"> Great stuff, thank you! The energy estimates look 1000 worse than reality though, V100 running for 12 h should not consume 5432 kWh I think, else we'd be all dead. 5.4 kWh looks more reasonable.\r\n> \r\n> <img alt=\"Screenshot 2020-06-09 at 00 26 45\" width=\"424\" src=\"https://user-images.githubusercontent.com/424613/84082595-c9b7e380-a9e8-11ea-8c64-f221029aa60b.png\">\r\n\r\nAh yes - I remembered having a doubt on that, I checked again the library we used to estimate those and there might have been a unit conversion error, I'll fix that ASAP tomorrow! \r\n\r\nEdit: it's fixed, thank you @lopuhin !",
"This is already looking very promising! Good stuff.\r\n\r\nWhen clicking the \"initialize in transformers\" button, the code block should probably not center-align the code, but left align instead. That makes the code a lot more readable.",
"> This is already looking very promising! Good stuff.\r\n> \r\n> When clicking the \"initialize in transformers\" button, the code block should probably not center-align the code, but left align instead. That makes the code a lot more readable.\r\n\r\nYeah that was a bit of an aesthetic choice to not break the flow of the web page, it definitely wouldn't be like this in a tool rather than a demo!\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale, what's the status on this @TevenLeScao? Should we close?",
"@julien-c we had originally decided not to go forward with this, but I started working on it amongst the discussions about the scale of GPT-3. I didn't get to finish it before leaving for holidays two weeks ago, but the PR will be ready this week.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi! The \"initialize in Huggingface\" button is broken -- is there something I can do locally to solve it? I just wanted the lines of training code for a given wall-clock time.",
"Hey! The page seems broken, not sure why, I'll relaunch it",
"@TevenLeScao Thanks for the immediate reply! The button to launch in Huggingface Transformers still isn't working, but I'm happy to help debug / send any reports if it helps! Alternatively, do you think you could help me understand what the button does? i'm just hoping to generate the configuration string `n_layers=N_LAYERS,n_ctx=N_CTX`, with the variables filled in by the calculator.\r\n\r\nThanks for your time!",
"I've relaunched, it should work now (just gotta figure why the page doesn't center on my desktop).",
"@TevenLeScao Yes, it works -- thanks!\r\n\r\nOut of curiosity, why did you use Transformer-XL as opposed to something like GPT-2? Does Transformer-XL reach a lower validation loss on Wikitext-103 as opposed to GPT-2 when training for the same number of steps?",
"Yeah, it was the state-of-the-art at the time!"
] | 1,591 | 1,623 | 1,604 | CONTRIBUTOR | null | # 🚀 Feature request
The [calculator](https://huggingface.co/calculator/) blog post presented an automated way to find scaling laws with model size and compute budget on language modeling tasks. Adding it to the library would help save on training costs by picking an optimal model size and training time.
## Motivation
Estimating how big of a model to use and how long to train for is more of an art than a science. An automated tool to perform that task would allow researchers and practitioners to concentrate on the high-level parts of their projects as opposed to parameter tweaking.
## Your contribution
I can submit a PR with my existing work, probably integrating it within `Trainer` and/or [`knockknock`](https://github.com/huggingface/knockknock).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4847/reactions",
"total_count": 43,
"+1": 23,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 17,
"rocket": 2,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/4847/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4846/comments | https://api.github.com/repos/huggingface/transformers/issues/4846/events | https://github.com/huggingface/transformers/issues/4846 | 634,577,470 | MDU6SXNzdWU2MzQ1Nzc0NzA= | 4,846 | Memory issue in Transformers | {
"login": "AishwaryaVerma",
"id": 53822388,
"node_id": "MDQ6VXNlcjUzODIyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/53822388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AishwaryaVerma",
"html_url": "https://github.com/AishwaryaVerma",
"followers_url": "https://api.github.com/users/AishwaryaVerma/followers",
"following_url": "https://api.github.com/users/AishwaryaVerma/following{/other_user}",
"gists_url": "https://api.github.com/users/AishwaryaVerma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AishwaryaVerma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AishwaryaVerma/subscriptions",
"organizations_url": "https://api.github.com/users/AishwaryaVerma/orgs",
"repos_url": "https://api.github.com/users/AishwaryaVerma/repos",
"events_url": "https://api.github.com/users/AishwaryaVerma/events{/privacy}",
"received_events_url": "https://api.github.com/users/AishwaryaVerma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you have a paper/link about those \"add\" and \"slots\" that you mention? I have never heard of it.",
"> Do you have a paper/link about those \"add\" and \"slots\" that you mention? I have never heard of it.\r\nSorry, it was __all__.\r\nPlease check the documentation link.\r\nhttps://docs.python.org/3/tutorial/modules.html#importing-from-a-package\r\nhttps://docs.python.org/3/reference/datamodel.html#slots",
"> Do you have a paper/link about those \"add\" and \"slots\" that you mention? I have never heard of it.\r\n\r\nSome stackoverflow links:\r\nhttps://stackoverflow.com/questions/44834/can-someone-explain-all-in-python\r\nhttps://stackoverflow.com/questions/472000/usage-of-slots\r\nhttps://stackoverflow.com/questions/14118564/how-does-slots-avoid-a-dictionary-lookup/14119024#14119024",
"I thought you meant some kind of deep learning optimization. I am curious to see the impact of slots. I guess it could be useful to have a closer look at how much it decreases memory usage. _However_ it seems highly unlikely that you will get the consumption down by a lot. When I read through that top answer, the memory that you save is in the _bytes_, not even kilobytes, let alone hundreds of megabytes. \r\n\r\nIf you want, you can rewrite parts of transformers and benchmark whether you'll find a difference, but I doubt it.\r\n\r\nOther things that may help:\r\n\r\n- use evaluation mode and no_grad\r\n- trace your model\r\n- use something like ONNX to improve inference",
"> I thought you meant some kind of deep learning optimization. I am curious to see the impact of slots. I guess it could be useful to have a closer look at how much it decreases memory usage. _However_ it seems highly unlikely that you will get the consumption down by a lot. When I read through that top answer, the memory that you save is in the _bytes_, not even kilobytes, let alone hundreds of megabytes.\r\n> \r\n> If you want, you can rewrite parts of transformers and benchmark whether you'll find a difference, but I doubt it.\r\n> \r\n> Other things that may help:\r\n> \r\n> * use evaluation mode and no_grad\r\n> \r\n> * trace your model\r\n> \r\n> * use something like ONNX to improve inference\r\n\r\nOk. Thank you.",
"> this class is taking nearly 500 MB of the memory though the model is taking only 100 MB\r\n\r\n@AishwaryaVerma which memory do you mean when you're speaking about 100 MB? Does your model take 100 MB disk space but the loaded one takes 500 MB RAM?",
"Sorry for replying yo\r\n\r\n> \r\n> \r\n> > this class is taking nearly 500 MB of the memory though the model is taking only 100 MB\r\n> \r\n> @AishwaryaVerma which memory do you mean when you're speaking about 100 MB? Does your model take 100 MB disk space but the loaded one takes 500 MB RAM?\r\n\r\nSorry for replying you late. But I was talking about RAM."
] | 1,591 | 1,615 | 1,591 | NONE | null | # ❓ Questions & Help
## Details
Hello everyone,
I am using DistilBERT from Hugging Face Transformers and loading its tokenizer and model in my class. I have already quantised the model to reduce its size, but this class still takes nearly 500 MB of memory even though the model itself takes only 100 MB. I then looked into the GitHub repo of Hugging Face Transformers and saw that __all__ and __slots__ are not used in its classes and functions to reduce their memory footprint. My questions are: how do I reduce memory usage while loading any transformer model, and why is Hugging Face not using __all__ and __slots__ in the codebase to make it more efficient?
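For reference, a minimal sketch of the usual inference-time memory savers mentioned in the comments above — eval mode plus `torch.no_grad()` (the model name is an assumption matching the description above):
```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
model.eval()  # disable dropout for inference

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():  # skip building the autograd graph, which saves memory
    last_hidden_state = model(input_ids)[0]
```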
Thank you in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4845/comments | https://api.github.com/repos/huggingface/transformers/issues/4845/events | https://github.com/huggingface/transformers/pull/4845 | 634,538,879 | MDExOlB1bGxSZXF1ZXN0NDMxMDc1MjE3 | 4,845 | [Generate] beam search should generate without replacement | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=h1) Report\n> Merging [#4845](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9414f7553d3f1872b372990ef03205c0d1141df&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4845 +/- ##\n==========================================\n- Coverage 76.06% 76.05% -0.01% \n==========================================\n Files 128 128 \n Lines 21498 21502 +4 \n==========================================\n+ Hits 16352 16354 +2 \n- Misses 5146 5148 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.12% <100.00%> (-0.24%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=footer). Last update [f9414f7...d2201fe](https://codecov.io/gh/huggingface/transformers/pull/4845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | When doing beam search decoding and sampling instead of argmax (an edge case, probably very rarely used), we need to sample **without** replacement. This is implemented by default in torch, but not in TF; see https://pytorch.org/docs/master/generated/torch.multinomial.html#torch.multinomial. An easy solution is to use the Gumbel-max trick instead: https://github.com/tensorflow/tensorflow/issues/9260
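A minimal sketch of that trick (illustrative; not necessarily the exact code merged in this PR): perturb the logits with Gumbel(0, 1) noise, then take the top-k of the perturbed logits.
```python
import tensorflow as tf

def sample_without_replacement(logits, num_samples):
    # Gumbel-max trick: the top-k indices of logits + Gumbel noise form a
    # sample of size k from the categorical distribution, without replacement.
    uniform = tf.random.uniform(tf.shape(logits), minval=0, maxval=1)
    gumbel_noise = -tf.math.log(-tf.math.log(uniform))
    _, indices = tf.nn.top_k(logits + gumbel_noise, num_samples)
    return indices
```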
This will fix the sometimes flaky TFBeamSearchGenerate tests as well: #4447 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4845",
"html_url": "https://github.com/huggingface/transformers/pull/4845",
"diff_url": "https://github.com/huggingface/transformers/pull/4845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4845.patch",
"merged_at": 1591623093000
} |
https://api.github.com/repos/huggingface/transformers/issues/4844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4844/comments | https://api.github.com/repos/huggingface/transformers/issues/4844/events | https://github.com/huggingface/transformers/issues/4844 | 634,355,930 | MDU6SXNzdWU2MzQzNTU5MzA= | 4,844 | Add support for Funnel-Transformer | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Will start to look into this.",
"@sgugger Any updates on this? Thanks! ",
"The first models are uploaded and the base models are available in PyTorch (`FunnelModel` has encoder + decoder and `FunnelBaseModel` just the encoder, for sequence classification and multiple choice) in [this branch](https://github.com/huggingface/transformers/tree/funnel_transformer). Should have all checkpoints on the HuggingFace S3 and all PyTorch models on the same branch by the end of this week.\r\n\r\nNote that there might be some changes in the names as this goes under review once it's ready.\r\n\r\n"
] | 1,591 | 1,599 | 1,599 | COLLABORATOR | null | # 🌟 New model addition
## Model description
The recently introduced Funnel-Transformer architecture and models would be a great feature for Transformers:
>Funnel-Transformer is a new self-attention model that gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, Funnel-Transformer usually has a higher capacity given the same FLOPs. In addition, with a decoder, Funnel-Transformer is able to recover the token-level deep representation for each token from the reduced hidden sequence, which enables standard pretraining.
The paper can be found [here](https://arxiv.org/abs/2006.03236).
## Open source status
* [x] the model implementation is available: [official GitHub repo](https://github.com/laiguokun/Funnel-Transformer)
* [x] the model weights are available: [Google Cloud Bucket](https://github.com/laiguokun/Funnel-Transformer/blob/master/download_all_ckpts.sh)
* [x] who are the authors: @zihangdai and @laiguokun
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4844/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4844/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4843/comments | https://api.github.com/repos/huggingface/transformers/issues/4843/events | https://github.com/huggingface/transformers/issues/4843 | 634,227,342 | MDU6SXNzdWU2MzQyMjczNDI= | 4,843 | remove words from vocabulary | {
"login": "vineeth567",
"id": 49122145,
"node_id": "MDQ6VXNlcjQ5MTIyMTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/49122145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeth567",
"html_url": "https://github.com/vineeth567",
"followers_url": "https://api.github.com/users/vineeth567/followers",
"following_url": "https://api.github.com/users/vineeth567/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeth567/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineeth567/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeth567/subscriptions",
"organizations_url": "https://api.github.com/users/vineeth567/orgs",
"repos_url": "https://api.github.com/users/vineeth567/repos",
"events_url": "https://api.github.com/users/vineeth567/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineeth567/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please refer to this issue: [https://github.com/huggingface/transformers/issues/4827](https://github.com/huggingface/transformers/issues/4827)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | Is there any way to remove words from the vocabulary of a pretrained model? And is there a way to see the model's vocabulary?
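A minimal sketch of inspecting the vocabulary (illustrative; the checkpoint name is an assumption):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

vocab = tokenizer.get_vocab()   # dict mapping token string -> id
print(len(vocab))               # vocabulary size
print(list(vocab.items())[:5])  # a few sample entries
```
Removing entries from a pretrained vocabulary is trickier, since the embedding matrix is indexed by token id; any removal has to keep ids and embedding rows consistent.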
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4843/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4842/comments | https://api.github.com/repos/huggingface/transformers/issues/4842/events | https://github.com/huggingface/transformers/issues/4842 | 634,171,167 | MDU6SXNzdWU2MzQxNzExNjc= | 4,842 | [Benchmark] Add optimization notebook | {
"login": "pommedeterresautee",
"id": 1029874,
"node_id": "MDQ6VXNlcjEwMjk4NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pommedeterresautee",
"html_url": "https://github.com/pommedeterresautee",
"followers_url": "https://api.github.com/users/pommedeterresautee/followers",
"following_url": "https://api.github.com/users/pommedeterresautee/following{/other_user}",
"gists_url": "https://api.github.com/users/pommedeterresautee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pommedeterresautee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pommedeterresautee/subscriptions",
"organizations_url": "https://api.github.com/users/pommedeterresautee/orgs",
"repos_url": "https://api.github.com/users/pommedeterresautee/repos",
"events_url": "https://api.github.com/users/pommedeterresautee/events{/privacy}",
"received_events_url": "https://api.github.com/users/pommedeterresautee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"That'd be great! Do you want to open a PR? :-) \r\nI would use this line as the github line: https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb\r\n\r\nand this one as the colab notebook line:\r\nhttps://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing\r\n\r\n",
"Haha, I thought you were managing the page (special order of article or whatever)... :-)\r\nSo the PR is done and waiting for your validation."
] | 1,591 | 1,592 | 1,592 | CONTRIBUTOR | null | # 🖥 Benchmarking `transformers`
## Benchmark
This notebook benchmarks model training with and without the dynamic padding optimization.
https://github.com/ELS-RD/transformers-notebook
**Would it be possible to add it to the [community notebook](https://github.com/huggingface/transformers/tree/master/notebooks) list?** (a link to the Google collab version is provided) @julien-c @patrickvonplaten
## Set-up
GPU : Nvidia P100 provided by Google Collab
## Results
Using dynamic padding on MNLI provides a **4.7 times training time reduction**, with the max pad length set to 512. The effect is strong because few examples in this dataset are >> 400 tokens. In real life, it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it does not seem to hurt performance.
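For illustration, a minimal sketch of dynamic padding as a PyTorch collate function (an assumption for illustration, not code from the linked notebook): each batch is padded to its own longest sequence instead of a global maximum length.
```python
import torch

def dynamic_padding_collate(batch, pad_token_id=0):
    # batch: list of 1-D LongTensors of token ids with varying lengths
    max_len = max(ids.size(0) for ids in batch)
    input_ids = torch.full((len(batch), max_len), pad_token_id, dtype=torch.long)
    attention_mask = torch.zeros(len(batch), max_len, dtype=torch.long)
    for i, ids in enumerate(batch):
        input_ids[i, : ids.size(0)] = ids
        attention_mask[i, : ids.size(0)] = 1
    return input_ids, attention_mask
```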
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4842/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4842/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4841/comments | https://api.github.com/repos/huggingface/transformers/issues/4841/events | https://github.com/huggingface/transformers/issues/4841 | 634,170,551 | MDU6SXNzdWU2MzQxNzA1NTE= | 4,841 | Multi-output regression support for Transformer models | {
"login": "mangalgishreyas",
"id": 7473358,
"node_id": "MDQ6VXNlcjc0NzMzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7473358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mangalgishreyas",
"html_url": "https://github.com/mangalgishreyas",
"followers_url": "https://api.github.com/users/mangalgishreyas/followers",
"following_url": "https://api.github.com/users/mangalgishreyas/following{/other_user}",
"gists_url": "https://api.github.com/users/mangalgishreyas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mangalgishreyas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mangalgishreyas/subscriptions",
"organizations_url": "https://api.github.com/users/mangalgishreyas/orgs",
"repos_url": "https://api.github.com/users/mangalgishreyas/repos",
"events_url": "https://api.github.com/users/mangalgishreyas/events{/privacy}",
"received_events_url": "https://api.github.com/users/mangalgishreyas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! The cross entropy loss is only used if you provide the `labels` for the model to compute the loss. If you don't provide the labels, the model doesn't output any loss, only the logits!\r\n\r\nYou can then use these logits with your labels and the loss of your choosing.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | # 🚀 Feature request
## Motivation
I am trying to build a shipping-address-to-geocode predictor using RoBERTa. Here the shipping address would be the text input and the output would be a geocode (latitude and longitude).
I tried using _RobertaForSequenceClassification_, but the documentation mentions that when the final layer consists of more than one class, a cross-entropy loss is used automatically. However, I want to perform regression using an RMSE loss. It would be great if we could add multi-output regression support to the existing sequence classification pipeline.
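A minimal sketch of such a regression head in plain PyTorch (illustrative; the class name, pooling choice, and RMSE formulation are assumptions, not library code):
```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class RobertaForGeocodeRegression(nn.Module):
    def __init__(self, model_name="roberta-base"):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained(model_name)
        self.regressor = nn.Linear(self.roberta.config.hidden_size, 2)  # latitude, longitude

    def forward(self, input_ids, attention_mask=None, targets=None):
        sequence_output = self.roberta(input_ids, attention_mask=attention_mask)[0]
        pooled = sequence_output[:, 0]  # representation of the first (<s>) token
        preds = self.regressor(pooled)  # (batch_size, 2)
        if targets is not None:
            loss = torch.sqrt(nn.functional.mse_loss(preds, targets))  # RMSE
            return loss, preds
        return preds
```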
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4840/comments | https://api.github.com/repos/huggingface/transformers/issues/4840/events | https://github.com/huggingface/transformers/issues/4840 | 634,090,667 | MDU6SXNzdWU2MzQwOTA2Njc= | 4,840 | BUG while calculate LM loss in AlbertForMaskedLM | {
"login": "slczgwh",
"id": 58211043,
"node_id": "MDQ6VXNlcjU4MjExMDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/58211043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slczgwh",
"html_url": "https://github.com/slczgwh",
"followers_url": "https://api.github.com/users/slczgwh/followers",
"following_url": "https://api.github.com/users/slczgwh/following{/other_user}",
"gists_url": "https://api.github.com/users/slczgwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slczgwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slczgwh/subscriptions",
"organizations_url": "https://api.github.com/users/slczgwh/orgs",
"repos_url": "https://api.github.com/users/slczgwh/repos",
"events_url": "https://api.github.com/users/slczgwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/slczgwh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's because the labels of all tokens *except* the mask tokens should be set to -100, as it's written in the [documentation](https://huggingface.co/transformers/model_doc/albert.html#albertformaskedlm).\r\n\r\nSetting these tokens to -100 will result in the cross entropy ignoring them."
] | 1,591 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
Error Code Here: https://github.com/huggingface/transformers/blob/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09/src/transformers/modeling_albert.py#L822
Here the loss is calculated over all tokens in the sequence, whereas masked language modeling should only compute the loss on the [MASK] positions.
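As noted in the comment above, this is handled by setting the labels of all non-masked positions to -100, which the cross entropy ignores. A minimal sketch (illustrative; the checkpoint is an assumption, and the `labels` argument name follows the linked documentation — older versions used `masked_lm_labels`):
```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

input_ids = tokenizer.encode("The capital of France is [MASK].", return_tensors="pt")
labels = input_ids.clone()
labels[input_ids != tokenizer.mask_token_id] = -100  # only [MASK] positions contribute to the loss

loss = model(input_ids, labels=labels)[0]
```
 | {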
"url": "https://api.github.com/repos/huggingface/transformers/issues/4840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4839/comments | https://api.github.com/repos/huggingface/transformers/issues/4839/events | https://github.com/huggingface/transformers/pull/4839 | 634,004,783 | MDExOlB1bGxSZXF1ZXN0NDMwNjAzMjc5 | 4,839 | [Longformer] Remove redundant code | {
"login": "ZhuBaohe",
"id": 35796307,
"node_id": "MDQ6VXNlcjM1Nzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuBaohe",
"html_url": "https://github.com/ZhuBaohe",
"followers_url": "https://api.github.com/users/ZhuBaohe/followers",
"following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuBaohe/orgs",
"repos_url": "https://api.github.com/users/ZhuBaohe/repos",
"events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuBaohe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=h1) Report\n> Merging [#4839](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4839 +/- ##\n==========================================\n- Coverage 76.17% 75.94% -0.24% \n==========================================\n Files 128 128 \n Lines 21497 21495 -2 \n==========================================\n- Hits 16375 16324 -51 \n- Misses 5122 5171 +49 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.99% <100.00%> (-0.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-13.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=footer). Last update [e33fdc9...6eb5f7d](https://codecov.io/gh/huggingface/transformers/pull/4839?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM! All the `RUN_SLOW=1` tests pass. Also pinging @ibeltagy to make sure.",
"LGTM",
"looks good to me. Thanks, @ZhuBaohe.\r\n\r\n> w is always less than seqlen\r\n\r\nJust wanted to mention that this is true only because of the [padding](https://github.com/huggingface/transformers/blob/6eb5f7d3441c2eb768da8f70dc8a602c02468267/src/transformers/modeling_longformer.py#L647). \r\n"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | This PR fixes the class LongformerSelfAttention as follows:
1. Since the method **_mask_invalid_locations()** was already run in **_sliding_chunks_matmul_qk()**, it should be removed from **forward()** to avoid code duplication.
2. In the method **_mask_invalid_locations()**, since the size of the variable **beginning_mask** is (1, w, 1, w+1) and w is always less than **seqlen**, the index **beginning_mask[:, :seqlen]** is unnecessary.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4839",
"html_url": "https://github.com/huggingface/transformers/pull/4839",
"diff_url": "https://github.com/huggingface/transformers/pull/4839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4839.patch",
"merged_at": 1591633731000
} |
https://api.github.com/repos/huggingface/transformers/issues/4838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4838/comments | https://api.github.com/repos/huggingface/transformers/issues/4838/events | https://github.com/huggingface/transformers/issues/4838 | 633,764,413 | MDU6SXNzdWU2MzM3NjQ0MTM= | 4,838 | [Bert Model] ValueError: not enough values to unpack (expected 3, got 2) | {
"login": "viiids",
"id": 763253,
"node_id": "MDQ6VXNlcjc2MzI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/763253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viiids",
"html_url": "https://github.com/viiids",
"followers_url": "https://api.github.com/users/viiids/followers",
"following_url": "https://api.github.com/users/viiids/following{/other_user}",
"gists_url": "https://api.github.com/users/viiids/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viiids/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viiids/subscriptions",
"organizations_url": "https://api.github.com/users/viiids/orgs",
"repos_url": "https://api.github.com/users/viiids/repos",
"events_url": "https://api.github.com/users/viiids/events{/privacy}",
"received_events_url": "https://api.github.com/users/viiids/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"From the first sight it seems to me you did not specify in your `config` file you want to output the hidden states. You may use these two lines of code:\r\n```\r\nconfig = BertConfig.from_pretrained( 'bert-base-uncased', output_hidden_states=True) \r\nself.bert_model = BertModel.from_pretrained('bert-base-uncased', config=config)\r\n```\r\nP.S. good luck with the Tweet Sentiment competition! :)",
"Oh, didn't know that `output_hidden_states=True` is needed to return the hidden states. Going to try it tonight. Might be good to modify the Transformers documentation for `forward` to reflect that too. Lot of lazy individuals like me may skip reading the config docs, use defaults and proceed to model docs.\r\n\r\nThanks for the wishes! Very little time but hoping to make a submission. Are you participating?",
"To tell the truth it is written straightforward in docs:\r\n`hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True)` :)",
"I knew I was lazy, but would not finish reading a line all the way, I would surely blame it to aging. This is not a bug at all then but thanks a lot for being patient and still answering! Closing it now.",
"Hi viiids, how did you manage to overcome this problem? I am having the same one and not being able to solve it so far. Many thanks!",
"Hi AKtsvigun, I have tried the solution suggested by you but the issue still persist. Can anybody share how did they manage to solve this. thanks.",
"Hi,\nyou can now access to hidden states via the dot (in case you did not forget to set `output_hidden_states=True` either in config or when calling `forward` method): \n`hidden_states = model(...).hidden_states`\n> Hi AKtsvigun, I have tried the solution suggested by you but the issue still persist. Can anybody share how did they manage to solve this. thanks.\n\n",
"I set output_hidden_states=True in the forward method, however, the same error keep showing. I restarted the kernel and doubled checked the rest of the code. not sure if it’s related to some other parameter in the training. \r\nhere is my forward pass:\r\n\r\n> def forward(self,\r\n> input_ids: torch.tensor, # Indices of input sequence tokens in the vocabulary.\r\n> attention_mask: torch.tensor, # Mask to avoid performing attention on padding token indices. \r\n> # Mask values selected in [0, 1]: 1 for tokens 0 for non-tokens [PAD]\r\n> token_type_ids: torch.tensor,# Indices to indicate first and second portions of the inputs.\r\n> # 0 sentence A token and 1 sentence B token\r\n> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]\r\n> intent_labels: torch.tensor = None,# The labels of the Intent classifier \r\n> \r\n> slot_labels: torch.tensor = None # The labels for the slot tagging [NER]\r\n> \r\n> ):\r\n> \r\n> # Feeding the input to BERT model to obtain hidden_states of all the layers\r\n> last_hidden_states, pooler_output = self.bert_model(input_ids=input_ids,\r\n> attention_mask=attention_mask,\r\n> token_type_ids=token_type_ids,\r\n> output_hidden_states=True,\r\n> return_dict=False)\r\n> # 7. Define huggingface model\r\n> dropout = 0.2\r\n> num_intent_labels = len(intent_vocab)\r\n> num_slot_labels = len(slot_vocab)\r\n> \r\n> model = ParserModel(model_name_or_path='bert-base-uncased',\r\n> dropout=dropout, \r\n> num_intent_labels=num_intent_labels, \r\n> num_slot_labels=num_slot_labels,\r\n> \r\n> )\r\n\r\n**And here is is the training code where the issue occurs:**\r\n\r\n> outputs = model(input_ids=input_ids, \r\n> attention_mask=attention_mask,\r\n> token_type_ids=token_type_ids, \r\n> slot_labels=slot_labels,\r\n> intent_labels=intent_labels)\r\n> \r\n> \r\n\r\n\r\n\r\n> ---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> <ipython-input-54-3d510ec5d296> in <module>()\r\n> 31 token_type_ids=token_type_ids,\r\n> 32 slot_labels=slot_labels,\r\n> ---> 33 intent_labels=intent_labels)\r\n> 34 slot_loss, intent_loss = outputs[2],outputs[3]\r\n> 35 slot_loss.backward(retain_graph=True) #need to retain_graph when working with multiple losses\r\n> \r\n> 3 frames\r\n> /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n> 923 elif input_ids is not None:\r\n> 924 input_shape = input_ids.size()\r\n> --> 925 batch_size, seq_length = input_shape\r\n> 926 elif inputs_embeds is not None:\r\n> 927 input_shape = inputs_embeds.size()[:-1]\r\n> \r\n> ValueError: not enough values to unpack (expected 2, got 1)\r\n\r\n**Are you suspecting other places of the code to be the issue?**",
"@ENGSamShamsan \r\n> last_hidden_states, pooler_output = self.bert_model(..)\r\n\r\nThe first element of the output is `loss`, the next is `logits` and only then come the hidden states. You need to make it \r\n`loss, logits, last_hidden_states, pooler_output = self.bert_model(...)`",
"@Aktsvigun \r\n\r\nThank you so much for the quick respond. I applied the four variables previously and once more after you your last post but still the same error persist. this error took more time than I expect lol",
"I have solved the same issue with u but in a different situation. You should double-check the batch size of your input data.\r\n\r\n`tokens_ids_tensor` and `attn_mask` should be a 2d tensor but not 1d. \r\nWhile batch size is 1, they should look like:\r\n```\r\ntensor([[ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377,\r\n 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030,\r\n 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102]],\r\n device='cuda:0') \r\n tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1]], device='cuda:0')\r\n```\r\nbut not\r\n```\r\ntensor([ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377,\r\n 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030,\r\n 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102],\r\n device='cuda:0') \r\n tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1], device='cuda:0')\r\n```\r\nFurther, for *n* batch size, they should look like:\r\n```\r\nseq is tensor([[ 101, 4911, 1024, ..., 0, 0, 0],\r\n [ 101, 2054, 2057, ..., 2860, 28400, 102],\r\n [ 101, 7409, 2000, ..., 1037, 19062, 102],\r\n ...,\r\n [ 101, 1001, 2446, ..., 1024, 1013, 102],\r\n [ 101, 1001, 1037, ..., 2522, 1013, 102],\r\n [ 101, 1001, 4918, ..., 1013, 1017, 102]], device='cuda:0') \r\n attn_masks is tensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 1, 1, 1],\r\n [1, 1, 1, ..., 1, 1, 1],\r\n ...,\r\n [1, 1, 1, ..., 1, 1, 1],\r\n [1, 1, 1, ..., 1, 1, 1],\r\n [1, 1, 1, ..., 1, 1, 1]], device='cuda:0') \r\n```",
"> I have solved the same issue with u but in a different situation. You should double-check the batch size of your input data.\r\n> \r\n> `tokens_ids_tensor` and `attn_mask` should be a 2d tensor but not 1d.\r\n> While batch size is 1, they should look like:\r\n> \r\n> ```\r\n> tensor([[ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377,\r\n> 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030,\r\n> 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102]],\r\n> device='cuda:0') \r\n> tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n> 1, 1, 1, 1, 1, 1]], device='cuda:0')\r\n> ```\r\n> \r\n> but not\r\n> \r\n> ```\r\n> tensor([ 101, 1030, 1054, 2595, 2015, 21486, 2620, 1030, 3841, 7377,\r\n> 8197, 3217, 1030, 1054, 2595, 2015, 21486, 2620, 1024, 1030,\r\n> 3841, 7377, 8197, 3217, 1001, 15333, 6342, 2483, 9103, 102],\r\n> device='cuda:0') \r\n> tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n> 1, 1, 1, 1, 1, 1], device='cuda:0')\r\n> ```\r\n> \r\n> Further, for _n_ batch size, they should look like:\r\n> \r\n> ```\r\n> seq is tensor([[ 101, 4911, 1024, ..., 0, 0, 0],\r\n> [ 101, 2054, 2057, ..., 2860, 28400, 102],\r\n> [ 101, 7409, 2000, ..., 1037, 19062, 102],\r\n> ...,\r\n> [ 101, 1001, 2446, ..., 1024, 1013, 102],\r\n> [ 101, 1001, 1037, ..., 2522, 1013, 102],\r\n> [ 101, 1001, 4918, ..., 1013, 1017, 102]], device='cuda:0') \r\n> attn_masks is tensor([[1, 1, 1, ..., 0, 0, 0],\r\n> [1, 1, 1, ..., 1, 1, 1],\r\n> [1, 1, 1, ..., 1, 1, 1],\r\n> ...,\r\n> [1, 1, 1, ..., 1, 1, 1],\r\n> [1, 1, 1, ..., 1, 1, 1],\r\n> [1, 1, 1, ..., 1, 1, 1]], device='cuda:0') \r\n> ```\r\n\r\n.unsqueeze(0) will do the job",
"Hi everyone, I am having the same issue in the forward function but with the distill bert uncased model. Can anybody help me with that\r\n\r\n```\r\n 12 def forward(self, ids, mask):\r\n---> 13 _, output_1= self.l1(ids, attention_mask = mask)\r\n 14 output_2 = self.l2(output_1)\r\n 15 output = `self.l3(output_2)`\r\n\r\nValueError: not enough values to unpack (expected 2, got 1)``\r\n```\r\n\r\nAlso distill bert does not encode token_type_ids so I set them as False. May be this is the issue.",
"@milind29 seems like your model returns only the output values: at least the mistake says `self.l1(ids, attention_mask = mask)` outputs precisely one variable, and you try to expose it into two variables."
] | 1,591 | 1,635 | 1,591 | NONE | null | # 🐛 Bug: ValueError: not enough values to unpack (expected 3, got 2)
## Information
I am using BERT initialized with 'bert-base-uncased'. As per the [documentation](https://huggingface.co/transformers/model_doc/bert.html), the forward step is supposed to yield 4 outputs:
- last_hidden_state
- pooler_output
- hidden_states
- attentions
But when I initialize BERT and call the forward method, it yields only 2 results. Based on the shapes, I believe they are the last_hidden_state and pooler_output.
```
self.bert_model = BertModel.from_pretrained('bert-base-uncased')
_, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids)
```
**Error**
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-69-6d2cb1238cab> in <module>
45 for i, data in enumerate(trainloader):
46 input_ids, attn_mask, token_type_ids = data['tokens'], data['attention_mask'], data['token_type_ids']
---> 47 start_logits, end_logits = model.forward(input_ids, attn_mask, token_type_ids)
48 print(start_logits.shape)
49 print(end_logits.shape)
<ipython-input-69-6d2cb1238cab> in forward(self, input_ids, attn_masks, token_type_ids)
23
24 # Feeding the input to BERT model to obtain hidden_states of all the layers
---> 25 _, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids)
26
27 # Shape of hidden_states is (1, 50, 768)
ValueError: not enough values to unpack (expected 3, got 2)
```
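For reference, a minimal sketch of the call I would expect to work, assuming the transformers 2.x API: `BertModel` only appends `hidden_states` (and `attentions`) to the output tuple when the corresponding flags are set on the config, so by default the forward pass returns just `(last_hidden_state, pooler_output)`.
```
# Sketch, not the original code: enable hidden states on the config so the
# forward pass returns a 3-tuple instead of the default 2-tuple.
# input_ids / attn_masks / token_type_ids are assumed to be built as above.
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
bert_model = BertModel.from_pretrained('bert-base-uncased', config=config)

last_hidden_state, pooler_output, hidden_states = bert_model(
    input_ids, attention_mask=attn_masks, token_type_ids=token_type_ids
)
# hidden_states is a tuple of 13 tensors (embeddings + 12 layers), each of
# shape (batch_size, seq_len, 768) for bert-base.
```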
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: NA
* [x] my own modified scripts: Below are scripts details.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: NA
* [x] my own task or dataset: Fine-tuning for my own task.
## To reproduce
Steps to reproduce the behavior:
1. Copy paste the full code below in a notebook.
2. Run as is.
Complete code:
```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from transformers import BertModel, BertTokenizer

# BATCH_SIZE and train_data are defined elsewhere in the original notebook.

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Dataset definition
class TweetDataset(Dataset):
def __init__(self, data, maxlen, tokenizer):
self.df = data
self.tokenizer = tokenizer
self.maxlen = maxlen
def __len__(self):
return len(self.df)
def __getitem__(self, index):
"""
Returns the token_ids_tensors, attn_mask for the item and text denoting the sentiment.
:param index:
:return:
"""
# Selecting the sentence and label at the specified index in the data frame
orig_sentence = self.df.iloc[index]['text']
sentiment = self.df.iloc[index]['sentiment']
selected_text = self.df.iloc[index]['selected_text']
# Preprocessing the text to be suitable for BERT
# Encode the sentence. Does the following:
# 1. Inserting the CLS and SEP token in the beginning and end of the sentence
# 2. Generates attention mask
# 3. Generate token_type_ids used to differentiate first part of the sentence from the second
encoded_dict = self.tokenizer.encode_plus(
sentiment,
orig_sentence,
max_length=self.maxlen,
truncation_strategy='only_second',
add_special_tokens=True,
pad_to_max_length=True,
return_tensors='pt',
return_token_type_ids=True,
return_attention_mask=True
)
tokens = encoded_dict['input_ids'][0]
token_type_ids = encoded_dict['token_type_ids'][0]
attn_mask = encoded_dict['attention_mask'][0]
# Determine the beginning and end of the sentence
def phrase_start_finder(sentence, phrase):
if phrase not in sentence:
raise ValueError('s2 not substring of s1')
start = sentence.find(phrase)
return len(sentence[:start].strip().split(' '))
def phrase_end_finder(sentence, phrase):
if phrase not in sentence:
raise ValueError('s2 not substring of s1')
return phrase_start_finder(sentence, phrase) + len(phrase.strip().split(' ')) - 1
start = phrase_start_finder(orig_sentence, selected_text)
end = phrase_end_finder(orig_sentence, selected_text)
return {
'tokens': tokens,
'attention_mask': attn_mask,
'token_type_ids': token_type_ids,
'start': float(start),
'end': float(end),
'sentence': orig_sentence,
'selected_text': selected_text,
'sentiment': sentiment
}
# Defining the loader
dataset = TweetDataset(train_data, 50, tokenizer)
trainloader = DataLoader(
dataset,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=4
)
# Defining the model
class TweetModel(nn.Module):
def __init__(self, freeze_bert=True):
super(TweetModel, self).__init__()
# Instantiating BERT model object
self.bert_model = BertModel.from_pretrained('bert-base-uncased')
# TODO(Viman): Before training on GPUs and finalization, remove this
# Freeze bert layers
# In first experiment, not training the previous layers
if freeze_bert:
for p in self.bert_model.parameters():
p.requires_grad = False
# Final layer. Needs two outputs which are supposed to be logits: startIndex and endIndex
self.dropout = nn.Dropout(0.2)
# 768 because output is a vector of size 768 (Dimensionality of the encoder layer)
self.fc = nn.Linear(768, 2)
# Intialize the fc layer
nn.init.normal_(self.fc.weight, std=0.02)
nn.init.normal_(self.fc.bias, 0)
def forward(self, input_ids, attn_masks, token_type_ids):
# Feeding the input to BERT model to obtain hidden_states of all the layers
_, _, hidden_states = self.bert_model(input_ids, attn_masks, token_type_ids)
# Shape of hidden_states is (1, 50, 768)
# TODO(Viman): Try mean as opposed to max
# hidden_states, _ = torch.max(hidden_states, dim=1)
# last_hidden_state = hidden_states[-1]
print(hidden_states.shape)
X = self.dropout(hidden_states)
logits = self.fc(X)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
return start_logits, end_logits
model = TweetModel()
# Testing the model forward implementation
for i, data in enumerate(trainloader):
input_ids, attn_mask, token_type_ids = data['tokens'], data['attention_mask'], data['token_type_ids']
start_logits, end_logits = model.forward(input_ids, attn_mask, token_type_ids)
print(start_logits.shape)
print(end_logits.shape)
if i == 1:
break
```
## Expected behavior
The `self.bert_model(input_ids, attn_masks, token_type_ids)` line should return a tuple containing 4 elements; however, it seems to return only 2.
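For what it's worth, a quick way to inspect what the call actually returns (a sketch, using the `model` and batch built above):
```
bert = model.bert_model
outputs = bert(input_ids, attn_mask, token_type_ids)
print(len(outputs))       # 2 here, 4 expected
print(outputs[0].shape)   # (batch_size, seq_len, 768) -> looks like last_hidden_state
print(outputs[1].shape)   # (batch_size, 768)          -> looks like pooler_output
```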
## Environment info
- `transformers` version: 2.9.0
- Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: Not yet
- Using distributed or parallel set-up in script?: No

- `transformers` version: 2.11.0
- Platform: Mac/Kaggle notebook (Tried in both)
- Python version: 3.7
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4837/comments | https://api.github.com/repos/huggingface/transformers/issues/4837/events | https://github.com/huggingface/transformers/pull/4837 | 633,664,541 | MDExOlB1bGxSZXF1ZXN0NDMwMjkyNDIz | 4,837 | [examples] consolidate summarization examples | {
"login": "aretius",
"id": 18247856,
"node_id": "MDQ6VXNlcjE4MjQ3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18247856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aretius",
"html_url": "https://github.com/aretius",
"followers_url": "https://api.github.com/users/aretius/followers",
"following_url": "https://api.github.com/users/aretius/following{/other_user}",
"gists_url": "https://api.github.com/users/aretius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aretius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aretius/subscriptions",
"organizations_url": "https://api.github.com/users/aretius/orgs",
"repos_url": "https://api.github.com/users/aretius/repos",
"events_url": "https://api.github.com/users/aretius/events{/privacy}",
"received_events_url": "https://api.github.com/users/aretius/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=h1) Report\n> Merging [#4837](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4837 +/- ##\n=======================================\n Coverage 76.17% 76.18% \n=======================================\n Files 128 128 \n Lines 21497 21497 \n=======================================\n+ Hits 16375 16377 +2 \n+ Misses 5122 5120 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.27% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=footer). Last update [e33fdc9...ed4de25](https://codecov.io/gh/huggingface/transformers/pull/4837?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's really cool, thanks for working on this @aretius!",
"@sshleifer Thanks for approving the PR!\r\nAlso, I am really interested to contribute more. It would be really great for me to be offered more chance to contribute to the repo :)"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | Consolidating summarization examples of T5 & Bertabs models into one - [#3826](https://github.com/huggingface/transformers/issues/3826)
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4837/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4837",
"html_url": "https://github.com/huggingface/transformers/pull/4837",
"diff_url": "https://github.com/huggingface/transformers/pull/4837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4837.patch",
"merged_at": 1591715652000
} |
https://api.github.com/repos/huggingface/transformers/issues/4836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4836/comments | https://api.github.com/repos/huggingface/transformers/issues/4836/events | https://github.com/huggingface/transformers/pull/4836 | 633,639,314 | MDExOlB1bGxSZXF1ZXN0NDMwMjY5NjY5 | 4,836 | Remove unneeded call convert_ids_to_tokens. | {
"login": "tshauck",
"id": 421839,
"node_id": "MDQ6VXNlcjQyMTgzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/421839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshauck",
"html_url": "https://github.com/tshauck",
"followers_url": "https://api.github.com/users/tshauck/followers",
"following_url": "https://api.github.com/users/tshauck/following{/other_user}",
"gists_url": "https://api.github.com/users/tshauck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshauck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshauck/subscriptions",
"organizations_url": "https://api.github.com/users/tshauck/orgs",
"repos_url": "https://api.github.com/users/tshauck/repos",
"events_url": "https://api.github.com/users/tshauck/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshauck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=h1) Report\n> Merging [#4836](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac921f0385616be40adbbd5302d7f58d5c976ca8&el=desc) will **decrease** coverage by `1.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4836 +/- ##\n==========================================\n- Coverage 78.45% 77.32% -1.13% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20434 20142 -292 \n- Misses 5613 5905 +292 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <100.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=footer). Last update [ac921f0...3284308](https://codecov.io/gh/huggingface/transformers/pull/4836?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,595 | 1,595 | NONE | null | On the base `PreTrainedTokenizer` object, calling `convert_tokens_to_string` returns an error because the method tries to call `convert_ids_to_tokens.` however the input should already be a list of tokens not ids.
I noticed this when creating a tokenizer that inherits from `PreTrainedTokenizer` and trying to test it with:
```python
assert transformers_tokenizer.decode([1, 15, 22, 15, 2]) == ">MVM<"
```
which eventually results in:
```python
self = <gcgc.third_party.GCGCTransformersTokenizer object at 0x16539ead0>, ids = ['>', 'M', 'V', 'M', '<'], skip_special_tokens = False
def convert_ids_to_tokens(
self, ids: Union[int, List[int]], skip_special_tokens: bool = False
) -> Union[int, List[int]]:
""" Converts a single index or a sequence of indices (integers) in a token "
(resp.) a sequence of tokens (str), using the vocabulary and added tokens.
Args:
skip_special_tokens: Don't decode special tokens (self.all_special_tokens). Default: False
"""
if isinstance(ids, int):
if ids in self.added_tokens_decoder:
return self.added_tokens_decoder[ids]
else:
return self._convert_id_to_token(ids)
tokens = []
for index in ids:
> index = int(index)
E ValueError: invalid literal for int() with base 10: '>'
```
But I don't think this should get called in the first place?
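A minimal sketch of the override I have in mind (hypothetical, for a character-level vocabulary like the one in the example above):

```python
# Hypothetical override on my tokenizer subclass: the input is already a
# list of token strings, so join them directly instead of routing them
# back through convert_ids_to_tokens.
def convert_tokens_to_string(self, tokens):
    return "".join(tokens)
```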
I think I'm actually going to override this now that I understand the process better (something like the sketch above), but this still seemed incorrect. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4836",
"html_url": "https://github.com/huggingface/transformers/pull/4836",
"diff_url": "https://github.com/huggingface/transformers/pull/4836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4836.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4835/comments | https://api.github.com/repos/huggingface/transformers/issues/4835/events | https://github.com/huggingface/transformers/issues/4835 | 633,585,818 | MDU6SXNzdWU2MzM1ODU4MTg= | 4,835 | Any reason why BART does not have a ForTokenClassification variant? | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"No theoretical reasons, go for it!\r\n\r\n",
"@BramVanroy, did you implement BartForTokenClassification?",
"@vgaraujov Sorry, no I did not."
] | 1,591 | 1,692 | 1,591 | COLLABORATOR | null | Are there any theoretical constraints to create a ForTokenClassification variant for BART? In my current project I am using a sequence classification head + a token classification head, so I would like to implement the token classification part manually. However, since it is not implemented in the repository, I wonder if there are any particular reasons why one should not do this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4834/comments | https://api.github.com/repos/huggingface/transformers/issues/4834/events | https://github.com/huggingface/transformers/issues/4834 | 633,584,758 | MDU6SXNzdWU2MzM1ODQ3NTg= | 4,834 | Why init specific layers rather than whole model in BART | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"The line \r\n```python\r\nself.model = BartModel(config)\r\n```\r\nalready calls `init_weights`, so no need to run it twice.",
"Ah, you are of course correct. I noticed this because the other models (XXXForXXX) seem to just re-call init_weights, e.g.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e33fdc93b4ecb571dd7a8002a74789ec8bfffc09/src/transformers/modeling_bert.py#L1008-L1015\r\n\r\nBut I guess that in practice it does not matter, it'll just be a bit slower but the result will be the same.\r\n\r\nThanks!"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | In BartForSequenceClassification I can see that rather than calling `self.init_weights()` (which most other models use) only specifically the classification head is initialized.
https://github.com/huggingface/transformers/blob/c58e6c129a153ca1a5021e5d7e642d00bf011e20/src/transformers/modeling_bart.py#L1046-L1047
Is there any advantage to doing this for the head(s) only rather than for the whole model? I can think of a speed improvement, but apart from that I'm not sure.
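For comparison, a minimal sketch of the head-only pattern, assuming the current BART API (the class below is hypothetical):

```python
import torch.nn as nn
from transformers import BartConfig, BartModel

class TinyBartClassifier(nn.Module):
    """Sketch mirroring BartForSequenceClassification's head-only init."""

    def __init__(self, config: BartConfig, num_labels: int = 2):
        super().__init__()
        # BartModel.__init__ already calls init_weights() on itself...
        self.model = BartModel(config)
        self.head = nn.Linear(config.d_model, num_labels)
        # ...so only the newly added head still needs initializing.
        self.model._init_weights(self.head)
```

Most other `XXXForYYY` models instead call `self.init_weights()` once over the whole module tree; the pretrained weights are loaded on top afterwards by `from_pretrained()`, so the end result should be the same either way.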
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4833/comments | https://api.github.com/repos/huggingface/transformers/issues/4833/events | https://github.com/huggingface/transformers/issues/4833 | 633,402,445 | MDU6SXNzdWU2MzM0MDI0NDU= | 4,833 | GlossBert adding | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" Actually it already can be loaded and seem to work in some cases,a ll i tested now. \r\n model_a = BertModel.from_pretrained(\"/folder/\")\r\n tokenizer_a = BertTokenizer.from_pretrained(\"/folder/\")",
"It can be downloaded from the link.",
"As you say, it is already available for download in their repository. If you want them to add their model to the transformers model hub, you can open an issue on their GitHub repo and ask them to add the model [here](https://huggingface.co/models?search=glossbert)."
] | 1,591 | 1,591 | 1,591 | NONE | null | Here is a link: https://github.com/HSLCY/GlossBERT - it should be able to do better for word vector representations, and there is already a pretrained model that may need to be converted to a different format. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4832/comments | https://api.github.com/repos/huggingface/transformers/issues/4832/events | https://github.com/huggingface/transformers/issues/4832 | 633,322,230 | MDU6SXNzdWU2MzMzMjIyMzA= | 4,832 | Why exclude LayerNorm.bias from weight decay when finetuning? | {
"login": "xiaoda99",
"id": 6015633,
"node_id": "MDQ6VXNlcjYwMTU2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6015633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaoda99",
"html_url": "https://github.com/xiaoda99",
"followers_url": "https://api.github.com/users/xiaoda99/followers",
"following_url": "https://api.github.com/users/xiaoda99/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoda99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaoda99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoda99/subscriptions",
"organizations_url": "https://api.github.com/users/xiaoda99/orgs",
"repos_url": "https://api.github.com/users/xiaoda99/repos",
"events_url": "https://api.github.com/users/xiaoda99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaoda99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same question",
"Check this [discussion](https://forums.fast.ai/t/is-weight-decay-applied-to-the-bias-term/73212/6)"
] | 1,591 | 1,619 | 1,597 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- Description of your issue -->
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L306
In the original BERT implementation and in earlier versions of this repo, both LayerNorm.weight and LayerNorm.bias are decayed.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4831/comments | https://api.github.com/repos/huggingface/transformers/issues/4831/events | https://github.com/huggingface/transformers/pull/4831 | 633,292,400 | MDExOlB1bGxSZXF1ZXN0NDI5OTU1NzE2 | 4,831 | TF Checkpoints | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=h1) Report\n> Merging [#4831](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.63%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4831 +/- ##\n==========================================\n+ Coverage 74.52% 76.16% +1.63% \n==========================================\n Files 128 128 \n Lines 21497 21495 -2 \n==========================================\n+ Hits 16021 16372 +351 \n+ Misses 5476 5123 -353 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <0.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=footer). Last update [c58e6c1...78f2040](https://codecov.io/gh/huggingface/transformers/pull/4831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | Align how the checkpoints are managed with the way it is done in the PyTorch trainer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4831/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4831",
"html_url": "https://github.com/huggingface/transformers/pull/4831",
"diff_url": "https://github.com/huggingface/transformers/pull/4831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4831.patch",
"merged_at": 1591623924000
} |
https://api.github.com/repos/huggingface/transformers/issues/4830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4830/comments | https://api.github.com/repos/huggingface/transformers/issues/4830/events | https://github.com/huggingface/transformers/pull/4830 | 633,143,662 | MDExOlB1bGxSZXF1ZXN0NDI5ODIwNzk5 | 4,830 | Add diagnostic dataset of glue tasks for prediction | {
"login": "stdcoutzyx",
"id": 1142862,
"node_id": "MDQ6VXNlcjExNDI4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1142862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stdcoutzyx",
"html_url": "https://github.com/stdcoutzyx",
"followers_url": "https://api.github.com/users/stdcoutzyx/followers",
"following_url": "https://api.github.com/users/stdcoutzyx/following{/other_user}",
"gists_url": "https://api.github.com/users/stdcoutzyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stdcoutzyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stdcoutzyx/subscriptions",
"organizations_url": "https://api.github.com/users/stdcoutzyx/orgs",
"repos_url": "https://api.github.com/users/stdcoutzyx/repos",
"events_url": "https://api.github.com/users/stdcoutzyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/stdcoutzyx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,598 | 1,598 | CONTRIBUTOR | null | As the diagnostic dataset has no training set, it is very common in current research work to use a model fine-tuned on the MNLI task to run predictions on the diagnostic dataset.
In our experiments, the logic added in this pull request achieves 47.1% on the diagnostic dataset after fine-tuning for 10k steps on the MNLI dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4830/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4830",
"html_url": "https://github.com/huggingface/transformers/pull/4830",
"diff_url": "https://github.com/huggingface/transformers/pull/4830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4830.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4829/comments | https://api.github.com/repos/huggingface/transformers/issues/4829/events | https://github.com/huggingface/transformers/pull/4829 | 633,041,370 | MDExOlB1bGxSZXF1ZXN0NDI5NzI2NDk1 | 4,829 | [examples] Add trainer support for question-answering | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=h1) Report\n> Merging [#4829](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `49.41%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4829 +/- ##\n==========================================\n+ Coverage 76.84% 77.84% +1.00% \n==========================================\n Files 141 142 +1 \n Lines 24685 24768 +83 \n==========================================\n+ Hits 18969 19281 +312 \n+ Misses 5716 5487 -229 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `47.56% <47.56%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ø)` | |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4829/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=footer). Last update [d2a9399...5497ae6](https://codecov.io/gh/huggingface/transformers/pull/4829?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @patil-suraj,\r\n\r\nI think @julien-c can answer questions regarding the Trainer better :-) ",
"Just in case you wanted to use Weights & Biases, you should just have to do a `pip install wandb` and it should automatically track everything.",
">My main question is, have you reproduced the training results that are documented in https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md ?\r\n\r\nI didn't train `bert-base` (just trained for 1 epoch to see if the implementation was working) but instead I used it to train `electra-base` and it gave better results than mentioned in the paper\r\n\r\nIn the paper the authors mentioned that electra-base achieves 84.5 EM and 90.8 F1. I was able to achieve 85.05 EM and 91.60 F1. Sadly didn't use wandb, you can find the colab [here](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing)\r\n\r\nIt uses the same code, just copy pasted in colab. But if required I can try to reproduce the documented results.",
"> I didn't train `bert-base` (just trained for 1 epoch to see if the implementation was working) \r\n\r\nI can do it tomorrow morning, I currently have a V100 on hand:)",
"Just a note that I tried `python run_squad_trainer.py --model_name_or_path bert-base-uncased --model_type bert --data_dir squad --output_dir /tmp/debug_squad/ --overwrite_output_dir --do_train --do_eval --evaluate_during_training --logging_steps 100`.\r\n\r\nFor some reason I don't get any evaluation metric during training (I was expecting `loss` or `eval_loss`).",
"> Just in case you wanted to use Weights & Biases, you should just have to do a `pip install wandb` and it should automatically track everything.\r\n\r\n@borisdayma yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ?\r\nThanks !",
"> yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ?\r\n\r\nHmm, I'm pretty sure the dev-v1.1.json file has the same labels as the training one (start positions). Otherwise we wouldn't have any eval results at all in the readme. No?\r\n\r\npinging @LysandreJik on this:)",
"> @borisdayma yes, there are no start and end positions in eval dataset which is why eval loss is not calculated. I will add that. Were you able to see training loss ?\r\n\r\nYes, training loss was logged.",
"@julien-c In the two `TensorDatasets` created (one for training and one for evaluation), only the training has the correct `start_position` and `end_position`.\r\n\r\nI believe this is because while the training dataset only has one possible answer per question, the dev and validation datasets both have multiple answers per question (usually different-lengths spans).",
"@LysandreJik So I guess we should update the eval dataset to pick one start_position (or the most frequent one) – how do people do it usually with SQuAD eval, do you know @thomwolf?\r\n\r\nMaybe this can be done in a second PR though. Everyone ok with merging this (renaming `run_squad_trainer.py` to `run_squad.py`)?",
"@patil-suraj Can you resolve the conflicts and switch to the new `default_data_collator` now that it should work for your dict inputs?\r\nI can take over if you don't have time, but this is the only thing standing in the way of merging this PR.",
"@sgugger Yes, I'll switch to the new data collator. ",
"Hi @sgugger, you can take this over, I'm running short on time ;(",
"Thanks @sgugger :)",
"@sgugger can you please rename `run_squad_trainer.py` to `run_squad.py`? see also #5547"
] | 1,591 | 1,617 | 1,594 | MEMBER | null | This PR adds trainer support for question-answering task. Regarding issue #4784
**TODOs**
- [ ] Add automatic data loading. Right now it requires the user to specify data directory. Decided not to use `tfds` because I think it will be soon replaced by `nlp` here
- [ ] Add evaluation
- [ ] Test all models.
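For reference, a rough sketch of how the pieces added here are meant to plug together. Class and argument names are taken from this PR's diff and all concrete values are placeholders, so treat this as illustrative rather than final:

```python
# Hypothetical wiring of the new SQuAD dataset into Trainer.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    SquadDataset,
    SquadDataTrainingArguments,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

data_args = SquadDataTrainingArguments(model_type="bert", data_dir="squad")
train_dataset = SquadDataset(data_args, tokenizer=tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="/tmp/debug_squad"),
    train_dataset=train_dataset,
)
trainer.train()
```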
@julien-c @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4829/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4829",
"html_url": "https://github.com/huggingface/transformers/pull/4829",
"diff_url": "https://github.com/huggingface/transformers/pull/4829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4829.patch",
"merged_at": 1594126629000
} |
https://api.github.com/repos/huggingface/transformers/issues/4828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4828/comments | https://api.github.com/repos/huggingface/transformers/issues/4828/events | https://github.com/huggingface/transformers/issues/4828 | 633,005,718 | MDU6SXNzdWU2MzMwMDU3MTg= | 4,828 | `run_glue.py` fails with models `bert-base-cased`, `distil-bert-cased`, others | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"update: this also happens with `distil-bert-cased` for `RTE` and `WNLI` tasks:\r\n\r\n```\r\nds) # (bs, seq_length, dim)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_distilbert.py\", line 91, in forward\r\n word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 1484, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range: Tried to access index 29236 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418\r\n```",
"update: a different error occurs with `roberta-base` on `STS-B`:\r\n\r\n```\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - ***** Running training *****\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Num examples = 5749\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Num Epochs = 3\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Instantaneous batch size per device = 4\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Gradient Accumulation steps = 1\r\n06/06/2020 23:49:26 - INFO - transformers.trainer - Total optimization steps = 1080\r\nIteration: 0%| | 0/360 [00:09<?, ?it/s]\r\nEpoch: 0%| | 0/3 [00:09<?, ?it/s]\r\n\r\nwandb: Waiting for W&B process to finish, PID 7670\r\nTraceback (most recent call last):\r\n File \"./run_glue.py\", line 246, in <module>\r\n main()\r\n File \"./run_glue.py\", line 173, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 471, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 571, in _training_step\r\n outputs = model(**inputs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 152, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 162, in parallel_apply\r\n wandb: Program failed with code 1. Press ctrl-c to abort syncing.\r\nreturn parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n output.reraise()\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/_utils.py\", line 394, in reraise\r\n raise self.exc_type(msg)\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bart.py\", line 1103, in forward\r\n loss = F.cross_entropy(logits.view(-1, self.config.num_labels), labels.view(-1))\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 2021, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 1838, in nll_loss\r\n ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\nRuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward\r\n```",
"Hello! I tried to reproduce, but couldn't get any results to crash.\r\nDo you mind showing me the exact command you use to launch the script?",
"Thanks for the response @LysandreJik -- here's an example of one command:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=\"\" python run_glue.py --model_name_or_path bert-base-cased --tokenizer_name bert-base-cased --task_name MRPC --do_train --do_eval --save_steps -1 --data_dir=./glue_data/MRPC/ --max_seq_length 256 --per_device_eval_batch_size=16 --per_device_train_batch_size=16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir=./glue_models/bert-base-cased/MRPC/\r\n```\r\n\r\nHere's the full output:\r\n```\r\n06/11/2020 13:19:37 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /u/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1\r\nMade tokenizer: <transformers.tokenization_bert.BertTokenizer object at 0x7fef6cd09550>\r\n06/11/2020 13:19:38 - INFO - transformers.modeling_utils - loading weights file https://cdn.huggingface.co/bert-base-cased-pytorch_model.bin from cache at /u/.cache/torch/transformers/d8f11f061e407be64c4d5d7867ee61d1465263e24085cfa26abf183fdc830569.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2\r\n06/11/2020 13:19:41 - INFO - transformers.modeling_utils - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']\r\n06/11/2020 13:19:41 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']\r\n06/11/2020 13:19:41 - INFO - filelock - Lock 140666173336656 acquired on ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc.lock\r\n06/11/2020 13:19:41 - INFO - transformers.data.datasets.glue - Loading features from cached file ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc [took 0.110 s]\r\n06/11/2020 13:19:41 - INFO - filelock - Lock 140666173336656 released on ./glue_data/MRPC/cached_train_BertTokenizer_256_mrpc.lock\r\n06/11/2020 13:19:41 - INFO - filelock - Lock 140666173337552 acquired on ./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc.lock\r\n06/11/2020 13:19:41 - INFO - transformers.data.datasets.glue - Loading features from cached file ./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc [took 0.013 s]\r\n06/11/2020 13:19:41 - INFO - filelock - Lock 140666173337552 released on ./glue_data/MRPC/cached_dev_BertTokenizer_256_mrpc.lock\r\n06/11/2020 13:19:41 - INFO - transformers.trainer - Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\nwandb: Tracking run with wandb version 0.8.36\r\nwandb: Wandb version 0.9.1 is available! 
To upgrade, please run:\r\nwandb: $ pip install wandb --upgrade\r\nwandb: Run data is saved locally in wandb/run-20200611_171941-90an9vn0\r\nwandb: Syncing run solar-sun-89\r\nwandb: ⭐️ View project at https://app.wandb.ai/jxmorris12/huggingface\r\nwandb: 🚀 View run at https://app.wandb.ai/jxmorris12/huggingface/runs/90an9vn0\r\nwandb: Run `wandb off` to turn off syncing.\r\n\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - ***** Running training *****\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Num examples = 3668\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Num Epochs = 3\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Instantaneous batch size per device = 16\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Gradient Accumulation steps = 1\r\n06/11/2020 13:19:43 - INFO - transformers.trainer - Total optimization steps = 690\r\nIteration: 1%|█▍ | 2/230 [00:16<31:42, 8.35s/it]\r\nEpoch: 0%| | 0/3 [00:16<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 247, in <module>\r\n main()\r\n File \"run_glue.py\", line 174, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 471, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 571, in _training_step\r\n outputs = model(**inputs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 1143, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 727, in forward\r\n input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 174, in forward\r\n inputs_embeds = self.word_embeddings(input_ids)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 1484, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range: Tried to acc\r\nwandb: Waiting for W&B process to finish, PID 46776\r\ness index 29597 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418\r\nwandb: Program failed with code 1. 
Press ctrl-c to abort syncing.\r\nwandb: Process crashed early, not syncing files\r\n```\r\n\r\nBut -- at least on my machine -- I think a host of combinations of model/task combinations (`--model_name_or_path` and `--data_dir`) fail, as I mentioned above.",
"I had the same problem, but on my own data set. Have you solved your problem?",
"Nope @SizhaoXu.",
"@LysandreJik -- have you had a chance to check this out again? Thanks.",
"> Nope @SizhaoXu.\r\n\r\nFor question: RuntimeError: index out of range: Tried to access index 29597 out of table with 28995 rows.\r\nThe reason for this question is that the maximum sequence length of the model is 512. ",
"> > Nope @SizhaoXu.\r\n> \r\n> For question: RuntimeError: index out of range: Tried to access index 29597 out of table with 28995 rows.\r\n> The reason for this question is that the maximum sequence length of the model is 512.\r\n\r\nThis solves my problem. I hope that will help you",
"@SizhaoXu how did you fix it then? by truncating the inputs?",
"> @SizhaoXu how did you fix it then? by truncating the inputs?\r\n\r\nyes! you can try it. The maximum sequence length I set is 512 and I keep the first 200 words, the last 200 words and the middle 112 words",
"Thanks @SizhaoXu but my problems go beyond that specific one.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> update: a different error occurs with `roberta-base` on `STS-B`:\r\n> \r\n> ```\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - ***** Running training *****\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Num examples = 5749\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Num Epochs = 3\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Instantaneous batch size per device = 4\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Gradient Accumulation steps = 1\r\n> 06/06/2020 23:49:26 - INFO - transformers.trainer - Total optimization steps = 1080\r\n> Iteration: 0%| | 0/360 [00:09<?, ?it/s]\r\n> Epoch: 0%| | 0/3 [00:09<?, ?it/s]\r\n> \r\n> wandb: Waiting for W&B process to finish, PID 7670\r\n> Traceback (most recent call last):\r\n> File \"./run_glue.py\", line 246, in <module>\r\n> main()\r\n> File \"./run_glue.py\", line 173, in main\r\n> model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 471, in train\r\n> tr_loss += self._training_step(model, inputs, optimizer)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py\", line 571, in _training_step\r\n> outputs = model(**inputs)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 152, in forward\r\n> outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 162, in parallel_apply\r\n> wandb: Program failed with code 1. 
Press ctrl-c to abort syncing.\r\n> return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n> output.reraise()\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/_utils.py\", line 394, in reraise\r\n> raise self.exc_type(msg)\r\n> RuntimeError: Caught RuntimeError in replica 0 on device 0.\r\n> Original Traceback (most recent call last):\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n> output = module(*input, **kwargs)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bart.py\", line 1103, in forward\r\n> loss = F.cross_entropy(logits.view(-1, self.config.num_labels), labels.view(-1))\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 2021, in cross_entropy\r\n> return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n> File \"/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py\", line 1838, in nll_loss\r\n> ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\n> RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward\r\n> ```\r\n\r\nHello ! I encountered the same problem when using the STSB dataset for fine-tuning BERT. How did you solve this problem?"
] | 1,591 | 1,630 | 1,598 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): `bert-base-cased`
Language I am using the model on (English, Chinese ...): `English`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download GLUE data
2. run `python run_glue.py --model_name_or_path bert-base-cased`
Error message:
```
06/07/2020 00:01:06 - INFO - transformers.trainer - ***** Running training *****
06/07/2020 00:01:06 - INFO - transformers.trainer - Num examples = 3668
06/07/2020 00:01:06 - INFO - transformers.trainer - Num Epochs = 3
06/07/2020 00:01:06 - INFO - transformers.trainer - Instantaneous batch size per device = 4
06/07/2020 00:01:06 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 4
06/07/2020 00:01:06 - INFO - transformers.trainer - Gradient Accumulation steps = 1
06/07/2020 00:01:06 - INFO - transformers.trainer - Total optimization steps = 2751
Iteration: 1%|█▊ | 9/917 [00:24<40:42, 2.69s/it]
Epoch: 0%| | 0/3 [00:24<?, ?it/s]
Traceback (most recent call last):
File "./run_glue.py", line 247, in <module>
main()
File "./run_glue.py", line 174, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 471, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py", line 571, in _training_step
outputs = model(**inputs)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1143, in forward
inputs_embeds=inputs_embeds,
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 727, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/transformers/modeling_bert.py", line 174, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/u/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 29597 out of table with 28995 rows. at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
## Expected behavior
Model successfully trains. The script works well on my machine for many other models, including `bert-base-uncased` ~~and `distilbert-base-cased`~~.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
- `transformers` version: 2.11.0
- Platform: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: [yes]
- Using distributed or parallel set-up in script?: [yes, parallel, but I received the same error when I ran the script just using the CPU]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4828/timeline | completed | null | null |
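
For illustration of the truncation workaround reported in the thread above, a minimal sketch follows. The sample text is made up, and `truncation=True` / `model_max_length` follow the current tokenizer API rather than the 2.11 release used in the report:

```python
from transformers import AutoTokenizer

# The tokenizer must match the checkpoint: bert-base-cased has a 28996-row
# embedding table, so any token id at or above that triggers "index out of range".
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Truncate anything longer than the model's maximum position length
# (512 for BERT) at encode time.
enc = tokenizer.encode_plus(
    "a very long example sentence " * 100,
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
print(enc["input_ids"].shape)  # capped at (1, 512) for BERT
```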
https://api.github.com/repos/huggingface/transformers/issues/4827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4827/comments | https://api.github.com/repos/huggingface/transformers/issues/4827/events | https://github.com/huggingface/transformers/issues/4827 | 633,002,282 | MDU6SXNzdWU2MzMwMDIyODI= | 4,827 | How to remove token ? | {
"login": "LeslieOverfitting",
"id": 38348130,
"node_id": "MDQ6VXNlcjM4MzQ4MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/38348130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeslieOverfitting",
"html_url": "https://github.com/LeslieOverfitting",
"followers_url": "https://api.github.com/users/LeslieOverfitting/followers",
"following_url": "https://api.github.com/users/LeslieOverfitting/following{/other_user}",
"gists_url": "https://api.github.com/users/LeslieOverfitting/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeslieOverfitting/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeslieOverfitting/subscriptions",
"organizations_url": "https://api.github.com/users/LeslieOverfitting/orgs",
"repos_url": "https://api.github.com/users/LeslieOverfitting/repos",
"events_url": "https://api.github.com/users/LeslieOverfitting/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeslieOverfitting/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"From what I can observe, there are two types of tokens in your tokenizer: base tokens, which can be derived with `tokenizer.encoder` and the added ones: `tokenizer.added_tokens_encoder`. Depending on which token you want to remove, you use `del tokenizer.encoder` or `del tokenizer.added_tokens_encoder`. \r\n¡NB! Do not forget to resize the embedding layer of your model with `model.resize_token_embeddings(len(tokenizer))`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, I can't seem to remove tokens from the main vocabulary with tokenizer.encoder. I get `AttributeError: 'BertTokenizerFast' object has no attribute 'encoder'`. \r\n\r\nAlso. if we remove some tokens from the middle of the whole vocabulary file, can the model set the right embeddings for new token ids? Will the specific token ids and embeddings be removed from our vocab file and model?\r\n\r\nWhat I currently do:\r\n\r\n```\r\ndel tokenizer.vocab[unwanted_words] \r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\nWe're decreasing vocabulary size here, but will my model understand which tokens were removed? ",
"@mitramir55 I can't imagine how the model would know which tokens were removed from the vocabulary. I have the same question. Perhaps we would have to remove weight elements one by one from the model's lookup embeddings. Any other ideas?",
"@mitramir55 \r\n\r\nDoes del deletes the token from the tokenizer? It didn't seem to work for me",
"Hi @snoop2head and @avi-jit,\r\nNo, I did not delete any word from the vocabulary. If you think about it, it's not even logical to delete a word - an id in the input or output of a trained model. All I did was adding the words I wanted to be in the model's vocabulary while training , and then setting the probability of some words I didn't want to minus infinity while using the model -predicting the next word. This way the model won't choose from them and will go to the next most probable option.\r\n\r\n\r\n```\r\n### Adding words before training\r\nmodel_path = 'HooshvareLab/distilbert-fa-zwnj-base'\r\n\r\nmodel = AutoModelForMaskedLM.from_pretrained(model_path)\r\ntokenizer = AutoTokenizer.from_pretrained(model_path,\r\n use_fast=True)\r\n\r\ntokenizer.add_tokens(['this', 'that', 'those'])\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n# then the training...\r\n```\r\n Now let's say we want our trained transformer to suggest a word for an incomplete sentence without considering some specific \"banned\" words:\r\n\r\n```\r\n### setting the probability of some words being generated to -inf\r\n\r\nall_banned_tokens = ['«', ':', '،', '/', '*', ']', '[', '؟', '…', 'ی', tokenizer.unk_token]\r\nall_banned_tokens = [i.strip() for i in all_banned_tokens]\r\n\r\nbanned_ids = []\r\nbanned_ids = [i[0] for i in tokenizer.batch_encode_plus(all_banned_tokens, add_special_tokens=False).input_ids]\r\n\r\ndef get_transformer_suggestions(sequence, model, tokenizer, top_k=5, banned_ids = banned_ids):\r\n \"\"\" gets a sequence of words and outputs top_k suggested words\"\"\"\r\n\r\n suggestion = []\r\n ids_main = tokenizer.encode(sequence, return_tensors=\"pt\", add_special_tokens=True)\r\n\r\n ids_ = ids_main.detach().clone()\r\n position = torch.where(ids_main == tokenizer.mask_token_id)\r\n positions_list = position[1].numpy().tolist()\r\n\r\n model_logits = model(ids_)['logits'][0][positions_list[0]]\r\n model_logits[banned_ids] = -math.inf\r\n \r\n top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()\r\n \r\n for j in range(len(top_k_tokens)):\r\n suggestion.append(tokenizer.decode(top_k_tokens[j]))\r\n\r\n return suggestion \r\n\r\ncandidates = get_transformer_suggestions(input_sentence = f'this is an amazing {tokenizer.mask_token}', model= model, tokenizer=tokenizer, top_k=5, anned_ids=banned_ids)\r\n```\r\n\r\nI hope this was helpful. Tell me if there is anything else I can explain to make it clear.\r\n\r\n",
"@mitramir55 \r\nThere are occasions where you want to delete tokens from the tokenizer and resize the embedding layer accordingly.\r\n\r\nJust like I stated in issue #15032 , there are tokens such as `[unused363]`.\r\n\r\nI am figuring out way how to remove the surplus of 500 tokens from the tokenizer.\r\n\r\nThank you for your kind explanation though!",
"Hi @snoop2head ,\r\nI'm not sure what you want to do exactly, but I think [this post](https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077) and [this one](https://github.com/huggingface/transformers/issues/4777) can be helpful.\r\n\r\n\r\nBasically, what you need to know is that you cannot change the embedding layer of a model, because this is part of a trained transformer with specific weights and layers. If you want to change the embedding, then you need to train the model. This is because each tokenizer has a `vocab.json` and `merge.txt` file that has been created during the process of training (with byte-level BPE) and if you want to change the tokenizer, you need to modify those. However, with a little search I found [this post ](https://discuss.huggingface.co/t/barttokenizer-with-vocab-json-and-merge-txt-which-were-created-by-bytelevelbpetokenizer-encode-s-into-3-tokens/3393/2 )where the author has changed the files (I think with another model's file). Maybe you can get some help from this.\r\n"
] | 1,591 | 1,641 | 1,597 | NONE | null | I only know how to add tokens, but how do I remove some special tokens? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4827/timeline | completed | null | null |
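
For illustration of the add-then-remove path described in the first comment above, a minimal sketch using the slow tokenizer (the token string is a made-up example; this only covers tokens added on top of the base vocabulary):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(["<my_token>"])           # vocab grows by one
model.resize_token_embeddings(len(tokenizer))  # add a matching embedding row

# Removal only behaves sensibly when the dropped token is the most recently
# added one, because resize_token_embeddings truncates rows from the end
# of the embedding matrix.
token_id = tokenizer.added_tokens_encoder.pop("<my_token>")
tokenizer.added_tokens_decoder.pop(token_id, None)
model.resize_token_embeddings(len(tokenizer))
```

Removing a token from the middle of the base vocabulary is a different story: as the later comments explain, the trained model has no way of knowing which rows disappeared, so masking unwanted ids at prediction time is the safer route.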
https://api.github.com/repos/huggingface/transformers/issues/4826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4826/comments | https://api.github.com/repos/huggingface/transformers/issues/4826/events | https://github.com/huggingface/transformers/pull/4826 | 632,908,206 | MDExOlB1bGxSZXF1ZXN0NDI5NjAzNzQ2 | 4,826 | Fix use of mems in Transformer-XL text generation | {
"login": "tommccoy1",
"id": 19821261,
"node_id": "MDQ6VXNlcjE5ODIxMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/19821261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommccoy1",
"html_url": "https://github.com/tommccoy1",
"followers_url": "https://api.github.com/users/tommccoy1/followers",
"following_url": "https://api.github.com/users/tommccoy1/following{/other_user}",
"gists_url": "https://api.github.com/users/tommccoy1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommccoy1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommccoy1/subscriptions",
"organizations_url": "https://api.github.com/users/tommccoy1/orgs",
"repos_url": "https://api.github.com/users/tommccoy1/repos",
"events_url": "https://api.github.com/users/tommccoy1/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommccoy1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=h1) Report\n> Merging [#4826](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4826 +/- ##\n=======================================\n Coverage 74.52% 74.53% \n=======================================\n Files 128 128 \n Lines 21497 21499 +2 \n=======================================\n+ Hits 16021 16024 +3 \n+ Misses 5476 5475 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.27% <100.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4826/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.27% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=footer). Last update [c58e6c1...07c4126](https://codecov.io/gh/huggingface/transformers/pull/4826?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hey @tommccoy1,\r\n\r\nThanks a lot for your PR! From some initial tests with this change, the results seem good! \r\nThe bug you pointed out, might also be affecting `xlnet` actually...I will have to take a deeper look into both models to fully understand what's going on with `mems`. \r\n\r\nBut also given the discussion on the original Transfo-XL repo, here: https://github.com/kimiyoung/transformer-xl/issues/49 suggests that you are 100% correct with your PR here.\r\n\r\nIt seems like you are passing all the tests. They will surely be one `RUN_SLOW` test that will fail, but this might also be due to prior incorrect assumptions regarding the `.generate()` function. I will check this PR next week :-) ",
"Also related: #505",
"Hey, I'm taking a look at this atm - as expected by @patrickvonplaten, the slow test fails, but that's probably an issue with the slow test. I'll update it and merge the PR.",
"Hey, sorry for the delay - we've realized XLNet had a similar issue, and I'm opening up another PR to fix the slow tests to be consistent with this.",
"No worries re: the delay - thank you for looking over it & merging it!"
] | 1,591 | 1,594 | 1,593 | CONTRIBUTOR | null | In Transformer-XL, when ```mems``` is being used to save computation with the ```generate``` function, the inputs are not properly truncated, so that ```mems``` does not actually speed things up, and also seems to create inaccuracies in the output. I have attempted to fix this by changing Transformer-XL's ```prepare_inputs_for_generation``` function to make it more like that function as used in GPT-2.
See the issue at https://github.com/huggingface/transformers/issues/4752 for more details.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4826/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4826",
"html_url": "https://github.com/huggingface/transformers/pull/4826",
"diff_url": "https://github.com/huggingface/transformers/pull/4826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4826.patch",
"merged_at": 1593681547000
} |
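
The fix follows the same pattern GPT-2 uses; a rough sketch of the idea (not the exact merged code), assuming `input_ids` is a 2D `torch` tensor:

```python
def prepare_inputs_for_generation(input_ids, past=None, **kwargs):
    # Once mems from earlier steps exist, the hidden states for all previous
    # tokens are already cached, so only the newest token has to be fed forward.
    if past:
        input_ids = input_ids[:, -1].unsqueeze(-1)
    return {"input_ids": input_ids, "mems": past}
```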
https://api.github.com/repos/huggingface/transformers/issues/4825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4825/comments | https://api.github.com/repos/huggingface/transformers/issues/4825/events | https://github.com/huggingface/transformers/issues/4825 | 632,814,781 | MDU6SXNzdWU2MzI4MTQ3ODE= | 4,825 | Onnx converted model has its output shape modified when compared to original (finetuned) model | {
"login": "pommedeterresautee",
"id": 1029874,
"node_id": "MDQ6VXNlcjEwMjk4NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pommedeterresautee",
"html_url": "https://github.com/pommedeterresautee",
"followers_url": "https://api.github.com/users/pommedeterresautee/followers",
"following_url": "https://api.github.com/users/pommedeterresautee/following{/other_user}",
"gists_url": "https://api.github.com/users/pommedeterresautee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pommedeterresautee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pommedeterresautee/subscriptions",
"organizations_url": "https://api.github.com/users/pommedeterresautee/orgs",
"repos_url": "https://api.github.com/users/pommedeterresautee/repos",
"events_url": "https://api.github.com/users/pommedeterresautee/events{/privacy}",
"received_events_url": "https://api.github.com/users/pommedeterresautee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Facing the same problem with a BERT model fine-tuned on sequence classification and would love to get an answer :) ",
"This seems to be related to [this issue](https://github.com/huggingface/transformers/issues/4788). \r\n\r\nAs @hrsmanian points it, it seems that in[ convert_graph_to_onnx.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py), the model is currently converted by default to a 'feature-extraction' version where the classification layer is discarded. Changing the pipeline type (line 108 of the py file) to 'ner' in @hrsmanian's case seems to have worked. \r\n\r\nIn the case of binary classification, I tried changing the pipeline type to 'sentiment-analysis' (my model is a binary BertForSequenceClassification) but get a ValueError (ValueError: not enough values to unpack (expected 2, got 1)) when trying to run the session. I used simpletransformers (which is based on this repo) to do binary classification with BERT, followed the instructions for conversion and inference from the [blog post](https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333). \r\n\r\nLet me know if you see what the problem is @mfuntowicz :) ",
"Actually, I managed to make it work.\r\n\r\nThe problem was that the session.run output shape changed and so writing:\r\n`output, pooled = session.run(None, tokens)` was not working anymore. \r\n\r\nWhen only writing `output = session.run(None, tokens)`, it works and I get the classification scores.\r\n\r\n\r\n\r\nHope that helps :) ",
"@manueltonneau You're right, we're currently enforcing the `feature-extraction` because not all our pipelines are compatible with ONNX graph representation. \r\n\r\nI'll have a look asap to identify which pipelines are compatible and which are not, so what we can add the possibility to export other kind of pipeline through the script. ",
"Tks @manueltonneau , works for me too! Btw you may prefer `output, = ...` to avoid the list :-) \r\n@mfuntowicz would it be possible to have a pipeline for the `multichoice` task (and a related onnx converter too if this is onnx compatible)? Not sure why it doesn't exist yet btw as all models I have used support the task.",
"It might be possible for pipelines such as **token classification** and **sequence classification** to be exportable out of the box. These pipelines generally just add a projection layer on top of the model followed by a argmax. All of these operators are natively supported by ONNX.\r\n\r\nFor more complex pipeline such as **qa** or **generation**, ONNX might not support all the operators used in the post-processing steps (i.e. _sampling_, _answer span extraction_) and thus would lead to the impossibility to export the model to ONNX. ",
"This is a very good news! \r\nSo in theory a multichoice pipeline should work as it s just a projection like classification but with a different shape, am I right? Would it be possible for your team to support this task on the pipeline?",
"I have another question, looking at the `convert` function code, the dumb input used to guess the architecture of the model in torch script is:\r\n\r\n```python\r\n tokens = nlp.tokenizer.encode_plus(\"This is a sample output\", return_tensors=framework)\r\n```\r\n\r\nMy understanding is that onnx uses torch script and torch script can only guess a fix input length.\r\n[Doc here](https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths)\r\n\r\n@mfuntowicz Does that mean that onnx model truncates all inputs to less than 10 tokens?\r\n@manueltonneau On your model, does onnx predictions the same than pytorch ones? (for the same input)\r\nMy model is based on the `multichoice` task and it doesn't work (it compiles but the predictions are wrong). I don't know if it s because some input truncation or just because of the task.\r\n\r\n",
"@pommedeterresautee You're right here about how PyTorch & ONNX interact together. ONNX leverage the tracing provided by PyTorch to construct the ONNX IR. \r\n\r\nHowever on the input, it should not truncate anything because `convert_graph_to_onnx.py` exports the inputs with the[ sequence axis being dynamic](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L81)\r\n\r\n```python\r\n# Generate input names & axes\r\ninput_vars = list(tokens.keys())\r\ninput_dynamic_axes = {k: build_shape_dict(v, True, seq_len) for k, v in tokens.items()}\r\n```\r\n\r\nYou can set a breakpoint on [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L97) and see the actual axes being dynamic (input and output). \r\n\r\nIf you find any incoherent behaviour we can dig further to understand why dynamic axes are not correctly exported in your case 👍 ",
"First, I have tried with a long sequence on classification task and it works (results are the same).\r\nAnyway, tks @mfuntowicz for the clear explanation\r\n\r\nNot a big surprise, the converter doesn't work when the task is `multichoice` and the pipeline used in the converter is \"sentiment-analysis\" (because the multichoice pipeline doesn't exist).\r\n\r\n* **What can I do to get the support of a multichoice task pipeline and check if onnx works in this setup?**\r\n\r\nCode to reproduce\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForMultipleChoice\r\nfrom transformers.convert_graph_to_onnx import convert\r\nfrom onnxruntime import InferenceSession, SessionOptions, get_all_providers\r\n\r\n\r\ndef create_model_for_provider(model_path: str, provider: str) -> InferenceSession:\r\n assert provider in get_all_providers(), f\"provider {provider} not found, {get_all_providers()}\"\r\n\r\n # Few properties than might have an impact on performances (provided by MS)\r\n options = SessionOptions()\r\n options.intra_op_num_threads = 1\r\n\r\n # Load the model as a graph and prepare the CPU backend\r\n return InferenceSession(model_path, options, providers=[provider])\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', use_fast=False)\r\n\r\nmodel = AutoModelForMultipleChoice.from_pretrained(pretrained_model_name_or_path=\"output/xlm-r\")\r\ndevice = torch.device(device='cuda')\r\nmodel.to(device)\r\nmodel.eval()\r\n\r\nconvert(framework=\"pt\",\r\n model=\"output/xlm-r\",\r\n tokenizer='xlm-roberta-base',\r\n output=\"output/onnx/xlm-r.onnx\",\r\n opset=11)\r\nmodel_onnx = create_model_for_provider(\"output/onnx/xlm-r.onnx\", \"CUDAExecutionProvider\")\r\n\r\n\r\ninputs = tokenizer.encode_plus(\"hello les amis, comment allez vous ? Moi pas mal\", \"je vais très bien\")\r\n\r\ntorch_inputs = {k: torch.tensor([[v, v]], dtype=torch.long).to(device) for k, v in inputs.items()}\r\noutput_pytorch = model(**torch_inputs)\r\ninputs_onnx = {k: v.cpu().detach().numpy() for k, v in torch_inputs.items()}\r\n\r\nsequence, = model_onnx.run(None, inputs_onnx)\r\n```\r\n\r\nIt crashes with:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/geantvert/.local/share/virtualenvs/***/lib/python3.8/site-packages/IPython/core/interactiveshell.py\", line 3331, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-11-f614fb04d5d2>\", line 7, in <module>\r\n sequence, = model_onnx.run(None, inputs_onnx)\r\n File \"/home/geantvert/.local/share/virtualenvs/***/lib/python3.8/site-packages/onnxruntime/capi/session.py\", line 111, in run\r\n return self._sess.run(output_names, input_feed, run_options)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: input_ids Got: 3 Expected: 2 Please fix either the inputs or the model.\r\n\r\n```\r\n",
">@manueltonneau On your model, does onnx predictions the same than pytorch ones? (for the same input)\r\n\r\nSorry for the late reply @pommedeterresautee. I did three tests and the predictions are almost exactly the same for all three. ",
"Now pipeline option can be provided via arguments to [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L32) using `--pipeline` argument:\r\n\r\nValid Options are:\r\n```\r\nSUPPORTED_PIPELINES = [\r\n \"feature-extraction\",\r\n \"ner\",\r\n \"sentiment-analysis\",\r\n \"fill-mask\",\r\n \"question-answering\",\r\n \"text-generation\",\r\n \"translation_en_to_fr\",\r\n \"translation_en_to_de\",\r\n \"translation_en_to_ro\",\r\n]\r\n```",
"Hi, is there any support for sequence classfication on sentence pairs?",
"sentence pairs are managed by the tokenizer, at the end it's just a sequence of tokens... so classic sequence classification pipeline works out of the box",
"> sentence pairs are managed by the tokenizer, at the end it's just a sequence of tokens... so classic sequence classification pipeline works out of the box\r\n\r\nWorks well. Thanks",
"I am facing issue while solving for multi-class classification problem.",
"My problem also got solved when I used the pipeline 'sentiment-analysis'"
] | 1,591 | 1,647 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): `mrm8488/distilroberta-base-finetuned-sentiment` from the hub
Language I am using the model on (English, Chinese ...): `English`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
I use the `04-onnx-export.ipynb` notebook and have only changed the model name and the tokenizer:

The issue appeared on every fine-tuned model I tried, whether for classification or multiple-choice questions.
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
* [X] an official GLUE/SQUaD task: classification
## To reproduce
Steps to reproduce the behavior:
Import AutoTokenizer, AutoModelForSequenceClassification and change tokenizer and model name, the section we are interested into:
```python
# ...
!rm -rf onnx/
from transformers.convert_graph_to_onnx import convert
# Handles all the above steps for you
convert(framework="pt", model="mrm8488/distilroberta-base-finetuned-sentiment", output="onnx/bert-base-cased.onnx", opset=11)
# ...
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mrm8488/distilroberta-base-finetuned-sentiment")
cpu_model = create_model_for_provider("onnx/bert-base-cased.onnx", "CPUExecutionProvider")
# Inputs are provided through numpy array
model_inputs = tokenizer.encode_plus("My name is Bert", return_tensors="pt")
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()}
# Run the model (None = get all the outputs)
sequence, pooled = cpu_model.run(None, inputs_onnx)
# Print information about outputs
print(f"Sequence output: {sequence.shape}, Pooled output: {pooled.shape}")
pytorch_model = AutoModelForSequenceClassification.from_pretrained("mrm8488/distilroberta-base-finetuned-sentiment")
a, = pytorch_model(**model_inputs)
print(f"finetune non onnx pytorch model output: {a.shape}")
# ...
```
## Expected behavior
I was expecting the ONNX output shape to be the same as the non-converted model's output shape, but that's not the case:
```text
Sequence output: (1, 6, 768), Pooled output: (1, 768)
finetune non onnx pytorch model output: torch.Size([1, 6])
```
It looks as if the final classification layer of the model is not included in the ONNX export.
Does it make sense? @mfuntowicz
## Environment info
Google Colab with a GPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4825/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4825/timeline | completed | null | null |
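
With the `--pipeline` argument mentioned above, the same export can keep the classification head. A minimal sketch against the updated converter signature, where the `pipeline_name` keyword is the programmatic counterpart of that CLI flag:

```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# Exporting with a classification pipeline keeps the projection head, so the
# graph outputs logits of shape (batch, num_labels) instead of the
# (batch, seq_len, hidden) feature-extraction tensors.
convert(
    framework="pt",
    model="mrm8488/distilroberta-base-finetuned-sentiment",
    output=Path("onnx/model.onnx"),
    opset=11,
    pipeline_name="sentiment-analysis",
)

# At inference time there is then a single output:
# logits, = session.run(None, inputs_onnx)
```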
https://api.github.com/repos/huggingface/transformers/issues/4824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4824/comments | https://api.github.com/repos/huggingface/transformers/issues/4824/events | https://github.com/huggingface/transformers/issues/4824 | 632,797,782 | MDU6SXNzdWU2MzI3OTc3ODI= | 4,824 | Top-k sampling and top-p sampling for generating phrases on batches with GPT-2? | {
"login": "Barbara931120",
"id": 62270260,
"node_id": "MDQ6VXNlcjYyMjcwMjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/62270260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Barbara931120",
"html_url": "https://github.com/Barbara931120",
"followers_url": "https://api.github.com/users/Barbara931120/followers",
"following_url": "https://api.github.com/users/Barbara931120/following{/other_user}",
"gists_url": "https://api.github.com/users/Barbara931120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Barbara931120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Barbara931120/subscriptions",
"organizations_url": "https://api.github.com/users/Barbara931120/orgs",
"repos_url": "https://api.github.com/users/Barbara931120/repos",
"events_url": "https://api.github.com/users/Barbara931120/events{/privacy}",
"received_events_url": "https://api.github.com/users/Barbara931120/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's on our ToDo-List :-) Currently batch generation with GPT2 is not possible, so you will have to rely on the code in https://github.com/huggingface/transformers/issues/3021",
"Can top-k and top-p sampling be implemented in batches?",
"Sure, the provided `top-k-top-p sampling` function provided that :-) "
] | 1,591 | 1,591 | 1,591 | NONE | null | How can I generate in batches with GPT-2 while making use of these awesome sampling techniques: [top-k sampling and top-p sampling](https://huggingface.co/blog/how-to-generate)?
There is already an implementation for generating phrases in batches in issue [#3021](https://github.com/huggingface/transformers/issues/3021).
Any advice? thanks!
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4824/timeline | completed | null | null |
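
Combining the batched-generation recipe from #3021 with the library's exported `top_k_top_p_filtering` helper gives one possible answer. A simplified sketch: position ids for the left padding are not corrected here, and the `.logits` attribute assumes the current model-output API:

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer, top_k_top_p_filtering

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"           # pad on the left when decoding
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

batch = tokenizer(["Hello, my", "The weather today"],
                  return_tensors="pt", padding=True)
input_ids, attention_mask = batch["input_ids"], batch["attention_mask"]

with torch.no_grad():
    for _ in range(20):  # sample 20 new tokens per sequence
        logits = model(input_ids, attention_mask=attention_mask).logits[:, -1, :]
        filtered = top_k_top_p_filtering(logits, top_k=50, top_p=0.95)
        next_tokens = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)
        input_ids = torch.cat([input_ids, next_tokens], dim=-1)
        attention_mask = torch.cat(
            [attention_mask, torch.ones_like(next_tokens)], dim=-1)

print(tokenizer.batch_decode(input_ids, skip_special_tokens=True))
```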
https://api.github.com/repos/huggingface/transformers/issues/4823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4823/comments | https://api.github.com/repos/huggingface/transformers/issues/4823/events | https://github.com/huggingface/transformers/issues/4823 | 632,792,540 | MDU6SXNzdWU2MzI3OTI1NDA= | 4,823 | Discriminative fine-tuning for new (added) words | {
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Finally found the solution. The optimizer should look like this:\r\n```\r\noptim.SGD([\r\n {'params': model.base.parameters()},\r\n {'params': model.classifier.parameters(), 'lr': 1e-3}\r\n], lr=1e-2, momentum=0.9))\r\n```",
"However, this approach does not work when I need to specify another LR not for the whole layer, but only for a few weights from it. Hence, the issue is still open. @LysandreJik @patrickvonplaten will be very grateful if you could help.\r\n\r\nShould a bit specify the question, I need to fine-tune the model (as proposed [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)), specifying special learning rate for only the new word embeddings (_ideally_) or at least for the whole embedding matrix.",
"Hi @Aktsvigun, \r\n\r\nThanks for your detailed question! I'm not super familiar with these kind of specifics for training, but I'm not sure that this is even possible in PyTorch. \r\n\r\nAlso, did you try to fine-tune the model normally as well without setting a specific learning rate to only one parameter of a layer? I would expect normal fine-tuning to also work quite well since the gradient (independently of the lr) for the newly added weight will be quite high and thus change significantly even for the same learning rate for all parameters.",
"@patrickvonplaten thank you for the answer! \r\n\r\nI did try with a simple LR but what urged me to the question is the difference in the results of a language model, fine-tuned without adding new words and the one with changed vocab. I took a small dataset (_~ 25 000 sentences_) and fine-tuned both models with `lr = 3e-5` (pretty standard as I know) and `num_epochs = 10`. The model without new words had an eval loss (on the validation sample) of **2.55**, while the loss for the one with 250 new words (not much really when the vocab size equals 50250) equaled **2.98**. The words are not so popular among the dataset itself (only 13 of them belong to top-250 most frequent words), therefore such a great difference can be caused only by underfitting in my view.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | CONTRIBUTOR | null | Good afternoon,
I have a question about changing the learning rate for some parameters of the model. Let's say we have a BERT model and have added a few new tokens. Consequently, we need to resize the embedding layer and initialize the embeddings for the new words randomly. Meanwhile, a learning rate of 3e-5 is used to train the model (otherwise we "overjump" the global minimum). If we use this LR for the embeddings of the new words as well, they will hardly change and will thus stay close to random; the reasonable approach is therefore to use a separate learning rate just for their embeddings (as is done, for instance, in ULMFiT). The question is: is there a simple way to do this in HuggingFace? Or are there perhaps some examples of doing it? Thanks in advance!
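One possible direction, sketched here purely as an illustration (the hook-based trick, the token strings, and the `factor` value are assumptions, not an official transformers API): register a gradient hook on the embedding matrix that scales the gradients of the newly added rows, which effectively gives just those rows a larger learning rate.
```
# Minimal sketch: boost the effective LR of newly added token embeddings by
# scaling their gradients with a tensor hook. Token strings and `factor` are
# illustrative assumptions, not taken from this thread.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

num_new_tokens = tokenizer.add_tokens(["newword1", "newword2"])
model.resize_token_embeddings(len(tokenizer))

embedding = model.get_input_embeddings()        # nn.Embedding
old_vocab_size = embedding.weight.shape[0] - num_new_tokens

def boost_new_rows(grad, factor=100.0):
    grad = grad.clone()
    grad[old_vocab_size:] *= factor             # larger step for the new rows only
    return grad

embedding.weight.register_hook(boost_new_rows)  # fires on every backward pass
```
| {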
"url": "https://api.github.com/repos/huggingface/transformers/issues/4823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4822/comments | https://api.github.com/repos/huggingface/transformers/issues/4822/events | https://github.com/huggingface/transformers/issues/4822 | 632,764,963 | MDU6SXNzdWU2MzI3NjQ5NjM= | 4,822 | EncoderDecoderModel forwards return different values every time. | {
"login": "mmsamiei",
"id": 12582703,
"node_id": "MDQ6VXNlcjEyNTgyNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12582703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmsamiei",
"html_url": "https://github.com/mmsamiei",
"followers_url": "https://api.github.com/users/mmsamiei/followers",
"following_url": "https://api.github.com/users/mmsamiei/following{/other_user}",
"gists_url": "https://api.github.com/users/mmsamiei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmsamiei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmsamiei/subscriptions",
"organizations_url": "https://api.github.com/users/mmsamiei/orgs",
"repos_url": "https://api.github.com/users/mmsamiei/repos",
"events_url": "https://api.github.com/users/mmsamiei/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmsamiei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Could you please provide the whole code you use? Your structure works ideally for me, this code outputs the same values:\r\n```\r\nfrom transformers import EncoderDecoderModel, BertTokenizer\r\nimport torch\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\r\n 'bert-base-uncased', 'bert-base-uncased')\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0)\r\n\r\noutputs = []\r\nfor _ in range(5):\r\n result = model(input_ids=input_ids, decoder_input_ids=input_ids)[0]\r\n outputs.append(result)\r\noutputs\r\n```\r\n",
"I see, on each step you initialize your EncoderDecoder model. AFAIU the difference is caused by a randomly initialized layers for decoder in this architecture. You can check it with this code:\r\n\r\n```\r\nparams1, params2, models = [], [], []\r\nfor _ in range(2):\r\n tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n model = EncoderDecoderModel.from_encoder_decoder_pretrained(\r\n 'bert-base-uncased', 'bert-base-uncased')\r\n \r\n models.append(model)\r\n\r\npars = models[0].decoder.bert.encoder.parameters()\r\nfor _ in range(1000):\r\n try:\r\n params1.append(next(pars))\r\n except:\r\n break\r\n\r\npars = models[1].decoder.bert.encoder.parameters()\r\nfor _ in range(1000):\r\n try:\r\n params2.append(next(pars))\r\n except:\r\n break\r\n\r\n[torch.all(params1[i] == params2[i]).item() for i in range(len(params1))]\r\n```",
"Thanks for answering @Aktsvigun ! Yes, in the encoder decoder framework, when you instantiate an encoder-decodel using two pretrained BERT models the cross attention layer weights are added and randomly initialized. This is an expected behavior. When you set your log level to INFO you will receive a notification about this as well :-) "
] | 1,591 | 1,591 | 1,591 | NONE | null | # 🐛 Bug in EncoderDecoderModel
I am using EncoderDecoderModel and have tested the sample code from its documentation page.
```
from transformers import EncoderDecoderModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    'bert-base-uncased', 'bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
output = model(input_ids=input_ids, decoder_input_ids=input_ids)[0]
```
But every time I run this code I get different values for the output! I have also tried `model.eval()`, but it didn't help.
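For reference, a minimal reproduction check (my own sketch; it assumes, as the comments above confirm, that the run-to-run variation comes from the randomly initialized cross-attention weights): fixing the seed before each instantiation and disabling dropout makes the outputs identical across runs.
```
import torch
from transformers import EncoderDecoderModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)

outputs = []
for _ in range(2):
    torch.manual_seed(0)  # fix the init of the added cross-attention layers
    model = EncoderDecoderModel.from_encoder_decoder_pretrained(
        'bert-base-uncased', 'bert-base-uncased')
    model.eval()  # disable dropout so the forward pass is deterministic
    with torch.no_grad():
        outputs.append(model(input_ids=input_ids, decoder_input_ids=input_ids)[0])

print(torch.allclose(outputs[0], outputs[1]))  # True with a fixed seed
```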
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4822/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4821/comments | https://api.github.com/repos/huggingface/transformers/issues/4821/events | https://github.com/huggingface/transformers/pull/4821 | 632,736,979 | MDExOlB1bGxSZXF1ZXN0NDI5NDQ3MzA2 | 4,821 | Enable multiprocessing in glue datasets | {
"login": "zrxbeijing",
"id": 38594797,
"node_id": "MDQ6VXNlcjM4NTk0Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/38594797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zrxbeijing",
"html_url": "https://github.com/zrxbeijing",
"followers_url": "https://api.github.com/users/zrxbeijing/followers",
"following_url": "https://api.github.com/users/zrxbeijing/following{/other_user}",
"gists_url": "https://api.github.com/users/zrxbeijing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zrxbeijing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zrxbeijing/subscriptions",
"organizations_url": "https://api.github.com/users/zrxbeijing/orgs",
"repos_url": "https://api.github.com/users/zrxbeijing/repos",
"events_url": "https://api.github.com/users/zrxbeijing/events{/privacy}",
"received_events_url": "https://api.github.com/users/zrxbeijing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=h1) Report\n> Merging [#4821](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.54%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4821 +/- ##\n==========================================\n+ Coverage 74.52% 76.07% +1.54% \n==========================================\n Files 128 128 \n Lines 21497 21505 +8 \n==========================================\n+ Hits 16021 16360 +339 \n+ Misses 5476 5145 -331 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.48% <88.88%> (+0.12%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `40.95% <0.00%> (-8.49%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=footer). Last update [c58e6c1...ef63cb8](https://codecov.io/gh/huggingface/transformers/pull/4821?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,592 | 1,592 | NONE | null | The preprocessing of glue datasets is too slow. This change enables multiprocessing to speed up the process of converting examples to features by utilizing multiple cpu cores. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4821/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4821",
"html_url": "https://github.com/huggingface/transformers/pull/4821",
"diff_url": "https://github.com/huggingface/transformers/pull/4821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4821.patch",
"merged_at": null
} |