| column | dtype | lengths / values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/2913
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2913/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2913/comments
https://api.github.com/repos/huggingface/transformers/issues/2913/events
https://github.com/huggingface/transformers/pull/2913
567,827,276
MDExOlB1bGxSZXF1ZXN0Mzc3MzgwMjIx
2,913
make RobertaForMaskedLM implementation identical to fairseq
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Awesome. Could you check if the existing @slow tests break for Roberta, and add a new one that hardcodes the fairseq logits from your example and makes sure we also return them. Trying to avoid accidental breakage. Thanks again! ", "@sshleifer Not sure how I would hardcode a tensor of size 1, 12, 50265. Can I just add a small pickled file to the test instead?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=h1) Report\n> Merging [#2913](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `85.71%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2913/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2913 +/- ##\n==========================================\n- Coverage 75.3% 75.29% -0.01% \n==========================================\n Files 94 94 \n Lines 15424 15424 \n==========================================\n- Hits 11615 11614 -1 \n- Misses 3809 3810 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2913/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.3% <85.71%> (-0.47%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=footer). Last update [59c23ad...51514a4](https://codecov.io/gh/huggingface/transformers/pull/2913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Sorry for the close, had to do some rebasing." ]
1,582
1,582
1,582
COLLABORATOR
null
closes https://github.com/huggingface/transformers/issues/1874 The implementation of RoBERTa in `transformers` differs from the original implementation in [fairseq](https://github.com/pytorch/fairseq/tree/master/fairseq/models/roberta), as results showed (cf. https://github.com/huggingface/transformers/issues/1874). I have documented my findings here https://github.com/huggingface/transformers/issues/1874#issuecomment-588359143 and made the corresponding changes accordingly in this PR. Someone should check, however, that removing `get_output_embeddings()` does not have any adverse side-effects.
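The review thread above asks for a `@slow` regression test that hardcodes the fairseq logits. Since hardcoding a full `(1, 12, 50265)` tensor is impractical (the question raised in the comments), integration tests usually pin the shape plus a small slice. A minimal sketch of that approach — the `roberta-base` checkpoint is assumed, and the expected values below are placeholders to be copied from a fairseq run, not real outputs:

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

def test_roberta_masked_lm_matches_fairseq():
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = RobertaForMaskedLM.from_pretrained("roberta-base")
    model.eval()

    input_ids = torch.tensor([tokenizer.encode("Hello world!")])
    with torch.no_grad():
        logits = model(input_ids)[0]  # (batch, seq_len, vocab_size)

    # Checking the shape plus a tiny slice is usually enough to catch
    # accidental breakage without storing the whole tensor.
    assert logits.shape == (1, input_ids.shape[1], 50265)
    expected_slice = torch.tensor([[33.88, -4.31, 22.78]])  # placeholder values
    assert torch.allclose(logits[0, :1, :3], expected_slice, atol=1e-4)
```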
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2913/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2913/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2913", "html_url": "https://github.com/huggingface/transformers/pull/2913", "diff_url": "https://github.com/huggingface/transformers/pull/2913.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2913.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2912
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2912/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2912/comments
https://api.github.com/repos/huggingface/transformers/issues/2912/events
https://github.com/huggingface/transformers/pull/2912
567,824,937
MDExOlB1bGxSZXF1ZXN0Mzc3Mzc4MjQ5
2,912
Override build_inputs_with_special_tokens for fast tokenizers
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=h1) Report\n> Merging [#2912](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2912/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2912 +/- ##\n==========================================\n+ Coverage 75.3% 75.32% +0.01% \n==========================================\n Files 94 94 \n Lines 15424 15438 +14 \n==========================================\n+ Hits 11615 11628 +13 \n- Misses 3809 3810 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.14% <100%> (+0.05%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <100%> (+0.06%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.15% <0%> (-0.18%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=footer). Last update [59c23ad...3b7752b](https://codecov.io/gh/huggingface/transformers/pull/2912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,582
1,582
1,582
MEMBER
null
Signed-off-by: Morgan Funtowicz <[email protected]>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2912", "html_url": "https://github.com/huggingface/transformers/pull/2912", "diff_url": "https://github.com/huggingface/transformers/pull/2912.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2912.patch", "merged_at": 1582146592000 }
https://api.github.com/repos/huggingface/transformers/issues/2911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2911/comments
https://api.github.com/repos/huggingface/transformers/issues/2911/events
https://github.com/huggingface/transformers/issues/2911
567,769,840
MDU6SXNzdWU1Njc3Njk4NDA=
2,911
missing "para" attribute in ARC dataset for multiple choice question answering model
{ "login": "nrjvarshney", "id": 19836137, "node_id": "MDQ6VXNlcjE5ODM2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrjvarshney", "html_url": "https://github.com/nrjvarshney", "followers_url": "https://api.github.com/users/nrjvarshney/followers", "following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}", "gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions", "organizations_url": "https://api.github.com/users/nrjvarshney/orgs", "repos_url": "https://api.github.com/users/nrjvarshney/repos", "events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}", "received_events_url": "https://api.github.com/users/nrjvarshney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Got it. \r\nThis parameter is for the context.", "May I know how do you solve this problem? I just ran into this problem.", "You will have to add a \"para\" field for every choice - This is for adding knowledge. To get a baseline you can simply use a dummy text in that field \r\n\r\n`{\r\n \"id\": \"MCAS_2000_4_6\",\r\n \"question\": {\r\n \"stem\": \"Which technology was developed most recently?\",\r\n \"choices\": [\r\n {\r\n \"text\": \"cellular telephone\",\r\n \"label\": \"A\",\r\n \"para\": \"fetched knowledge\"\r\n },\r\n {\r\n \"text\": \"television\",\r\n \"label\": \"B\",\r\n \"para\": \"fetched knowledge\"\r\n },\r\n {\r\n \"text\": \"refrigerator\",\r\n \"label\": \"C\",\r\n \"para\": \"fetched knowledge\"\r\n },\r\n {\r\n \"text\": \"airplane\",\r\n \"label\": \"D\",\r\n \"para\": \"fetched knowledge\"\r\n }\r\n ]\r\n },\r\n \"answerKey\": \"A\"\r\n}\r\n`" ]
1,582
1,586
1,582
NONE
null
# 🐛 Bug ## Information Model I am using Roberta. Language I am using the model on (English) The problem arises when using: * [ ] the official example scripts: (give details below) **https://github.com/huggingface/transformers/blob/master/examples/utils_multiple_choice.py** The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) Multiple choice question answering ## To reproduce Steps to reproduce the behavior: 1. run_multiple_choice.py with parameters as specified in the documentation replacing the task name to arc In the data, there is no such parameter called "para" contexts=[ options[0]["para"].replace("_", ""), options[1]["para"].replace("_", ""), options[2]["para"].replace("_", ""), options[3]["para"].replace("_", ""), ],
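The fix described in the comments — adding a `para` field to every choice — is easy to script. A minimal sketch, assuming the ARC split is stored one JSON object per line and using a hypothetical `arc.jsonl` filename:

```python
import json

# Give every answer choice a "para" field so that the
# options[i]["para"] lookups in utils_multiple_choice.py succeed.
with open("arc.jsonl") as f_in, open("arc_with_para.jsonl", "w") as f_out:
    for line in f_in:
        example = json.loads(line)
        for choice in example["question"]["choices"]:
            # Put retrieved knowledge here if available; a dummy string
            # is enough for a no-knowledge baseline.
            choice.setdefault("para", "fetched knowledge")
        f_out.write(json.dumps(example) + "\n")
```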
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2911/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2910/comments
https://api.github.com/repos/huggingface/transformers/issues/2910/events
https://github.com/huggingface/transformers/issues/2910
567,751,259
MDU6SXNzdWU1Njc3NTEyNTk=
2,910
`PreTrainedTokenizerFast.build_inputs_with_special_tokens` doesn't add the special tokens
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Hi @bryant1410, \r\n\r\nThanks for reporting the issue, as a workaround for now, can you try the following:\r\n\r\n```python\r\ntokenizer.tokenize(\"abc\")\r\ntokenizer.tokenizer(\"def\")\r\n```\r\n\r\nIt should do the same, let me know.\r\nIn the meantime I'll have a closer look at the function `tokenizer.build_inputs_with_special_tokens`", "Should be fixed in https://github.com/huggingface/transformers/pull/2912" ]
1,582
1,582
1,582
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BertTokenizer (but seems to apply to most) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') tokenizer.build_inputs_with_special_tokens(["abc", "def"]) ``` The output is `['abc', 'def']`. ## Expected behavior The output should include the `[CLS]` and `[SEP]` tokens. The problem is neither `PreTrainedTokenizerFast` nor its subclasses override `build_inputs_with_special_tokens`. ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.0 - Platform: Linux - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0, w/o GPU - Tensorflow version (GPU?): - - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No.
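Until the fix landed in #2912, a workaround in the spirit of the comments above is to rely on `encode`, which inserts the special tokens itself. A small sketch (the exact word pieces in the output depend on the vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# encode()/encode_plus() add [CLS] and [SEP] when add_special_tokens=True
# (the default), unlike the broken build_inputs_with_special_tokens here.
ids = tokenizer.encode("abc", "def", add_special_tokens=True)
print(tokenizer.convert_ids_to_tokens(ids))
# -> ['[CLS]', ..., '[SEP]', ..., '[SEP]'] with the word pieces in between
```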
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2909/comments
https://api.github.com/repos/huggingface/transformers/issues/2909/events
https://github.com/huggingface/transformers/pull/2909
567,715,087
MDExOlB1bGxSZXF1ZXN0Mzc3Mjg5NDE1
2,909
Add slow generate tests for pretrained lm models
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Please incorporate my comments on this file in the other PR :)\r\n\r\nis on my radar :-) ", "UPDATE: Changed slow tests for language generation design according to discussion in PR #2885 .\r\nIf this looks alright, I'll add test cases for the other LMModels @LysandreJik & @sshleifer ", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=h1) Report\n> Merging [#2909](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2909/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2909 +/- ##\n==========================================\n- Coverage 77.16% 76.12% -1.04% \n==========================================\n Files 98 98 \n Lines 15997 15997 \n==========================================\n- Hits 12344 12178 -166 \n- Misses 3653 3819 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.43% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=footer). Last update [38f5fe9...0c5bdef](https://codecov.io/gh/huggingface/transformers/pull/2909?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Updated the language model generation slow tests following Roberta's and Bart's Integration Test style. What do you think? @LysandreJik @sshleifer ", "Finished to add hard-coded tests for all models with LMHead: `GPT2, OpenAI, XLNet, TransfoXL, CTRL and XLM.`\r\n\r\nAll pretrained models generate reasonable results **except** `XLM`. Might need to take a closer look in a future PR.\r\n\r\nAlso future PRs TODO:\r\n\r\n- [ ] Add hardcoded tests for seq-to-seq language generation\r\n- [ ] Add hardcoded tests for DoubleHead language generation", "Don't feel strongly, but I would consider deleting the XLM example so that the tests don't enforce bad generations.\r\nAlso, if it is possible to make shorter examples (or more than one token per line), it would make the code more readable.\r\nOverall, I love this. Makes me feel a lot safer editing generation code!", "We could also add a sentence to `Contributing.md` telling people who change the generation code which command to run to make sure they didn't break stuff\r\n" ]
1,582
1,582
1,582
MEMBER
null
Move implementation of slow hardcoded generate models to this PR from #2885. Check out the previous discussion in PR #2885
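For reference, the shape of such a hardcoded generation test might look like the sketch below — greedy decoding makes the output deterministic, which is what allows pinning a string at all. The `gpt2` checkpoint is assumed, and the expected continuation is a placeholder to be copied from a known-good run:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def test_gpt2_hardcoded_generation():
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = torch.tensor([tokenizer.encode("The dog")])
    # do_sample=False -> greedy decoding, hence reproducible output.
    output = model.generate(input_ids, max_length=20, do_sample=False)
    text = tokenizer.decode(output[0], skip_special_tokens=True)

    expected = "The dog ..."  # placeholder: paste the model's actual output
    assert text == expected
```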
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2909/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2909", "html_url": "https://github.com/huggingface/transformers/pull/2909", "diff_url": "https://github.com/huggingface/transformers/pull/2909.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2909.patch", "merged_at": 1582563118000 }
https://api.github.com/repos/huggingface/transformers/issues/2908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2908/comments
https://api.github.com/repos/huggingface/transformers/issues/2908/events
https://github.com/huggingface/transformers/issues/2908
567,634,883
MDU6SXNzdWU1Njc2MzQ4ODM=
2,908
Model I am using Roberta
{ "login": "nrjvarshney", "id": 19836137, "node_id": "MDQ6VXNlcjE5ODM2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrjvarshney", "html_url": "https://github.com/nrjvarshney", "followers_url": "https://api.github.com/users/nrjvarshney/followers", "following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}", "gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions", "organizations_url": "https://api.github.com/users/nrjvarshney/orgs", "repos_url": "https://api.github.com/users/nrjvarshney/repos", "events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}", "received_events_url": "https://api.github.com/users/nrjvarshney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,582
1,582
1,582
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2908/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2907/comments
https://api.github.com/repos/huggingface/transformers/issues/2907/events
https://github.com/huggingface/transformers/issues/2907
567,579,828
MDU6SXNzdWU1Njc1Nzk4Mjg=
2,907
Help needed with interpretation of the MLP class
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "h56cho: I'm not sure if you ask about the code or the algorithm.\r\nAs far as I understand from the code, the class MLP is a basic 2-layer neural network (2 consecutive conv1d + gelu activation). This will be used to construct a bigger [network](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L215). I hope this (partially) answers your Q2 question.\r\nQ1. For the code, I think the comment is related to the number 3072 mentioned in this [link](https://jalammar.github.io/illustrated-gpt2/)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,582
1,593
1,593
NONE
null
Hello, I am having some trouble understanding the MLP function used in the Hugging Face GPT-2, which is found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L204). Q1. For MLP, why are we setting the n_state to be equal to 3072, which is 4 * n_embd? Q2. Below is the definition for the MLP class: ```python class MLP(nn.Module): def __init__(self, n_state, config): # in MLP: n_state=3072 (4 * n_embd) super().__init__() nx = config.n_embd self.c_fc = Conv1D(n_state, nx) self.c_proj = Conv1D(nx, n_state) self.act = gelu_new self.dropout = nn.Dropout(config.resid_pdrop) ``` in the MLP definition above, what exactly do the lines ``` Conv1D(n_state, nx)``` (the object ```self.c_fc```), and ``` Conv1D(nx, n_state)``` (the object ```self.c_proj```) do? Thank you,
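For context on both questions: `Conv1D(nf, nx)` in the GPT-2 code is effectively a fully connected layer that stores its weight transposed relative to `nn.Linear` (it computes `x @ weight + bias`), and `4 * n_embd` is the standard Transformer feed-forward expansion ratio. A self-contained sketch that re-states `Conv1D` minimally to show the shapes (not the library's exact class):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv1D(nn.Module):
    # Minimal re-statement: a linear layer computing x @ weight + bias,
    # with weight of shape (nx, nf) -- transposed vs. nn.Linear.
    def __init__(self, nf, nx):  # nf = output size, nx = input size
        super().__init__()
        self.weight = nn.Parameter(torch.randn(nx, nf) * 0.02)
        self.bias = nn.Parameter(torch.zeros(nf))

    def forward(self, x):
        return x @ self.weight + self.bias

n_embd = 768                          # GPT-2 base hidden size
c_fc = Conv1D(4 * n_embd, n_embd)     # expand: 768 -> 3072
c_proj = Conv1D(n_embd, 4 * n_embd)   # project back: 3072 -> 768

x = torch.randn(1, 10, n_embd)        # (batch, seq_len, hidden)
h = F.gelu(c_fc(x))                   # (1, 10, 3072)
y = c_proj(h)                         # (1, 10, 768)
print(h.shape, y.shape)
```

The 4x expansion is a design convention inherited from the original Transformer/GPT architectures rather than something forced by the rest of the model.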
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2907/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2906/comments
https://api.github.com/repos/huggingface/transformers/issues/2906/events
https://github.com/huggingface/transformers/issues/2906
567,579,770
MDU6SXNzdWU1Njc1Nzk3NzA=
2,906
documentation for TF models mentions non-existent methods
{ "login": "btel", "id": 41565, "node_id": "MDQ6VXNlcjQxNTY1", "avatar_url": "https://avatars.githubusercontent.com/u/41565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/btel", "html_url": "https://github.com/btel", "followers_url": "https://api.github.com/users/btel/followers", "following_url": "https://api.github.com/users/btel/following{/other_user}", "gists_url": "https://api.github.com/users/btel/gists{/gist_id}", "starred_url": "https://api.github.com/users/btel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/btel/subscriptions", "organizations_url": "https://api.github.com/users/btel/orgs", "repos_url": "https://api.github.com/users/btel/repos", "events_url": "https://api.github.com/users/btel/events{/privacy}", "received_events_url": "https://api.github.com/users/btel/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,582
1,582
1,582
CONTRIBUTOR
null
Documentation of `TFPreTrainedModel.from_pretrained` method mentions the `.model()` and `.eval()` methods that are not defined for tensorflow models: > The model is set in evaluation mode by default using ``model.eval()`` (Dropout modules are deactivated) > To train the model, you should first set it back in training mode with ``model.train()`` https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L195
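The TF-side equivalent of the quoted advice would be Keras's per-call `training` flag, since TF models have no `.eval()`/`.train()` mode switch. A hedged sketch:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = TFBertModel.from_pretrained("bert-base-cased")

input_ids = tf.constant([tokenizer.encode("Hello world")])
# Dropout is controlled per call: training=False (the default at
# inference) deactivates it; training=True enables it for fine-tuning.
outputs = model(input_ids, training=False)
```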
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2906/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2904/comments
https://api.github.com/repos/huggingface/transformers/issues/2904/events
https://github.com/huggingface/transformers/issues/2904
567,555,116
MDU6SXNzdWU1Njc1NTUxMTY=
2,904
squad_convert_example_to_features does not work with CamembertTokenizer
{ "login": "tbrendle", "id": 12280294, "node_id": "MDQ6VXNlcjEyMjgwMjk0", "avatar_url": "https://avatars.githubusercontent.com/u/12280294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tbrendle", "html_url": "https://github.com/tbrendle", "followers_url": "https://api.github.com/users/tbrendle/followers", "following_url": "https://api.github.com/users/tbrendle/following{/other_user}", "gists_url": "https://api.github.com/users/tbrendle/gists{/gist_id}", "starred_url": "https://api.github.com/users/tbrendle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tbrendle/subscriptions", "organizations_url": "https://api.github.com/users/tbrendle/orgs", "repos_url": "https://api.github.com/users/tbrendle/repos", "events_url": "https://api.github.com/users/tbrendle/events{/privacy}", "received_events_url": "https://api.github.com/users/tbrendle/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Solved by #2746 " ]
1,582
1,582
1,582
NONE
null
# 🐛 Bug ## Information Model I am using : CamemBERT Language I am using the model on : French The problem arises when using: * [*] my own modified scripts: (give details below) The tasks I am working on is: * [*] an official GLUE/SQUaD task: SQUaD ## To reproduce Steps to reproduce the behavior: 1 - Copy paste this and run it ```python from transformers import CamembertTokenizer, SquadExample, squad_convert_examples_to_features tokenizer = CamembertTokenizer.from_pretrained('camembert-base') example = SquadExample( 'example_id', "Q", "C D E F G H", "D", 2, "title" ) features, _ = squad_convert_examples_to_features( examples=[ example ], tokenizer=tokenizer, max_seq_length=30, doc_stride=128, max_query_length=128, is_training=True, return_dataset="pt", threads=1, ) tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(example.question_text, example.context_text)) doc_token = example.doc_tokens print({tokens[k]: doc_token[v] for k, v in features[0].token_to_orig_map.items()}) # Outputs # {'</s>': 'C', '▁C': 'D', '▁D': 'E', '▁E': 'F', '▁F': 'G', '▁G': 'H'} # Should be # {'▁C': 'C', '▁D': 'D', '▁E': 'E', '▁F': 'F', '▁G': 'G', '▁H': 'H'} ``` ## Expected behavior The resulting features mapping is shifted by one when using the CamembertTokenizer. This seems to be caused by a weird check in the method `squad_convert_example_to_features`. (`if "roberta" in str(type(tokenizer))`) is evaluated to False when using the CamembertTokenizer (which is adapted from RobertaTokenizer) When I patch the line `if "roberta" in str(type(tokenizer))` by `if "roberta" in str(type(tokenizer)) or "camembert" in str(type(tokenizer))`, I get the expected behavior. I do not really know what would be the best way to handle this problem. ## Environment info - `transformers` version: 2.4.1 - Platform: MacOS - Python version: 3.7.6 - PyTorch version : 1.4 - Tensorflow version : None - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
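A sketch of a more robust variant of the proposed patch — dispatching on tokenizer classes rather than substring-matching the type name; whether this tuple covers every affected tokenizer is an assumption:

```python
from transformers import CamembertTokenizer, RobertaTokenizer

def is_roberta_like(tokenizer):
    # RoBERTa-style tokenizers use a different special-token layout
    # between question and context, which shifts token_to_orig_map by one.
    return isinstance(tokenizer, (RobertaTokenizer, CamembertTokenizer))
```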
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2904/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2903/comments
https://api.github.com/repos/huggingface/transformers/issues/2903/events
https://github.com/huggingface/transformers/pull/2903
567,502,692
MDExOlB1bGxSZXF1ZXN0Mzc3MTE0NDk1
2,903
Update to include example of LM
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=h1) Report\n> Merging [#2903](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20fc18fbda3669c2f4a3510e0705b2acd54bff07?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2903/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2903 +/- ##\n==========================================\n- Coverage 75% 73.92% -1.08% \n==========================================\n Files 94 94 \n Lines 15288 15288 \n==========================================\n- Hits 11467 11302 -165 \n- Misses 3821 3986 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=footer). Last update [20fc18f...afa57d9](https://codecov.io/gh/huggingface/transformers/pull/2903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "[Looks good!](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1)", "@julien-c Could you perhaps shed some light on how the TF checkpoints should be uploaded? 
@iliaschalkidis asked about it here https://github.com/huggingface/transformers/issues/2901#issuecomment-588163505", "In the past for another project, this (`from_pt=True`) did the dirty trick for me:\r\n\r\n```python\r\nbert = BERT.from_pretrained(model_path+'pytorch_model.bin',\r\n from_pt=True,\r\n config=BertConfig().from_pretrained(model_path+'config.json'))\r\n```\r\nbut I definitely not recommend this...\r\n\r\nI have already uploaded the TF checkpoint files (`model_ckpt.data-00000-of-00001`, `model_ckpt.index`, `model_ckpt.meta`) in model's folder, so please feel free to troubleshoot.", "see https://github.com/huggingface/transformers/issues/2901#issuecomment-591710959" ]
1,582
1,582
1,582
NONE
null
The model files have been updated in order to include the classification layers, based on https://github.com/huggingface/transformers/issues/2901, and now can be also used as a LM.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2903", "html_url": "https://github.com/huggingface/transformers/pull/2903", "diff_url": "https://github.com/huggingface/transformers/pull/2903.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2903.patch", "merged_at": 1582214280000 }
https://api.github.com/repos/huggingface/transformers/issues/2902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2902/comments
https://api.github.com/repos/huggingface/transformers/issues/2902/events
https://github.com/huggingface/transformers/issues/2902
567,467,949
MDU6SXNzdWU1Njc0Njc5NDk=
2,902
Convert BERT to RoBERTa
{ "login": "etetteh", "id": 28512232, "node_id": "MDQ6VXNlcjI4NTEyMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/etetteh", "html_url": "https://github.com/etetteh", "followers_url": "https://api.github.com/users/etetteh/followers", "following_url": "https://api.github.com/users/etetteh/following{/other_user}", "gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/etetteh/subscriptions", "organizations_url": "https://api.github.com/users/etetteh/orgs", "repos_url": "https://api.github.com/users/etetteh/repos", "events_url": "https://api.github.com/users/etetteh/events{/privacy}", "received_events_url": "https://api.github.com/users/etetteh/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1834081910, "node_id": "MDU6TGFiZWwxODM0MDgxOTEw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Usage", "name": "Usage", "color": "e28436", "default": false, "description": "General questions about the library" } ]
closed
false
null
[]
[ "The differences between the BERT and RoBERTa model are the following:\r\n\r\n- Different pre-training (larger batch size for RoBERTa, no NSP, no token type ids ...)\r\n- Different tokenizer\r\n\r\nThe model architecture is exactly the same. The only real difference after the pre-training is the difference in tokenization. Since you're working with a BERT model that was pre-trained, you unfortunately won't be able to change the tokenizer now from a WordPiece (BERT) to a Byte-level BPE (RoBERTa)." ]
1,582
1,582
1,582
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation Given that RoBERTa outperformed BERT on several tasks, yet having a slight architecture modification, I want to know if it is possible to convert a BERT pretrained model to RoBERTa. I am working with a BERT pretrained model on a domain specific corpora, given that I don't have the resources to train RoBERTa from scratch I want to know if I can convert the model to RoBERTa. If yes, how do I go about it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2902/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2901
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2901/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2901/comments
https://api.github.com/repos/huggingface/transformers/issues/2901/events
https://github.com/huggingface/transformers/issues/2901
567,446,250
MDU6SXNzdWU1Njc0NDYyNTA=
2,901
Pre-trained BERT-LM missing LM Head - returns random token predictions
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "My guess would be that you'd need to load (and then save) with `BertForPreTraining` rather than `BertModel`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_bert.py#L806", "@BramVanroy you're my hero for today! Appreciated man! \r\n\r\nTest Cases:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import *\r\n\r\nmodel_path = '/Users/kiddothe2b/Downloads/bert-base-greek-uncased-v2'\r\ntokenizer_greek = AutoTokenizer.from_pretrained(model_path)\r\nlm_model_greek = AutoModelWithLMHead.from_pretrained(model_path)\r\n\r\n# ================ EXAMPLE 1 ================\r\ntext_1 = 'O ποιητής έγραψε ένα [MASK] .'\r\n# EN: The poet wrote a [MASK] . '\r\ninput_ids = tokenizer_greek.encode(text_1)\r\nprint(tokenizer_greek.convert_ids_to_tokens(input_ids))\r\n# ['[CLS]', 'o', 'ποιητης', 'εγραψε', 'ενα', '[MASK]', '.', '[SEP]']\r\noutputs = lm_model_greek(torch.tensor([input_ids]))[0]\r\nprint(tokenizer_greek.convert_ids_to_tokens(outputs[0, 5].max(0)[1].item()))\r\n# the most plausible prediction for [MASK] is \"song\"\r\n\r\n# ================ EXAMPLE 2 ================\r\ntext_2 = 'Είναι ένας [MASK] άνθρωπος.'\r\n# EN: He is a [MASK] person. '\r\ninput_ids = tokenizer_greek.encode(text_1)\r\nprint(tokenizer_greek.convert_ids_to_tokens(input_ids))\r\n# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]']\r\noutputs = lm_model_greek(torch.tensor([input_ids]))[0]\r\nprint(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item()))\r\n# the most plausible prediction for [MASK] is \"good\"\r\n\r\n# ================ EXAMPLE 3 ================\r\ntext_3 = 'Είναι ένας [MASK] άνθρωπος και κάνει συχνά [MASK].'\r\n# EN: He is a [MASK] person he does frequently [MASK]. '\r\ninput_ids = tokenizer_greek.encode(text_3)\r\nprint(tokenizer_greek.convert_ids_to_tokens(input_ids))\r\n# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', 'και', 'κανει', 'συχνα', '[MASK]', '.', '[SEP]']\r\noutputs = lm_model_greek(torch.tensor([input_ids]))[0]\r\nprint(tokenizer_greek.convert_ids_to_tokens(outputs[0, 8].max(0)[1].item()))\r\n# the most plausible prediction for the second [MASK] is \"trips\"\r\n```\r\n\r\nI will release the updated version later today. Although I think this is a very important technical detail and needs to be referred in the examples.\r\n\r\nIs it possible to load the same saved model with TF2 binding?\r\n\r\n", "Tensorflow checkpoints can be loaded when using `from_pretrained`. Have a look at the documentation, particularly this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_utils.py#L317", "Ok, so it is also suggested to upload the initial TF checkpoint files [ `variables.data-00000-of-00001`, `variables.index`, `variables.meta`] through the CLI, otherwise people cannot use the model in TF2 with `AutoModelWith.from_pretrained()`? \r\n", "I am not sure to be honest. 
Perhaps someone else can help you further along.", "In the past for another project, this (`from_pt=True`) did the dirty trick for me:\r\n\r\n```python\r\nbert = BERT.from_pretrained(model_path+'pytorch_model.bin',\r\n from_pt=True,\r\n config=BertConfig().from_pretrained(model_path+'config.json'))\r\n```\r\nbut I definitely not recommend this...\r\n\r\nI have already uploaded the TF checkpoint files (`model_ckpt.data-00000-of-00001`, `model_ckpt.index`, `model_ckpt.meta`) in model's folder, so please feel free to troubleshoot.", "Looks like you managed to do it in the meantime?\r\n\r\n**For reference here's how to do it:** starting with version 2.5.0 thanks to this commit https://github.com/huggingface/transformers/pull/2765/commits/961c69776f8a2c95b92407a086848ebca037de5d\r\n\r\nYou can now just do \r\n```python\r\ntf_model = TFAutoModelForPreTraining.from_pretrained(\r\n \"nlpaueb/bert-base-greek-uncased-v1\", \r\n from_pt=True\r\n)\r\ntf_model.save_pretrained(dirname)\r\n```\r\n\r\n**And then you can upload the TF weights using the CLI.**\r\n\r\nOf course, if you have the model in a local folder, you don't need to use the remote id.\r\n\r\ncc @LysandreJik ", "> Looks like you managed to do it in the meantime?\r\n> \r\n> **For reference here's how to do it:** starting with version 2.5.0 thanks to this commit [961c697](https://github.com/huggingface/transformers/commit/961c69776f8a2c95b92407a086848ebca037de5d)\r\n> \r\n> You can now just do\r\n> \r\n> ```python\r\n> tf_model = TFAutoModelForPreTraining.from_pretrained(\r\n> \"nlpaueb/bert-base-greek-uncased-v1\", \r\n> from_pt=True\r\n> )\r\n> tf_model.save_pretrained(dirname)\r\n> ```\r\n> \r\n> Of course, if you have the model in a local folder, you don't need to use the remote id.\r\n> \r\n> cc @LysandreJik\r\n\r\nDoes that imply that the CLI upload should only upload PyTorch checkpoints? (I suppose for saving space.) I am asking because the documentation emphasises that loading PT checkpoints to TF and the other way around is quite slow. Also, if all checkpoints are indeed PyTorch only, it might be useful to set from_pt=True automatically when a model is fetched from the HF bucket (since those would then all contain PT checkpoints anyway).", "Thanx @julien-c and @BramVanroy \r\n\r\nIndeed I followed a very similar process:\r\n\r\n```python\r\n\r\nfrom transformers import TFBertForPreTraining\r\nfrom transformers import BertConfig\r\nmodel_path = '/home/ichalkidis/greek_bert/'\r\nbert = TFBertForPreTraining.from_pretrained(model_path + 'pytorch_model.bin',\r\n config=BertConfig().from_pretrained(model_path + 'config.json'), from_pt=True)\r\n\r\nbert.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')\r\n```\r\n\r\nFrom now on, we also serve the `tf_model.h5` and everyone will be able to load the model in TF2 without any further issue.", "Ah, it seems that I misunderstood, then. In terms of the models themselves, you can upload the tf_model.h5 and the pytorch_model.bin, and when someone requests `/home/ichalkidis/bert-base-greek-uncased-v1/` based on the framework (TF or PT), the appropriate model (.h5 or .bin) is downloaded?", "@BramVanroy, yes that's how it works! 
You can also explicitely specify `from_pt`/`from_tf` to `from_pretrained` for the model to fetch the other framework's checkpoint and convert it.", "Yep @BramVanroy , I added the line \"And then you can upload the TF weights using the CLI.\" to my comment above to try and make that clearer", "I guess we can close this issue now @iliaschalkidis?", "Of course @julien-c . In the end, this was just a misunderstanding, not a bug at all. Thank all of you for your help!" ]
1,582
1,582
1,582
NONE
null
# 🐛 Bug ## Information I released Greek BERT, almost a week ago and so far I'm exploring its use by running some benchmarks in Greek datasets. Although Greek BERT works just fine for sequence tagging (`AutoModelForTokenClassification`) and text classification (`AutoModelForSequenceClassification`), there are issues when we try to to use it as Language Model (`AutoModelWithLMHead`) in order to predict masked tokens. The bug was originally reported in (https://github.com/nlpaueb/greek-bert/issues/1) by @jzbjyb. ## To reproduce - `transformers` version: - Platform: Linux / Mac OS - Python version: 3.7 - PyTorch version (GPU?): 1.0.1 - Tensorflow version (GPU?): 2.1 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No The model has been trained using the official BERT release (https://github.com/google-research/bert) originally converted from Tensorflow checkpoints using the library script: ``` python transformers/convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path /home/ichalkidis/greek_bert/variables --bert_config_file=/home/ichalkidis/greek_bert/config.json --pytorch_dump_path=/home/ichalkidis/greek_bert/pytorch_model.bin ``` and then exported accompanied by the tokenizer files using: ```python from transformers import BertModel from transformers import BertConfig, BertTokenizer model_path = '/home/ichalkidis/greek_bert/' bert = BertModel.from_pretrained(model_path + 'pytorch_model.bin', config=BertConfig().from_pretrained(model_path + 'config.json')) bert.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/') tokenizer = BertTokenizer.from_pretrained(model_path+'vocab.txt') tokenizer.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/') ``` You can replicate the inconsistent behaviour of the LM with the following script: ```python import torch from transformers import * text = 'Είναι ένας [MASK] άνθρωπος.' tokenizer_greek = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1') lm_model_greek = AutoModelWithLMHead.from_pretrained('nlpaueb/bert-base-greek-uncased-v1') input_ids = tokenizer_greek.encode(text) print(tokenizer_greek.convert_ids_to_tokens(input_ids)) # ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]'] outputs = lm_model_greek(torch.tensor([input_ids]))[0] print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item())) # the most plausible prediction for [MASK] is changing in every single run ``` It is obvious that the LM Head (layer) is missing from `pytorch_model.bin`, my main questions and report are: * How we could preserve this layer moving from the TF checkpoint to the final PyTorch model? * Is it possible to serve the model both in Pytorch and TF binaries?
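The resolution that emerges in the comments above is to export with a class that keeps the pre-training heads. A condensed sketch of the corrected conversion, reusing the paths from the report:

```python
from transformers import BertConfig, BertForPreTraining, BertTokenizer

model_path = '/home/ichalkidis/greek_bert/'

# BertForPreTraining keeps the MLM/NSP heads that plain BertModel drops,
# so the exported checkpoint can later back AutoModelWithLMHead.
config = BertConfig.from_pretrained(model_path + 'config.json')
bert = BertForPreTraining.from_pretrained(model_path + 'pytorch_model.bin', config=config)
bert.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')

tokenizer = BertTokenizer.from_pretrained(model_path + 'vocab.txt')
tokenizer.save_pretrained('/home/ichalkidis/bert-base-greek-uncased-v1/')
```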
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2901/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2900/comments
https://api.github.com/repos/huggingface/transformers/issues/2900/events
https://github.com/huggingface/transformers/pull/2900
567,403,615
MDExOlB1bGxSZXF1ZXN0Mzc3MDM0NjIw
2,900
pull from original
{ "login": "perfmjs", "id": 3114391, "node_id": "MDQ6VXNlcjMxMTQzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3114391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/perfmjs", "html_url": "https://github.com/perfmjs", "followers_url": "https://api.github.com/users/perfmjs/followers", "following_url": "https://api.github.com/users/perfmjs/following{/other_user}", "gists_url": "https://api.github.com/users/perfmjs/gists{/gist_id}", "starred_url": "https://api.github.com/users/perfmjs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/perfmjs/subscriptions", "organizations_url": "https://api.github.com/users/perfmjs/orgs", "repos_url": "https://api.github.com/users/perfmjs/repos", "events_url": "https://api.github.com/users/perfmjs/events{/privacy}", "received_events_url": "https://api.github.com/users/perfmjs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,582
1,582
1,582
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2900", "html_url": "https://github.com/huggingface/transformers/pull/2900", "diff_url": "https://github.com/huggingface/transformers/pull/2900.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2900.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2899/comments
https://api.github.com/repos/huggingface/transformers/issues/2899/events
https://github.com/huggingface/transformers/issues/2899
567,295,418
MDU6SXNzdWU1NjcyOTU0MTg=
2,899
RobertaTokenizer different than fairseq for 'world'
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Reading more, pretty sure we only expect to have the same results as fairseq when the argument to fairseq starts with a space. Closing but would love verification/knowledge !", "Yes, this comes from #2778, which changes the default behavior to automatically prepending a space when `add_special_tokens=True` for Roberta, since you want a space after `<s>`. Can be overriden with `add_prefix_space=False`. This does deviate from fairseq's encode fn, ~~but reflects the behavior of their `fill_mask` [which also prepends a space](https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/hub_interface.py#L149).~~ Nvm, fairseq's `fill_mask` function doesn't prepend a space after all. They expect the user to know that they have to prepend a space to get correctly encoded sequences." ]
1,582
1,582
1,582
CONTRIBUTOR
null
`pip install fairseq` ``` roberta = torch.hub.load('pytorch/fairseq', 'roberta.base') rt = RobertaTokenizer.from_pretrained('roberta-base') for ex in ['Hello world', ' Hello world', ' world', 'world', 'Hello', ' Hello']: print(f'{ex} fairseq: {roberta.encode(ex).tolist()}, Transformers: {rt.encode(ex, add_prefix_space=True)}') >>> Hello world fairseq: [0, 31414, 232, 2], Transformers: [0, 20920, 232, 2] Hello world fairseq: [0, 20920, 232, 2], Transformers: [0, 20920, 232, 2] world fairseq: [0, 232, 2], Transformers: [0, 232, 2] world fairseq: [0, 8331, 2], Transformers: [0, 232, 2] Hello fairseq: [0, 31414, 2], Transformers: [0, 20920, 2] Hello fairseq: [0, 20920, 2], Transformers: [0, 20920, 2] ``` Notice that even the token "world" is different, but the results are always the same with leading spaces. Is this related to @joeddav's recent work? h/t @pnpnpn for uncovering :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2899/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2898/comments
https://api.github.com/repos/huggingface/transformers/issues/2898/events
https://github.com/huggingface/transformers/issues/2898
567,242,064
MDU6SXNzdWU1NjcyNDIwNjQ=
2,898
Language modeling example script missing the next sentence prediction
{ "login": "salmanmashayekh", "id": 48693449, "node_id": "MDQ6VXNlcjQ4NjkzNDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/48693449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/salmanmashayekh", "html_url": "https://github.com/salmanmashayekh", "followers_url": "https://api.github.com/users/salmanmashayekh/followers", "following_url": "https://api.github.com/users/salmanmashayekh/following{/other_user}", "gists_url": "https://api.github.com/users/salmanmashayekh/gists{/gist_id}", "starred_url": "https://api.github.com/users/salmanmashayekh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salmanmashayekh/subscriptions", "organizations_url": "https://api.github.com/users/salmanmashayekh/orgs", "repos_url": "https://api.github.com/users/salmanmashayekh/repos", "events_url": "https://api.github.com/users/salmanmashayekh/events{/privacy}", "received_events_url": "https://api.github.com/users/salmanmashayekh/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" } ]
closed
false
null
[]
[ "It is both b) and c) :).", "Hello @LysandreJik and @BramVanroy \r\n\r\nDid you have any results in training run_language_modeling.py for some language from scratch (i mean with and without NSP (next sentence prediction) as is in that script) ?\r\n\r\nDid you get better or relatively the same losses (and perplexity, accuracy) by first doing MLM then \r\nNSP ?\r\n\r\nwhat were your warmup steps and block size (maximum sequence length) for each (MLM , NSP) task , if you have down them separately ? ", "as @LysandreJik mentioned here\r\n \r\nhttps://github.com/huggingface/transformers/issues/2693#issuecomment-589819382\r\n\r\n\"the RoBERTa paper has proven that the NSP objective was not particularly helpful\"\r\n \r\nIs that also right for BERT ? (training for some language from scratch)\r\n", "If it is right , I think we can set block size (maximum sequence length) equal to 64 for MLM\r\n \r\nBecause the original paper used 128 (except for 10% of the last steps) , and it had two sentences (A and B for the sake of NSP and model learns that it shouldn't see B for filling masked words in A, because A and B aren't relevant to each other half of the times) which means average block size of 64 for each one \r\n\r\nAnd it means grate speed up in training BERT, if I say right\r\nbecause, I haven't got a TPU or even enough GPU to do that \r\n\r\nAs mentioned in original paper of BERT \"Longer sequences are disproportionately expensive because attention is quadratic to the sequence length\"", "Perhaps, another idea is that \r\nwords in sentence A do attention on sentence B, anyway\r\nand that attention is very important to get great results in MLM task (here I mean block size of 128 which means 64 for A and another 64 for B)\r\n\r\neven regarding the fact that A and B are related sentences, just half of the times \r\n(and that attentions are the main inputs for task NSP)\r\n\r\nAnd by using block size of 64 (and not doing task NSP during the training) , I will get very bad results " ]
1,582
1,590
1,582
NONE
null
The example script in `run_language_modeling.py` does not include the next sentence prediction for pre-training BERT. I was wondering if that is a) an oversight, b) for simplicity, or c) because you have found its impact to be non-significant?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2898/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2898/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2897/comments
https://api.github.com/repos/huggingface/transformers/issues/2897/events
https://github.com/huggingface/transformers/issues/2897
567,222,545
MDU6SXNzdWU1NjcyMjI1NDU=
2,897
save_pretrained doesn't work with GPT2TokenizerFast
{ "login": "bilal2vec", "id": 29356759, "node_id": "MDQ6VXNlcjI5MzU2NzU5", "avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilal2vec", "html_url": "https://github.com/bilal2vec", "followers_url": "https://api.github.com/users/bilal2vec/followers", "following_url": "https://api.github.com/users/bilal2vec/following{/other_user}", "gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions", "organizations_url": "https://api.github.com/users/bilal2vec/orgs", "repos_url": "https://api.github.com/users/bilal2vec/repos", "events_url": "https://api.github.com/users/bilal2vec/events{/privacy}", "received_events_url": "https://api.github.com/users/bilal2vec/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "After upgrading to 2.5.0, the code now throws\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py\", line 587, in save_pretrained\r\n return vocab_files + (special_tokens_map_file, added_tokens_file)\r\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'tuple'\r\n```", "Hi @bkkaggle , \r\n\r\nThanks for reporting the issue, it should be fixed through https://github.com/huggingface/transformers/pull/2918 and land very soon on master.\r\n\r\nI'll be included in the first maintenance release following 2.5\r\n\r\nMorgan", "Saving tokenizers works now, but restoring them doesn't\r\n```\r\n> from transformers import *\r\n> tok = GPT2TokenizerFast.from_pretrained('distilgpt2')\r\n> tok.save_pretrained('./')\r\n('./vocab.json-vocab.json', './vocab.json-merges.txt', './special_tokens_map.json', './added_tokens.json')\r\n> tok = GPT2TokenizerFast.from_pretrained('./')\r\nRobertaTokenizerFast has an issue when working on mask language modeling where it introduces an extra encoded space before the mask token.See https://github.com/huggingface/transformers/pull/2778 for more information.\r\n> tok.tokenize('test')\r\n[]\r\n```", "I can't reproduce on my side, using your code:\r\n\r\n```python\r\ntok = GPT2TokenizerFast.from_pretrained('distilgpt2')\r\ntok.save_pretrained('./')\r\n> ('./vocab.json-vocab.json', './vocab.json-merges.txt', './special_tokens_map.json', './added_tokens.json')\r\n\r\ntok = GPT2TokenizerFast.from_pretrained('./')\r\ntok.tokenize('test')\r\n> ['test']\r\n```", "I made a colab notebook to reproduce the error\r\nThe error appears when installing from source on the master branch\r\n\r\ncolab: https://colab.research.google.com/drive/1OJdm6LzVtyb-biVR1ky6joSX7gBgl6St", "Thanks, I'm able to reproduce now, I'll have a look hopefully tomorrow morning.\r\n\r\nI'll keep you posted here 👀 ", "Fixed" ]
1,582
1,582
1,582
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2TokenizerFast Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python > from transformers import * > tok = GPT2TokenizerFast.from_pretrained('distilgpt2') > tok.save_pretrained('./') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py", line 519, in save_pretrained vocab_files = self.save_vocabulary(save_directory) File "/Users/bilal/Documents/transformers/src/transformers/tokenization_utils.py", line 529, in save_vocabulary raise NotImplementedError NotImplementedError ``` ## Expected behavior The tokenizer should be able to be saved ## Environment info - `transformers` version: 2.4.1 - Platform: Darwin-19.3.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.3.1 (False) - Tensorflow version (GPU?): 2.0.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2897/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2896/comments
https://api.github.com/repos/huggingface/transformers/issues/2896/events
https://github.com/huggingface/transformers/issues/2896
567,173,047
MDU6SXNzdWU1NjcxNzMwNDc=
2,896
'BertModel' object missing 'save_pretrained' attribute
{ "login": "ArashAskary", "id": 37027721, "node_id": "MDQ6VXNlcjM3MDI3NzIx", "avatar_url": "https://avatars.githubusercontent.com/u/37027721?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArashAskary", "html_url": "https://github.com/ArashAskary", "followers_url": "https://api.github.com/users/ArashAskary/followers", "following_url": "https://api.github.com/users/ArashAskary/following{/other_user}", "gists_url": "https://api.github.com/users/ArashAskary/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArashAskary/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArashAskary/subscriptions", "organizations_url": "https://api.github.com/users/ArashAskary/orgs", "repos_url": "https://api.github.com/users/ArashAskary/repos", "events_url": "https://api.github.com/users/ArashAskary/events{/privacy}", "received_events_url": "https://api.github.com/users/ArashAskary/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, `from_pretrained` appeared in an older version of the library. `pytorch-pretrained-BERT` is a year old, is less robust and lacks certain functionalities (such as the one you mentioned) which are present in `transformers`." ]
1,582
1,582
1,582
NONE
null
I was attempting to download a pre-trained BERT model & save it to my cloud directory using Google Colab. model.save_pretrained() seems to be missing completely for some reason. Link to Colab notebook: https://colab.research.google.com/drive/1ix_nNhsd89nLfTy6Nyh-Ak8PHn1SYm-0 Here's my code: ``` !pip install pytorch_pretrained_bert import torch from pytorch_pretrained_bert import BertTokenizer, BertModel import pandas as pd import numpy as np ### Let's download a model and tokenizer model = BertModel.from_pretrained('bert-base-uncased') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ### Now let's save our model and tokenizer to a directory model.save_pretrained('./models/') tokenizer.save_pretrained('./models/') ``` Error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-18-1a3b2c8b8e82> in <module>() 11 12 ### Now let's save our model and tokenizer to a directory ---> 13 model.save_pretrained('./models/') 14 tokenizer.save_pretrained('./models/') 15 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 574 return modules[name] 575 raise AttributeError("'{}' object has no attribute '{}'".format( --> 576 type(self).__name__, name)) 577 578 def __setattr__(self, name, value): AttributeError: 'BertModel' object has no attribute 'save_pretrained' ```
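As the reply in the comments notes, the missing method exists in `transformers`, the successor to `pytorch-pretrained-BERT`; a minimal sketch of the same save flow there:

```python
# pip install transformers  (pytorch_pretrained_bert predates save_pretrained)
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

model.save_pretrained('./models/')      # writes pytorch_model.bin and config.json
tokenizer.save_pretrained('./models/')  # writes the vocab and special-tokens files
```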
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2896/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2895/comments
https://api.github.com/repos/huggingface/transformers/issues/2895/events
https://github.com/huggingface/transformers/pull/2895
567,091,007
MDExOlB1bGxSZXF1ZXN0Mzc2Nzc5ODY3
2,895
Enable 'from transformers import AlbertMLMHead'
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=h1) Report\n> Merging [#2895](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2895/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2895 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15288 15288 \n==========================================\n- Hits 11476 11311 -165 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2895/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=footer). Last update [2ae9833...014ad24](https://codecov.io/gh/huggingface/transformers/pull/2895?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "(I removed myself because GitHub suggests me as a reviewer on every PR because of a refactoring, and I can't review every PR, not because this is a bad idea. The PR is most likely good.)" ]
1,582
1,582
1,582
CONTRIBUTOR
null
Discussed at https://github.com/huggingface/transformers/issues/2894. I'm writing a custom pretraining script that incorporates both the masked language modeling (MLM) and sentence order prediction (SOP) objectives. I'm able to use the TFAlbertForMaskedLM model for the MLM objective, but need access to the last_hidden_state to write my SOP objective. I can do this if I have a raw TFAlbertModel and write my own MLM objective, but would prefer to just create my own model from pre-existing modularized components. I know that there's a lot of care taken in API design, and the team may have explicitly decided against this. But if it is an option, it would make the transformers repo much more extensible for research.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2895/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2895", "html_url": "https://github.com/huggingface/transformers/pull/2895", "diff_url": "https://github.com/huggingface/transformers/pull/2895.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2895.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2894/comments
https://api.github.com/repos/huggingface/transformers/issues/2894/events
https://github.com/huggingface/transformers/issues/2894
567,084,724
MDU6SXNzdWU1NjcwODQ3MjQ=
2,894
Allow import of model components, e.g. `from transformers import TFAlbertMLMHead`
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Why don't you import it directly via\r\n`from transformers.modeling_tf_albert import TFAlbertMLMHead`\r\n?\r\nIn my opinion it is not good practise to expose everything via `__init__.py` because the autocomplete feature of an IDE will become messy.", "That's true, don't know why that slipped my mind. Thanks for the suggestion!" ]
1,582
1,582
1,582
CONTRIBUTOR
null
# 🚀 Feature request Expose model components, such as custom layers like `TFAlbertMLMHead`, which are defined in `modeling_tf_albert.py` and other modeling files. ## Motivation I'm writing a custom pretraining script that incorporates both the masked language modeling (MLM) and sentence order prediction (SOP) objectives. I'm able to use the `TFAlbertForMaskedLM` model for the MLM objective, but need access to the `last_hidden_state` to write my SOP objective. I can do this if I have a raw `TFAlbertModel` and write my own MLM objective, but would prefer to just create my own model from pre-existing modularized components. ## Your contribution I can contribute this; it would just mean modifying `__init__.py`. I know that there's a lot of care taken in API design, and the team may have explicitly decided against this. But if it is an option, it would make the transformers repo much more extensible for research.
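Following the suggestion in the comments above, a sketch of composing MLM and SOP heads via a direct submodule import; the module path and `TFAlbertMLMHead` constructor signature reflect the 2.x layout, and the SOP head wiring here is purely illustrative:

```python
import tensorflow as tf
from transformers import AlbertConfig, TFAlbertModel
from transformers.modeling_tf_albert import TFAlbertMLMHead  # internal path, not re-exported


class TFAlbertForMLMAndSOP(tf.keras.Model):
    """Sketch: shared ALBERT encoder with an MLM head and a sentence-order head."""

    def __init__(self, pretrained_name="albert-base-v2", **kwargs):
        super().__init__(**kwargs)
        config = AlbertConfig.from_pretrained(pretrained_name)
        self.albert = TFAlbertModel.from_pretrained(pretrained_name)
        # 2.x signature: TFAlbertMLMHead(config, input_embeddings)
        self.mlm_head = TFAlbertMLMHead(config, self.albert.albert.embeddings, name="predictions")
        self.sop_head = tf.keras.layers.Dense(2, name="sop_classifier")  # illustrative SOP head

    def call(self, inputs, **kwargs):
        sequence_output, pooled_output = self.albert(inputs, **kwargs)[:2]
        return self.mlm_head(sequence_output), self.sop_head(pooled_output)
```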
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2894/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2893/comments
https://api.github.com/repos/huggingface/transformers/issues/2893/events
https://github.com/huggingface/transformers/issues/2893
567,084,440
MDU6SXNzdWU1NjcwODQ0NDA=
2,893
Pipeline Loading Models and Tokenizers
{ "login": "rcontesti", "id": 13105045, "node_id": "MDQ6VXNlcjEzMTA1MDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/13105045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcontesti", "html_url": "https://github.com/rcontesti", "followers_url": "https://api.github.com/users/rcontesti/followers", "following_url": "https://api.github.com/users/rcontesti/following{/other_user}", "gists_url": "https://api.github.com/users/rcontesti/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcontesti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcontesti/subscriptions", "organizations_url": "https://api.github.com/users/rcontesti/orgs", "repos_url": "https://api.github.com/users/rcontesti/repos", "events_url": "https://api.github.com/users/rcontesti/events{/privacy}", "received_events_url": "https://api.github.com/users/rcontesti/received_events", "type": "User", "site_admin": false }
[ { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." } ]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Also cc'ing @fmikaelian on this for information :)", "Apologize for the careless mistake @fmikaelian ", "Hi, other than the careless mistake, I'm trying to understand why I cannot load any model from transformers S3 repo. I have tried :\r\n\r\n1) from transformers import FlaubertModel, FlaubertTokenizer\r\n\r\n2) from transformers import CamembertTokenizer\r\n\r\n3)from transformers import CamembertModel\r\n\r\n\r\n4)from transformers import BertModel\r\nmodel = BertModel.from_pretrained('bert-base-uncased') \r\n\r\nOnly the forth option has triggered the download process. All other options return : \r\n`\"ImportError: cannot import name 'CamembertModel'\"`\r\n\r\n i was wondering if there is an issue since I'm using conda in a Windows PC. \r\n\r\nMany thanks for your help.\r\n\r\n\r\n", "I tried to update transformers with conda but that did not work and I also tried to do some pip install but also getting some errors:\r\n\r\n```\r\nFile \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\...\\lib\\site-packages\\transformers\\configuration_utils.py\", line 145, in from_pretrained\r\n raise EnvironmentError(msg)\r\nOSError: Model name 'flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'flaubert-base-uncased-squad' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.\r\n```", "As pointed out in my Stackoverflow answer, I suspect a versioning conflict. I successfully managed to load the pipeline in `2.5.0`, but had errors in `2.4.1` (not quite the same as @rcontesti , but similar enough for me to assume problems with an older version).", "Do you have torch installed in your environment? That might explain why you can't import `CamembertModel`.\r\n\r\nThe error \r\n```\r\nOSError: Model name 'flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'flaubert-base-uncased-squad' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.\r\n```\r\n\r\nmeans you're trying to load a flaubert checkpoint in BERT. Could you share the code that raised the last error so that we may try to reproduce the error?", "Guyz thank so much for your answers. 
I was able to solve the version problem, but now I'm running into a different problem (should I open a new thread?):\r\n\r\nI'm currently using:\r\n\r\n```py\r\nmodel_=transformers.FlaubertForQuestionAnswering\r\ntokenizer_ = transformers.FlaubertTokenizer\r\n```\r\n\r\nBut when I place them into the pipeline:\r\n\r\n```py\r\nnlp = pipeline('question-answering', \\\r\n model=model, \\\r\n tokenizer=tokenizer)\r\n```\r\n\r\nI'm getting the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\multiprocessing\\pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\multiprocessing\\pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\site-packages\\transformers\\data\\processors\\squad.py\", line 105, in squad_convert_example_to_features\r\n sub_tokens = tokenizer.tokenize(token)\r\nTypeError: tokenize() missing 1 required positional argument: 'text'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"question_extraction.py\", line 61, in <module>\r\n answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)\r\n File \"question_extraction.py\", line 44, in question_extraction\r\n output=nlp({'question':question, 'context': text})\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\site-packages\\transformers\\pipelines.py\", line 802, in __call__\r\n for example in examples\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\socgen_nlp\\lib\\site-packages\\transformers\\pipelines.py\", line 802, in <listcomp>\r\n for example in examples\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\site-packages\\transformers\\data\\processors\\squad.py\", line 316, in squad_convert_examples_to_features\r\n desc=\"convert squad examples to features\",\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\site-packages\\tqdm\\std.py\", line 1097, in __iter__\r\n for obj in iterable:\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\multiprocessing\\pool.py\", line 320, in <genexpr>\r\n return (item for chunk in result for item in chunk)\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\multiprocessing\\pool.py\", line 735, in next\r\n raise value\r\nTypeError: tokenize() missing 1 required positional argument: 'text'\r\nconvert squad examples to features: 0%|\r\n```", "You need to initialize your model and tokenizer with a checkpoint. For example, instead of \r\n```py\r\nmodel_=transformers.FlaubertForQuestionAnswering\r\ntokenizer_ = transformers.FlaubertTokenizer\r\n```\r\nyou would specify a flaubert checkpoint:\r\n```py\r\nmodel_ = transformers.FlaubertModel.from_pretrained(\"fmikaelian/flaubert-base-uncased-squad\")\r\ntokenizer_ = transformers.FlaubertTokenizer.from_pretrained(\"fmikaelian/flaubert-base-uncased-squad\")\r\n```\r\n\r\nI chose a community checkpoint that was trained using question answering. You can check all available FlauBERT models [here](https://huggingface.co/models?search=flaubert).", "Once again many thanks @LysandreJik for the help. 
I proceeded as suggested, and now when I try to put both the tokenizer and the model into the pipeline I run into the following error:\r\n\r\n`Traceback (most recent call last):\r\n File \"question_extraction.py\", line 72, in <module>\r\n answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)\r\n File \"question_extraction.py\", line 55, in question_extraction\r\n output=nlp({'question':question, 'context': text})\r\n File \"C:\\Users\\Ruben Contesti\\AppData\\Local\\Continuum\\Anaconda3\\envs\\..\\lib\\site-packages\\transformers\\pipelines.py\", line 818, in __call__\r\n start, end = self.model(**fw_args)\r\nValueError: not enough values to unpack (expected 2, got 1)`\r\n\r\nIt seems like the start and end values I'm getting do not form a tuple, or something like that.", "I updated the code so that it loads a previously saved model:\r\n\r\n```python\r\n\r\ntokenizer_ = FlaubertTokenizer.from_pretrained(MODELS)\r\nmodel_ = FlaubertModel.from_pretrained(MODELS)\r\n\r\ndef question_extraction(text, question, model, tokenizer, language=\"French\", verbose=False):\r\n \r\n if language==\"French\":\r\n nlp = pipeline('question-answering', \\\r\n model=model, \\\r\n tokenizer=tokenizer)\r\n else:\r\n nlp=pipeline('question-answering')\r\n\r\n output=nlp({'question':question, 'context': text})\r\n\r\n answer, score = output.answer, output.score \r\n\r\n if verbose==True:\r\n print(\"Q: \", question ,\"\\n\",\\\r\n \"A:\", answer,\"\\n\", \\\r\n \"Confidence (%):\", \"{0:.2f}\".format(str(score*100) )\r\n )\r\n \r\n return answer, score\r\n\r\nif __name__==\"__main__\":\r\n question_=\"Quel est le montant de la garantie?\"\r\n language_=\"French\"\r\n text=\"le montant de la garantie est € 1000\"\r\n\r\n answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)\r\n\r\n\r\n```\r\n But now I'm getting an unpacking error:\r\n\r\n```\r\nC:\\...\\NLP\\src>python question_extraction.py\r\nOK\r\nOK\r\nconvert squad examples to features: 100%|████████████████████████████████████████████████| 1/1 [00:00<00:00, 4.66it/s]\r\nadd example index and unique id: 100%|███████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"question_extraction.py\", line 77, in <module>\r\n answer, score=question_extraction(text, question_, model_, tokenizer_, language_, verbose= True)\r\n File \"question_extraction.py\", line 60, in question_extraction\r\n output=nlp({'question':question, 'context': text})\r\n File \"C:\\...\\transformers\\pipelines.py\", line 818, in __call__\r\n start, end = self.model(**fw_args)\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n\r\n```", "Hi @rcontesti, I've investigated further and found a few issues. First of all, the checkpoint you're trying to load is `fmikaelian/flaubert-base-uncased-squad`, which unfortunately cannot be used by pipelines.\r\n\r\nThis is because this model was fine-tuned with `FlaubertForQuestionAnswering` instead of `FlaubertForQuestionAnsweringSimple`, and only the latter can be used by pipelines. Since it was fine-tuned leveraging a different architecture for the QA head, it, unfortunately, won't be usable by pipelines. 
The usage example on the [models page](https://huggingface.co/fmikaelian/flaubert-base-uncased-squad) is misleading because of that (cc @fmikaelian).\r\n\r\nUnfortunately, there is no French model that can be used with the pipelines, so you would need to do a custom inference leveraging the model. We don't have any examples showcasing how to leverage `XLNet/XLM/FlaubertForQuestionAnswering`, but it is on our roadmap.", "@LysandreJik many thanks for your answer. It was very clarifying.\r\n\r\nSome follow-up questions on my side:\r\n\r\n1. If I use FlaubertForQuestionAnsweringSimple, then can I use pipelines? If that is the case, would you show me how?\r\n2. Is it also the case that I cannot use CamemBERT for QA?\r\n3. I guess that because we have different architectures there's no quick hack to adapt it to pipelines, am I getting it right?\r\n4. If I were to do custom inferencing, without pipelines and only using PyTorch, would you mind showing me the resources to do so?\r\n\r\nMany thanks!!!\r\n\r\n\r\n", "1. You can indeed use `FlaubertForQuestionAnsweringSimple` with pipelines; the issue is that there is currently no model fine-tuned on QA for this model.\r\n2. You could also use the `CamembertForQuestionAnswering` model with pipelines I believe, but unfortunately there is no model fine-tuned on QA for this model either.\r\n3. Indeed, we should add these down the line, but it is not very high on our priority list right now. cc @mfuntowicz \r\n4. Yes, I'm currently working on some [examples](https://github.com/huggingface/transformers/pull/2850) that should be merged sometime today. I'll look into using `XLNet/XLM/FlaubertForQuestionAnswering` and their differing architecture as well.", "@rcontesti @LysandreJik \r\n\r\nI will fine-tune `FlaubertForQuestionAnsweringSimple` and `CamembertForQuestionAnswering` on French QA in the next few days and let you know if we can use the pipeline with those", "@rcontesti @LysandreJik \r\n\r\nI fine-tuned `FlaubertForQuestionAnsweringSimple` on [FQuAD](https://fquad.illuin.tech/), by editing `run_squad.py` using the same approach as #2746, but still got `ValueError: not enough values to unpack (expected 2, got 1)` when using the model with a pipeline.\r\n\r\nI also fine-tuned `CamembertForQuestionAnswering` on [FQuAD](https://fquad.illuin.tech/) and [French-SQuAD](https://github.com/Alikabbadj/French-SQuAD), and pipelines are working :-]\r\n\r\n```python3\r\nfrom transformers import pipeline\r\n\r\nnlp = pipeline('question-answering', model='fmikaelian/camembert-base-squad', tokenizer='fmikaelian/camembert-base-squad')\r\n\r\nnlp({\r\n 'question': \"Qui est Claude Monet?\",\r\n 'context': \"Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l'un des fondateurs de l'impressionnisme.\"\r\n})\r\n```\r\n\r\n```\r\n{'answer': 'un peintre français',\r\n 'end': 106,\r\n 'score': 0.498404793881182,\r\n 'start': 87}\r\n```\r\n\r\nModel links:\r\n\r\n- [`fmikaelian/camembert-base-fquad`](https://huggingface.co/fmikaelian/camembert-base-fquad)\r\n- [`fmikaelian/camembert-base-squad`](https://huggingface.co/fmikaelian/camembert-base-squad)\r\n\r\nWill open a PR for model cards (#3089)", "@fmikaelian That's really cool, thanks for taking the time to fine-tune those models! 
I'll look into the error with the pipeline ASAP; I'm pretty sure I know where it comes from.\r\n\r\nReally cool to have the first community model for question answering in French!", "Hi @fmikaelian \r\n\r\nJust installed transformers from source, and it seems the model is still not there:\r\n\r\n`Model name 'fmikaelian/camembert-base-squad' was not found in model name list `\r\n\r\nI also tried to download it from S3, but it does not seem to be there either:\r\n\r\n\r\n\r\n`OSError: Model name '../models/fmikaelian/camembert-base-squad' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/../models/fmikaelian/camembert-base-squad/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.`\r\n\r\nWould you mind sharing the S3 paths? I couldn't get them.", "The models are on the S3. What command did you use? Why is there \"../\" in your model name?\r\n\r\nThe following works:\r\n\r\n```py\r\nfrom transformers import CamembertModel\r\nmodel = CamembertModel.from_pretrained(\"fmikaelian/camembert-base-squad\")\r\n```\r\n\r\nThe following also works:\r\n\r\n```py\r\nfrom transformers import pipeline\r\nnlp = pipeline(\"question-answering\", model=\"fmikaelian/camembert-base-squad\", tokenizer=\"fmikaelian/camembert-base-squad\")\r\n\r\n```", "@LysandreJik, it's working now. Many thanks!" ]
1,582
1,583
1,583
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hi I'm trying to use 'fmikaelian/flaubert-base-uncased-squad' for question answering. I understand that I should load the model and the tokenizers. I'm not sure how should I do this. My code is basically far ` from transformers import pipeline, BertTokenizer nlp = pipeline('question-answering', \ model='fmikaelian/flaubert-base-uncased-squad', \ tokenizer='fmikaelian/flaubert-base-uncased-squad')` Most probably this can be solve with a two liner. Many thanks <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60287465/pipeline-loading-models-and-tokenizers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2893/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2892/comments
https://api.github.com/repos/huggingface/transformers/issues/2892/events
https://github.com/huggingface/transformers/pull/2892
567,079,208
MDExOlB1bGxSZXF1ZXN0Mzc2NzcwMjY1
2,892
Create README.md
{ "login": "BinWang28", "id": 25280416, "node_id": "MDQ6VXNlcjI1MjgwNDE2", "avatar_url": "https://avatars.githubusercontent.com/u/25280416?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BinWang28", "html_url": "https://github.com/BinWang28", "followers_url": "https://api.github.com/users/BinWang28/followers", "following_url": "https://api.github.com/users/BinWang28/following{/other_user}", "gists_url": "https://api.github.com/users/BinWang28/gists{/gist_id}", "starred_url": "https://api.github.com/users/BinWang28/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BinWang28/subscriptions", "organizations_url": "https://api.github.com/users/BinWang28/orgs", "repos_url": "https://api.github.com/users/BinWang28/repos", "events_url": "https://api.github.com/users/BinWang28/events{/privacy}", "received_events_url": "https://api.github.com/users/BinWang28/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=h1) Report\n> Merging [#2892](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2892/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2892 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15288 15288 \n==========================================\n- Hits 11476 11311 -165 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2892/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=footer). Last update [2ae9833...25c0467](https://codecov.io/gh/huggingface/transformers/pull/2892?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing @BinWang28!", "[model page](https://huggingface.co/binwang/xlnet-base-cased)" ]
1,582
1,582
1,582
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2892/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2892", "html_url": "https://github.com/huggingface/transformers/pull/2892", "diff_url": "https://github.com/huggingface/transformers/pull/2892.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2892.patch", "merged_at": 1582127477000 }
https://api.github.com/repos/huggingface/transformers/issues/2891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2891/comments
https://api.github.com/repos/huggingface/transformers/issues/2891/events
https://github.com/huggingface/transformers/pull/2891
567,044,916
MDExOlB1bGxSZXF1ZXN0Mzc2NzQyNjEw
2,891
Fix InputExample docstring
{ "login": "scottgigante", "id": 8499679, "node_id": "MDQ6VXNlcjg0OTk2Nzk=", "avatar_url": "https://avatars.githubusercontent.com/u/8499679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scottgigante", "html_url": "https://github.com/scottgigante", "followers_url": "https://api.github.com/users/scottgigante/followers", "following_url": "https://api.github.com/users/scottgigante/following{/other_user}", "gists_url": "https://api.github.com/users/scottgigante/gists{/gist_id}", "starred_url": "https://api.github.com/users/scottgigante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scottgigante/subscriptions", "organizations_url": "https://api.github.com/users/scottgigante/orgs", "repos_url": "https://api.github.com/users/scottgigante/repos", "events_url": "https://api.github.com/users/scottgigante/events{/privacy}", "received_events_url": "https://api.github.com/users/scottgigante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=h1) Report\n> Merging [#2891](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ae98336d17fceea7506af9880b862b6252a38f6?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2891/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2891 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15288 15288 \n=======================================\n Hits 11476 11476 \n Misses 3812 3812\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2891/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `21.73% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=footer). Last update [2ae9833...e0b3974](https://codecov.io/gh/huggingface/transformers/pull/2891?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!", "Thank you! Easiest review ever! Keep em coming :)" ]
1,582
1,582
1,582
CONTRIBUTOR
null
![image](https://user-images.githubusercontent.com/8499679/74761217-a1458700-5249-11ea-9fcf-0689c60ad5af.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2891/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2891", "html_url": "https://github.com/huggingface/transformers/pull/2891", "diff_url": "https://github.com/huggingface/transformers/pull/2891.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2891.patch", "merged_at": 1582230316000 }
https://api.github.com/repos/huggingface/transformers/issues/2890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2890/comments
https://api.github.com/repos/huggingface/transformers/issues/2890/events
https://github.com/huggingface/transformers/pull/2890
567,043,439
MDExOlB1bGxSZXF1ZXN0Mzc2NzQxMzc2
2,890
Support for torch-lightning in NER examples
{ "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "organizations_url": "https://api.github.com/users/srush/orgs", "repos_url": "https://api.github.com/users/srush/repos", "events_url": "https://api.github.com/users/srush/events{/privacy}", "received_events_url": "https://api.github.com/users/srush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=h1) Report\n> Merging [#2890](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2890/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2890 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15288 15288 \n=======================================\n Hits 11476 11476 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=footer). Last update [0dbddba...8f8137f](https://codecov.io/gh/huggingface/transformers/pull/2890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@LysandreJik \r\nThis implementation does not work with `pytorch_lightning` > `0.7.1`.\r\n\r\nIt throws the exception `'Trainer' object has no attribute 'avg_loss'` because since version `0.7.2` they removed the `avg_loss` field from the `Trainer` class.\r\n\r\nSee https://github.com/huggingface/transformers/pull/2890/files#diff-d68a6ecfacd8231c59af0ea67d77bb9cR120", "@simonepri can you file an issue? I guess we should just remove that key/value pair. ", "@simonepri did you try 0.7.5? " ]
1,582
1,588
1,582
CONTRIBUTOR
null
Update of https://github.com/huggingface/transformers/pull/2816 This PR creates a new example coding style for the pytorch code. * Uses pytorch-lightning for the underlying training. * Separates out the base transformer loading from the individual training. * Moves each individual example to its own directory. * Moves the code in the readme to bash scripts. The only two new files are run_pl_ner.py and transformers_base.py. The goal is to keep the same format as the original command-line. Most of the argument names are preserved. I have verified that for NER the results are the same on GPU. There are several nice benefits of lightning -> somewhat nicer logging and library integration (e.g. wandb), auto-checkpointing. Mostly the goal though is code readability with identical functionality. Tests I ran: * make sure that the test results are identical. * print test results after training. * test multi-gpu and apex (multigpu gives a nice speedup)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2890", "html_url": "https://github.com/huggingface/transformers/pull/2890", "diff_url": "https://github.com/huggingface/transformers/pull/2890.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2890.patch", "merged_at": 1582217405000 }
https://api.github.com/repos/huggingface/transformers/issues/2889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2889/comments
https://api.github.com/repos/huggingface/transformers/issues/2889/events
https://github.com/huggingface/transformers/issues/2889
567,013,661
MDU6SXNzdWU1NjcwMTM2NjE=
2,889
Getting: AttributeError: 'BertTokenizer' object has no attribute 'encode'
{ "login": "VeereshShringari", "id": 31262496, "node_id": "MDQ6VXNlcjMxMjYyNDk2", "avatar_url": "https://avatars.githubusercontent.com/u/31262496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VeereshShringari", "html_url": "https://github.com/VeereshShringari", "followers_url": "https://api.github.com/users/VeereshShringari/followers", "following_url": "https://api.github.com/users/VeereshShringari/following{/other_user}", "gists_url": "https://api.github.com/users/VeereshShringari/gists{/gist_id}", "starred_url": "https://api.github.com/users/VeereshShringari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VeereshShringari/subscriptions", "organizations_url": "https://api.github.com/users/VeereshShringari/orgs", "repos_url": "https://api.github.com/users/VeereshShringari/repos", "events_url": "https://api.github.com/users/VeereshShringari/events{/privacy}", "received_events_url": "https://api.github.com/users/VeereshShringari/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Please fix the formatting of your post and use code tags.", "I made the changes still all the text is shown struck off form.\r\nI am new to this bug log not sure how to change to code tag \r\n\r\n", "I have used <code> tag ", "Read how to use tags here: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks", "I did the tags as suggested by BramVanroy by using guidelines::https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks", "You clearly did something wrong because, as you can see yourself, all text is striked through. Likely caused by having tildes (~) around your post.", "Thanks, I cleared it, there was one hiding beside a comment.", "You are using an old version of the library (pytorch_pretrained_bert). You should move to `transformers` instead.", "I upgraded latest ``` transformers ``` still I am getting following error message :\r\n```\r\nERROR:root:Internal Python error in the inspect module.\r\nBelow is the traceback from this internal error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3319, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-13-645c7873d473>\", line 1, in <module>\r\n encoding = tokenizer.encode(raw_text)\r\nAttributeError: 'BertTokenizer' object has no attribute 'encode'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 2034, in showtraceback\r\n stb = value._render_traceback_()\r\nAttributeError: 'AttributeError' object has no attribute '_render_traceback_'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow.py\", line 58, in <module>\r\n from tensorflow.python.pywrap_tensorflow_internal import *\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow_internal.py\", line 28, in <module>\r\n _pywrap_tensorflow_internal = swig_import_helper()\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow_internal.py\", line 24, in swig_import_helper\r\n _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\imp.py\", line 242, in load_module\r\n return load_dynamic(name, filename, file)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\imp.py\", line 342, in load_dynamic\r\n return _load(spec)\r\nImportError: DLL load failed: The specified module could not be found.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\ultratb.py\", line 1151, in get_records\r\n return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\ultratb.py\", line 319, in wrapped\r\n return f(*args, **kwargs)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\ultratb.py\", line 353, in _fixed_getinnerframes\r\n records = 
fix_frame_records_filenames(inspect.getinnerframes(etb, context))\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\inspect.py\", line 1502, in getinnerframes\r\n frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\inspect.py\", line 1460, in getframeinfo\r\n filename = getsourcefile(frame) or getfile(frame)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\inspect.py\", line 696, in getsourcefile\r\n if getattr(getmodule(object, filename), '__loader__', None) is not None:\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\inspect.py\", line 733, in getmodule\r\n if ismodule(module) and hasattr(module, '__file__'):\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow\\__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow\\__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 953, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\__init__.py\", line 42, in <module>\r\n from . 
_api.v2 import audio\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\_api\\v2\\audio\\__init__.py\", line 10, in <module>\r\n from tensorflow.python.ops.gen_audio_ops import decode_wav\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\ops\\gen_audio_ops.py\", line 9, in <module>\r\n from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow\\__init__.py\", line 50, in __getattr__\r\n module = self._load()\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow\\__init__.py\", line 44, in _load\r\n module = _importlib.import_module(self.__name__)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\__init__.py\", line 49, in <module>\r\n from tensorflow.python import pywrap_tensorflow\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow.py\", line 74, in <module>\r\n raise ImportError(msg)\r\nImportError: Traceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3319, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-13-645c7873d473>\", line 1, in <module>\r\n encoding = tokenizer.encode(raw_text)\r\nAttributeError: 'BertTokenizer' object has no attribute 'encode'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 2034, in showtraceback\r\n stb = value._render_traceback_()\r\nAttributeError: 'AttributeError' object has no attribute '_render_traceback_'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow.py\", line 58, in <module>\r\n from tensorflow.python.pywrap_tensorflow_internal import *\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow_internal.py\", line 28, in <module>\r\n _pywrap_tensorflow_internal = swig_import_helper()\r\n File \"C:\\Users\\Veeresh\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow_core\\python\\pywrap_tensorflow_internal.py\", line 24, in swig_import_helper\r\n _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\imp.py\", line 242, in load_module\r\n return load_dynamic(name, filename, file)\r\n File \"C:\\Users\\Veeresh\\Anaconda3\\lib\\imp.py\", line 342, in load_dynamic\r\n return _load(spec)\r\nImportError: DLL load failed: The specified module could not be found.\r\n\r\n\r\nFailed to load the native TensorFlow runtime.\r\n\r\nSee https://www.tensorflow.org/install/errors\r\n\r\nfor some common reasons and solutions. 
Include the entire stack trace\r\nabove this error message when asking for help.\r\n---------------------------------------------------------------------------\r\n```", "There's a lot going wrong in that trace. Please recreate your environment from scratch to ensure that all correct dependencies are installed. Particularly, in your first post you were using torch, but your new trace throws Tensorflow errors.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,582
1,593
1,593
NONE
null
# 🐛 Bug ## AttributeError: 'BertTokenizer' object has no attribute 'encode' Model, I am using Bert The language I am using the model on English The problem arises when using: ``` input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)]) ``` The tasks I am working on is: ``` ##Text Summary for the following paragraph of text <code> "['26The Indian organic market\nhave begun to disrupt the market with their one-of-a-kind \nofferings.', 'In an effort to promote a healthier lifestyle, these \n\nplayers are playing a pivotal role by providing consumers with \n\nwholesome organic produce.', 'Since the organic food segment is still at a nascent stage \nin India, both the Government and private players need \n\n\n\ninvolved.', 'The organic farming industry in India holds immense \n\npotential to grow, provided it receives steady investment \n\n\n\nlike incentivizing organic cultivation, food processing, \n\n\n\nof the challenges faced by the organic sector today can be \n\ngrouped into three heads:\n\nŁ \n\nlengthy procedures, international validity, inadequate \ncertifying agencies and inadequate supporting infrastructure \n\n\n\n\ncost of internal audits and documentation is approximately \n\n\n\nreduced, it is expensive for many small groups of farmers or \nindividual farmers.', 'Ł \nThere is also a gap in the \n\nrequirements.', 'Additionally, key trading partners have \ntraditionally demonstrated a lack of willingness to sign \n\nequivalence arrangements.', 'Ł \nThe \n\n\nprocess of the farm or crop cannot be placed in the organic \n\n\nharvest is sold as conventional crops, thereby causing the \nfarmer to incur a loss.', 'Ł \ncommodities: \nDairy products have a different standard while \nmeat has a different standard .', 'The process of standardization \n\nof organic coconut will be different from that of the value-\n\nadded products of coconut.', 'Therefore, a company having \n\nand maintain multiple records as per the applicable standards.', 'Ł \n\nnumber of producers in the world yet they cultivate less than \n1% of the organic area.', 'The conventional production system is \nmore lucrative given the land fragmentation.', 'Ł Lack of incentives for farmers: \nThe transition from \n\nconventional to organic farming is accompanied by high \ninput costs and low yields in the initial years.', 'The cost of \ngoing completely organic is quite high, due to the high cost \n\nof organic manure.', 'The commercially available bio-manure \nproducts may not be completely organic, and therefore the \n\n\nThis is one of the many reasons why farmers are skeptical \nwhen it comes to shifting from conventional to organic \nfarming.', 'In such cases, the farmers choose to play it safe by \n\npracticing conventional methods of farming.', 'Ł Lack of standardized organic agriculture inputs and subsidy \non organic inputs:\n Farmers also face an acute shortage of \nquality standardized organic agriculture inputs, which are \noften much more expensive than conventional agricultural \n\ninputs.', 'There are no subsidies from the Government on \nagriculture inputs, especially biofertilizers and biopesticides, \nmaking the cost of cultivation for organic farming quite high.', 'Unless the farmers use their own farm grown manure in \nlarge quantities, they are unable to meet the expenses.', 'Lack \nof proper organic inputs often results in low yield making \n\norganic farming unsustainable for the farmers.', 'Ł Lack of organic cultivation research and extension: \nThe \n\ncurrent research and 
extension on organic farming are much \nlesser than that on conventional farming.', 'There is a lack of \n\n\nStrong government support for producing non-GMO high \nyielding varieties and niche crops for organic farming \nunder different agro-ecological zones across India require \n\ninvestment in organic research and extension.', 'The extension \nservices are very limited for organic, for example, the ATMA \nscheme focuses more on conventional farming.', 'There is no \n\ntimely advisory available for organic pest and disease control \n\nmeasures.', 'Processor-level challenges\nŁ Supply chain issues: \nMany farmers are apprehensive of \n\norganic farming since it involves high production costs.', 'The emphasis on collection, transportation and storage of \nfresh organic produce is very high.', 'Due to relatively low \n\nvolumes, the marketing and distribution chain of organic food \n\nvery high.', 'For example, organic produce cannot be stored in \n\ngovernment warehouses that practice chemical treatment of \nstorage areas.', 'High demand and low supply further create \n\n\nthese products have higher price markups than conventional \nproducts.', 'Additionally, many sellers mix the produce from \ndifferent geographical regions to help attain a competitive \n\nprice, thus compromising the geographical origin norm.', 'Ł Lack of a proper organic supply chain is felt more acutely in \n\nhilly, tribal and remote areas that have a high potential for \n\ninfrastructure.', 'Ł Global competitiveness:\n A major challenge India faces is \n\nthat of increasing its share in the global organic food export \nmarket, in lieu of global competitiveness.', 'There often exists a \ndichotomy between international quality and safety standards \n\nand Indian organic stands, which puts Indian produce at a \ndisadvantage.', 'Ł Lack of proper branding and packaging: \n\nof organic products require separate packing material that is \nnatural and requires distinctive branding that distinguishes \norganic from conventional products.', 'At present, there is \n\nan absence of regulations on labeling standards.', 'There is \n34\n\n10, 201835']" ``` ## To reproduce Steps to reproduce the behavior: 1. In the first Imported torch ```Python import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM import logging ``` 2. defined models : ``` MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased') ] ``` 3. ``` # Let's encode some text in a sequence of hidden-states using each model: for model_class, tokenizer_class, pretrained_weights in MODELS: # Load pretrained model/tokenizer tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) ``` 4. If I am trying to encode with following code ``` # Encode text <code> input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model. with torch.no_grad(): last_hidden_states = model(input_ids)[0] ``` I am getting following error ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-10-190085fa3098> in <module> 1 # Encode text ----> 2 input_ids = torch.tensor([tokenizer.encode("raw_text", add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model. 
3 with torch.no_grad(): 4 last_hidden_states = model(input_ids)[0] # Models outputs are now tuples AttributeError: 'BertTokenizer' object has no attribute 'encode' ``` ## Expected behavior Tokenization should get completed ## Environment info - `transformers` version: '0.6.2' - Platform: Windows 10 - Python version: 3.5 - PyTorch version (GPU?): 1.1.0 no gpu - Tensorflow version (GPU?): Tensorflow 2.0 - Using GPU in script?:No - Using distributed or parallel set-up in script?:No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2889/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2889/timeline
completed
null
null
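The thread above resolves to "move from the legacy `pytorch_pretrained_bert` package to `transformers`". A minimal sketch of the working pattern, assuming a plain `transformers` install (only the package and method names from the thread are reused; the input text is illustrative):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# encode() exists on transformers tokenizers; the legacy
# pytorch_pretrained_bert BertTokenizer lacked it, hence the AttributeError.
input_ids = torch.tensor([tokenizer.encode("raw text", add_special_tokens=True)])

with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # (batch, seq_len, hidden_size)
```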
https://api.github.com/repos/huggingface/transformers/issues/2888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2888/comments
https://api.github.com/repos/huggingface/transformers/issues/2888/events
https://github.com/huggingface/transformers/pull/2888
566,948,323
MDExOlB1bGxSZXF1ZXN0Mzc2NjYzNDM1
2,888
[WIP] Adapt lm generate fn for seq 2 seq models
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Excited for this!\r\n\r\nA little early for me to have an opinion, but I'd start by adding a bunch of failing tests (e.g. for t5.generate), and some slow tests that verify that T5.generate/another non seq2seq model generate reasonable results. (You have to run those locally). \r\n\r\nStylistically, I'd say `is_seq_to_seq` should probably not be a function, just an attribute. But I think style here much less important than test coverage :)\r\n\r\nBon Chance!", "Meant to just comment, sorry!", "I adapted the language generation according to the newly added Bart file. This is still very much work in progress that's why I left a lot of comments in all files. Would be very happy about some feedback! @sshleifer ", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=h1) Report\n> Merging [#2888](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fc38d4c86fe4bbde91b194880fe38b821a346123?src=pr&el=desc) will **decrease** coverage by `33.74%`.\n> The diff coverage is `2.38%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2888/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2888 +/- ##\n===========================================\n- Coverage 77.12% 43.37% -33.75% \n===========================================\n Files 98 98 \n Lines 15975 15995 +20 \n===========================================\n- Hits 12320 6938 -5382 \n- Misses 3655 9057 +5402\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `0% <0%> (-86.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `0% <0%> (-92.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `0% <0%> (-75.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `0% <0%> (-84.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `0% <0%> (-75.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `0% <0%> (-98.24%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `36.36% <0%> (-63.64%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.49% <100%> (+0.03%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |\n| ... and [29 more](https://codecov.io/gh/huggingface/transformers/pull/2888/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=footer). Last update [fc38d4c...ab13956](https://codecov.io/gh/huggingface/transformers/pull/2888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,582
1,583
1,583
MEMBER
null
From looking at the soon-to-be-added Bart model, I thought the language generation could be conceptually adapted as shown below to be able to produce language from seq-to-seq models (Bart & T5). So far this is not tested at all and only adapted for the `_generate_no_beam_search()` function. Also it still has to be checked whether this is compatible with T5. Would be happy about feedback @sshleifer, @thomwolf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2888", "html_url": "https://github.com/huggingface/transformers/pull/2888", "diff_url": "https://github.com/huggingface/transformers/pull/2888.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2888.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2887/comments
https://api.github.com/repos/huggingface/transformers/issues/2887/events
https://github.com/huggingface/transformers/issues/2887
566,894,315
MDU6SXNzdWU1NjY4OTQzMTU=
2,887
Regarding attention size returned by the model
{ "login": "divyag11", "id": 39218807, "node_id": "MDQ6VXNlcjM5MjE4ODA3", "avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/divyag11", "html_url": "https://github.com/divyag11", "followers_url": "https://api.github.com/users/divyag11/followers", "following_url": "https://api.github.com/users/divyag11/following{/other_user}", "gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}", "starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/divyag11/subscriptions", "organizations_url": "https://api.github.com/users/divyag11/orgs", "repos_url": "https://api.github.com/users/divyag11/repos", "events_url": "https://api.github.com/users/divyag11/events{/privacy}", "received_events_url": "https://api.github.com/users/divyag11/received_events", "type": "User", "site_admin": false }
[ { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "Hi,\r\nwhich model do you use? Can't you simply remove the output of the other heads?", "I am using TFDistilbertmodelforsequenceclassification.\r\nYou are right,i can remove other head attention,but while using tfserving,it's taking a lot time ,since the output attentions has huge dimension.\r\nSo,that's why ,i was asking if i could get just last 2 head attention instead of getting all head attentions while hitting tfserving", "Unfortunately, we don't have a way to do that right now." ]
1,582
1,582
1,582
NONE
null
Hi, when setting output_attentions=True in the huggingface model, it returns attentions of size (no. of heads, seq length, seq length). Can we configure it to return only the attention of the last 2 heads of the model? Please let me know.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2887/timeline
completed
null
null
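Following up on the record above: the library offers no switch for returning only some attentions, but they can be sliced after the forward pass. A minimal sketch, assuming per-layer attentions of shape (batch, num_heads, seq_len, seq_len) and a PyTorch DistilBERT (the TF Serving payload concern would need server-side slicing, which this does not solve):

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained(
    "distilbert-base-uncased", output_attentions=True
)

input_ids = torch.tensor([tokenizer.encode("a short example sentence")])
with torch.no_grad():
    outputs = model(input_ids)

attentions = outputs[-1]  # tuple with one tensor per layer
# Keep only the last 2 heads of every layer before serializing/sending.
last_two_heads = [layer[:, -2:, :, :] for layer in attentions]
```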
https://api.github.com/repos/huggingface/transformers/issues/2886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2886/comments
https://api.github.com/repos/huggingface/transformers/issues/2886/events
https://github.com/huggingface/transformers/issues/2886
566,721,632
MDU6SXNzdWU1NjY3MjE2MzI=
2,886
Load Pretrained Model Error in Inherited Class
{ "login": "yangzhch6", "id": 26158873, "node_id": "MDQ6VXNlcjI2MTU4ODcz", "avatar_url": "https://avatars.githubusercontent.com/u/26158873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangzhch6", "html_url": "https://github.com/yangzhch6", "followers_url": "https://api.github.com/users/yangzhch6/followers", "following_url": "https://api.github.com/users/yangzhch6/following{/other_user}", "gists_url": "https://api.github.com/users/yangzhch6/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangzhch6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangzhch6/subscriptions", "organizations_url": "https://api.github.com/users/yangzhch6/orgs", "repos_url": "https://api.github.com/users/yangzhch6/repos", "events_url": "https://api.github.com/users/yangzhch6/events{/privacy}", "received_events_url": "https://api.github.com/users/yangzhch6/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "```\r\nclass RoBertaMultiwayMatch(nn.Module):\r\n def __init__(self, pretrainedConfigName, num_choices=4):\r\n super(RoBertaMultiwayMatch, self).__init__()\r\n self.num_choices = num_choices\r\n self.RoBerta = RobertaModel.from_pretrained(pretrainedConfigName)\r\n config = self.RoBerta.config\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n self.linear_trans = nn.Linear(config.hidden_size, config.hidden_size)\r\n self.linear_fuse_p = nn.Linear(config.hidden_size*2, config.hidden_size)\r\n self.linear_fuse_q = nn.Linear(config.hidden_size*2, config.hidden_size)\r\n self.linear_fuse_a = nn.Linear(config.hidden_size * 2, config.hidden_size)\r\n self.classifier = nn.Linear(config.hidden_size*3, 1)\r\n\r\n #def matching(self, passage_encoded, question_encoded, passage_attention_mask, question_attention_mask): ...\r\n #def fusing_mlp(self, passage_encoded, mp_q, mp_a, mp_qa, question_encoded, ...\r\n #def forward(self, input_ids, token_type_ids=None, attention_mask=None, doc_len=None, ...\r\n```\r\n", "You're inheriting from `BertPreTrainedModel` in your `RoBertaMultiwayMatch`, with an attribute `RoBerta` which contains a `RobertaModel`.\r\n\r\nAs I see it, you want to load your roberta model from a given set of weights, but by calling `from_pretrained` on your class, he's looking to load those weights directly on your model.\r\n\r\nI believe you could override the `from_pretrained` method as such:\r\n\r\n```py\r\ndef from_pretrained(...):\r\n self.RoBerta.from_pretrained(...)\r\n```\r\n\r\nby specifying the correct arguments to your method and to your `RoBerta`'s method. This way when you call `from_pretrained`, it only loads the data for `RoBerta`. You'd have to find a way to save/load your data for your own layers though.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,582
1,588
1,588
NONE
null
# ❓ Questions & Help I wrote a new class based on RoBerta Model(pytorch) ## Details The code is shown below ``` import torch import numpy as np import logging import torch from torch import nn from torch.autograd import Variable from torch.nn import CrossEntropyLoss import torch.nn.functional as F from transformers import BertPreTrainedModel from transformers import RobertaConfig, RobertaTokenizer, RobertaModel # from transformers.modeling_bert import BertEmbeddings logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s', datefmt='%m/%d/%Y %H:%M:%S', level=logging.INFO) logger = logging.getLogger(__name__) class RoBertaMultiwayMatch(BertPreTrainedModel): def __init__(self, config, num_choices=4): super(RoBertaMultiwayMatch, self).__init__(config) self.num_choices = num_choices self.RoBerta = RobertaModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.linear_trans = nn.Linear(config.hidden_size, config.hidden_size) self.linear_fuse_p = nn.Linear(config.hidden_size*2, config.hidden_size) self.linear_fuse_q = nn.Linear(config.hidden_size*2, config.hidden_size) self.linear_fuse_a = nn.Linear(config.hidden_size * 2, config.hidden_size) self.classifier = nn.Linear(config.hidden_size*3, 1) self.init_weights() def matching(self, passage_encoded, question_encoded, passage_attention_mask, question_attention_mask): ... def fusing_mlp(self, passage_encoded, mp_q, mp_a, mp_qa, question_encoded, ... def forward(self, input_ids, token_type_ids=None, attention_mask=None, doc_len=None, ... if __name__ == "__main__": # tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True) model = RoBertaMultiwayMatch.from_pretrained('/data3/yangzhicheng/Data/RoBerta/roberta-large/', num_choices=4) ``` But the logger indicate that some weights are not initialized: ``` 02/18/2020 15:24:00 - INFO - transformers.modeling_utils - loading weights file /data3/yangzhicheng/Data/RoBerta/roberta-large/pytorch_model.bin 02/18/2020 15:24:29 - INFO - transformers.modeling_utils - Weights of RoBertaMultiwayMatch not initialized from pretrained model: ['roberta.RoBerta.embeddings.word_embeddings.weight', 'roberta.RoBerta.embeddings.position_embeddings.weight', 'roberta.RoBerta.embeddings.token_type_embeddings.weight', 'roberta.RoBerta.embeddings.LayerNorm.weight', 'roberta.RoBerta.embeddings.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.0.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.0.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.0.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.0.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.0.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.0.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.0.output.dense.weight', 'roberta.RoBerta.encoder.layer.0.output.dense.bias', 'roberta.RoBerta.encoder.layer.0.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.0.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.1.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.key.weight', 
'roberta.RoBerta.encoder.layer.1.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.1.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.1.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.1.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.1.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.1.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.1.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.1.output.dense.weight', 'roberta.RoBerta.encoder.layer.1.output.dense.bias', 'roberta.RoBerta.encoder.layer.1.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.1.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.2.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.2.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.2.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.2.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.2.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.2.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.2.output.dense.weight', 'roberta.RoBerta.encoder.layer.2.output.dense.bias', 'roberta.RoBerta.encoder.layer.2.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.2.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.3.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.3.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.3.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.3.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.3.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.3.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.3.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.3.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.3.output.dense.weight', 'roberta.RoBerta.encoder.layer.3.output.dense.bias', 'roberta.RoBerta.encoder.layer.3.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.3.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.4.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.4.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.4.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.4.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.4.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.4.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.4.output.dense.weight', 
'roberta.RoBerta.encoder.layer.4.output.dense.bias', 'roberta.RoBerta.encoder.layer.4.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.4.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.5.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.5.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.5.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.5.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.5.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.5.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.5.output.dense.weight', 'roberta.RoBerta.encoder.layer.5.output.dense.bias', 'roberta.RoBerta.encoder.layer.5.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.5.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.6.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.6.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.6.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.6.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.6.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.6.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.6.output.dense.weight', 'roberta.RoBerta.encoder.layer.6.output.dense.bias', 'roberta.RoBerta.encoder.layer.6.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.6.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.7.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.7.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.7.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.7.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.7.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.7.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.7.output.dense.weight', 'roberta.RoBerta.encoder.layer.7.output.dense.bias', 'roberta.RoBerta.encoder.layer.7.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.7.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.8.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.8.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.8.attention.output.dense.weight', 
'roberta.RoBerta.encoder.layer.8.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.8.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.8.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.8.output.dense.weight', 'roberta.RoBerta.encoder.layer.8.output.dense.bias', 'roberta.RoBerta.encoder.layer.8.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.8.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.9.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.9.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.9.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.9.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.9.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.9.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.9.output.dense.weight', 'roberta.RoBerta.encoder.layer.9.output.dense.bias', 'roberta.RoBerta.encoder.layer.9.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.9.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.10.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.10.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.10.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.10.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.10.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.10.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.10.output.dense.weight', 'roberta.RoBerta.encoder.layer.10.output.dense.bias', 'roberta.RoBerta.encoder.layer.10.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.10.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.11.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.11.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.11.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.11.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.11.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.11.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.11.output.dense.weight', 'roberta.RoBerta.encoder.layer.11.output.dense.bias', 'roberta.RoBerta.encoder.layer.11.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.11.output.LayerNorm.bias', 
'roberta.RoBerta.encoder.layer.12.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.12.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.12.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.12.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.12.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.12.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.12.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.12.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.12.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.12.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.12.output.dense.weight', 'roberta.RoBerta.encoder.layer.12.output.dense.bias', 'roberta.RoBerta.encoder.layer.12.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.12.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.13.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.13.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.13.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.13.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.13.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.13.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.13.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.13.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.13.output.dense.weight', 'roberta.RoBerta.encoder.layer.13.output.dense.bias', 'roberta.RoBerta.encoder.layer.13.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.13.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.14.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.14.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.14.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.14.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.14.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.14.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.14.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.14.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.14.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.14.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.14.output.dense.weight', 'roberta.RoBerta.encoder.layer.14.output.dense.bias', 'roberta.RoBerta.encoder.layer.14.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.14.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.15.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.15.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.15.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.15.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.15.attention.output.LayerNorm.weight', 
'roberta.RoBerta.encoder.layer.15.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.15.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.15.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.15.output.dense.weight', 'roberta.RoBerta.encoder.layer.15.output.dense.bias', 'roberta.RoBerta.encoder.layer.15.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.15.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.16.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.16.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.16.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.16.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.16.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.16.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.16.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.16.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.16.output.dense.weight', 'roberta.RoBerta.encoder.layer.16.output.dense.bias', 'roberta.RoBerta.encoder.layer.16.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.16.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.17.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.17.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.17.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.17.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.17.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.17.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.17.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.17.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.17.output.dense.weight', 'roberta.RoBerta.encoder.layer.17.output.dense.bias', 'roberta.RoBerta.encoder.layer.17.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.17.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.18.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.18.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.18.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.18.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.18.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.18.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.18.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.18.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.18.output.dense.weight', 'roberta.RoBerta.encoder.layer.18.output.dense.bias', 'roberta.RoBerta.encoder.layer.18.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.18.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.19.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.query.bias', 
'roberta.RoBerta.encoder.layer.19.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.19.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.19.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.19.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.19.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.19.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.19.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.19.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.19.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.19.output.dense.weight', 'roberta.RoBerta.encoder.layer.19.output.dense.bias', 'roberta.RoBerta.encoder.layer.19.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.19.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.20.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.20.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.20.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.20.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.20.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.20.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.20.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.20.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.20.output.dense.weight', 'roberta.RoBerta.encoder.layer.20.output.dense.bias', 'roberta.RoBerta.encoder.layer.20.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.20.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.21.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.21.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.21.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.21.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.21.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.21.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.21.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.21.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.21.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.21.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.21.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.21.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.21.output.dense.weight', 'roberta.RoBerta.encoder.layer.21.output.dense.bias', 'roberta.RoBerta.encoder.layer.21.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.21.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.22.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.22.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.22.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.22.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.22.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.22.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.22.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.22.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.22.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.22.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.22.intermediate.dense.weight', 
'roberta.RoBerta.encoder.layer.22.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.22.output.dense.weight', 'roberta.RoBerta.encoder.layer.22.output.dense.bias', 'roberta.RoBerta.encoder.layer.22.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.22.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.23.attention.self.query.weight', 'roberta.RoBerta.encoder.layer.23.attention.self.query.bias', 'roberta.RoBerta.encoder.layer.23.attention.self.key.weight', 'roberta.RoBerta.encoder.layer.23.attention.self.key.bias', 'roberta.RoBerta.encoder.layer.23.attention.self.value.weight', 'roberta.RoBerta.encoder.layer.23.attention.self.value.bias', 'roberta.RoBerta.encoder.layer.23.attention.output.dense.weight', 'roberta.RoBerta.encoder.layer.23.attention.output.dense.bias', 'roberta.RoBerta.encoder.layer.23.attention.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.23.attention.output.LayerNorm.bias', 'roberta.RoBerta.encoder.layer.23.intermediate.dense.weight', 'roberta.RoBerta.encoder.layer.23.intermediate.dense.bias', 'roberta.RoBerta.encoder.layer.23.output.dense.weight', 'roberta.RoBerta.encoder.layer.23.output.dense.bias', 'roberta.RoBerta.encoder.layer.23.output.LayerNorm.weight', 'roberta.RoBerta.encoder.layer.23.output.LayerNorm.bias', 'roberta.RoBerta.pooler.dense.weight', 'roberta.RoBerta.pooler.dense.bias', 'roberta.linear_trans.weight', 'roberta.linear_trans.bias', 'roberta.linear_fuse_p.weight', 'roberta.linear_fuse_p.bias', 'roberta.linear_fuse_q.weight', 'roberta.linear_fuse_q.bias', 'roberta.linear_fuse_a.weight', 'roberta.linear_fuse_a.bias', 'roberta.classifier.weight', 'roberta.classifier.bias'] 02/18/2020 15:24:29 - INFO - transformers.modeling_utils - Weights from pretrained model not used in RoBertaMultiwayMatch: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 
'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 
'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 
'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.encoder.layer.12.attention.self.query.weight', 'roberta.encoder.layer.12.attention.self.query.bias', 'roberta.encoder.layer.12.attention.self.key.weight', 'roberta.encoder.layer.12.attention.self.key.bias', 'roberta.encoder.layer.12.attention.self.value.weight', 'roberta.encoder.layer.12.attention.self.value.bias', 'roberta.encoder.layer.12.attention.output.dense.weight', 'roberta.encoder.layer.12.attention.output.dense.bias', 'roberta.encoder.layer.12.attention.output.LayerNorm.weight', 'roberta.encoder.layer.12.attention.output.LayerNorm.bias', 'roberta.encoder.layer.12.intermediate.dense.weight', 'roberta.encoder.layer.12.intermediate.dense.bias', 'roberta.encoder.layer.12.output.dense.weight', 'roberta.encoder.layer.12.output.dense.bias', 'roberta.encoder.layer.12.output.LayerNorm.weight', 'roberta.encoder.layer.12.output.LayerNorm.bias', 'roberta.encoder.layer.13.attention.self.query.weight', 'roberta.encoder.layer.13.attention.self.query.bias', 'roberta.encoder.layer.13.attention.self.key.weight', 'roberta.encoder.layer.13.attention.self.key.bias', 'roberta.encoder.layer.13.attention.self.value.weight', 'roberta.encoder.layer.13.attention.self.value.bias', 'roberta.encoder.layer.13.attention.output.dense.weight', 'roberta.encoder.layer.13.attention.output.dense.bias', 'roberta.encoder.layer.13.attention.output.LayerNorm.weight', 'roberta.encoder.layer.13.attention.output.LayerNorm.bias', 'roberta.encoder.layer.13.intermediate.dense.weight', 'roberta.encoder.layer.13.intermediate.dense.bias', 'roberta.encoder.layer.13.output.dense.weight', 'roberta.encoder.layer.13.output.dense.bias', 'roberta.encoder.layer.13.output.LayerNorm.weight', 'roberta.encoder.layer.13.output.LayerNorm.bias', 'roberta.encoder.layer.14.attention.self.query.weight', 'roberta.encoder.layer.14.attention.self.query.bias', 'roberta.encoder.layer.14.attention.self.key.weight', 
'roberta.encoder.layer.14.attention.self.key.bias', 'roberta.encoder.layer.14.attention.self.value.weight', 'roberta.encoder.layer.14.attention.self.value.bias', 'roberta.encoder.layer.14.attention.output.dense.weight', 'roberta.encoder.layer.14.attention.output.dense.bias', 'roberta.encoder.layer.14.attention.output.LayerNorm.weight', 'roberta.encoder.layer.14.attention.output.LayerNorm.bias', 'roberta.encoder.layer.14.intermediate.dense.weight', 'roberta.encoder.layer.14.intermediate.dense.bias', 'roberta.encoder.layer.14.output.dense.weight', 'roberta.encoder.layer.14.output.dense.bias', 'roberta.encoder.layer.14.output.LayerNorm.weight', 'roberta.encoder.layer.14.output.LayerNorm.bias', 'roberta.encoder.layer.15.attention.self.query.weight', 'roberta.encoder.layer.15.attention.self.query.bias', 'roberta.encoder.layer.15.attention.self.key.weight', 'roberta.encoder.layer.15.attention.self.key.bias', 'roberta.encoder.layer.15.attention.self.value.weight', 'roberta.encoder.layer.15.attention.self.value.bias', 'roberta.encoder.layer.15.attention.output.dense.weight', 'roberta.encoder.layer.15.attention.output.dense.bias', 'roberta.encoder.layer.15.attention.output.LayerNorm.weight', 'roberta.encoder.layer.15.attention.output.LayerNorm.bias', 'roberta.encoder.layer.15.intermediate.dense.weight', 'roberta.encoder.layer.15.intermediate.dense.bias', 'roberta.encoder.layer.15.output.dense.weight', 'roberta.encoder.layer.15.output.dense.bias', 'roberta.encoder.layer.15.output.LayerNorm.weight', 'roberta.encoder.layer.15.output.LayerNorm.bias', 'roberta.encoder.layer.16.attention.self.query.weight', 'roberta.encoder.layer.16.attention.self.query.bias', 'roberta.encoder.layer.16.attention.self.key.weight', 'roberta.encoder.layer.16.attention.self.key.bias', 'roberta.encoder.layer.16.attention.self.value.weight', 'roberta.encoder.layer.16.attention.self.value.bias', 'roberta.encoder.layer.16.attention.output.dense.weight', 'roberta.encoder.layer.16.attention.output.dense.bias', 'roberta.encoder.layer.16.attention.output.LayerNorm.weight', 'roberta.encoder.layer.16.attention.output.LayerNorm.bias', 'roberta.encoder.layer.16.intermediate.dense.weight', 'roberta.encoder.layer.16.intermediate.dense.bias', 'roberta.encoder.layer.16.output.dense.weight', 'roberta.encoder.layer.16.output.dense.bias', 'roberta.encoder.layer.16.output.LayerNorm.weight', 'roberta.encoder.layer.16.output.LayerNorm.bias', 'roberta.encoder.layer.17.attention.self.query.weight', 'roberta.encoder.layer.17.attention.self.query.bias', 'roberta.encoder.layer.17.attention.self.key.weight', 'roberta.encoder.layer.17.attention.self.key.bias', 'roberta.encoder.layer.17.attention.self.value.weight', 'roberta.encoder.layer.17.attention.self.value.bias', 'roberta.encoder.layer.17.attention.output.dense.weight', 'roberta.encoder.layer.17.attention.output.dense.bias', 'roberta.encoder.layer.17.attention.output.LayerNorm.weight', 'roberta.encoder.layer.17.attention.output.LayerNorm.bias', 'roberta.encoder.layer.17.intermediate.dense.weight', 'roberta.encoder.layer.17.intermediate.dense.bias', 'roberta.encoder.layer.17.output.dense.weight', 'roberta.encoder.layer.17.output.dense.bias', 'roberta.encoder.layer.17.output.LayerNorm.weight', 'roberta.encoder.layer.17.output.LayerNorm.bias', 'roberta.encoder.layer.18.attention.self.query.weight', 'roberta.encoder.layer.18.attention.self.query.bias', 'roberta.encoder.layer.18.attention.self.key.weight', 'roberta.encoder.layer.18.attention.self.key.bias', 
'roberta.encoder.layer.18.attention.self.value.weight', 'roberta.encoder.layer.18.attention.self.value.bias', 'roberta.encoder.layer.18.attention.output.dense.weight', 'roberta.encoder.layer.18.attention.output.dense.bias', 'roberta.encoder.layer.18.attention.output.LayerNorm.weight', 'roberta.encoder.layer.18.attention.output.LayerNorm.bias', 'roberta.encoder.layer.18.intermediate.dense.weight', 'roberta.encoder.layer.18.intermediate.dense.bias', 'roberta.encoder.layer.18.output.dense.weight', 'roberta.encoder.layer.18.output.dense.bias', 'roberta.encoder.layer.18.output.LayerNorm.weight', 'roberta.encoder.layer.18.output.LayerNorm.bias', 'roberta.encoder.layer.19.attention.self.query.weight', 'roberta.encoder.layer.19.attention.self.query.bias', 'roberta.encoder.layer.19.attention.self.key.weight', 'roberta.encoder.layer.19.attention.self.key.bias', 'roberta.encoder.layer.19.attention.self.value.weight', 'roberta.encoder.layer.19.attention.self.value.bias', 'roberta.encoder.layer.19.attention.output.dense.weight', 'roberta.encoder.layer.19.attention.output.dense.bias', 'roberta.encoder.layer.19.attention.output.LayerNorm.weight', 'roberta.encoder.layer.19.attention.output.LayerNorm.bias', 'roberta.encoder.layer.19.intermediate.dense.weight', 'roberta.encoder.layer.19.intermediate.dense.bias', 'roberta.encoder.layer.19.output.dense.weight', 'roberta.encoder.layer.19.output.dense.bias', 'roberta.encoder.layer.19.output.LayerNorm.weight', 'roberta.encoder.layer.19.output.LayerNorm.bias', 'roberta.encoder.layer.20.attention.self.query.weight', 'roberta.encoder.layer.20.attention.self.query.bias', 'roberta.encoder.layer.20.attention.self.key.weight', 'roberta.encoder.layer.20.attention.self.key.bias', 'roberta.encoder.layer.20.attention.self.value.weight', 'roberta.encoder.layer.20.attention.self.value.bias', 'roberta.encoder.layer.20.attention.output.dense.weight', 'roberta.encoder.layer.20.attention.output.dense.bias', 'roberta.encoder.layer.20.attention.output.LayerNorm.weight', 'roberta.encoder.layer.20.attention.output.LayerNorm.bias', 'roberta.encoder.layer.20.intermediate.dense.weight', 'roberta.encoder.layer.20.intermediate.dense.bias', 'roberta.encoder.layer.20.output.dense.weight', 'roberta.encoder.layer.20.output.dense.bias', 'roberta.encoder.layer.20.output.LayerNorm.weight', 'roberta.encoder.layer.20.output.LayerNorm.bias', 'roberta.encoder.layer.21.attention.self.query.weight', 'roberta.encoder.layer.21.attention.self.query.bias', 'roberta.encoder.layer.21.attention.self.key.weight', 'roberta.encoder.layer.21.attention.self.key.bias', 'roberta.encoder.layer.21.attention.self.value.weight', 'roberta.encoder.layer.21.attention.self.value.bias', 'roberta.encoder.layer.21.attention.output.dense.weight', 'roberta.encoder.layer.21.attention.output.dense.bias', 'roberta.encoder.layer.21.attention.output.LayerNorm.weight', 'roberta.encoder.layer.21.attention.output.LayerNorm.bias', 'roberta.encoder.layer.21.intermediate.dense.weight', 'roberta.encoder.layer.21.intermediate.dense.bias', 'roberta.encoder.layer.21.output.dense.weight', 'roberta.encoder.layer.21.output.dense.bias', 'roberta.encoder.layer.21.output.LayerNorm.weight', 'roberta.encoder.layer.21.output.LayerNorm.bias', 'roberta.encoder.layer.22.attention.self.query.weight', 'roberta.encoder.layer.22.attention.self.query.bias', 'roberta.encoder.layer.22.attention.self.key.weight', 'roberta.encoder.layer.22.attention.self.key.bias', 'roberta.encoder.layer.22.attention.self.value.weight', 
'roberta.encoder.layer.22.attention.self.value.bias', 'roberta.encoder.layer.22.attention.output.dense.weight', 'roberta.encoder.layer.22.attention.output.dense.bias', 'roberta.encoder.layer.22.attention.output.LayerNorm.weight', 'roberta.encoder.layer.22.attention.output.LayerNorm.bias', 'roberta.encoder.layer.22.intermediate.dense.weight', 'roberta.encoder.layer.22.intermediate.dense.bias', 'roberta.encoder.layer.22.output.dense.weight', 'roberta.encoder.layer.22.output.dense.bias', 'roberta.encoder.layer.22.output.LayerNorm.weight', 'roberta.encoder.layer.22.output.LayerNorm.bias', 'roberta.encoder.layer.23.attention.self.query.weight', 'roberta.encoder.layer.23.attention.self.query.bias', 'roberta.encoder.layer.23.attention.self.key.weight', 'roberta.encoder.layer.23.attention.self.key.bias', 'roberta.encoder.layer.23.attention.self.value.weight', 'roberta.encoder.layer.23.attention.self.value.bias', 'roberta.encoder.layer.23.attention.output.dense.weight', 'roberta.encoder.layer.23.attention.output.dense.bias', 'roberta.encoder.layer.23.attention.output.LayerNorm.weight', 'roberta.encoder.layer.23.attention.output.LayerNorm.bias', 'roberta.encoder.layer.23.intermediate.dense.weight', 'roberta.encoder.layer.23.intermediate.dense.bias', 'roberta.encoder.layer.23.output.dense.weight', 'roberta.encoder.layer.23.output.dense.bias', 'roberta.encoder.layer.23.output.LayerNorm.weight', 'roberta.encoder.layer.23.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] ``` How can I solve this problem? Thanks in advance!
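Judging from the two lists in the log, the checkpoint stores its encoder weights under the prefix `roberta.encoder…` while the custom `RoBertaMultiwayMatch` class expects them under `roberta.RoBerta.encoder…`, so no encoder weight matches and everything is re-initialized. A minimal, hypothetical sketch of one way to bridge that gap by remapping the checkpoint keys before loading (the file path and the `model` variable are assumptions, not from the issue):

```python
import torch

# Hypothetical workaround: load the checkpoint manually and re-prefix the keys
# so they match the attribute path the custom class expects. The attribute
# name "RoBerta" is taken from the log above; `model` is assumed to be an
# already-constructed RoBertaMultiwayMatch instance.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
remapped = {
    ("roberta.RoBerta." + k[len("roberta."):] if k.startswith("roberta.") else k): v
    for k, v in state_dict.items()
}
# strict=False leaves the task-specific heads (linear_trans, linear_fuse_*,
# classifier) randomly initialized while the shared encoder weights load.
model.load_state_dict(remapped, strict=False)
```

Alternatively, renaming the nested attribute inside the custom class so its parameter names match the pretrained checkpoint avoids the remapping entirely.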
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2886/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2885/comments
https://api.github.com/repos/huggingface/transformers/issues/2885/events
https://github.com/huggingface/transformers/pull/2885
566,506,642
MDExOlB1bGxSZXF1ZXN0Mzc2MzA1ODg3
2,885
Improve special_token_id logic in run_generation.py and add tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=h1) Report\n> Merging [#2885](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **increase** coverage by `1.75%`.\n> The diff coverage is `86.22%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2885/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2885 +/- ##\n=========================================\n+ Coverage 75.35% 77.1% +1.75% \n=========================================\n Files 94 98 +4 \n Lines 15444 15971 +527 \n=========================================\n+ Hits 11638 12315 +677 \n+ Misses 3806 3656 -150\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.59% <ø> (-0.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ø> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `26.66% <ø> (+1.36%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `21.73% <ø> (ø)` | :arrow_up: |\n| [src/transformers/utils\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlsc19lbmNvZGVyX2RlY29kZXIucHk=) | `0% <0%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <100%> (+30.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <100%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <100%> (-0.07%)` | :arrow_down: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=footer). Last update [d490b5d...80ca73d](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> Love the tests!\r\n> \r\n> Do we ever test cases:\r\n> \r\n> * eos_token_ids are None, pad_token_id present\r\n> * pad_token_id=None, eos_ids present\r\n> * pad_token_id present, eos_ids None\r\n> \r\n> We should also figure out if those are realistic scenarios. Because if they are not we can delete a lot of code!\r\n\r\neos_token_ids present and pad_token_id = None -> GPT2, so this scenario is tested. This case is the hardest to handle for batch_size > 1, therefore quite a lot of assert statements and a warning in modeling_utils.py\r\n\r\neos_token_ids = None and pad_token_id present -> If eos_token_ids = None, the pad_token_id is somewhat irrelevant for generation because all batches will always be of max. length and therefore never will have to be padded (they can't finish because no eos_token can be generated)\r\n\r\n", "This PR finally implements the following `bos_token_id, pad_token_id, eos_token_ids` logic for lm model generation.\r\n\r\n1. If `bos_token_id` is None, then the input_ids must be defined otherwise, the model cannot generate text, which is checked by the asserts in the beginning. The `bos_token_id` is only relevant for starting a new sentence.\r\n\r\n2. If `eos_token_id` is None, then the length of the generated text will always equal max_length,\r\nno matter how the pad_token_id is defined. Since there is no `eos_token_id` the text will also not \"end\".\r\n\r\n3. If `pad_token_id` is None and `eos_token_ids` is defined (as it is the case for gpt2), then the pad_token_id will be set to the `eos_token_ids[0]` tensor `batches_len` is used to keep track of the first time the sequence generated an eos_token and will later set all tokens following this token to the `pad_token_id`, which is `eos_token_ids[0]` and can thus be handled by the tokenizer (whereas the -1 cannot be handled by the tokenizer).\r\n\r\n4. **No** eos_token_id is appended to sentences that finish due to `max_length`. 
Instead those sentences are returned with the last token being the last token produced by the model until `max_length` was hit.\r\n\r\nAs an overview, here a table showing which LMModel Tokenizer have which of the tokens `bos_token_id`, `pad_token_id` and `eos_token_ids` is defined:\r\n\r\n\r\n| LM Model | bos_token_id | pad_token_id | eos_token_ids |\r\n| ------------- | ------------- | ------------- | ------------- | \r\n| XLNet | x | x | x |\r\n| OpenAIGPT | o | o | o |\r\n| CTRL | o | o | o |\r\n| GPT2 | x | o | x |\r\n| Transfo-XL | o | o | x |\r\n| XLM | x | x | o |\r\n\r\n## Future PRs:\r\n\r\n- [x] [WIP] adding hard-coded slow tests for pretrained lms in PR #2909\r\n- [ ] [WIP] adapting the `generate` function for Seq-2-Seq and DoubleHeads or other special LM models in PR #2888 \r\n- [x] checking and possibly adapting behavior of `generate_beam_search`\r\n- [x] treat Issues: #2482 and #2415", "> # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=h1) Report\r\n> > Merging [#2885](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **increase** coverage by `1.49%`.\r\n> > The diff coverage is `100%`.\r\n> \r\n> [![Impacted file tree graph](https://camo.githubusercontent.com/0d9954474a38f8b8a437b85ef46677aec2ec2f8a/68747470733a2f2f636f6465636f762e696f2f67682f68756767696e67666163652f7472616e73666f726d6572732f70756c6c2f323838352f6772617068732f747265652e7376673f77696474683d36353026746f6b656e3d39714f6c4e3648623163266865696768743d313530267372633d7072)](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree)\r\n> \r\n> ```diff\r\n> @@ Coverage Diff @@\r\n> ## master #2885 +/- ##\r\n> ==========================================\r\n> + Coverage 75.3% 76.79% +1.49% \r\n> ==========================================\r\n> Files 94 94 \r\n> Lines 15424 15448 +24 \r\n> ==========================================\r\n> + Hits 11615 11864 +249 \r\n> + Misses 3809 3584 -225\r\n> ```\r\n> \r\n> [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=tree)\tCoverage Δ\t\r\n> [src/transformers/modeling_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==)\t`92.14% <100%> (+30.81%)`\t\r\n> [src/transformers/configuration_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5)\t`96.46% <100%> (ø)`\t\r\n> [src/transformers/modeling_transfo_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5)\t`75.63% <0%> (+0.84%)`\t\r\n> [src/transformers/modeling_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=)\t`88.43% <0%> (+2.05%)`\t\r\n> [src/transformers/modeling_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==)\t`75.77% <0%> (+2.61%)`\t\r\n> [src/transformers/modeling_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5)\t`86.11% <0%> (+2.83%)`\t\r\n> 
[src/transformers/modeling_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5)\t`98.23% <0%> (+3.96%)`\t\r\n> [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=continue).\r\n> \r\n> > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\r\n> > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\r\n> > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=footer). Last update [59c23ad...ac2e172](https://codecov.io/gh/huggingface/transformers/pull/2885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\r\n\r\nI'm a bit confused that the coverage for the file modeling_openai.py did not change. Tests for the openai lm were added but they had seemingly no effect - do you know why? @LysandreJik ", "No need to do all my `torch.Tensor` vs .new comments, TIL that it lets you copy the device and dtype of the first tensor." ]
1,581
1,584
1,582
MEMBER
null
This PR finally implements the following `bos_token_id, pad_token_id, eos_token_ids` logic for LM generation. 1. If `bos_token_id` is None, then `input_ids` must be defined; otherwise the model cannot generate text, which is checked by the asserts at the beginning. The `bos_token_id` is only relevant for starting a new sentence. 2. If `eos_token_id` is None, then the length of the generated text will always equal `max_length`, no matter how the `pad_token_id` is defined. Since there is no `eos_token_id`, the text will also never "end". 3. If `pad_token_id` is None and `eos_token_ids` is defined (as is the case for GPT-2), then the `pad_token_id` is set to `eos_token_ids[0]`. A tensor `batches_len` keeps track of the first time each sequence generates an eos_token, and all tokens following that token are later set to the `pad_token_id`, i.e. `eos_token_ids[0]`, which the tokenizer can handle (whereas -1 cannot be handled by the tokenizer). 4. **No** eos_token_id is appended to sentences that finish because of `max_length`. Instead, those sentences are returned with their final token being the last token the model produced before `max_length` was hit. As an overview, here is a table showing which LM model tokenizers have each of `bos_token_id`, `pad_token_id`, and `eos_token_ids` defined: | LM Model | bos_token_id | pad_token_id | eos_token_ids | | ------------- | ------------- | ------------- | ------------- | | XLNet | x | x | x | | OpenAIGPT | o | o | o | | CTRL | o | o | o | | GPT2 | x | o | x | | Transfo-XL | o | o | x | | XLM | x | x | o | Test times increase as follows (measured on a local machine): | LM Model | Increase in test time | | ------------- | ------------- | | XLNet | 8.0s -> 9.7s | | OpenAIGPT | 7.1s -> 8.3s | | CTRL | 2.5s -> 4.3s | | GPT2 | 7.3s -> 8.0s | | Transfo-XL | 7.5s -> 8.0s | | XLM | 7.4s -> 7.7s | -> So overall, roughly a 10% increase in testing time. ## Future PRs: - [x] [WIP] adding hard-coded slow tests for pretrained LMs in PR #2909 - [x] [WIP] adapting the `generate` function for Seq-2-Seq, DoubleHeads, and other special LM models in PR #2888 - [x] checking and possibly adapting the behavior of `generate_beam_search` - [x] treat Issues #2482 and #2415
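To make point 3 concrete, here is a minimal sketch of the post-processing idea in PyTorch. `pad_after_eos` is an illustrative name and not the actual code in `modeling_utils.py`, though `batches_len` mirrors the tensor described above:

```python
import torch

def pad_after_eos(generated, eos_token_ids, pad_token_id=None):
    """Overwrite everything after the first EOS in each row with the pad token."""
    if pad_token_id is None:          # e.g. GPT-2: fall back to the first EOS id
        pad_token_id = eos_token_ids[0]
    batch_size, seq_len = generated.shape
    # batches_len[i] = position just after the first EOS in sequence i
    # (stays seq_len if the sequence never produced an EOS, i.e. hit max_length)
    batches_len = torch.full((batch_size,), seq_len, dtype=torch.long)
    for i in range(batch_size):
        for t in range(seq_len):
            if generated[i, t].item() in eos_token_ids:
                batches_len[i] = t + 1
                break
    positions = torch.arange(seq_len).unsqueeze(0)   # (1, seq_len)
    mask = positions >= batches_len.unsqueeze(1)     # (batch_size, seq_len)
    return generated.masked_fill(mask, pad_token_id)

seqs = torch.tensor([[5, 7, 2, 9, 9],    # EOS (id 2) produced at position 2
                     [5, 7, 8, 9, 4]])   # never produced EOS -> left untouched
print(pad_after_eos(seqs, eos_token_ids=[2]))
# tensor([[5, 7, 2, 2, 2],
#         [5, 7, 8, 9, 4]])
```

Rows that never emit an EOS are returned unchanged, matching point 4: no EOS is appended to sequences stopped by `max_length`.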
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2885", "html_url": "https://github.com/huggingface/transformers/pull/2885", "diff_url": "https://github.com/huggingface/transformers/pull/2885.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2885.patch", "merged_at": 1582305000000 }
https://api.github.com/repos/huggingface/transformers/issues/2884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2884/comments
https://api.github.com/repos/huggingface/transformers/issues/2884/events
https://github.com/huggingface/transformers/issues/2884
566,457,160
MDU6SXNzdWU1NjY0NTcxNjA=
2,884
Evaluation and Inference added to run_glue.py
{ "login": "CMobley7", "id": 10121829, "node_id": "MDQ6VXNlcjEwMTIxODI5", "avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CMobley7", "html_url": "https://github.com/CMobley7", "followers_url": "https://api.github.com/users/CMobley7/followers", "following_url": "https://api.github.com/users/CMobley7/following{/other_user}", "gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}", "starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions", "organizations_url": "https://api.github.com/users/CMobley7/orgs", "repos_url": "https://api.github.com/users/CMobley7/repos", "events_url": "https://api.github.com/users/CMobley7/events{/privacy}", "received_events_url": "https://api.github.com/users/CMobley7/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834052574, "node_id": "MDU6TGFiZWwxODM0MDUyNTc0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification", "name": "Ex: Sequence Classification", "color": "46FFCF", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi! You can already provide `do_eval` to `run_glue` to do the evaluation. If you don't specify `do_train`, it will only do the evaluation, and no training.\r\n\r\nThe inference would be a nice addition.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,588
1,588
NONE
null
# 🚀 Feature request It would be useful to have the following arguments added to `run_glue.py`, and probably to the other task example scripts as well: `--eval_only` and `--inference_only`. ## Motivation This would allow users to provide a `.tsv` or `.csv` containing either sentences and labels or just sentences, and then perform evaluation or inference with the model without having to train. This may already exist, but I was unable to find it. Consequently, I'm using https://github.com/kaushaltrivedi/fast-bert for these tasks. It would be helpful not to have to stray from the main huggingface transformers library.
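As one of the comments above notes, `--do_eval` without `--do_train` already gives evaluation-only behavior; inference on unlabeled data is the missing piece. A sketch of how the proposed flags might be wired up, purely illustrative since neither flag exists in `run_glue.py`:

```python
import argparse

parser = argparse.ArgumentParser()
# run_glue.py already has --do_train / --do_eval; the two flags below are the
# *proposed* additions from this feature request, not existing options.
parser.add_argument("--do_train", action="store_true")
parser.add_argument("--do_eval", action="store_true")
parser.add_argument("--eval_only", action="store_true",
                    help="Skip training; evaluate a labeled .tsv/.csv file.")
parser.add_argument("--inference_only", action="store_true",
                    help="Skip training; predict labels for an unlabeled file.")
args = parser.parse_args()

if (args.eval_only or args.inference_only) and args.do_train:
    parser.error("--eval_only/--inference_only cannot be combined with --do_train")
```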
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2884/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2883/comments
https://api.github.com/repos/huggingface/transformers/issues/2883/events
https://github.com/huggingface/transformers/pull/2883
566,332,571
MDExOlB1bGxSZXF1ZXN0Mzc2MTY0MDUy
2,883
Create README.md in the right path for bert-spanish-cased-finetuned-ner
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "👍 " ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2883", "html_url": "https://github.com/huggingface/transformers/pull/2883", "diff_url": "https://github.com/huggingface/transformers/pull/2883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2883.patch", "merged_at": 1581955124000 }
https://api.github.com/repos/huggingface/transformers/issues/2882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2882/comments
https://api.github.com/repos/huggingface/transformers/issues/2882/events
https://github.com/huggingface/transformers/issues/2882
566,303,614
MDU6SXNzdWU1NjYzMDM2MTQ=
2,882
No prediction for some words (BERT NER) when run on GPU
{ "login": "cibinjohn", "id": 24930555, "node_id": "MDQ6VXNlcjI0OTMwNTU1", "avatar_url": "https://avatars.githubusercontent.com/u/24930555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cibinjohn", "html_url": "https://github.com/cibinjohn", "followers_url": "https://api.github.com/users/cibinjohn/followers", "following_url": "https://api.github.com/users/cibinjohn/following{/other_user}", "gists_url": "https://api.github.com/users/cibinjohn/gists{/gist_id}", "starred_url": "https://api.github.com/users/cibinjohn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cibinjohn/subscriptions", "organizations_url": "https://api.github.com/users/cibinjohn/orgs", "repos_url": "https://api.github.com/users/cibinjohn/repos", "events_url": "https://api.github.com/users/cibinjohn/events{/privacy}", "received_events_url": "https://api.github.com/users/cibinjohn/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834060867, "node_id": "MDU6TGFiZWwxODM0MDYwODY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition", "name": "Ex: Named Entity Recognition", "color": "06FFD8", "default": false, "description": "" } ]
closed
false
null
[]
[ "I tried to make the predictions using CPU, and it worked just fine. But the predictions made by CPU is totally different from the predictions made by GPU. Isn't a model supposed to give the same predictions irrespective of whether it is loaded in GPU or CPU?\r\n\r\nAny help would be appreciated.\r\nThanks in advance", "Can you post a short reproducibility case?", "@cibinjohn \r\n\r\nThe error msg \"Maximum sequence length exceeded\" indicated that the input length (assume 257) was longer than the max_seq_len parameter (assume 256). In this case, the last token will not be predicted (actually be trimmed during the preprocessing). You have to reduce the length of your input or increase max_seq_len whichever works for you.\r\n\r\nFor the question of difference results between CPU and GPU, I cannot repeat a case similar to yours.", "@cibinjohn \r\nI have been facing the same issue yesterday while testing on my own database. I tried doubling the max_seq_length value which was before equal to the MAX_LENGTH env variable. It worked for me.\r\n", "If any of you can post a short reproducible example, we can look into this.", "@BramVanroy what I observed is that the prediction list has a list of different sizes which depends upon the length of the row data before every empty line in test.txt which is created after pre-processing using preprocess.py. \r\nSo, I kept MAX_LENGTH as 128 and max_sequence length in the range of 190-256 and it worked.\r\nAccording to me, it might be because of the length of the word_tokens list or maybe because of the length of the row data before every empty line of the test.txt file.\r\n[test (5).txt](https://github.com/huggingface/transformers/files/4255447/test.5.txt)\r\nIf for this text file if you keep max_sequence_length as 128 then it will show Maximum_sequence_length_exceeded for around 400 tokens.", " @BramVanroy \r\n\r\n%%capture\r\n!pip install -qU transformers==2.4\r\n!pip install -qU pytorch-lightning\r\n!git clone --branch fixlight https://github.com/srush/transformers\r\n!pip install -r transformers/examples/requirements.txt\r\n\r\n%%capture\r\n%%bash\r\ncd transformers/examples/ner/\r\nwget \"https://raw.githubusercontent.com/stefan-it/fine-tuned-berts-seq/master/scripts/preprocess.py\"\r\nexport MAX_LENGTH=128\r\nexport BERT_MODEL=bert-base-multilingual-cased\r\npython3 preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt\r\npython3 preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt\r\npython3 preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt\r\ncat train.txt dev.txt test.txt | cut -d \" \" -f 2 | grep -v \"^$\"| sort | uniq > labels.txt\r\n\r\n\r\n!cd transformers/examples/ner/; \\\r\nexport MAX_LENGTH=190; \\\r\nexport BERT_MODEL=bert-base-multilingual-cased; \\\r\nexport OUTPUT_DIR=germeval-model; \\\r\nexport BATCH_SIZE=32; \\\r\nexport NUM_EPOCHS=3; \\\r\nexport SAVE_STEPS=750; \\\r\nexport SEED=42; \\\r\npython3 run_ner.py --data_dir ./ \\\r\n--model_type bert \\\r\n--labels ./labels.txt \\\r\n--model_name_or_path $BERT_MODEL \\\r\n--output_dir $OUTPUT_DIR \\\r\n--max_seq_length $MAX_LENGTH \\\r\n--num_train_epochs $NUM_EPOCHS \\\r\n--per_gpu_train_batch_size $BATCH_SIZE \\\r\n--save_steps $SAVE_STEPS \\\r\n--seed $SEED \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_predict", "I met this question last day ,and i checked all cases but nothing has gone wrong。\r\nso,i made a new dir,then just put ‘train.txt’、“test.txt”、“dev.txt”、“labels.txt” there, start;\r\n\r\nthen,all is ok。\r\n", "This issue has been automatically 
marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
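The thread above converges on input length: the referenced preprocess.py splits long CoNLL sentences so their subword count stays under MAX_LENGTH, and predictions silently disappear when `--max_seq_length` is smaller than that budget. A rough sketch of the splitting idea, assuming a BERT-style tokenizer with two special tokens; this approximates preprocess.py and is not the script itself:

```python
from transformers import AutoTokenizer

def split_long_sentences(lines, model_name="bert-base-multilingual-cased", max_len=128):
    """Yield CoNLL lines, inserting a blank line whenever the running
    subword count of the current sentence would exceed the budget."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    budget = max_len - 2          # leave room for [CLS] and [SEP]
    used = 0
    for line in lines:
        line = line.rstrip()
        if not line:              # blank line = sentence boundary in CoNLL
            used = 0
            yield ""
            continue
        token = line.split()[0]   # first column is the word, second the tag
        n = len(tokenizer.tokenize(token))
        if used + n > budget:     # would overflow -> force a sentence break
            yield ""
            used = 0
        used += n
        yield line
```

If a file prepared with MAX_LENGTH=128 is then evaluated with a smaller `--max_seq_length`, the tail tokens of each block get trimmed, which matches the "Maximum sequence length exceeded" symptom described above.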
1,581
1,588
1,588
NONE
null
I have tried to make predictions over the test data using GPU, but ended up having no predictions for some words. Any help would be appreciated. Following is the shell script used for prediction. export MAX_LENGTH=128 export BERT_MODEL=bert-base-multilingual-cased DATA_DIR=../DATA/after_preprocess OUTPUT_DIR=../CHECKPOINTS/GPU/CONLL_25L LABELS_FILE_25=../DATA/after_preprocess/labels.txt export BATCH_SIZE=32 export NUM_EPOCHS=3 export SAVE_STEPS=750 export SEED=1 cd ../examples python3 -m torch.distributed.launch run_ner.py --data_dir $DATA_DIR/ \ --model_type bert \ --labels $LABELS_FILE_25 \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_gpu_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_predict Following is the error message: 02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - Model name '../CHECKPOINTS/GPU/CONLL_25L' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming '../CHECKPOINTS/GPU/CONLL_25L' is a path, a model identifier, or url to a directory containing tokenizer files. 02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - Didn't find file ../CHECKPOINTS/GPU/CONLL_25L/added_tokens.json. We won't load it. 
02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/vocab.txt 02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file None 02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/special_tokens_map.json 02/17/2020 12:07:51 - INFO - transformers.tokenization_utils - loading file ../CHECKPOINTS/GPU/CONLL_25L/tokenizer_config.json 02/17/2020 12:07:51 - INFO - transformers.configuration_utils - loading configuration file ../CHECKPOINTS/GPU/CONLL_25L/config.json 02/17/2020 12:07:51 - INFO - transformers.configuration_utils - Model config BertConfig { "architectures": [ "BertForTokenClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "directionality": "bidi", "do_sample": false, "eos_token_ids": 0, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_labels": 25, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "pruned_heads": {}, "repetition_penalty": 1.0, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 119547 } 02/17/2020 12:07:51 - INFO - transformers.modeling_utils - loading weights file ../CHECKPOINTS/GPU/CONLL_25L/pytorch_model.bin 02/17/2020 12:07:54 - INFO - __main__ - Creating features from dataset file at ../DATA/after_preprocess/ 02/17/2020 12:07:54 - INFO - utils_ner - Writing example 0 of 5100 02/17/2020 12:07:54 - INFO - utils_ner - *** Example *** 02/17/2020 12:07:54 - INFO - utils_ner - guid: test-1 02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] 1951 bis 1953 wurde der nördlich ##e Teil als Jugend ##burg des Ko ##lp ##ing ##werke ##s gebaut . 
[SEP] 02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 11200 10467 11087 10283 10118 28253 10112 13043 10223 32790 12248 10139 30186 35451 10230 32827 10107 25760 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 24 24 24 24 -100 24 24 24 -100 24 6 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 02/17/2020 12:07:54 - INFO - utils_ner - *** Example *** 02/17/2020 12:07:54 - INFO - utils_ner - guid: test-2 02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Da Mu ##ck das Krieg ##ss ##chreiben nicht über ##bra ##cht hat , wird er als Re ##tter des Landes ausgezeichnet und soll zum Sc ##hat ##zm ##eister ernannt werden . 
[SEP] 02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 11818 49056 11263 10242 20587 13420 82089 10726 10848 13581 11640 11250 117 10790 10163 10223 20304 18413 10139 23244 32149 10130 17375 10580 55260 19180 37661 45940 27093 10615 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 9 -100 24 24 -100 -100 24 24 -100 -100 24 24 24 24 24 24 -100 24 24 24 24 24 24 24 -100 -100 -100 24 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 02/17/2020 12:07:54 - INFO - utils_ner - *** Example *** 02/17/2020 12:07:54 - INFO - utils_ner - guid: test-3 02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Mit 1 . Jänner 2007 wurde Robert Sc ##h ##ör ##gen ##hof ##er , als Nachfolger des aus ##ges ##chie ##dene ##n Dietmar Dr ##abe ##k , in die Kader ##liste der FIFA - Sc ##hie ##ds ##richter aufgenommen . 
[SEP] 02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 12699 122 119 105531 10202 10283 10820 55260 10237 15020 11280 20202 10165 117 10223 27968 10139 10441 13156 50784 49906 10115 102411 11612 40929 10174 117 10106 10128 53361 26719 10118 13707 118 55260 72287 13268 59410 25919 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 -100 24 24 24 9 21 -100 -100 -100 -100 -100 24 24 24 24 24 -100 -100 -100 -100 9 21 -100 -100 24 24 24 24 -100 24 5 -100 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 02/17/2020 12:07:54 - INFO - utils_ner - *** Example *** 02/17/2020 12:07:54 - INFO - utils_ner - guid: test-4 02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] Die These , Sc ##hla ##tter sei Anti ##sem ##it gewesen , wurde seither in der theo ##logischen Fach ##lite ##ratu ##r nicht mehr vertreten . 
[SEP] 02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 10236 13252 117 55260 74935 18413 13868 26267 38443 10486 27044 117 10283 85983 10106 10118 13951 57325 100705 66289 50088 10129 10726 12471 41852 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 24 24 9 -100 -100 24 24 -100 -100 24 24 24 24 24 24 24 -100 24 -100 -100 -100 24 24 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 02/17/2020 12:07:54 - INFO - utils_ner - *** Example *** 02/17/2020 12:07:54 - INFO - utils_ner - guid: test-5 02/17/2020 12:07:54 - INFO - utils_ner - tokens: [CLS] " Le ##hm ##bru ##ck - Be ##uy ##s . Zeichnungen " lautet der Titel der gerade eröffnete ##n Ausstellung , die Kur ##atori ##n Dr . Marion Born ##sche ##uer bis zum 11 . Januar im Le ##hm ##bru ##ck - Museum präsentiert . 
[SEP] 02/17/2020 12:07:54 - INFO - utils_ner - input_ids: 101 107 10281 29389 40309 11263 118 14321 53452 10107 119 96784 107 77566 10118 16076 10118 43234 61469 10115 41972 117 10128 61912 45804 10115 11612 119 27276 18021 12279 19047 10467 10580 10193 119 12468 10211 10281 29389 40309 11263 118 11325 91619 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 02/17/2020 12:07:54 - INFO - utils_ner - label_ids: -100 24 6 -100 -100 -100 18 18 -100 -100 18 -100 24 24 24 24 24 24 24 -100 24 24 24 24 -100 -100 24 24 9 21 -100 -100 24 24 24 -100 24 24 3 -100 -100 -100 -100 -100 24 24 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 02/17/2020 12:07:57 - INFO - __main__ - Saving features into cached file ../DATA/after_preprocess/cached_test_bert-base-multilingual-cased_128 02/17/2020 12:07:58 - INFO - __main__ - ***** Running evaluation ***** 02/17/2020 12:07:58 - INFO - __main__ - Num examples = 5100 02/17/2020 12:07:58 - INFO - __main__ - Batch size = 8 Evaluating: 0%| | 0/638 [00:00<?, ?it/s] Evaluating: 100%|██████████| 638/638 [00:27<00:00, 23.15it/s] 02/17/2020 12:08:27 - INFO - __main__ - ***** Eval results ***** 02/17/2020 12:08:27 - INFO - __main__ - f1 = 0.8600886024969794 02/17/2020 12:08:27 - INFO - __main__ - loss = 0.070130527494103 02/17/2020 12:08:27 - INFO - __main__ - precision = 0.8560205226871893 02/17/2020 12:08:27 - INFO - __main__ - recall = 0.8641955325348009 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'wird'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'er'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'als'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Retter'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Landes'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ausgezeichnet'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'und'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'soll'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'zum'. 
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Schatzmeister'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ernannt'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'werden'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Nachfolger'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'ausgeschiedenen'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Dietmar'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Drabek'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for ','. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'in'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'die'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Kaderliste'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'FIFA-Schiedsrichter'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'aufgenommen'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '"'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'lautet'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Titel'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'gerade'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'eröffneten'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Ausstellung'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for ','. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'die'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Kuratorin'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Dr'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Marion'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Bornscheuer'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'bis'. 
02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'zum'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '11.'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Januar'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'im'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Lehmbruck-Museum'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'präsentiert'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for '.'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'an'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'der'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Südseite'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'des'. 02/17/2020 12:08:27 - WARNING - __main__ - Maximum sequence length exceeded: No prediction for 'Saals'.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2882/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2881/comments
https://api.github.com/repos/huggingface/transformers/issues/2881/events
https://github.com/huggingface/transformers/pull/2881
566,298,612
MDExOlB1bGxSZXF1ZXN0Mzc2MTM2MDA3
2,881
update .gitignore to ignore .swp files created when using vim
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=h1) Report\n> Merging [#2881](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6083c1566e261668a5de73cfe484c171ce232812?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2881/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2881 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15288 15288 \n==========================================\n- Hits 11476 11311 -165 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=footer). Last update [6083c15...fb4d8d0](https://codecov.io/gh/huggingface/transformers/pull/2881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Welcome @patrickvonplaten :)" ]
1,581
1,582
1,581
MEMBER
null
adds one line to .gitignore
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2881", "html_url": "https://github.com/huggingface/transformers/pull/2881", "diff_url": "https://github.com/huggingface/transformers/pull/2881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2881.patch", "merged_at": 1581946609000 }
https://api.github.com/repos/huggingface/transformers/issues/2880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2880/comments
https://api.github.com/repos/huggingface/transformers/issues/2880/events
https://github.com/huggingface/transformers/pull/2880
566,091,985
MDExOlB1bGxSZXF1ZXN0Mzc1OTY5MDEx
2,880
Support for NER with ALBERT using Transformers and Simple Transformers
{ "login": "dwarfer7634", "id": 19330059, "node_id": "MDQ6VXNlcjE5MzMwMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/19330059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwarfer7634", "html_url": "https://github.com/dwarfer7634", "followers_url": "https://api.github.com/users/dwarfer7634/followers", "following_url": "https://api.github.com/users/dwarfer7634/following{/other_user}", "gists_url": "https://api.github.com/users/dwarfer7634/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwarfer7634/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwarfer7634/subscriptions", "organizations_url": "https://api.github.com/users/dwarfer7634/orgs", "repos_url": "https://api.github.com/users/dwarfer7634/repos", "events_url": "https://api.github.com/users/dwarfer7634/events{/privacy}", "received_events_url": "https://api.github.com/users/dwarfer7634/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,581
1,581
1,581
NONE
null
NER was not yet supported for ALBERT, so I added support for it. Mainly, I implemented AlbertForTokenClassification in modeling_albert.py, and made changes to the rest of the code to keep everything consistent with it.
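For context, a token-classification head of this kind usually follows the same pattern as `BertForTokenClassification`: base encoder, dropout, then a per-token linear classifier. The sketch below is a hedged outline of that pattern, not this PR's actual diff; the import path and class name are assumptions for the library as it stood at the time:

```python
# Hypothetical outline of an AlbertForTokenClassification head.
import torch.nn as nn
from transformers import AlbertModel
from transformers.modeling_albert import AlbertPreTrainedModel  # path assumed

class AlbertForTokenClassificationSketch(AlbertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.albert = AlbertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, labels=None):
        outputs = self.albert(input_ids, attention_mask=attention_mask)
        sequence_output = self.dropout(outputs[0])  # (batch, seq_len, hidden)
        logits = self.classifier(sequence_output)   # (batch, seq_len, num_labels)
        if labels is not None:
            # CrossEntropyLoss ignores label index -100 by default, which is
            # what the NER example uses for sub-word continuation tokens.
            loss = nn.CrossEntropyLoss()(
                logits.view(-1, self.num_labels), labels.view(-1)
            )
            return loss, logits
        return (logits,)
```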
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2880", "html_url": "https://github.com/huggingface/transformers/pull/2880", "diff_url": "https://github.com/huggingface/transformers/pull/2880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2880.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2879/comments
https://api.github.com/repos/huggingface/transformers/issues/2879/events
https://github.com/huggingface/transformers/pull/2879
565,996,781
MDExOlB1bGxSZXF1ZXN0Mzc1ODkyNzc5
2,879
[model_cards] 🇹🇷 Add new (cased) BERTurk model
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=h1) Report\n> Merging [#2879](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6083c1566e261668a5de73cfe484c171ce232812?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2879/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2879 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15288 15288 \n=======================================\n Hits 11476 11476 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=footer). Last update [6083c15...d18f775](https://codecov.io/gh/huggingface/transformers/pull/2879?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks good: https://huggingface.co/dbmdz/bert-base-turkish-cased" ]
1,581
1,581
1,581
COLLABORATOR
null
Hi, this PR adds the model card for the (cased) community-driven 🇹🇷 BERTurk model. Uncased model is coming soon!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2879/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2879", "html_url": "https://github.com/huggingface/transformers/pull/2879", "diff_url": "https://github.com/huggingface/transformers/pull/2879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2879.patch", "merged_at": 1581951287000 }
https://api.github.com/repos/huggingface/transformers/issues/2878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2878/comments
https://api.github.com/repos/huggingface/transformers/issues/2878/events
https://github.com/huggingface/transformers/issues/2878
565,960,150
MDU6SXNzdWU1NjU5NjAxNTA=
2,878
FileNotFoundError when python runs setup.py for sentencepiece
{ "login": "DaveXanatos", "id": 26697976, "node_id": "MDQ6VXNlcjI2Njk3OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/26697976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DaveXanatos", "html_url": "https://github.com/DaveXanatos", "followers_url": "https://api.github.com/users/DaveXanatos/followers", "following_url": "https://api.github.com/users/DaveXanatos/following{/other_user}", "gists_url": "https://api.github.com/users/DaveXanatos/gists{/gist_id}", "starred_url": "https://api.github.com/users/DaveXanatos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DaveXanatos/subscriptions", "organizations_url": "https://api.github.com/users/DaveXanatos/orgs", "repos_url": "https://api.github.com/users/DaveXanatos/repos", "events_url": "https://api.github.com/users/DaveXanatos/events{/privacy}", "received_events_url": "https://api.github.com/users/DaveXanatos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds like a [sentencepiece](https://github.com/google/sentencepiece) issue?", "I have an inquiry there as well since just a straight install of sentencepiece gives same results - I was just hoping there might be a way of using the HuggingFace Transformers without sentencepiece (although the name sentencepiece suggests to me that it may perform some critical functions...) something like \"pip install transformers --skip sentencepiece\" ? :)\r\n", "@DaveXanatos Even I faced same issues. Issue seems to be with pip package of sentencepiece. I have opened an issue with them.\r\n\r\nAs a word around, I installed sentence piece from conda and it worked. After installing this you can install transformer.\r\n\r\n`conda install -c powerai sentencepiece`\r\n", "Be wary of using conda and pip at the same time, if you don't know _exactly_ what you are doing, this will lead to unexpected complications.\r\n\r\nSentencePiece is required for most recent models, so it is a hard dependency. I advise you to just wait until this is solved in the sentencepiece library, or download and install an earlier version of their library, e.g. https://github.com/google/sentencepiece/releases/tag/v0.1.84", "@tkhan3 Thanks for the conda possibility, I will look into that in the interim.\r\n\r\n@BramVanroy I have heard this before and a couple of years ago I completely hosed my build doing just this :) Where would you suggest, as the most direct route to understanding exactly the differences between pip installs and conda installs in terms of paths, dependencies, etc., such that I could conda install with confidence a package in an otherwise pip installed environment?", "I'm afraid I cannot help with that. I stay away from conda as much as I can. Pipenv is my main driver, falling back to an environment's pip where necessary.", "@BramVanroy I'm with you on that from my experience, although I know some folks swear by it... thanks for the warning. I'll probably flash a backup image and then go and play with the possibilities and see if I can get it to work... I can always reflash back to the backup if I break everything again. If I have success I'll let you know what I found.", "Closing this. I propose that the discussion is moved to the sentencepiece library. https://github.com/google/sentencepiece/issues/452", "I got a similar issue.\r\nWhen I install sentence-transformers on Linux by python, I got an error message: \r\n\r\nERROR: Could not find a version that satisfies the requirement transformers>=3.0.2 (from sentence-transformers) (fr\r\nom versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1)\r\nERROR: No matching distribution found for transformers>=3.0.2 (from sentence-transformers)\r\n\r\nsystem: Linux 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) x86_64 on GCP VM instance.\r\nIs there any suggestion?\r\n", "I had the same issue but on Unbuntu 20.04.1.\r\nMy problem was that i used a pip version to old to install sentencepiece, as it requires pip>=19.3. (https://github.com/google/sentencepiece/issues/572#issuecomment-716890916)\r\nSo my solution was to upgrade my pip installation to 20.2.4\r\n$ pip install --upgrade pip\r\n\r\nSimilar issues has been discussed here https://github.com/google/sentencepiece/issues/572\r\n\r\nHope it helps" ]
1,581
1,603
1,582
NONE
null
# 🐛 Bug FileNotFoundError when python runs setup.py for sentencepiece I am running Python 3.7, Tensorflow 2.1, Buster Model I am using (Bert, XLNet ...): Would be using gpt-2 if I can install it... Language I am using the model on is English The problem arises when installing using pip install transformers ## To reproduce Steps to reproduce the behavior: Run on a Raspberry Pi 4B (4Gig) running Python 3.7, Tensorflow 2.1 and Buster 1. pip install transformers Wait for the other downloads and installs to complete and this will eventually arise: ``` Collecting sentencepiece (from transformers) Downloading https://files.pythonhosted.org/packages/1b/87/c3c2fa8cbec61fffe031ca9f0da512747520bec9be7f886f748457daac31/sentencepiece-0.1.83.tar.gz (497kB) 100% |████████████████████████████████| 501kB 225kB/s Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-xohx1aio/sentencepiece/setup.py", line 29, in <module> with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f: File "/usr/lib/python3.7/codecs.py", line 898, in open file = builtins.open(filename, mode, buffering) FileNotFoundError: [Errno 2] No such file or directory: '../VERSION' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-xohx1aio/sentencepiece/ ``` SO I TRIED downloading the wheel file from https://github.com/google/sentencepiece/releases for my python version and installing it with pip install sentencepiece-xxx-cpxx-xx.whl However, I see only Mac, x86, and "manylinux" wheels, but the manylinux wheels specifically reference iOS or x86 - nothing I can see for Arm Core 71 (linux_armv7l). Also tried a straight install of sentencepiece with an identical failure: ``` $ pip install sentencepiece Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple Collecting sentencepiece Using cached https://files.pythonhosted.org/packages/1b/87/c3c2fa8cbec61fffe031ca9f0da512747520bec9be7f886f748457daac31/sentencepiece-0.1.83.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-6tdniw95/sentencepiece/setup.py", line 29, in <module> with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f: File "/usr/lib/python3.7/codecs.py", line 898, in open file = builtins.open(filename, mode, buffering) FileNotFoundError: [Errno 2] No such file or directory: '../VERSION' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-6tdniw95/sentencepiece/ ``` ## Expected behavior I was expecting installation to complete successfully. ## Environment info $ python transformers-cli env python: can't open file 'transformers-cli': [Errno 2] No such file or directory This makes sense since transformers never finishes installing - `transformers` version: newest available with pip install - Platform: Raspbian Buster - Python version: 3.7.3 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.1 cpu - Using GPU in script?: - Using distributed or parallel set-up in script?: Thanks for your assistance.
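The later comments converge on two fixes: upgrade pip (sentencepiece's source build needs pip >= 19.3) or avoid the broken 0.1.83 sdist. A hedged Python sketch that strings those suggestions together — the version pin is an assumption distilled from the thread, and none of this is verified on armv7l:

```python
# Hypothetical helper reflecting the fixes suggested in this thread:
# upgrade pip first, then skip the sdist that fails with '../VERSION'.
import subprocess
import sys

def pip(*args):
    """Run `python -m pip ...` in the current interpreter."""
    print(">", sys.executable, "-m", "pip", *args)
    subprocess.check_call([sys.executable, "-m", "pip", *args])

pip("install", "--upgrade", "pip")        # thread: sentencepiece requires pip >= 19.3
pip("install", "sentencepiece!=0.1.83")   # assumption: any other release avoids the bug
pip("install", "transformers")
```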
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2878/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2878/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2877/comments
https://api.github.com/repos/huggingface/transformers/issues/2877/events
https://github.com/huggingface/transformers/issues/2877
565,946,032
MDU6SXNzdWU1NjU5NDYwMzI=
2,877
Error with run_language_modeling.py training from scratch
{ "login": "mod-cpu", "id": 24903033, "node_id": "MDQ6VXNlcjI0OTAzMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/24903033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mod-cpu", "html_url": "https://github.com/mod-cpu", "followers_url": "https://api.github.com/users/mod-cpu/followers", "following_url": "https://api.github.com/users/mod-cpu/following{/other_user}", "gists_url": "https://api.github.com/users/mod-cpu/gists{/gist_id}", "starred_url": "https://api.github.com/users/mod-cpu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mod-cpu/subscriptions", "organizations_url": "https://api.github.com/users/mod-cpu/orgs", "repos_url": "https://api.github.com/users/mod-cpu/repos", "events_url": "https://api.github.com/users/mod-cpu/events{/privacy}", "received_events_url": "https://api.github.com/users/mod-cpu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" } ]
closed
false
null
[]
[ "I ran into this with my own dataset. Following some discussion in #1538 I changed truncation to 256.\r\n\r\n> tokenizer.enable_truncation(max_length=256)\r\n\r\nI also had to make sure that the pad token had index 1, as that seems to be hardcoded in roberta.\r\n\r\nThis appears to work, though the previous error had only appeared fairly deep into training, and reducing context isn't great. So I'm not satisfied by this hack. ", "If you want to see the error clearer, switch it to CPU, then it will print out the real error. I rant into this error in another project, and finally, I found out it is basically `index out of range` error. Fixed it by add some missing words to the vocabulary.txt and resize the model itself.", "> I ran into this with my own dataset. Following some discussion in #1538 I changed truncation to 256.\r\n> \r\n> > tokenizer.enable_truncation(max_length=256)\r\n> \r\n> I also had to make sure that the pad token had index 1, as that seems to be hardcoded in roberta.\r\n> \r\n> This appears to work, though the previous error had only appeared fairly deep into training, and reducing context isn't great. So I'm not satisfied by this hack.\r\n@reidsanders were u able to train language model with ur own dataset eventually?\r\n@binhna have u added missing words inplaceon unk or randomly added them in vocab, ?\r\n", "Hello @reidsanders, @samreenkazi \r\n\r\nI encountered the same error and tried `tokenizer.enable_truncation(max_length=256)` on some BERT models. But it seems that there is no such method:\r\n`'PreTrainedTokenizer' object has no attribute 'enable_truncation'`\r\n\r\nCould you give more details about how you solved the problem?", "I am unable to solve this problem as yet\n\nOn Thu, Mar 5, 2020 at 10:22 PM Jinan Zhou <[email protected]> wrote:\n\n> Hello @reidsanders <https://github.com/reidsanders>, @samreenkazi\n> <https://github.com/samreenkazi>\n>\n> I encountered the same error tried\n> tokenizer.enable_truncation(max_length=256) on some BERT models. But it\n> seems that there is no such method:\n> 'PreTrainedTokenizer' object has no attribute 'enable_truncation'\n>\n> Could you give more details about how you solved the problem?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2877?email_source=notifications&email_token=ALPPGB4ZSN6BT5PXL6QRWWTRF7NU5A5CNFSM4KWFP7FKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEN6ETGY#issuecomment-595347867>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALPPGBYDQHHQTNNOESJOZ33RF7NU5ANCNFSM4KWFP7FA>\n> .\n>\n", "> Hello @reidsanders, @samreenkazi\r\n> \r\n> I encountered the same error and tried `tokenizer.enable_truncation(max_length=256)` on some BERT models. But it seems that there is no such method:\r\n> `'PreTrainedTokenizer' object has no attribute 'enable_truncation'`\r\n> \r\n> Could you give more details about how you solved the problem?\r\n\r\nenable_truncation is not a method in PreTrainedTokenizers (we are training from scratch, not pretrained). I'm using ByteLevelBPETokenizer imported from tokenizers module as in the example blog post (and op).", "Thanks for your comment @reidsanders @samreenkazi \r\n\r\nAccording to my observation, the error is indeed caused by the data samples longer than 512 tokens. A conservative solution is fixing `block_size=512` in `TextDataset` and `LineByLineTextDataset` class. 
Or if you worry about the truncation of data, you can go through `self.examples` in these two classes, check whether each sample is shorter than 512. If not, split it into multiple chunks with length 512. \r\n ", "For me it helped to just specify the argument `--block_size=512` (it's -1 by default)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,589
1,589
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Training from scratch Language I am using the model on (English, Chinese ...): Training from scratch with Esperanto (per tutorial) The problem arises when using: * [ ] the official example scripts: (give details below) run_language_model.py * [x] my own modified scripts: (give details below) Added the class per the tutorial (https://huggingface.co/blog/how-to-train) and call it instead of TextDataset ``` class EsperantoDataset(Dataset): def __init__(self, evaluate: bool = false): tokenizer = ByteLevelBPETokenizer( "./models/EsperBERTo-small/vocab.json", "./models/EsperBERTo-small/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) # or use the RobertaTokenizer from `transformers` directly. self.examples = [] src_files = Path("./data/").glob("*-eval.txt") if evaluate else Path("./data/").glob("*-train.txt") for src_file in src_files: print("🔥", src_file) lines = src_file.read_text(encoding="utf-8").splitlines() self.examples += [x.ids for x in tokenizer.encode_batch(lines)] def __len__(self): return len(self.examples) def __getitem__(self, i): # We’ll pad at the batch level. return torch.tensor(self.examples[i]) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) Training from scratch * [x] my own task or dataset: (give details below) eo_dedup.txt.gz (https://traces1.inria.fr/oscar/) ## To reproduce Steps to reproduce the behavior: 1. file structure: project │ run_language_modeling.py │ └───models │ │ │ └───EsperBERTo-small │ │ merges.txt │ │ vocab.json | └───datasets │ eo-dedup-train.txt 2. ``` python run_language_model.py --output_dir ./models/EsperBERTo-small-v1 --model_type roberta --mlm --tokenizer_name ./models/EsperBERTo-small --do_train --learning_rate 1e-4 --num_train_epochs 5 --save_total_limit 2 --save_steps 2000 --per_gpu_train_batch_size 4 --evaluate_during_training --seed 42 --train_data_file eo-dedup-train.txt ``` Results in this stack trace with CUDA_LAUNCH_BLOCKING=1: ``` /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [53,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [53,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
Traceback (most recent call last): File "test.py", line 832, in <module> main() File "test.py", line 782, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "test.py", line 386, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 243, in forward inputs_embeds=inputs_embeds, File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 799, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 64, in forward input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 193, in forward embeddings = inputs_embeds + position_embeddings + token_type_embeddings RuntimeError: CUDA error: device-side assert triggered ``` ## Expected behavior ## Environment info - `transformers` version: 2.4.1 - Platform: AWS p2.xlarge ubuntu - Python version: 3.6.5 - PyTorch version (GPU?): 1.3.1 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
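A hedged sketch of the fix the comments above converge on: enforce truncation inside the dataset so no example exceeds the model's position limit. The paths mirror the tutorial setup, the two-token margin for `<s>`/`</s>` and the 512 block size are assumptions, and note that the snippet in the report would in any case need Python's `False` rather than `false`:

```python
# Minimal sketch: truncate at the tokenizer level so no example exceeds
# the model's max_position_embeddings (the cause of the device-side assert).
import torch
from torch.utils.data import Dataset
from tokenizers import ByteLevelBPETokenizer

class TruncatedLineDataset(Dataset):
    def __init__(self, path: str, block_size: int = 512):
        tokenizer = ByteLevelBPETokenizer(
            "./models/EsperBERTo-small/vocab.json",
            "./models/EsperBERTo-small/merges.txt",
        )
        # Leave room for the <s>/</s> special tokens added afterwards;
        # the exact margin depends on the model config (assumption here).
        tokenizer.enable_truncation(max_length=block_size - 2)
        with open(path, encoding="utf-8") as f:
            lines = [l for l in f.read().splitlines() if l.strip()]
        self.examples = [enc.ids for enc in tokenizer.encode_batch(lines)]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return torch.tensor(self.examples[i])
```

The equivalent script-level fix, per the thread, is passing `--block_size 512` instead of the default -1.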
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2877/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2877/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2876/comments
https://api.github.com/repos/huggingface/transformers/issues/2876/events
https://github.com/huggingface/transformers/pull/2876
565,912,816
MDExOlB1bGxSZXF1ZXN0Mzc1ODMxMzE5
2,876
Create bert-spanish-cased-finedtuned-ner.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "The file path should be `model_cards/mrm8488/bert-spanish-cased-finedtuned-ner/README.md` @mrm8488 " ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2876/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2876", "html_url": "https://github.com/huggingface/transformers/pull/2876", "diff_url": "https://github.com/huggingface/transformers/pull/2876.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2876.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2875/comments
https://api.github.com/repos/huggingface/transformers/issues/2875/events
https://github.com/huggingface/transformers/pull/2875
565,895,256
MDExOlB1bGxSZXF1ZXN0Mzc1ODE4NTM3
2,875
Update README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=h1) Report\n> Merging [#2875](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73028c5df0c28ca179fbe565482a9c2143787f61?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2875/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2875 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15288 15288 \n=======================================\n Hits 11476 11476 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=footer). Last update [73028c5...cb1cba9](https://codecov.io/gh/huggingface/transformers/pull/2875?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for updating! GIFs on Giphy are a bit over-compressed so you can also host them \r\n- on one of your GitHub repos\r\n- or even directly in the model card's folder (see severinsimmler/literary-german-bert/README.md as an example)", "Also, great results :)", "Thank you!!" ]
1,581
1,581
1,581
CONTRIBUTOR
null
I trained the model for more epochs, which improved the results. This commit updates the model's results and adds a GIF of using it with **transformers/pipelines**.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2875", "html_url": "https://github.com/huggingface/transformers/pull/2875", "diff_url": "https://github.com/huggingface/transformers/pull/2875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2875.patch", "merged_at": 1581865775000 }
https://api.github.com/repos/huggingface/transformers/issues/2874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2874/comments
https://api.github.com/repos/huggingface/transformers/issues/2874/events
https://github.com/huggingface/transformers/issues/2874
565,872,113
MDU6SXNzdWU1NjU4NzIxMTM=
2,874
How to run TFBERT model in disable_eager_execution() mode
{ "login": "JKP0", "id": 48640299, "node_id": "MDQ6VXNlcjQ4NjQwMjk5", "avatar_url": "https://avatars.githubusercontent.com/u/48640299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JKP0", "html_url": "https://github.com/JKP0", "followers_url": "https://api.github.com/users/JKP0/followers", "following_url": "https://api.github.com/users/JKP0/following{/other_user}", "gists_url": "https://api.github.com/users/JKP0/gists{/gist_id}", "starred_url": "https://api.github.com/users/JKP0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JKP0/subscriptions", "organizations_url": "https://api.github.com/users/JKP0/orgs", "repos_url": "https://api.github.com/users/JKP0/repos", "events_url": "https://api.github.com/users/JKP0/events{/privacy}", "received_events_url": "https://api.github.com/users/JKP0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "TFBERT model can only be loaded in tf>2.0 version.And,tf 2.0 onwards,eager_execution() is on by default" ]
1,581
1,582
1,582
NONE
null
How can I run a `TFBERT` model in `disable_eager_execution()` mode? If it is possible, please let me know. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2873/comments
https://api.github.com/repos/huggingface/transformers/issues/2873/events
https://github.com/huggingface/transformers/issues/2873
565,860,022
MDU6SXNzdWU1NjU4NjAwMjI=
2,873
how to get "xlnet-base-cased-pytorch_model.bin" original 'last modified' date?
{ "login": "eyal-orbach", "id": 48019957, "node_id": "MDQ6VXNlcjQ4MDE5OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/48019957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eyal-orbach", "html_url": "https://github.com/eyal-orbach", "followers_url": "https://api.github.com/users/eyal-orbach/followers", "following_url": "https://api.github.com/users/eyal-orbach/following{/other_user}", "gists_url": "https://api.github.com/users/eyal-orbach/gists{/gist_id}", "starred_url": "https://api.github.com/users/eyal-orbach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eyal-orbach/subscriptions", "organizations_url": "https://api.github.com/users/eyal-orbach/orgs", "repos_url": "https://api.github.com/users/eyal-orbach/repos", "events_url": "https://api.github.com/users/eyal-orbach/events{/privacy}", "received_events_url": "https://api.github.com/users/eyal-orbach/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just click on 'List all files in model' and you will see the upload date [1].\r\n\r\n[1] https://huggingface.co/xlnet-base-cased" ]
1,581
1,581
1,581
NONE
null
I'd like to test my fine-tuning on some Wikipedia articles that have not been seen by the model. For that, I can find the creation date of the Wikipedia article, but I'd also like to verify that it is after the 'last modified' date of the model. How can I find out when the model parameters were last modified?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2872/comments
https://api.github.com/repos/huggingface/transformers/issues/2872/events
https://github.com/huggingface/transformers/issues/2872
565,858,800
MDU6SXNzdWU1NjU4NTg4MDA=
2,872
Explanation of the results derived from fine tuning
{ "login": "pentegroom", "id": 60902670, "node_id": "MDQ6VXNlcjYwOTAyNjcw", "avatar_url": "https://avatars.githubusercontent.com/u/60902670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pentegroom", "html_url": "https://github.com/pentegroom", "followers_url": "https://api.github.com/users/pentegroom/followers", "following_url": "https://api.github.com/users/pentegroom/following{/other_user}", "gists_url": "https://api.github.com/users/pentegroom/gists{/gist_id}", "starred_url": "https://api.github.com/users/pentegroom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pentegroom/subscriptions", "organizations_url": "https://api.github.com/users/pentegroom/orgs", "repos_url": "https://api.github.com/users/pentegroom/repos", "events_url": "https://api.github.com/users/pentegroom/events{/privacy}", "received_events_url": "https://api.github.com/users/pentegroom/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@gofimofi You might want to look at https://github.com/jessevig/bertviz which is compatible with transformers", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,589
1,589
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Hi, It would be super nice if you could add a visualization util that shows why the model inferred a particular result, e.g. which words of the "sentencex" made it labelled as positive. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2872/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2871/comments
https://api.github.com/repos/huggingface/transformers/issues/2871/events
https://github.com/huggingface/transformers/issues/2871
565,857,323
MDU6SXNzdWU1NjU4NTczMjM=
2,871
RoBERTa has a token_type layer (just a cosmetic issue)
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "This is something that we're looking at with @LysandreJik and @thomwolf – In the meantime, feel free to open a draft PR.", "As discussed in the other issues, it would be great if a lot of care is taken in maximising the compatibility between a tokenizer and its corresponding model, as I discussed in https://github.com/huggingface/transformers/issues/2702#issuecomment-581480669. In other words, the tokenizer encode methods should only return those values that are accepted by its model's forward method.", "RoBERTa has only one possible token type id (`0`), but the embedding for that still needs to be there. That embedding is added to all word piece embeddings, and that's how it was trained. If you suddenly stop doing that, the model will stop working.\r\n\r\nYou could just add that embedding to all word piece embeddings and store the model under a new name to achieve the same effect. But you can't just take it away.\r\n\r\nIf you take it away, you have to retrain the whole thing.", "In my opinion you dont have to retrain it as the weights of this layer are zero. Have a look at the example below:\r\n```from transformers.modeling_roberta import RobertaForSequenceClassification\r\nmodel = RobertaForSequenceClassification.from_pretrained('roberta-base')\r\nprint(model.state_dict()['roberta.embeddings.token_type_embeddings.weight'])\r\n##Output truncated by me:\r\n##tensor([[0., 0., 0., 0., 0., .....0., 0., 0., 0.]])\r\n```\r\nSo what happens when we remove this layer:\r\n```\r\n##Defining our own roberta class without the token_type_embeddings layer\r\nfrom torch import nn\r\nfrom transformers.modeling_bert import BertLayerNorm, BertModel, BertPreTrainedModel\r\nfrom transformers.modeling_roberta import RobertaClassificationHead, RobertaConfig, ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP, create_position_ids_from_input_ids, CrossEntropyLoss\r\nclass MyBertEmbeddings(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)\r\n self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)\r\n #self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)\r\n\r\n # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load\r\n # any TensorFlow checkpoint file\r\n self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n\r\n def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):\r\n if input_ids is not None:\r\n input_shape = input_ids.size()\r\n else:\r\n input_shape = inputs_embeds.size()[:-1]\r\n\r\n seq_length = input_shape[1]\r\n device = input_ids.device if input_ids is not None else inputs_embeds.device\r\n if position_ids is None:\r\n position_ids = torch.arange(seq_length, dtype=torch.long, device=device)\r\n position_ids = position_ids.unsqueeze(0).expand(input_shape)\r\n if token_type_ids is None:\r\n token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)\r\n\r\n if inputs_embeds is None:\r\n inputs_embeds = self.word_embeddings(input_ids)\r\n position_embeddings = self.position_embeddings(position_ids)\r\n #token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n\r\n embeddings = inputs_embeds + position_embeddings #+ token_type_embeddings\r\n embeddings = self.LayerNorm(embeddings)\r\n embeddings = self.dropout(embeddings)\r\n return 
embeddings\r\n\r\nclass MyRobertaEmbeddings(MyBertEmbeddings):\r\n\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.padding_idx = 1\r\n self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)\r\n self.position_embeddings = nn.Embedding(\r\n config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx\r\n )\r\n\r\n def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):\r\n if position_ids is None:\r\n if input_ids is not None:\r\n # Create the position ids from the input token ids. Any padded tokens remain padded.\r\n position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)\r\n else:\r\n position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)\r\n\r\n return super().forward(\r\n input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds\r\n )\r\n\r\n def create_position_ids_from_inputs_embeds(self, inputs_embeds):\r\n input_shape = inputs_embeds.size()[:-1]\r\n sequence_length = input_shape[1]\r\n\r\n position_ids = torch.arange(\r\n self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device\r\n )\r\n return position_ids.unsqueeze(0).expand(input_shape)\r\n\r\nclass MyRobertaModel(BertModel):\r\n config_class = RobertaConfig\r\n pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP\r\n base_model_prefix = \"roberta\"\r\n\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n self.embeddings = MyRobertaEmbeddings(config)\r\n self.init_weights()\r\n\r\n def get_input_embeddings(self):\r\n return self.embeddings.word_embeddings\r\n\r\n def set_input_embeddings(self, value):\r\n self.embeddings.word_embeddings = value\r\n\r\nclass MyRobertaForSequenceClassification(BertPreTrainedModel):\r\n config_class = RobertaConfig\r\n pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP\r\n base_model_prefix = \"roberta\"\r\n\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.num_labels = config.num_labels\r\n\r\n self.roberta = MyRobertaModel(config)\r\n self.classifier = RobertaClassificationHead(config)\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n ):\r\n outputs = self.roberta(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n )\r\n sequence_output = outputs[0]\r\n logits = self.classifier(sequence_output)\r\n\r\n outputs = (logits,) + outputs[2:]\r\n if labels is not None:\r\n if self.num_labels == 1:\r\n # We are doing regression\r\n loss_fct = MSELoss()\r\n loss = loss_fct(logits.view(-1), labels.view(-1))\r\n else:\r\n loss_fct = CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n outputs = (loss,) + outputs\r\n\r\n return outputs # (loss), logits, (hidden_states), (attentions)\r\n```\r\n\r\n```\r\nmymodel = MyRobertaForSequenceClassification.from_pretrained('roberta-base')\r\n\r\n##We need to set the weights of the randomly initialized layers to the same values\r\nimport torch\r\nfor name, param in mymodel.named_parameters():\r\n if not(torch.all(param.data.eq(model.state_dict()[name]))):\r\n print('{} is not identical'.format(name))\r\n param.data = 
model.state_dict()[name]\r\n##Output:\r\n##classifier.dense.weight is not identical\r\n##classifier.dense.bias is not identical\r\n##classifier.out_proj.weight is not identical\r\n##classifier.out_proj.bias is not identical\r\n```\r\nNow we can compare mymodel with model:\r\n```\r\nfrom transformers import RobertaTokenizer\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\n\r\nfor m in [mymodel, model]:\r\n input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\n labels = torch.tensor([1]).unsqueeze(0) # Batch size 1\r\n outputs = m(input_ids, labels=labels)\r\n print(outputs)\r\n##(tensor(0.8658, grad_fn=<NllLossBackward>), tensor([[ 0.0927, -0.2271]], grad_fn=<AddmmBackward>))\r\n##(tensor(0.8658, grad_fn=<NllLossBackward>), tensor([[ 0.0927, -0.2271]], grad_fn=<AddmmBackward>))\r\n```\r\nand see that the output is the same.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,592
1,592
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): RoBERTa Language I am using the model on (English, Chinese ...): Language-independent The cosmetic problem: The fairseq RoBERTa doesn't have a token_type layer: ``` TransformerSentenceEncoder( (embed_tokens): Embedding(50265, 768, padding_idx=1) (embed_positions): LearnedPositionalEmbedding(514, 768, padding_idx=1) ``` The huggingface implementation of RoBERTa accepts token type ids because RobertaModel inherits from BertModel and the layer is inherited by RobertaEmbeddings from BertEmbeddings: ``` RobertaEmbeddings( (word_embeddings): Embedding(50265, 768, padding_idx=1) (position_embeddings): Embedding(514, 768, padding_idx=1) (token_type_embeddings): Embedding(1, 768) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ``` @julien-c wrote in #2702: > On the fact that our RoBERTa implem(tation) takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts? ## Expected behavior I think the huggingface models should be as close to the originals as possible, and therefore RoBERTa should not have a token_type_embeddings layer and should not accept token_type_ids. I know this is just a cosmetic issue, but I think it causes some confusion. I would like to use this issue to collect some opinions. If there are no other opinions, I would like to work on this. This also affects #2727
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2871/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2870/comments
https://api.github.com/repos/huggingface/transformers/issues/2870/events
https://github.com/huggingface/transformers/pull/2870
565,848,089
MDExOlB1bGxSZXF1ZXN0Mzc1Nzg0ODUx
2,870
distilberttokenizer.encode_plus() token_type_ids are non-default
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=h1) Report\n> Merging [#2870](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73028c5df0c28ca179fbe565482a9c2143787f61?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2870/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2870 +/- ##\n==========================================\n+ Coverage 75.06% 75.06% +<.01% \n==========================================\n Files 94 94 \n Lines 15288 15290 +2 \n==========================================\n+ Hits 11476 11478 +2 \n Misses 3812 3812\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=footer). Last update [73028c5...db509eb](https://codecov.io/gh/huggingface/transformers/pull/2870?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "obsolet since 2.6 release" ]
1,581
1,586
1,586
CONTRIBUTOR
null
DistilBert doesn't use token_type_ids. Therefore the encode_plus() method of the DistilBertTokenizer should not generate them by default. This fix sets the default value of return_token_type_ids to False. Closes #2702
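For reference, a minimal usage sketch of the behavior this fix targets; the checkpoint name and sentence are placeholders:

```python
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")

# With the fix, token_type_ids are no longer returned by default, since
# DistilBertModel.forward() does not accept them.
encoded = tokenizer.encode_plus("Hello, my dog is cute")
assert "token_type_ids" not in encoded

# They can still be requested explicitly when needed:
encoded_with_types = tokenizer.encode_plus(
    "Hello, my dog is cute", return_token_type_ids=True
)
```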
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2870", "html_url": "https://github.com/huggingface/transformers/pull/2870", "diff_url": "https://github.com/huggingface/transformers/pull/2870.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2870.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2869/comments
https://api.github.com/repos/huggingface/transformers/issues/2869/events
https://github.com/huggingface/transformers/issues/2869
565,847,977
MDU6SXNzdWU1NjU4NDc5Nzc=
2,869
ValueError: too many dimensions 'str'
{ "login": "lenyabloko", "id": 55606, "node_id": "MDQ6VXNlcjU1NjA2", "avatar_url": "https://avatars.githubusercontent.com/u/55606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lenyabloko", "html_url": "https://github.com/lenyabloko", "followers_url": "https://api.github.com/users/lenyabloko/followers", "following_url": "https://api.github.com/users/lenyabloko/following{/other_user}", "gists_url": "https://api.github.com/users/lenyabloko/gists{/gist_id}", "starred_url": "https://api.github.com/users/lenyabloko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lenyabloko/subscriptions", "organizations_url": "https://api.github.com/users/lenyabloko/orgs", "repos_url": "https://api.github.com/users/lenyabloko/repos", "events_url": "https://api.github.com/users/lenyabloko/events{/privacy}", "received_events_url": "https://api.github.com/users/lenyabloko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I suggest to close this topic and keep the discussion over at https://github.com/ThilinaRajapakse/simpletransformers/issues/229." ]
1,581
1,582
1,582
NONE
null
# 🐛 Bug **To Reproduce** Steps to reproduce the behavior: Here is my Colab Notebook you can run to see the error https://colab.research.google.com/drive/1ESyf46RNBvrg-7DDQ5l8zhlKZjWGdqUv#scrollTo=MqlsdjFVMmMZ ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-9-0b9dcdf94c77> in <module>() 71 72 # Train the model ---> 73 model.train_model(train_df) 74 75 # Evaluate the model 1 frames /usr/local/lib/python3.6/dist-packages/simpletransformers/classification/classification_model.py in train_model(self, train_df, multi_label, output_dir, show_running_loss, args, eval_df, verbose, **kwargs) 261 ] 262 --> 263 train_dataset = self.load_and_cache_examples(train_examples, verbose=verbose) 264 265 os.makedirs(output_dir, exist_ok=True) /usr/local/lib/python3.6/dist-packages/simpletransformers/classification/classification_model.py in load_and_cache_examples(self, examples, evaluate, no_cache, multi_label, verbose, silent) 757 758 if output_mode == "classification": --> 759 all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long) 760 elif output_mode == "regression": 761 all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.float) ValueError: too many dimensions 'str' ``` The problem arises when using ``` from simpletransformers.classification import ClassificationModel import pandas as pd prefix = '/content/' train_df = pd.read_csv(prefix + 'train.csv', header=None) train_df=train_df.drop(index=0) model = ClassificationModel('roberta', 'roberta-base') model.train_model(train_df) ```
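The `too many dimensions 'str'` error appears when `torch.tensor(..., dtype=torch.long)` receives string labels. A hedged workaround, assuming the labels sit in the second column of the two-column dataframe that `ClassificationModel` expects:

```python
import pandas as pd

train_df = pd.read_csv("train.csv", header=None)
train_df = train_df.drop(index=0)  # drop the stray header row read as data

# Column 0 holds the text, column 1 the label. Cast labels from str to int
# so the label tensor can be built with dtype=torch.long.
train_df[1] = train_df[1].astype(int)
```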
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2869/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2868/comments
https://api.github.com/repos/huggingface/transformers/issues/2868/events
https://github.com/huggingface/transformers/issues/2868
565,824,248
MDU6SXNzdWU1NjU4MjQyNDg=
2,868
How can I run NER on ALBERT?
{ "login": "xf05888", "id": 33285394, "node_id": "MDQ6VXNlcjMzMjg1Mzk0", "avatar_url": "https://avatars.githubusercontent.com/u/33285394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xf05888", "html_url": "https://github.com/xf05888", "followers_url": "https://api.github.com/users/xf05888/followers", "following_url": "https://api.github.com/users/xf05888/following{/other_user}", "gists_url": "https://api.github.com/users/xf05888/gists{/gist_id}", "starred_url": "https://api.github.com/users/xf05888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xf05888/subscriptions", "organizations_url": "https://api.github.com/users/xf05888/orgs", "repos_url": "https://api.github.com/users/xf05888/repos", "events_url": "https://api.github.com/users/xf05888/events{/privacy}", "received_events_url": "https://api.github.com/users/xf05888/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834060867, "node_id": "MDU6TGFiZWwxODM0MDYwODY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition", "name": "Ex: Named Entity Recognition", "color": "06FFD8", "default": false, "description": "" } ]
closed
false
null
[]
[ "+1 \r\nSame for [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)?", "What do you think about moving the examples to `AutoModels` (in this case `AutoModelForTokenClassification`) @srush @LysandreJik @julien-c ?", "@thomwolf Indeed, that would be nice.", "Yup, sounds good to me (it will make things much simpler). ", "Yes, makes sense", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,589
1,589
NONE
null
I want to run NER on ALBERT, so I checked [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py), but it seems there is no ALBERT support. So can I simply import `AlbertTokenizer`, `AlbertForTokenClassification` and `AlbertConfig` in the script and add them to `MODEL_CLASSES` and `ALL_MODELS` (or do I need any other config)?
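Roughly, yes. A hedged sketch of what the change to run_ner.py could look like; the exact tuple layout should mirror the script's existing entries:

```python
from transformers import AlbertConfig, AlbertForTokenClassification, AlbertTokenizer

MODEL_CLASSES = {
    # ... existing entries such as "bert", "roberta", "distilbert" ...
    "albert": (AlbertConfig, AlbertForTokenClassification, AlbertTokenizer),
}
```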
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2868/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2868/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2867/comments
https://api.github.com/repos/huggingface/transformers/issues/2867/events
https://github.com/huggingface/transformers/issues/2867
565,781,926
MDU6SXNzdWU1NjU3ODE5MjY=
2,867
from_pretrained making internet connection if internet turned on
{ "login": "Swarzkopf314", "id": 9811899, "node_id": "MDQ6VXNlcjk4MTE4OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/9811899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Swarzkopf314", "html_url": "https://github.com/Swarzkopf314", "followers_url": "https://api.github.com/users/Swarzkopf314/followers", "following_url": "https://api.github.com/users/Swarzkopf314/following{/other_user}", "gists_url": "https://api.github.com/users/Swarzkopf314/gists{/gist_id}", "starred_url": "https://api.github.com/users/Swarzkopf314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Swarzkopf314/subscriptions", "organizations_url": "https://api.github.com/users/Swarzkopf314/orgs", "repos_url": "https://api.github.com/users/Swarzkopf314/repos", "events_url": "https://api.github.com/users/Swarzkopf314/events{/privacy}", "received_events_url": "https://api.github.com/users/Swarzkopf314/received_events", "type": "User", "site_admin": false }
[ { "id": 1834052129, "node_id": "MDU6TGFiZWwxODM0MDUyMTI5", "url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature", "name": "High-Level feature", "color": "f7c9a3", "default": false, "description": "" } ]
closed
false
null
[]
[ "I might be mistaken, but it seems that `s3_etag` verifies that the etag of a cached (downloaded) file is the same as the one that is in the S3 bucket, to ensure that you have the right files (in terms of versions, or corruption). If those files are not in the cached folder, they are downloaded.\r\n\r\nSee \r\n\r\nhttps://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L330-L336", "Is there any way to turn that off? ", "Not as far as I can see. What is your use-case? Why do you need this?", "A use case where validating against external servers is not ideal is if the network is behind a firewall and/or is a containerized microservice, and you want to avoid pinging outside the firewall as much as possible.\r\n\r\nI would appreciate a config flag that disables all external pinging.", "It's not comfortable for development - I'm doing many tests with the pretrained model and it's pretty annoying as it slows down my experiments considerably. I quess I could just save and load the model myself but I was curious why `from_pretrained` takes so long.", "I think it should be possible by skipping this block (and setting `etag=None`)\r\n\r\nhttps://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L399-L409\r\n\r\nwhich will then fallback to\r\n\r\nhttps://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/file_utils.py#L418-L430\r\n\r\nA flag should be added to the signature, something like: `disable_outgoing=False`. When `True`, it will skip the lookup and possible download.\r\n\r\nI might be able to work on this in the future, but it's not high on my priority list.\r\n\r\nOpinions? @minimaxir @Swarzkopf314 ", "Yeah that would be great :)", "@Swarzkopf314 Can you tell me how you made the graphs in OP? 
(some library, I presume) So I can use them for testing.", "I made a wrapper for `pyinstrument`, feel free to use it:\r\n\r\n```python\r\nimport pyinstrument\r\n\r\n# with TreeProfiler(show_all=True):\r\n# # code to profie...\r\nclass TreeProfiler(object):\r\n\r\n def __init__(self, show_all=False):\r\n self.profiler = pyinstrument.Profiler()\r\n self.show_all = show_all # verbose output of pyinstrument profiler\r\n\r\n def __enter__(self):\r\n print(\"WITH TREE_PROFILER:\")\r\n self.profiler.start()\r\n\r\n def __exit__(self, *args):\r\n self.profiler.stop()\r\n print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))\r\n\r\n```", "You can try out my PR https://github.com/huggingface/transformers/pull/2930 if you want.\r\n\r\n```python\r\nimport pyinstrument\r\nfrom transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizer\r\n\r\n\r\nclass TreeProfiler():\r\n def __init__(self, show_all=False):\r\n self.profiler = pyinstrument.Profiler()\r\n self.show_all = show_all # verbose output of pyinstrument profiler\r\n\r\n def __enter__(self):\r\n print(\"WITH TREE_PROFILER:\")\r\n self.profiler.start()\r\n\r\n def __exit__(self, *args):\r\n self.profiler.stop()\r\n print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))\r\n\r\n\r\ndef main():\r\n with TreeProfiler(show_all=True):\r\n config = DistilBertConfig.from_pretrained('distilbert-base-uncased', disable_outgoing=True)\r\n model = DistilBertModel.from_pretrained('distilbert-base-uncased', disable_outgoing=True)\r\n tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', disable_outgoing=True)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nThe above snippet will throw an error message when the expected files are not present in the cache. When they are, though, everything is loaded fine without the need of any additional lookups.", "Amazing, thanks a lot! <3", "No problem. Note that I have not written tests for this functionality yet. I don't think it should break the library, but if you do find some inconsistencies, please let me know.", "Excellent! :D", "Note that the parameter name has been changed to `local_files_only`.", "Note that in practice, I find some parameter \"local_files_first\" which will resolve this issue even further. As named, it will first check if the model is cached. If not, it will make internet connection and download that model. I find this useful for production and testing, thus might write some pull requests for this new feature." ]
1,581
1,671
1,582
NONE
null
I'd like to ask why model.from_pretrained makes an SSL connection even though I provide cache_dir? If I turn off the internet, everything works just fine. ``` │ └─ 0.726 from_pretrained transformers/tokenization_utils.py:256 │ └─ 0.726 _from_pretrained transformers/tokenization_utils.py:311 │ ├─ 0.570 cached_path transformers/file_utils.py:205 │ │ └─ 0.570 get_from_cache transformers/file_utils.py:333 │ │ └─ 0.570 head requests/api.py:91 │ │ └─ 0.570 request requests/api.py:16 │ │ └─ 0.565 request requests/sessions.py:466 │ │ └─ 0.558 send requests/sessions.py:617 │ │ └─ 0.558 send requests/adapters.py:394 │ │ └─ 0.557 urlopen urllib3/connectionpool.py:494 │ │ └─ 0.557 _make_request urllib3/connectionpool.py:351 │ │ ├─ 0.413 _validate_conn urllib3/connectionpool.py:986 │ │ │ └─ 0.413 connect urllib3/connection.py:298 │ │ │ ├─ 0.281 ssl_wrap_socket urllib3/util/ssl_.py:296 │ │ │ │ ├─ 0.263 wrap_socket ssl.py:410 │ │ │ │ │ └─ 0.263 _create ssl.py:813 │ │ │ │ │ └─ 0.263 do_handshake ssl.py:1132 │ │ │ │ └─ 0.018 [self] │ │ │ └─ 0.132 _new_conn urllib3/connection.py:143 │ │ │ └─ 0.132 create_connection urllib3/util/connection.py:33 │ │ │ └─ 0.130 [self] │ │ └─ 0.144 getresponse http/client.py:1300 │ │ └─ 0.144 begin http/client.py:299 │ │ └─ 0.144 _read_status http/client.py:266 │ │ └─ 0.144 readinto socket.py:575 │ │ └─ 0.144 recv_into ssl.py:1060 │ │ └─ 0.144 read ssl.py:920 ``` and here's the output with the internet turned off ``` └─ 0.358 from_pretrained transformers/tokenization_utils.py:256 │ └─ 0.358 _from_pretrained transformers/tokenization_utils.py:311 │ ├─ 0.255 __init__ transformers/tokenization_bert.py:138 │ │ ├─ 0.163 load_vocab transformers/tokenization_bert.py:98 │ │ │ └─ 0.160 [self] │ │ ├─ 0.056 <listcomp> transformers/tokenization_bert.py:186 │ │ └─ 0.036 [self] │ └─ 0.102 cached_path transformers/file_utils.py:205 │ └─ 0.101 get_from_cache transformers/file_utils.py:333 │ ├─ 0.083 head requests/api.py:91 │ │ └─ 0.083 request requests/api.py:16 │ │ └─ 0.080 request requests/sessions.py:466 │ │ ├─ 0.066 send requests/sessions.py:617 │ │ │ └─ 0.066 send requests/adapters.py:394 │ │ │ ├─ 0.046 urlopen urllib3/connectionpool.py:494 │ │ │ │ ├─ 0.035 _make_request urllib3/connectionpool.py:351 │ │ │ │ │ └─ 0.035 _validate_conn urllib3/connectionpool.py:986 │ │ │ │ │ └─ 0.035 connect urllib3/connection.py:298 │ │ │ │ │ └─ 0.035 _new_conn urllib3/connection.py:143 │ │ │ │ │ ├─ 0.015 create_connection urllib3/util/connection.py:33 │ │ │ │ │ │ └─ 0.014 getaddrinfo socket.py:735 │ │ │ │ │ ├─ 0.012 [self] │ │ │ │ │ └─ 0.008 __init__ urllib3/exceptions.py:20 │ │ │ │ └─ 0.006 increment urllib3/util/retry.py:355 │ │ │ ├─ 0.008 [self] │ │ │ └─ 0.008 __init__ requests/exceptions.py:17 │ │ ├─ 0.006 merge_environment_settings requests/sessions.py:690 │ │ │ └─ 0.005 get_environ_proxies requests/utils.py:755 │ │ └─ 0.006 [self] │ └─ 0.014 filter fnmatch.py:48 │ └─ 0.009 _compile_pattern fnmatch.py:38 │ └─ 0.005 compile re.py:232 │ └─ 0.005 _compile re.py:271 │ └─ 0.005 compile sre_compile.py:759 ```
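As the discussion above notes, a flag for this was later added, eventually named `local_files_only`. A minimal sketch of its use:

```python
from transformers import BertModel, BertTokenizer

# With local_files_only=True, from_pretrained() skips the etag HEAD request
# and loads straight from the cache, raising if the files are missing.
model = BertModel.from_pretrained("bert-base-uncased", local_files_only=True)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
```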
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2867/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2866/comments
https://api.github.com/repos/huggingface/transformers/issues/2866/events
https://github.com/huggingface/transformers/issues/2866
565,605,760
MDU6SXNzdWU1NjU2MDU3NjA=
2,866
How to get the matrix that is used to combine output from multiple number of attention heads?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,588
1,588
NONE
null
Hello, if I am understanding transformers correctly, right before the feedforward layer, the outputs of the individual attention heads are concatenated and multiplied by a matrix **H**, so that the outputs from the multiple attention heads are combined into one output, which then becomes the input to the subsequent feedforward block within the same layer. Is there any way that I can retrieve the matrix **H** from the Hugging Face GPT2 model? Thank you,
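In the Hugging Face GPT-2 implementation, the projection applied to the concatenated head outputs is the `c_proj` module of each block's attention layer, so its weight plays the role of **H**. A short sketch (layer 0 chosen arbitrarily):

```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")

# c_proj is the Conv1D that mixes the concatenated attention-head outputs
# back into the hidden size.
H = model.h[0].attn.c_proj.weight
print(H.shape)  # torch.Size([768, 768]) for the base model
```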
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2866/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2865/comments
https://api.github.com/repos/huggingface/transformers/issues/2865/events
https://github.com/huggingface/transformers/issues/2865
565,535,048
MDU6SXNzdWU1NjU1MzUwNDg=
2,865
UserWarning: The number of elements in the out tensor of shape [1] is 1
{ "login": "lenyabloko", "id": 55606, "node_id": "MDQ6VXNlcjU1NjA2", "avatar_url": "https://avatars.githubusercontent.com/u/55606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lenyabloko", "html_url": "https://github.com/lenyabloko", "followers_url": "https://api.github.com/users/lenyabloko/followers", "following_url": "https://api.github.com/users/lenyabloko/following{/other_user}", "gists_url": "https://api.github.com/users/lenyabloko/gists{/gist_id}", "starred_url": "https://api.github.com/users/lenyabloko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lenyabloko/subscriptions", "organizations_url": "https://api.github.com/users/lenyabloko/orgs", "repos_url": "https://api.github.com/users/lenyabloko/repos", "events_url": "https://api.github.com/users/lenyabloko/events{/privacy}", "received_events_url": "https://api.github.com/users/lenyabloko/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" } ]
closed
false
null
[]
[ "It is likely that your SO question was downvoted because it is a lot of unreproducible code, and not a lot of explanation. In other words: when someone reads your qusetion, it is almost impossible to answer because we cannot try your code ourselves. Try reducing it to a minimal, verifiable, executable example.\r\n\r\nThat being said: you are mixing conda and pip installations, which is a drag. Also you don't need to install pytorch-transformers AND transformers. The latter is the successor to the former, so you should only install one or the other (preferably only transformers), and fix your imports accordingly. Just install everything with pip, is my advice.", "Thanks for your answer. I am following your suggestions. However when I replace\r\n```\r\nfrom pytorch_transformers import AdamW, WarmupLinearSchedule\r\n```\r\nwith \r\n\r\n```\r\nfrom transformers import AdamW, WarmupLinearSchedule\r\n\r\n```\r\n\r\nI get this error\r\n\r\n```\r\nImportError Traceback (most recent call last)\r\n<ipython-input-7-fc8519a4dbdc> in <module>()\r\n 19 RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer)\r\n 20 \r\n---> 21 from transformers import AdamW, WarmupLinearSchedule\r\n 22 \r\n 23 from utils import (convert_examples_to_features,output_modes, processors)\r\n\r\nImportError: cannot import name 'WarmupLinearSchedule'\r\n```\r\nCan you help me out?\r\nThanks\r\n", "You are probably looking for\r\n\r\nhttps://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/optimization.py#L47-L59", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I am using HuggingFace pytorch-transformers and one of my pre-trained models refuse to fine tune giving me those UserWarnings for every torch.utils.data.DataLoader call. I have described the details in https://stackoverflow.com/questions/60218634/userwarning-the-number-of-elements-in-the-out-tensor-of-shape-1-is-1 Here is my Notebook so you can run and see the results: https://colab.research.google.com/drive/1mq9RZ_BX1O5vgxCM0CvPzAm9YVKnq4DQ But someone downgraded my question for some reason. What am I missing? Thanks for your help! <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60218634/userwarning-the-number-of-elements-in-the-out-tensor-of-shape-1-is-1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2865/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2864/comments
https://api.github.com/repos/huggingface/transformers/issues/2864/events
https://github.com/huggingface/transformers/pull/2864
565,491,325
MDExOlB1bGxSZXF1ZXN0Mzc1NTE5Mjk1
2,864
Update model card: new performance chart
{ "login": "Timoeller", "id": 3264870, "node_id": "MDQ6VXNlcjMyNjQ4NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Timoeller", "html_url": "https://github.com/Timoeller", "followers_url": "https://api.github.com/users/Timoeller/followers", "following_url": "https://api.github.com/users/Timoeller/following{/other_user}", "gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}", "starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions", "organizations_url": "https://api.github.com/users/Timoeller/orgs", "repos_url": "https://api.github.com/users/Timoeller/repos", "events_url": "https://api.github.com/users/Timoeller/events{/privacy}", "received_events_url": "https://api.github.com/users/Timoeller/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "looks good!", "On fire! :D", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=h1) Report\n> Merging [#2864](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92e974196fc35eb826f64808ae82d20c4380e3eb?src=pr&el=desc) will **increase** coverage by `1.1%`.\n> The diff coverage is `90.9%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2864/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2864 +/- ##\n=========================================\n+ Coverage 73.95% 75.06% +1.1% \n=========================================\n Files 93 94 +1 \n Lines 15272 15288 +16 \n=========================================\n+ Hits 11295 11476 +181 \n+ Misses 3977 3812 -165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <ø> (ø)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <100%> (+2.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <100%> (-0.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <100%> (-0.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <100%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.26% <100%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <100%> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <100%> (+0.41%)` | :arrow_up: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/2864/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=footer). Last update [92e9741...a2925e9](https://codecov.io/gh/huggingface/transformers/pull/2864?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
We found a bug in our German conll03 data and fixed it. See deepset-ai/FARM#235 We reran the eval scripts on the new data and updated our charts accordingly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2864/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2864", "html_url": "https://github.com/huggingface/transformers/pull/2864", "diff_url": "https://github.com/huggingface/transformers/pull/2864.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2864.patch", "merged_at": 1581705564000 }
https://api.github.com/repos/huggingface/transformers/issues/2863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2863/comments
https://api.github.com/repos/huggingface/transformers/issues/2863/events
https://github.com/huggingface/transformers/issues/2863
565,435,647
MDU6SXNzdWU1NjU0MzU2NDc=
2,863
What does the variable 'present' represent?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,581
1,581
1,581
NONE
null
Hello, does the variable 'present' shown in [this](https://github.com/huggingface/transformers/blob/4e69104a1fba717026d6909d06288788e684c749/src/transformers/modeling_gpt2.py#L187) line of the Hugging Face GPT-2 code represent the final output of a single attention head? (i.e. **not** the final output of the _output head_, but the final output of the individual _attention head_, which is placed right before the feedforward block of the same layer). If yes, is there any way that I can retrieve the value of the variable 'present'? Would it be possible for Hugging Face to make the value available to everyone? Thank you,
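For context: in that implementation, `present` is the layer's cached key/value tensors (used to speed up incremental decoding), not the per-head attention output. The cache is already returned by the model; a minimal sketch, assuming a version where the past states are the second element of the output tuple:

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

input_ids = torch.tensor([tokenizer.encode("Hello world")])
hidden_states, presents = model(input_ids)[:2]

# One tensor per layer, stacking the cached keys and values:
# shape (2, batch, n_head, seq_len, head_dim) in this version.
print(len(presents), presents[0].shape)
```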
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2863/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2862/comments
https://api.github.com/repos/huggingface/transformers/issues/2862/events
https://github.com/huggingface/transformers/issues/2862
565,431,349
MDU6SXNzdWU1NjU0MzEzNDk=
2,862
PreTrainedTokenizer returns potentially incorrect attention mask
{ "login": "ab-10", "id": 12305910, "node_id": "MDQ6VXNlcjEyMzA1OTEw", "avatar_url": "https://avatars.githubusercontent.com/u/12305910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ab-10", "html_url": "https://github.com/ab-10", "followers_url": "https://api.github.com/users/ab-10/followers", "following_url": "https://api.github.com/users/ab-10/following{/other_user}", "gists_url": "https://api.github.com/users/ab-10/gists{/gist_id}", "starred_url": "https://api.github.com/users/ab-10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ab-10/subscriptions", "organizations_url": "https://api.github.com/users/ab-10/orgs", "repos_url": "https://api.github.com/users/ab-10/repos", "events_url": "https://api.github.com/users/ab-10/events{/privacy}", "received_events_url": "https://api.github.com/users/ab-10/received_events", "type": "User", "site_admin": false }
[ { "id": 1260952223, "node_id": "MDU6TGFiZWwxMjYwOTUyMjIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion", "name": "Discussion", "color": "22870e", "default": false, "description": "Discussion on a topic (keep it focused or open a new issue though)" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
# 🐛 Bug ## Information When deriving the `attention_mask`, `PreTrainedTokenizer` assumes in `prepare_for_model` that the input hasn't been padded beforehand; this assumption can be false. For example, one may precompute padded token ids for sentences separately and then use `BertTokenizer.encode_plus` to join them. I'm submitting this issue to find out whether this assumption has been made on purpose; if it hasn't, I can easily submit a PR fixing it. In `PreTrainedTokenizer`, the `attention_mask` is obtained in two places: - line `1175`: `encoded_inputs["attention_mask"] = [0] * difference + [1] * len(encoded_inputs["input_ids"])` - line `1188`: `encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"])`. I suggest that instead of making this assumption, the attention mask be derived as: `encoded_inputs["attention_mask"] = encoded_inputs["input_ids"] != 0`
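A minimal sketch of the suggested derivation, assuming the pad token id is `0` (as it is for BERT); the helper name here is illustrative, not part of the library:

```python
def derive_attention_mask(input_ids, pad_token_id=0):
    # 1 for real tokens, 0 for padding, regardless of whether the padding
    # was added by the tokenizer or was already present in the input
    return [int(token_id != pad_token_id) for token_id in input_ids]

print(derive_attention_mask([101, 2023, 2003, 102, 0, 0]))  # [1, 1, 1, 1, 0, 0]
```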
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2862/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2861/comments
https://api.github.com/repos/huggingface/transformers/issues/2861/events
https://github.com/huggingface/transformers/issues/2861
565,400,199
MDU6SXNzdWU1NjU0MDAxOTk=
2,861
DistilBERT distilbert-base-cased failed to load
{ "login": "anshoomehra", "id": 24396120, "node_id": "MDQ6VXNlcjI0Mzk2MTIw", "avatar_url": "https://avatars.githubusercontent.com/u/24396120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anshoomehra", "html_url": "https://github.com/anshoomehra", "followers_url": "https://api.github.com/users/anshoomehra/followers", "following_url": "https://api.github.com/users/anshoomehra/following{/other_user}", "gists_url": "https://api.github.com/users/anshoomehra/gists{/gist_id}", "starred_url": "https://api.github.com/users/anshoomehra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anshoomehra/subscriptions", "organizations_url": "https://api.github.com/users/anshoomehra/orgs", "repos_url": "https://api.github.com/users/anshoomehra/repos", "events_url": "https://api.github.com/users/anshoomehra/events{/privacy}", "received_events_url": "https://api.github.com/users/anshoomehra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ " Should be fixed with ee5a6856caec83e7f2f305418f3199b87ea6cc2d. I can execute your code without an error with the latest version from github.", "> Should be fixed with [ee5a685](https://github.com/huggingface/transformers/commit/ee5a6856caec83e7f2f305418f3199b87ea6cc2d). I can execute your code without an error with the latest version from github.\r\n\r\n@cronoik I appreciate the prompt response. **I didn't compile from git, rather installed via** \r\n\r\n`pip install transformer --upgrade`\r\n\r\nIt upgraded to transformers==2.4.1 -- post-upgrade though the error code changed to below:\r\n\r\n`ValueError: Can't find a vocabulary file at path /root/.cache/torch/transformers/37cc1eaaea18a456726fc28ecb438852f0ca1d9e7d259e6e3747ee33065936f6'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`\r\n", "The mentioned commit is not part of 2.4.1. You have to wait for the next release or pull transformers from git.", "Ok, I will pull from the git for the time being. Thank you!", "Please close the issure when your problem is solved.", "We can close this since we have a workaround, and the team is aware of the issue to be rolled out in the next release. Thanks!!", "v2.5.0 was released a few days ago, `distilbert-base-cased` is now accessible via the pip release! :)", "The vocab file is missing here: \r\nhttps://huggingface.co/distilbert-base-cased#list-files\r\nWhile the auto-downloaded model has one.", "I'm still having the same problem. Using transformers version 2.8.0, neither `distilbert-base-cased` or `distilbert-base-uncased` are available. I also ran the following command:\r\n\r\n```\r\nimport pytorch_pretrained_bert as ppb\r\nassert 'distilbert-base-uncased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP\r\n```\r\n\r\nWhich results in `AssertionError`. Any thoughts on what might be going on here?", "Are you really using the `transformers` [1] package? The code you have showed contains only the `pytorch_pretrained_bert` [2] package which doesn't contain distilbert. While `pytorch_pretrained_bert` [2] and `transformers` [1] are both packages from huggingface, they are not the same. `pytorch_pretrained_bert` last release is from april 2019. Please use the `transformers` package [1].\r\n\r\n[1] https://pypi.org/project/transformers/\r\n[2] https://pypi.org/project/pytorch-pretrained-bert/#description", "Thanks for the quick reply: I am using transformers, I picked up that code snippet from another issue, must have been for that package. \r\n\r\nI realize what I did wrong: I was using BertTokenizer/BertModel to load, and I should have been using DistilBertTokenizer/DistilBertModel. It's working now, thanks!" ]
1,581
1,589
1,581
NONE
null
**Issue** DistilBERT **distilbert-base-cased** failed to load. _Please note, 'distilbert-base-uncased' works perfectly fine._ **Error Message** OSError: Model name 'distilbert-base-cased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'distilbert-base-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. **Model I am using** : distilbert-base-cased **Language** : English **The problem arises when using the code below** ``` MODELS = [(DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased')] for model_class, tokenizer_class, pretrained_weights in MODELS: # Load pretrained model/tokenizer tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) ``` **Environment info** Python 3.6.9 ipykernel==5.1.3 ipython==7.11.1 ipython-genutils==0.2.0 ipywidgets==7.5.1 jupyter==1.0.0 jupyter-client==5.3.4 jupyter-console==6.0.0 jupyter-core==4.6.1 jupyter-http-over-ws==0.0.7 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 matplotlib==3.1.2 numpy==1.18.1 scipy==1.4.1 tensorboard==2.1.0 tensorflow-estimator==2.1.0 tensorflow-gpu==2.1.0 tokenizers==0.0.11 torch==1.4.0 tornado==6.0.3 tqdm==4.42.1 traitlets==4.3.3 transformers==2.4.1
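A sketch of the snippet that should work once the model files are shipped, assuming transformers >= 2.5.0 (per the comments above) and the DistilBERT-specific classes:

```python
from transformers import DistilBertModel, DistilBertTokenizer

MODELS = [(DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased')]
for model_class, tokenizer_class, pretrained_weights in MODELS:
    # Load pretrained tokenizer/model; requires a release that ships the
    # distilbert-base-cased files (v2.5.0 or later)
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)
```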
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2861/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2861/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2860/comments
https://api.github.com/repos/huggingface/transformers/issues/2860/events
https://github.com/huggingface/transformers/issues/2860
565,279,628
MDU6SXNzdWU1NjUyNzk2Mjg=
2,860
Post-padding affects the Bert embedding output
{ "login": "XinnuoXu", "id": 5082188, "node_id": "MDQ6VXNlcjUwODIxODg=", "avatar_url": "https://avatars.githubusercontent.com/u/5082188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XinnuoXu", "html_url": "https://github.com/XinnuoXu", "followers_url": "https://api.github.com/users/XinnuoXu/followers", "following_url": "https://api.github.com/users/XinnuoXu/following{/other_user}", "gists_url": "https://api.github.com/users/XinnuoXu/gists{/gist_id}", "starred_url": "https://api.github.com/users/XinnuoXu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XinnuoXu/subscriptions", "organizations_url": "https://api.github.com/users/XinnuoXu/orgs", "repos_url": "https://api.github.com/users/XinnuoXu/repos", "events_url": "https://api.github.com/users/XinnuoXu/events{/privacy}", "received_events_url": "https://api.github.com/users/XinnuoXu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, please look into the documentation of the [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask).", "Actually, it was a valid question. The output will be numerically different for sure since there are extra positions to attend and even if those are paddings -> there is a difference among the float values. However, the real question is whether the (cosine/dot) similarity among the resulting vectors have changed at all." ]
1,581
1,617
1,582
NONE
null
# 🐛 Bug ## Information Model: BertModel Language: English The problem arises when using: ``` # Load model from transformers import * import torch model_class = BertModel tokenizer_class = BertTokenizer pretrained_weights = 'bert-base-uncased' tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights).to('cuda') # First example batch_id = [[101, 1996, 3035, 2038, 2741, 1037, 1056, 28394, 2102, 2000, 1996, 3035, 2012, 17836, 4186, 2000, 8439, 2014, 3938, 2705, 5798, 102]] batch_id = torch.tensor(batch_id).to('cuda') with torch.no_grad(): last_hidden_states = model(batch_id)[0].cpu().numpy() print(last_hidden_states[0][:10]) # Second example batch_id = [[101, 1996, 3035, 2038, 2741, 1037, 1056, 28394, 2102, 2000, 1996, 3035, 2012, 17836, 4186, 2000, 8439, 2014, 3938, 2705, 5798, 102, 0, 0]] batch_id = torch.tensor(batch_id).to('cuda') with torch.no_grad(): last_hidden_states = model(batch_id)[0].cpu().numpy() print(last_hidden_states[0][:10]) ``` Output for the first example ``` array([[ 0.00197573, -0.06912418, 0.24121636, ..., -0.13239928, 0.13210389, 0.3860737 ], [ 0.18745837, -0.15252575, 0.16234997, ..., -0.34497464, 1.0031146 , 0.20545363], [ 0.40690556, -0.7345518 , 1.1162403 , ..., -1.148023 , -0.38943186, -0.6397534 ], ..., [ 1.3574413 , -0.87637144, 1.007168 , ..., -0.7466023 , -0.5337318 , -0.02415964], [ 0.0907229 , -1.0051603 , 0.7100666 , ..., -0.00599465, -0.37829682, 0.4773703 ], [-0.00619348, -0.34730428, 0.9920887 , ..., 0.28678447, 0.2980772 , 0.8005251 ]], dtype=float32) ``` Output for the second example ``` array([[-0.10877508, 0.0271297 , 0.17947783, ..., -0.2650592 , 0.15821457, 0.35017303], [-0.1396759 , -0.25098413, 0.3990493 , ..., -0.52468735, 0.8060062 , 0.42330667], [ 0.18865047, -1.0035415 , 1.3446846 , ..., -1.1652598 , -0.60856164, -0.419513 ], ..., [ 1.3687737 , -0.9032434 , 1.0184443 , ..., -0.7951573 , -0.56618035, -0.00522863], [ 0.02363256, -0.962884 , 0.68822455, ..., -0.03798304, -0.34567115, 0.5442954 ], [-0.00341167, -0.33559048, 1.0627198 , ..., 0.31898227, 0.2941662 , 0.7981017 ]], dtype=float32) ``` I also checked that the output of `tokenizer.convert_tokens_to_ids(tokenizer.pad_token)` is `0`. ## Expected behavior The embeddings for the padded sequence should be the same as the ones without padding. ## Environment info - `transformers` version: transformers 2.4.1 - Platform: Linux - Python version: Python 3.7.5 - PyTorch version (GPU?): PyTorch 1.4.0 (CUDA Version 10.1.243) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
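This behavior is expected when no explicit `attention_mask` is passed: the model attends to the pad positions. A minimal sketch of the fix, reusing `model` and `batch_id` from the snippet above and the pad id of `0` verified there; with the mask, outputs at the real token positions should agree with the unpadded run up to floating-point noise:

```python
attention_mask = (batch_id != 0).long()  # 1 for real tokens, 0 for [PAD]
with torch.no_grad():
    last_hidden_states = model(batch_id, attention_mask=attention_mask)[0].cpu().numpy()
print(last_hidden_states[0][:10])
```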
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2859/comments
https://api.github.com/repos/huggingface/transformers/issues/2859/events
https://github.com/huggingface/transformers/pull/2859
565,253,320
MDExOlB1bGxSZXF1ZXN0Mzc1MzI5ODM4
2,859
Added model card for bert-base-multilingual-uncased-sentiment
{ "login": "yvespeirsman", "id": 3431621, "node_id": "MDQ6VXNlcjM0MzE2MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/3431621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yvespeirsman", "html_url": "https://github.com/yvespeirsman", "followers_url": "https://api.github.com/users/yvespeirsman/followers", "following_url": "https://api.github.com/users/yvespeirsman/following{/other_user}", "gists_url": "https://api.github.com/users/yvespeirsman/gists{/gist_id}", "starred_url": "https://api.github.com/users/yvespeirsman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvespeirsman/subscriptions", "organizations_url": "https://api.github.com/users/yvespeirsman/orgs", "repos_url": "https://api.github.com/users/yvespeirsman/repos", "events_url": "https://api.github.com/users/yvespeirsman/events{/privacy}", "received_events_url": "https://api.github.com/users/yvespeirsman/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=h1) Report\n> Merging [#2859](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2859/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2859 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15287 15287 \n=======================================\n Hits 11475 11475 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=footer). Last update [925a13c...917aa8d](https://codecov.io/gh/huggingface/transformers/pull/2859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@yvespeirsman Thanks for sharing! I can't push to your fork so I'll merge this and tweak it (languages have to be in a list)" ]
1,581
1,581
1,581
CONTRIBUTOR
null
Added the model card for nlptown/bert-base-multilingual-uncased-sentiment
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2859", "html_url": "https://github.com/huggingface/transformers/pull/2859", "diff_url": "https://github.com/huggingface/transformers/pull/2859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2859.patch", "merged_at": 1581690676000 }
https://api.github.com/repos/huggingface/transformers/issues/2858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2858/comments
https://api.github.com/repos/huggingface/transformers/issues/2858/events
https://github.com/huggingface/transformers/issues/2858
565,250,556
MDU6SXNzdWU1NjUyNTA1NTY=
2,858
is right?
{ "login": "ARDUJS", "id": 20811685, "node_id": "MDQ6VXNlcjIwODExNjg1", "avatar_url": "https://avatars.githubusercontent.com/u/20811685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ARDUJS", "html_url": "https://github.com/ARDUJS", "followers_url": "https://api.github.com/users/ARDUJS/followers", "following_url": "https://api.github.com/users/ARDUJS/following{/other_user}", "gists_url": "https://api.github.com/users/ARDUJS/gists{/gist_id}", "starred_url": "https://api.github.com/users/ARDUJS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ARDUJS/subscriptions", "organizations_url": "https://api.github.com/users/ARDUJS/orgs", "repos_url": "https://api.github.com/users/ARDUJS/repos", "events_url": "https://api.github.com/users/ARDUJS/events{/privacy}", "received_events_url": "https://api.github.com/users/ARDUJS/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" } ]
closed
false
null
[]
[ "Hi @ARDUJS can you update your issue title to something more descriptive? Thanks!", "Should be correct -> 80% masked, that means 20% is left. Using this 20% in 50 % the random word is used, 50% original token is kept. So both random word and original has an overall prob. of 10%.\r\n\r\nOriginal BERT is using the same logic, see [here](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L391)." ]
1,581
1,582
1,582
NONE
null
https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py, at line 225: ![image](https://user-images.githubusercontent.com/20811685/74525332-d1480e00-4f5b-11ea-8918-ae22507de8e8.png) The comment says "10% of the time, we replace masked input tokens with random word", but the code writes 0.5. Is that correct?
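As the comment above explains, the `0.5` is a conditional probability: it applies only to the 20% of masked positions that did not receive `[MASK]`, which makes random replacement and keep-original each 10% overall. A condensed sketch of that logic, with a stand-in `labels` tensor:

```python
import torch

labels = torch.randint(0, 30522, (1, 16))  # stand-in batch of token ids
masked_indices = torch.bernoulli(torch.full(labels.shape, 0.15)).bool()

# 80% of the masked positions are replaced with [MASK]
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices

# half of the remaining 20% (10% of all masked positions) get a random word;
# the other half keep the original token
indices_random = (
    torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
    & masked_indices
    & ~indices_replaced
)
print(indices_replaced.sum().item(), indices_random.sum().item())
```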
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2857/comments
https://api.github.com/repos/huggingface/transformers/issues/2857/events
https://github.com/huggingface/transformers/pull/2857
565,211,701
MDExOlB1bGxSZXF1ZXN0Mzc1Mjk2NDc5
2,857
Fix typos
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=h1) Report\n> Merging [#2857](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2857/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2857 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15287 15287 \n=======================================\n Hits 11475 11475 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=footer). Last update [925a13c...acca7c4](https://codecov.io/gh/huggingface/transformers/pull/2857?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2857", "html_url": "https://github.com/huggingface/transformers/pull/2857", "diff_url": "https://github.com/huggingface/transformers/pull/2857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2857.patch", "merged_at": 1581689468000 }
https://api.github.com/repos/huggingface/transformers/issues/2856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2856/comments
https://api.github.com/repos/huggingface/transformers/issues/2856/events
https://github.com/huggingface/transformers/pull/2856
565,184,164
MDExOlB1bGxSZXF1ZXN0Mzc1Mjc0NzY5
2,856
Fix typo
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2856", "html_url": "https://github.com/huggingface/transformers/pull/2856", "diff_url": "https://github.com/huggingface/transformers/pull/2856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2856.patch", "merged_at": 1581689263000 }
https://api.github.com/repos/huggingface/transformers/issues/2855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2855/comments
https://api.github.com/repos/huggingface/transformers/issues/2855/events
https://github.com/huggingface/transformers/pull/2855
565,104,179
MDExOlB1bGxSZXF1ZXN0Mzc1MjExNDY5
2,855
Fix typo
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=h1) Report\n> Merging [#2855](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/925a13ced1e155ea7e55e14e177a7b5ae7ad174c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2855/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2855 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15287 15287 \n=======================================\n Hits 11475 11475 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=footer). Last update [925a13c...c86fc74](https://codecov.io/gh/huggingface/transformers/pull/2855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2855", "html_url": "https://github.com/huggingface/transformers/pull/2855", "diff_url": "https://github.com/huggingface/transformers/pull/2855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2855.patch", "merged_at": 1581689588000 }
https://api.github.com/repos/huggingface/transformers/issues/2854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2854/comments
https://api.github.com/repos/huggingface/transformers/issues/2854/events
https://github.com/huggingface/transformers/pull/2854
565,088,898
MDExOlB1bGxSZXF1ZXN0Mzc1MjAwMzQ4
2,854
Create model card for 'distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es'
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!", "Welcome, Julien!\nThis one won't be my last contribution! :)\nNot so easy :P\n\nEl vie., 14 feb. 2020 5:05, Julien Chaumond <[email protected]>\nescribió:\n\n> Thanks!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/2854?email_source=notifications&email_token=AA34BHPFYBDFG2UBR7FZQHDRCYJ6ZA5CNFSM4KVADOQ2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOELXP3CI#issuecomment-586087817>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AA34BHIPBUNF5MRYBVO6ZPTRCYJ6ZANCNFSM4KVADOQQ>\n> .\n>\n", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=h1) Report\n> Merging [#2854](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4d36472b96d144887cbe95b083f0d2091fd5ff03?src=pr&el=desc) will **decrease** coverage by `25.28%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2854/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2854 +/- ##\n===========================================\n- Coverage 75.06% 49.77% -25.29% \n===========================================\n Files 94 94 \n Lines 15287 15287 \n===========================================\n- Hits 11475 7609 -3866 \n- Misses 3812 7678 +3866\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `0% <0%> (-97.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `0% <0%> (-96.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `0% <0%> (-96.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `0% <0%> (-95.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `0% <0%> (-95.12%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `0% <0%> (-94.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `0% <0%> (-92.79%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2854/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=footer). Last update [4d36472...3643bb8](https://codecov.io/gh/huggingface/transformers/pull/2854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2854", "html_url": "https://github.com/huggingface/transformers/pull/2854", "diff_url": "https://github.com/huggingface/transformers/pull/2854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2854.patch", "merged_at": 1581653093000 }
https://api.github.com/repos/huggingface/transformers/issues/2853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2853/comments
https://api.github.com/repos/huggingface/transformers/issues/2853/events
https://github.com/huggingface/transformers/pull/2853
565,044,378
MDExOlB1bGxSZXF1ZXN0Mzc1MTY1NjEy
2,853
[pipeline] Alias NerPipeline as TokenClassificationPipeline
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=h1) Report\n> Merging [#2853](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1eec69a90007b8f4a7af10805dab4904ea5dea77?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2853/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2853 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15287 15288 +1 \n==========================================\n- Hits 11475 11311 -164 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.5% <100%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2853/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=footer). Last update [1eec69a...549ce87](https://codecov.io/gh/huggingface/transformers/pull/2853?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2853", "html_url": "https://github.com/huggingface/transformers/pull/2853", "diff_url": "https://github.com/huggingface/transformers/pull/2853.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2853.patch", "merged_at": 1581689891000 }
https://api.github.com/repos/huggingface/transformers/issues/2852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2852/comments
https://api.github.com/repos/huggingface/transformers/issues/2852/events
https://github.com/huggingface/transformers/pull/2852
565,037,055
MDExOlB1bGxSZXF1ZXN0Mzc1MTYwMTAz
2,852
Update with additional information
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=h1) Report\n> Merging [#2852](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1eec69a90007b8f4a7af10805dab4904ea5dea77?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2852/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2852 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15287 15287 \n==========================================\n- Hits 11475 11310 -165 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=footer). Last update [1eec69a...59baea0](https://codecov.io/gh/huggingface/transformers/pull/2852?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "See #2851.\r\n\r\nThanks!" ]
1,581
1,581
1,581
NONE
null
Added a "Pre-training details" section
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2852", "html_url": "https://github.com/huggingface/transformers/pull/2852", "diff_url": "https://github.com/huggingface/transformers/pull/2852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2852.patch", "merged_at": 1581648883000 }
https://api.github.com/repos/huggingface/transformers/issues/2851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2851/comments
https://api.github.com/repos/huggingface/transformers/issues/2851/events
https://github.com/huggingface/transformers/pull/2851
565,026,939
MDExOlB1bGxSZXF1ZXN0Mzc1MTUyMTA1
2,851
Create model card for the newly released 'nlpaueb/bert-base-greek-uncased-v1'
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks for sharing!\r\n\r\nHow did you pre-train this model (infrastructure, number of epochs, etc.)?\r\nDo you have eval results on downstream tasks?\r\n\r\nAlso you can add a \r\n```\r\n---\r\nlanguage: greek\r\n---\r\n```\r\ntag to the top of the file\r\n\r\nI'll merge this in the meantime, thanks for sharing!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=h1) Report\n> Merging [#2851](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8744402f1eb51c7ae6b86cae1015983096beb655?src=pr&el=desc) will **decrease** coverage by `1.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2851/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2851 +/- ##\n==========================================\n- Coverage 75.06% 73.98% -1.08% \n==========================================\n Files 94 94 \n Lines 15287 15287 \n==========================================\n- Hits 11475 11310 -165 \n- Misses 3812 3977 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2851/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=footer). Last update [8744402...6aa9688](https://codecov.io/gh/huggingface/transformers/pull/2851?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I just amended the initial model card with extra information on the pre-training process. No evaluation yet, I hope will have some experiments, pretty soon. Thanks!" ]
1,581
1,581
1,581
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2851", "html_url": "https://github.com/huggingface/transformers/pull/2851", "diff_url": "https://github.com/huggingface/transformers/pull/2851.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2851.patch", "merged_at": 1581640043000 }
https://api.github.com/repos/huggingface/transformers/issues/2850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2850/comments
https://api.github.com/repos/huggingface/transformers/issues/2850/events
https://github.com/huggingface/transformers/pull/2850
564,966,427
MDExOlB1bGxSZXF1ZXN0Mzc1MTAxNDY4
2,850
Adding usage examples for common tasks
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=h1) Report\n> Merging [#2850](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f98edd7e362a64c947b083cfc0c401c4d0ffe91?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2850/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2850 +/- ##\n=======================================\n Coverage 75.06% 75.06% \n=======================================\n Files 94 94 \n Lines 15287 15287 \n=======================================\n Hits 11475 11475 \n Misses 3812 3812\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=footer). Last update [7f98edd...51830ef](https://codecov.io/gh/huggingface/transformers/pull/2850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I've added a way to switch between `PyTorch` and `TensorFlow` implementations. I didn't want to have a wall of code that had the two frameworks, so now there's a toggle to show which framework you would like to see.\r\n\r\nIt works as follows: a javascript method parses the documentation page shown and looks for `.highlight` classes, which are the code blocks. In there, it looks for `## PYTORCH CODE`, which represents the beginning of a `PyTorch` snippet and `## TENSORFLOW CODE` which represents the beginning of a `TensorFlow` snippet.\r\n\r\nWould love an opinion on the Javascript code as well. Would love to convert this to TS down the road.\r\n\r\nHere's a gif of the result\r\n![peek3](https://user-images.githubusercontent.com/30755778/75115468-db899c80-562c-11ea-81aa-28cabf6af538.gif)\r\n\r\n", "@LysandreJik reviewing the JS now and had a question. Depending on your thoughts, would it make sense from a UX standpoint for all of the buttons to toggle together? So, if a user selects \"Tensorflow\" all of the code blocks would switch to \"Tensorflow\". ", "I guess this would be cool and makes sense from a UX standpoint. Do you think it's necessary or can it wait for the second version?", "It can 100% wait for a second version. " ]
1,581
1,582
1,582
MEMBER
null
Adding a documentation page detailing usage for common tasks (inference, not training)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2850", "html_url": "https://github.com/huggingface/transformers/pull/2850", "diff_url": "https://github.com/huggingface/transformers/pull/2850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2850.patch", "merged_at": 1582656505000 }
https://api.github.com/repos/huggingface/transformers/issues/2849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2849/comments
https://api.github.com/repos/huggingface/transformers/issues/2849/events
https://github.com/huggingface/transformers/issues/2849
564,931,431
MDU6SXNzdWU1NjQ5MzE0MzE=
2,849
PreTrainedEncoderDecoder does not work for LSTM
{ "login": "pruksmhc", "id": 10094008, "node_id": "MDQ6VXNlcjEwMDk0MDA4", "avatar_url": "https://avatars.githubusercontent.com/u/10094008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pruksmhc", "html_url": "https://github.com/pruksmhc", "followers_url": "https://api.github.com/users/pruksmhc/followers", "following_url": "https://api.github.com/users/pruksmhc/following{/other_user}", "gists_url": "https://api.github.com/users/pruksmhc/gists{/gist_id}", "starred_url": "https://api.github.com/users/pruksmhc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pruksmhc/subscriptions", "organizations_url": "https://api.github.com/users/pruksmhc/orgs", "repos_url": "https://api.github.com/users/pruksmhc/repos", "events_url": "https://api.github.com/users/pruksmhc/events{/privacy}", "received_events_url": "https://api.github.com/users/pruksmhc/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" }, { "id": 1862634478, "node_id": "MDU6TGFiZWwxODYyNjM0NDc4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix", "name": "Should Fix", "color": "FF0000", "default": false, "description": "This has been identified as a bug and should be fixed." } ]
closed
false
null
[]
[ "I put this as a bug because the code as-is does not hint that Model2LSTM does not work. \r\nhttps://github.com/huggingface/transformers/blob/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1/src/transformers/modeling_encoder_decoder.py\r\nIt would be great to say that LSTM is not currently supported there. ", "Indeed, my initial comment was a mistake. I'm looking into it now." ]
1,581
1,582
1,582
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): If we want to have a BERT-based encoder and an LSTM decoder, that is not currently possible with the current huggingface implementation, mostly because torch.nn.LSTM does not contain a config class variable. ## Stack Trace File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 349, in from_pretrained model = super().from_pretrained(*args, **kwargs) File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 153, in from_pretrained decoder.config.is_decoder = True File "/beegfs/yp913/anaconda3/envs/jiant_new/lib/python3.6/site-packages/torch/nn/modules/module.py", line 539, in __getattr__ type(self).__name__, name)) AttributeError: 'LSTM' object has no attribute 'config' ## To reproduce You can reproduce this by: import transformers from transformers.modeling_encoder_decoder import Model2LSTM model = Model2LSTM.from_pretrained("roberta-large", decoder_config={"hidden_size":512, "input_size":1024, "num_layers": 2}) (When you initialize Model2LSTM like this, it runs into a separate error; I believe a ** is missing from the Model2LSTM decoder LSTM initialization.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2849/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2848/comments
https://api.github.com/repos/huggingface/transformers/issues/2848/events
https://github.com/huggingface/transformers/issues/2848
564,885,131
MDU6SXNzdWU1NjQ4ODUxMzE=
2,848
Add `masked_lm_labels` argument to `TFAlbertForMaskedLM`
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" } ]
closed
false
null
[]
[ "Hi! This feature would be great to have.\r\n\r\nI'm curious how `TFBertMaskedLM` (and the like) are supposed to be used with the keras `fit()` functionality?\r\n\r\nIt seems like one is supposed to loop through the training data and calculate the cross-entropy loss for each batch (#2926). I see there was related discussion also here #1999.\r\n\r\nHappy for any input!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Closing because this feature was added in #4530 " ]
1,581
1,591
1,591
CONTRIBUTOR
null
# 🚀 Feature request The PyTorch `AlbertForMaskedLM` model has support for the `masked_lm_labels` parameter, while `TFAlbertForMaskedLM` does not. I'd like to bring feature parity. It looks like a similar feature is also missing for `TFBertForMaskedLM`, `TFRobertaForMaskedLM`, and `TFDistilBertForMaskedLM`. I'd be happy to add support for those models as well. ## Motivation I'm pretraining TF NLP models, and this would simplify the training script by encapsulating the loss function. ## Your contribution I'm happy to contribute the code. I'll follow CONTRIBUTING.md; are there any gotchas I should be aware of?
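For context, a minimal sketch of the manual loss computation that the requested `masked_lm_labels` argument would encapsulate, assuming TF 2.x and borrowing the PyTorch convention that `-100` marks positions to ignore; the label construction here is illustrative, not the library's implementation:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode(f"I {tokenizer.mask_token} you", return_tensors="tf")
target_id = tokenizer.encode("love", add_special_tokens=False)[0]
# Label only the masked position; -100 marks positions to ignore.
labels = tf.where(input_ids == tokenizer.mask_token_id, target_id, -100)

logits = model(input_ids)[0]                  # shape (1, seq_len, vocab_size)
mask = tf.cast(labels != -100, tf.float32)    # 1.0 at masked positions only
safe_labels = tf.maximum(labels, 0)           # keep -100 out of the label range
per_token = tf.keras.losses.sparse_categorical_crossentropy(
    safe_labels, logits, from_logits=True)
loss = tf.reduce_sum(per_token * mask) / tf.reduce_sum(mask)
```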
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2848/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2848/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2847/comments
https://api.github.com/repos/huggingface/transformers/issues/2847/events
https://github.com/huggingface/transformers/issues/2847
564,777,598
MDU6SXNzdWU1NjQ3Nzc1OTg=
2,847
BART/T5 seq2seq example
{ "login": "deepanwayx", "id": 13917097, "node_id": "MDQ6VXNlcjEzOTE3MDk3", "avatar_url": "https://avatars.githubusercontent.com/u/13917097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepanwayx", "html_url": "https://github.com/deepanwayx", "followers_url": "https://api.github.com/users/deepanwayx/followers", "following_url": "https://api.github.com/users/deepanwayx/following{/other_user}", "gists_url": "https://api.github.com/users/deepanwayx/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepanwayx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepanwayx/subscriptions", "organizations_url": "https://api.github.com/users/deepanwayx/orgs", "repos_url": "https://api.github.com/users/deepanwayx/repos", "events_url": "https://api.github.com/users/deepanwayx/events{/privacy}", "received_events_url": "https://api.github.com/users/deepanwayx/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
[ "We are hard at work on this! I'd estimate 6 weeks out.", "Looking forward to this for the T5 model :)", "@sshleifer any updates? ", "The example doesn't seem to show training/fine-tuning, only evaluation of already fine-tuned models.", "@sshleifer Hello, any updates for training/fine-tuning on text generation for T5 model ?", "`summarization/bart/finetune.py` supports T5." ]
1,581
1,587
1,583
NONE
null
# 🚀 Feature request Can we have a seq2seq example with training/fine-tuning and generation for BART/T5 models?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2847/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2847/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2846/comments
https://api.github.com/repos/huggingface/transformers/issues/2846/events
https://github.com/huggingface/transformers/issues/2846
564,768,880
MDU6SXNzdWU1NjQ3Njg4ODA=
2,846
Error reported when running the "run_language_modeling.py" file
{ "login": "XIN-von-SUN", "id": 49686954, "node_id": "MDQ6VXNlcjQ5Njg2OTU0", "avatar_url": "https://avatars.githubusercontent.com/u/49686954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XIN-von-SUN", "html_url": "https://github.com/XIN-von-SUN", "followers_url": "https://api.github.com/users/XIN-von-SUN/followers", "following_url": "https://api.github.com/users/XIN-von-SUN/following{/other_user}", "gists_url": "https://api.github.com/users/XIN-von-SUN/gists{/gist_id}", "starred_url": "https://api.github.com/users/XIN-von-SUN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XIN-von-SUN/subscriptions", "organizations_url": "https://api.github.com/users/XIN-von-SUN/orgs", "repos_url": "https://api.github.com/users/XIN-von-SUN/repos", "events_url": "https://api.github.com/users/XIN-von-SUN/events{/privacy}", "received_events_url": "https://api.github.com/users/XIN-von-SUN/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649070, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information", "name": "Need more information", "color": "d876e3", "default": false, "description": "Further information is requested" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1843377584, "node_id": "MDU6TGFiZWwxODQzMzc3NTg0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch", "name": "Version mismatch", "color": "ddea7c", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi, this is probably due to a version mismatch. Can you update your repository to be on the same version than the script's ? \r\n\r\nIf it's `run_language_modeling` (was `run_lm_finetuning` up until very recently), that would be version 2.4.1 (safe, but the script may have evolved a bit since the release 13 days ago) or `master` (safer, should work 100%).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
# 🐛 Bug ## Information Model I am using: BERT and RoBERTa. Language I am using the model on: English. The problem arises when using the official example scripts (no modifications of my own). I followed the tutorial on how to fine-tune the BERT model on my own corpus, and used the recommended WikiText-2 corpus to fine-tune it. However, an error always appears: "RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at ../aten/src/THNN/generic/ClassNLLCriterion.c:97". I am not sure whether something is wrong with the "run_language_modeling.py" file, because I did not make any changes to the original code and used the recommended wiki corpus. Could you help me check this error? The task I am working on: fine-tuning the BERT language model on our own customer corpus data.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2846/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2845/comments
https://api.github.com/repos/huggingface/transformers/issues/2845/events
https://github.com/huggingface/transformers/pull/2845
564,697,850
MDExOlB1bGxSZXF1ZXN0Mzc0ODgwNDEy
2,845
Skip flaky test_tf_question_answering
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=h1) Report\n> Merging [#2845](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef74b0f07a190f19c69abc0732ea955e8dd7330f?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2845/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2845 +/- ##\n==========================================\n- Coverage 75.04% 74.98% -0.06% \n==========================================\n Files 94 94 \n Lines 15274 15274 \n==========================================\n- Hits 11462 11453 -9 \n- Misses 3812 3821 +9\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <0%> (-5.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.17% <0%> (-0.77%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=footer). Last update [ef74b0f...4c62bdc](https://codecov.io/gh/huggingface/transformers/pull/2845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,651
1,582
CONTRIBUTOR
null
Reasoning: While we diagnose the problem, it is better to keep CircleCI from failing randomly.
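For illustration, a minimal sketch of one way a flaky test can be skipped while it is diagnosed; the class and test names here are hypothetical, and the PR's actual mechanism may differ (e.g. a pytest marker):

```python
import unittest

class TFQuestionAnsweringPipelineTest(unittest.TestCase):
    @unittest.skip("Flaky on CI; skipped while the root cause is investigated")
    def test_tf_question_answering(self):
        self.fail("never runs while the skip decorator is in place")
```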
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2845", "html_url": "https://github.com/huggingface/transformers/pull/2845", "diff_url": "https://github.com/huggingface/transformers/pull/2845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2845.patch", "merged_at": 1582060491000 }
https://api.github.com/repos/huggingface/transformers/issues/2844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2844/comments
https://api.github.com/repos/huggingface/transformers/issues/2844/events
https://github.com/huggingface/transformers/pull/2844
564,681,696
MDExOlB1bGxSZXF1ZXN0Mzc0ODY3MTY5
2,844
Attempt to increase timeout for circleci slow tests
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=h1) Report\n> Merging [#2844](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f54a5bd37f99e3933a396836cb0be0b5a497c077?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2844/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2844 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=footer). Last update [f54a5bd...68880a1](https://codecov.io/gh/huggingface/transformers/pull/2844?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Julien verbal approval :)", "@sshleifer:\r\n```\r\nConfiguration errors: 1 error occurred:\r\n\r\n* In step 4 definition: step type \"no_output_timeout\" is not a valid type\r\n```\r\n\r\nin https://app.circleci.com/jobs/github/huggingface/transformers/18406" ]
1,581
1,582
1,581
CONTRIBUTOR
null
@LysandreJik can you help me test this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2844/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2844", "html_url": "https://github.com/huggingface/transformers/pull/2844", "diff_url": "https://github.com/huggingface/transformers/pull/2844.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2844.patch", "merged_at": 1581603064000 }
https://api.github.com/repos/huggingface/transformers/issues/2843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2843/comments
https://api.github.com/repos/huggingface/transformers/issues/2843/events
https://github.com/huggingface/transformers/pull/2843
564,673,760
MDExOlB1bGxSZXF1ZXN0Mzc0ODYwNDc2
2,843
Model card: Literary German BERT
{ "login": "severinsimmler", "id": 16133277, "node_id": "MDQ6VXNlcjE2MTMzMjc3", "avatar_url": "https://avatars.githubusercontent.com/u/16133277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severinsimmler", "html_url": "https://github.com/severinsimmler", "followers_url": "https://api.github.com/users/severinsimmler/followers", "following_url": "https://api.github.com/users/severinsimmler/following{/other_user}", "gists_url": "https://api.github.com/users/severinsimmler/gists{/gist_id}", "starred_url": "https://api.github.com/users/severinsimmler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severinsimmler/subscriptions", "organizations_url": "https://api.github.com/users/severinsimmler/orgs", "repos_url": "https://api.github.com/users/severinsimmler/repos", "events_url": "https://api.github.com/users/severinsimmler/events{/privacy}", "received_events_url": "https://api.github.com/users/severinsimmler/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=h1) Report\n> Merging [#2843](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21da895013a95e60df645b7d6b95f4a38f604759?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2843/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2843 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=footer). Last update [21da895...6f2b608](https://codecov.io/gh/huggingface/transformers/pull/2843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for using our BERT model! Great to see that your fine-tuned model beats the CRF baseline :)", "Thanks for sharing your German BERT -- outperformed the multilingual one by the way.", "@severinsimmler Thank you! I tweaked the references to images (this should be documented at some point, but not sure where we can put it for now) + added tags\r\n\r\nAlso thank you @stefan-it ", "Hi @julien-c, why is this page offline? https://huggingface.co/severinsimmler/literary-german-bert\r\n\r\nThe model is neither listed here anymore: https://huggingface.co/models\r\n\r\nnor says my user page that there are any published models: https://huggingface.co/severinsimmler\r\n\r\nBut the CLI says it's still there:\r\n\r\n```\r\n$ transformers-cli s3 ls\r\nFilename LastModified ETag Size \r\n-------------------------------------------- ------------------------ ---------------------------------- --------- \r\nliterary-german-bert/config.json 2020-02-13T13:37:48.000Z \"7e68409fc147acec10dadb06b33d0ba6\" 1043 \r\nliterary-german-bert/eval_results.txt 2020-02-13T12:24:48.000Z \"cda28cf0e39c7783bf8c8995ef940492\" 147 \r\nliterary-german-bert/pytorch_model.bin 2020-02-13T12:25:18.000Z \"27c22d3d221287715ca781d3939f9bb2\" 439770223 \r\nliterary-german-bert/special_tokens_map.json 2020-02-13T12:24:50.000Z \"8b3fb1023167bb4ab9d70708eb05f6ec\" 112 \r\nliterary-german-bert/test_results.txt 2020-02-13T12:24:45.000Z \"c5276b24e5788305862f5b7bc847fa95\" 147 \r\nliterary-german-bert/tokenizer_config.json 2020-02-13T12:24:49.000Z \"b2db3b45d8945539dab67f41f04101d7\" 152 \r\nliterary-german-bert/training_args.bin 2020-02-13T12:24:44.000Z \"ce5c09e8214e66daa6a97005f20e7300\" 1309 \r\nliterary-german-bert/vocab.txt 2020-02-13T12:24:46.000Z \"5787056a1ea58629b0c71cfc37728ce4\" 239836 \r\n```\r\n\r\nAnd I am also able to download and use it. 🤔 ", "We had a small hiccup on the website (due to improperly sanitized user-input – a.k.a. developer error :)\r\n\r\nYour model should be back up.", "Thanks for the quick response and fix! Keep up the great work :) " ]
1,581
1,583
1,581
CONTRIBUTOR
null
This PR adds a model card for [severinsimmler/literary-german-bert](https://huggingface.co/severinsimmler/literary-german-bert), a domain-adapted and fine-tuned BERT for named entity recognition in German literary texts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2843/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2843/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2843", "html_url": "https://github.com/huggingface/transformers/pull/2843", "diff_url": "https://github.com/huggingface/transformers/pull/2843.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2843.patch", "merged_at": 1581626625000 }
https://api.github.com/repos/huggingface/transformers/issues/2842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2842/comments
https://api.github.com/repos/huggingface/transformers/issues/2842/events
https://github.com/huggingface/transformers/issues/2842
564,656,103
MDU6SXNzdWU1NjQ2NTYxMDM=
2,842
When will the XLMRobertaForQuestionAnswering package be added?
{ "login": "ynebula", "id": 22788865, "node_id": "MDQ6VXNlcjIyNzg4ODY1", "avatar_url": "https://avatars.githubusercontent.com/u/22788865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ynebula", "html_url": "https://github.com/ynebula", "followers_url": "https://api.github.com/users/ynebula/followers", "following_url": "https://api.github.com/users/ynebula/following{/other_user}", "gists_url": "https://api.github.com/users/ynebula/gists{/gist_id}", "starred_url": "https://api.github.com/users/ynebula/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ynebula/subscriptions", "organizations_url": "https://api.github.com/users/ynebula/orgs", "repos_url": "https://api.github.com/users/ynebula/repos", "events_url": "https://api.github.com/users/ynebula/events{/privacy}", "received_events_url": "https://api.github.com/users/ynebula/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is pretty easy to add the code yourself since RobertaForQuestionAnswering is already implemented and XLMRobertaForQuestionAnswering is just a wrapper around it. ", "Thank you for your answering.\r\n\r\nI got your mention\r\n\r\nI have question one more\r\n\r\nIs it possible to learn XLM-Roberta data to Roberta\r\n(XLM-Roberta data is https://github.com/pytorch/fairseq/tree/master/examples/xlmr)\r\n\r\nIf it is possible, can you show me how to set up.\r\n\r\nplease let me know.", "I resolve the problem.\r\n\r\nThank you your answering" ]
1,581
1,582
1,582
NONE
null
I am studying SQuAD in a multilingual setting. I found that the question answering class is used in run_squad.py, and I would like that class to be released for XLM-RoBERTa as well. Do you plan to release XLMRobertaForQuestionAnswering? Please let me know.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2842/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2841/comments
https://api.github.com/repos/huggingface/transformers/issues/2841/events
https://github.com/huggingface/transformers/issues/2841
564,652,612
MDU6SXNzdWU1NjQ2NTI2MTI=
2,841
cannot find model in model name list
{ "login": "zxr19980213", "id": 29746014, "node_id": "MDQ6VXNlcjI5NzQ2MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/29746014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zxr19980213", "html_url": "https://github.com/zxr19980213", "followers_url": "https://api.github.com/users/zxr19980213/followers", "following_url": "https://api.github.com/users/zxr19980213/following{/other_user}", "gists_url": "https://api.github.com/users/zxr19980213/gists{/gist_id}", "starred_url": "https://api.github.com/users/zxr19980213/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zxr19980213/subscriptions", "organizations_url": "https://api.github.com/users/zxr19980213/orgs", "repos_url": "https://api.github.com/users/zxr19980213/repos", "events_url": "https://api.github.com/users/zxr19980213/events{/privacy}", "received_events_url": "https://api.github.com/users/zxr19980213/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." }, { "id": 1834081910, "node_id": "MDU6TGFiZWwxODM0MDgxOTEw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Usage", "name": "Usage", "color": "e28436", "default": false, "description": "General questions about the library" }, { "id": 1843377584, "node_id": "MDU6TGFiZWwxODQzMzc3NTg0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch", "name": "Version mismatch", "color": "ddea7c", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi, could you please provide all the information required in the template so that we may help you? Namely which version of `transformers`, python and PyTorch are you using?\r\n\r\nYou seem to be using `pytorch-pretrained-BERT`, which is a very old version of this repository. Have you tried using the newer `transformers`, which has much more functionalities and is more robust than `pytorch-pretrained-BERT`?", "I am using `Python 3.6.9` , `torch 1.3.1` and `pytorch-pretrained-bert 0.6.2` .\r\n\r\nI am following the tutorial on https://pypi.org/project/pytorch-pretrained-bert/ and meet with this problem.", "I changed to a better Internet and this problem solved .\r\nSorry for your time and thank you for your attention !", "can u tell me how to change a better internet? i met thie question too", "@zxr19980213 " ]
1,581
1,631
1,581
NONE
null
Hi, thank you for developing a well-made PyTorch version of BERT! I am new to the NLP area and have a problem while running code like this: ```python tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ``` The error description is below: ``` INFO:pytorch_pretrained_bert.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to C:\Users\zxr\AppData\Local\Temp\tmpb3lgzjlo ERROR:pytorch_pretrained_bert.tokenization:Model name 'bert-base-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt' was a path or url but couldn't find any file associated to this path or url. ``` I searched around and thought it might be a poor Internet connection. I downloaded the model I want but do not know how to load it with code. Thank you very much for your attention!
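For reference, a minimal sketch of loading hand-downloaded files with `pytorch-pretrained-bert` by passing local paths instead of a model name; the paths below are hypothetical placeholders for wherever the files were saved:

```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

# Hypothetical local paths to the files downloaded by hand:
tokenizer = BertTokenizer.from_pretrained("./bert-base-uncased-vocab.txt")
model = BertModel.from_pretrained("./bert-base-uncased/")  # dir with bert_config.json + pytorch_model.bin
```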
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2841/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2840/comments
https://api.github.com/repos/huggingface/transformers/issues/2840/events
https://github.com/huggingface/transformers/pull/2840
564,591,633
MDExOlB1bGxSZXF1ZXN0Mzc0NzkzMTIx
2,840
[WIP] Add patience argument to run_language_modeling script
{ "login": "thesamuel", "id": 6275391, "node_id": "MDQ6VXNlcjYyNzUzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6275391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thesamuel", "html_url": "https://github.com/thesamuel", "followers_url": "https://api.github.com/users/thesamuel/followers", "following_url": "https://api.github.com/users/thesamuel/following{/other_user}", "gists_url": "https://api.github.com/users/thesamuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/thesamuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thesamuel/subscriptions", "organizations_url": "https://api.github.com/users/thesamuel/orgs", "repos_url": "https://api.github.com/users/thesamuel/repos", "events_url": "https://api.github.com/users/thesamuel/events{/privacy}", "received_events_url": "https://api.github.com/users/thesamuel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds great! I'll go ahead and fix the code quality check.", "Since `run_langauge_modeling.py` now uses the `Trainer` class, I'll likely create a new PR that adds patience to `Trainer`." ]
1,581
1,588
1,588
NONE
null
# Summary Often, we want to stop training if the loss does not improve for a number of epochs. This PR adds a "patience" argument, which is a limit on the number of times we can get a non-improving eval loss before stopping training early. It is implemented by other NLP frameworks, such as AllenNLP (see [trainer.py](https://github.com/allenai/allennlp/blob/master/allennlp/training/trainer.py#L95) and [metric_tracker.py](https://github.com/allenai/allennlp/blob/1a8a12cd1b065d74fec3d2e80105a684736ff709/allennlp/training/metric_tracker.py#L6)). # Motivation This feature allows faster fine-tuning by breaking the training loop early and spares users the toil of checking metrics on TensorBoard. # Caveats Often, models are evaluated once per epoch, but run_lm_finetuning.py has an option to evaluate after a set number of model update steps (dictated by `--logging_steps` if `--evaluate_during_training` is true). Because of this, I've elected to tie patience to the number of evaluations without improvement in loss. # To-do - Add tests - Fix long lines
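A minimal, self-contained sketch of the tracking logic described above, modeled loosely on AllenNLP's `MetricTracker`; the names and toy losses are illustrative, not the PR's actual code:

```python
class PatienceTracker:
    """Counts evaluations without improvement and signals when to stop."""

    def __init__(self, patience: int):
        self.patience = patience
        self.best_loss = float("inf")
        self.bad_evals = 0

    def should_stop(self, eval_loss: float) -> bool:
        if eval_loss < self.best_loss:
            self.best_loss, self.bad_evals = eval_loss, 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience

tracker = PatienceTracker(patience=2)
for eval_loss in [2.0, 1.5, 1.6, 1.7, 1.4]:  # toy eval losses
    if tracker.should_stop(eval_loss):
        print("stopping early at eval loss", eval_loss)
        break
```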
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2840", "html_url": "https://github.com/huggingface/transformers/pull/2840", "diff_url": "https://github.com/huggingface/transformers/pull/2840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2840.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2839/comments
https://api.github.com/repos/huggingface/transformers/issues/2839/events
https://github.com/huggingface/transformers/issues/2839
564,589,285
MDU6SXNzdWU1NjQ1ODkyODU=
2,839
Fine-tuning the model using classification tasks
{ "login": "ankush20m", "id": 45195876, "node_id": "MDQ6VXNlcjQ1MTk1ODc2", "avatar_url": "https://avatars.githubusercontent.com/u/45195876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankush20m", "html_url": "https://github.com/ankush20m", "followers_url": "https://api.github.com/users/ankush20m/followers", "following_url": "https://api.github.com/users/ankush20m/following{/other_user}", "gists_url": "https://api.github.com/users/ankush20m/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankush20m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankush20m/subscriptions", "organizations_url": "https://api.github.com/users/ankush20m/orgs", "repos_url": "https://api.github.com/users/ankush20m/repos", "events_url": "https://api.github.com/users/ankush20m/events{/privacy}", "received_events_url": "https://api.github.com/users/ankush20m/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834052574, "node_id": "MDU6TGFiZWwxODM0MDUyNTc0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification", "name": "Ex: Sequence Classification", "color": "46FFCF", "default": false, "description": "" }, { "id": 1834081910, "node_id": "MDU6TGFiZWwxODM0MDgxOTEw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Usage", "name": "Usage", "color": "e28436", "default": false, "description": "General questions about the library" } ]
closed
false
null
[]
[ "Hi, the `run_glue` example script was designed to showcase how to fine-tune any model to a classification task. It showcases many things you maybe don't need, such as data-parallel, checkpointing, half-precision, etc. You can adapt this script or study the training loop to create your own.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
Hello all, could anyone tell me how I can fine-tune the language model on classification tasks without using any GLUE data, since I have my own custom dataset? Is there a solution and/or method to do classification with a custom dataset?
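A minimal sketch of the kind of custom-dataset fine-tuning loop the reply above points at; the texts and labels are toy placeholders, and `run_glue.py` should be adapted for batching, checkpointing, and the rest:

```python
import torch
from transformers import AdamW, BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=2e-5)

texts, labels = ["great product", "terrible service"], [1, 0]  # toy dataset
model.train()
for epoch in range(3):
    for text, label in zip(texts, labels):
        input_ids = tokenizer.encode(text, return_tensors="pt")
        loss = model(input_ids, labels=torch.tensor([label]))[0]  # first output is the loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```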
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2838/comments
https://api.github.com/repos/huggingface/transformers/issues/2838/events
https://github.com/huggingface/transformers/issues/2838
564,380,853
MDU6SXNzdWU1NjQzODA4NTM=
2,838
A small model for CTRL
{ "login": "D-i-l-r-u-k-s-h-i", "id": 47185867, "node_id": "MDQ6VXNlcjQ3MTg1ODY3", "avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i", "html_url": "https://github.com/D-i-l-r-u-k-s-h-i", "followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers", "following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}", "gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}", "starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions", "organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs", "repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos", "events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}", "received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1838876023, "node_id": "MDU6TGFiZWwxODM4ODc2MDIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation", "name": "Distillation", "color": "d4c5f9", "default": false, "description": "Related to model distillation" } ]
closed
false
null
[]
[ "cc'ing @keskarnitish on this issue just in case!", "Thank you, @julien-c.", "hi, any update on this?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,588
1,588
NONE
null
# 🚀 Feature request A smaller version of the pre-trained CTRL model, related to the Stack Overflow question https://stackoverflow.com/questions/60142937/huggingface-transformers-for-text-generation-with-ctrl ## Motivation I've been trying to generate text using CTRL and I run into insufficient memory, since it is a large model, and I was wondering whether there will be a smaller version of CTRL, such as a distilled version, as some of the other transformer models have.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2838/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2837/comments
https://api.github.com/repos/huggingface/transformers/issues/2837/events
https://github.com/huggingface/transformers/issues/2837
564,370,795
MDU6SXNzdWU1NjQzNzA3OTU=
2,837
Pretrained TFAlbertForMaskedLM returns seemingly random token predictions
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "Hi, thank you for opening an issue, there was indeed an error with the way the `TFAlbertModel` was implemented! It was fixed with https://github.com/huggingface/transformers/commit/1abd53b1aa2f15953bbbbbfefda885d1d9c9d94b.\r\n\r\nEven with the fix, the sequence `I <mask> you` is hard for ALBERT, but using your sample with a longer sequence yields satisfying results:\r\n\r\n```py\r\nimport tensorflow as tf\r\nfrom transformers import BertTokenizer, TFBertForMaskedLM, AlbertTokenizer, TFAlbertForMaskedLM\r\n\r\ntf.random.set_seed(1)\r\ntokenizer = AlbertTokenizer.from_pretrained(\"albert-base-v2\")\r\nmodel = TFAlbertForMaskedLM.from_pretrained(\"albert-base-v2\")\r\ninput_ids = tokenizer.encode(f\"This is the best thing I've {nlp.tokenizer.mask_token} in my life.\", return_tensors=\"tf\")\r\noutputs = model(input_ids)\r\nprediction_scores = outputs[0]\r\npredicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1])\r\npredicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids)\r\nprint(predicted_tokens)\r\n# ['▁time', '▁this', '▁is', '▁the', '▁best', '▁thing', '▁i', \"'\", 've', '▁done', '▁in', '▁my', '▁life', '!!!', '▁your']\r\n```\r\n\r\nLet me know if the updated model works for you.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BERT, ALBERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: toy data. ## To reproduce ``` import tensorflow as tf from transformers import BertTokenizer, TFBertForMaskedLM, AlbertTokenizer, TFAlbertForMaskedLM tf.random.set_seed(1) tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = TFBertForMaskedLM.from_pretrained("bert-base-uncased") input_ids = tokenizer.encode(f"I {tokenizer.mask_token} you", return_tensors="tf") outputs = model(input_ids) prediction_scores = outputs[0] predicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1]) predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids) print(predicted_tokens) # ['.', 'i', 'love', 'you', '.'] tf.random.set_seed(1) tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2") model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2") input_ids = tokenizer.encode(f"I {tokenizer.mask_token} you", return_tensors="tf") outputs = model(input_ids) prediction_scores = outputs[0] predicted_ids = tf.reshape(tf.argmax(prediction_scores, -1), [-1]) predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids) print(predicted_tokens) # ['_pawn', '_addressing', '_fundraising', '_george', '_hybrid'] ``` ## Expected behavior I would expect both commands to return the same result, filling in the middle with "love" or some other word. BERT performs correctly, while ALBERT seems to return nonsense. Any idea why this is happening? ## Environment info - `transformers` version: 2.4.1 - Platform: Linux - Python version: 3.6.5 - PyTorch version (GPU?): not installed - Tensorflow version (GPU?): 2.0.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2837/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2836/comments
https://api.github.com/repos/huggingface/transformers/issues/2836/events
https://github.com/huggingface/transformers/issues/2836
564,354,398
MDU6SXNzdWU1NjQzNTQzOTg=
2,836
Getting value of [UNK] labels
{ "login": "Javier-Jimenez99", "id": 38747614, "node_id": "MDQ6VXNlcjM4NzQ3NjE0", "avatar_url": "https://avatars.githubusercontent.com/u/38747614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Javier-Jimenez99", "html_url": "https://github.com/Javier-Jimenez99", "followers_url": "https://api.github.com/users/Javier-Jimenez99/followers", "following_url": "https://api.github.com/users/Javier-Jimenez99/following{/other_user}", "gists_url": "https://api.github.com/users/Javier-Jimenez99/gists{/gist_id}", "starred_url": "https://api.github.com/users/Javier-Jimenez99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Javier-Jimenez99/subscriptions", "organizations_url": "https://api.github.com/users/Javier-Jimenez99/orgs", "repos_url": "https://api.github.com/users/Javier-Jimenez99/repos", "events_url": "https://api.github.com/users/Javier-Jimenez99/events{/privacy}", "received_events_url": "https://api.github.com/users/Javier-Jimenez99/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." }, { "id": 1834060867, "node_id": "MDU6TGFiZWwxODM0MDYwODY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition", "name": "Ex: Named Entity Recognition", "color": "06FFD8", "default": false, "description": "" } ]
closed
false
null
[]
[ "Did you try using the `add_tokens` method on the tokenizer alongside the `resize_token_embeddings` on the model, to add your tokens to the vocabulary? The won't be marked as `[UNK]` this way, but will instead receive brand new embeddings (which need to be trained).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@LysandreJik Hello, I have the same problem。\r\nI want to implement [pointer-generator](https://arxiv.org/abs/1704.04368) by BertTokenizer. Pointer-generator can generate OOV tokens in the inputs by dynamically extending vocab. And there is no way to add all the tokens directly, especially in the test set.\r\nDo you have any good solutions?", "Have you tried using the `add_tokens` method on the tokenizer and the `resize_token_embeddings` method on your model?", "I tried the dumbest solution:\r\n```\r\nbert_tokens = tokenizer.tokenize(query)\r\ntokens = []\r\npre_text = \"\"\r\nfor i in range(len(bert_tokens)):\r\n bert_token = bert_tokens[i].replace(\"##\", \"\")\r\n if i+1 < len(bert_tokens):\r\n post_token = bert_tokens[i+1].replace(\"##\", \"\")\r\n else:\r\n post_token = \"\"\r\n if bert_token == '[UNK]':\r\n token = str(\r\n re.match(f\"{pre_text}(.*){post_token}(.*)\",\r\n query).group(1))\r\n tokens.append(token)\r\n pre_text += token\r\n else:\r\n tokens.append(bert_token)\r\n pre_text += bert_token\r\nreturn tokens\r\n```" ]
1,581
1,618
1,587
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> I have created a NER model based ob Bert with this library, but I have a problem when I run my model due to `[UNK]`. Sometimos there are entities that aren't on my vocab so they are marked as unkowns so I cant know what they are. I know I can't revert `[UNK]` label, so would like to be able to define the words that would be unknown before the processing of the sentence. ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: [https://stackoverflow.com/questions/60192523/get-the-value-of-unk-in-bert](url)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2836/timeline
completed
null
null
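A minimal sketch of the fix proposed in the comments above (`add_tokens` plus `resize_token_embeddings`). The token strings here are hypothetical placeholders, and the freshly added embeddings are randomly initialized, so they still need fine-tuning before they carry meaning:

```python
# Sketch: register out-of-vocabulary tokens so the tokenizer stops mapping
# them to [UNK]. "my_entity" / "another_oov_word" are hypothetical examples.
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased")

num_added = tokenizer.add_tokens(["my_entity", "another_oov_word"])
print(f"Added {num_added} tokens")

# Grow the embedding matrix so the new ids have vectors; these new rows are
# randomly initialized and must be trained.
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("my_entity appears here"))  # no longer [UNK]
```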
https://api.github.com/repos/huggingface/transformers/issues/2835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2835/comments
https://api.github.com/repos/huggingface/transformers/issues/2835/events
https://github.com/huggingface/transformers/issues/2835
564,323,931
MDU6SXNzdWU1NjQzMjM5MzE=
2,835
Failing slow RobertaModelIntegrationTest
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
CONTRIBUTOR
null
``` RUN_SLOW=1 pytest tests/test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm ``` Have not investigated at all, but wanted to record. Traceback: ``` self = <tests.test_modeling_roberta.RobertaModelIntegrationTest testMethod=test_inference_masked_lm> @slow def test_inference_masked_lm(self): model = RobertaForMaskedLM.from_pretrained("roberta-base") input_ids = torch.tensor([[0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2]]) output = model(input_ids)[0] expected_shape = torch.Size((1, 11, 50265)) self.assertEqual(output.shape, expected_shape) # compare the actual values for a slice. expected_slice = torch.Tensor( [[[33.8843, -4.3107, 22.7779], [4.6533, -2.8099, 13.6252], [1.8222, -3.6898, 8.8600]]] ) > self.assertTrue(torch.allclose(output[:, :3, :3], expected_slice, atol=1e-3)) E AssertionError: False is not true ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2835/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/2835/timeline
completed
null
null
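For context on the assertion that fails above, here is the same logits-slice comparison pulled out of the test harness — a sketch assuming the `expected_slice` values quoted in the issue body:

```python
# Sketch of the check performed by the failing slow test, outside pytest.
import torch
from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("roberta-base")
input_ids = torch.tensor([[0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2]])

with torch.no_grad():
    output = model(input_ids)[0]  # (batch, seq_len, vocab_size)

expected_slice = torch.tensor(
    [[[33.8843, -4.3107, 22.7779], [4.6533, -2.8099, 13.6252], [1.8222, -3.6898, 8.8600]]]
)
# atol=1e-3 tolerates small numerical drift; printing False here reproduces
# the AssertionError reported above.
print(torch.allclose(output[:, :3, :3], expected_slice, atol=1e-3))
```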
https://api.github.com/repos/huggingface/transformers/issues/2834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2834/comments
https://api.github.com/repos/huggingface/transformers/issues/2834/events
https://github.com/huggingface/transformers/issues/2834
564,323,533
MDU6SXNzdWU1NjQzMjM1MzM=
2,834
Failing slow AutoModelTest/BertForPreTraining
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." }, { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
CONTRIBUTOR
null
``` RUN_SLOW=1 pytest tests/test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained ``` Have not investigated at all, but wanted to record since the slow test failures are elusive :) Clues: model: `transformers.modeling_bert.BertForPreTraining` ``` loading_info = {'missing_keys': ['cls.predictions.decoder.bias'], 'unexpected_keys': [], 'error_msgs': []} ``` Likely related to this funkiness https://github.com/huggingface/transformers/blob/ee5de0ba449d638da704e1c03ffcc20a930f5589/src/transformers/modeling_bert.py#L482-L483
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2834/timeline
completed
null
null
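The `loading_info` dict quoted in the issue above can be surfaced directly; a short sketch using the `output_loading_info` flag of `from_pretrained`, with the example missing key taken from the issue body:

```python
# Sketch: reproduce the loading_info dict mentioned above. With
# output_loading_info=True, from_pretrained also returns a dict of
# missing/unexpected state-dict keys instead of only logging them.
from transformers import BertForPreTraining

model, loading_info = BertForPreTraining.from_pretrained(
    "bert-base-uncased", output_loading_info=True
)
print(loading_info["missing_keys"])     # e.g. ['cls.predictions.decoder.bias']
print(loading_info["unexpected_keys"])
print(loading_info["error_msgs"])
```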
https://api.github.com/repos/huggingface/transformers/issues/2833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2833/comments
https://api.github.com/repos/huggingface/transformers/issues/2833/events
https://github.com/huggingface/transformers/pull/2833
564,314,638
MDExOlB1bGxSZXF1ZXN0Mzc0NTY5NjM0
2,833
add model_card flaubert-base-uncased-squad
{ "login": "fmikaelian", "id": 39884124, "node_id": "MDQ6VXNlcjM5ODg0MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/39884124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fmikaelian", "html_url": "https://github.com/fmikaelian", "followers_url": "https://api.github.com/users/fmikaelian/followers", "following_url": "https://api.github.com/users/fmikaelian/following{/other_user}", "gists_url": "https://api.github.com/users/fmikaelian/gists{/gist_id}", "starred_url": "https://api.github.com/users/fmikaelian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fmikaelian/subscriptions", "organizations_url": "https://api.github.com/users/fmikaelian/orgs", "repos_url": "https://api.github.com/users/fmikaelian/repos", "events_url": "https://api.github.com/users/fmikaelian/events{/privacy}", "received_events_url": "https://api.github.com/users/fmikaelian/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=h1) Report\n> Merging [#2833](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f54a5bd37f99e3933a396836cb0be0b5a497c077?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2833/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2833 +/- ##\n=======================================\n Coverage 75.02% 75.02% \n=======================================\n Files 93 93 \n Lines 15275 15275 \n=======================================\n Hits 11460 11460 \n Misses 3815 3815\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=footer). Last update [f54a5bd...286b4fa](https://codecov.io/gh/huggingface/transformers/pull/2833?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
A baseline model for question-answering in french ([flaubert](https://github.com/getalp/Flaubert) model fine-tuned on [french-translated SQuAD 1.1 dataset](https://github.com/Alikabbadj/French-SQuAD)) Small error when trying it with the pipeline though: ```python-traceback >>> nlp = pipeline('question-answering', model='fmikaelian/flaubert-base-uncased-squad', tokenizer='fmikaelian/flaubert-base-uncased-squad') nlp({ 'question': "Qui est Claude Monet?", 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme." }) Model name 'fmikaelian/flaubert-base-uncased-squad' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased, openai-gpt, transfo-xl-wt103, gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2, ctrl, xlnet-base-cased, xlnet-large-cased, xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280, roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector, distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased, distilbert-base-uncased-finetuned-sst-2-english, albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2, camembert-base, umberto-commoncrawl-cased-v1, umberto-wikipedia-uncased-v1, t5-small, t5-base, t5-large, t5-3b, t5-11b, xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german, flaubert-small-cased, flaubert-base-uncased, flaubert-base-cased, flaubert-large-cased). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/fmikaelian/flaubert-base-uncased-squad/modelcard.json' was a path or url to a model card file named modelcard.json or a directory containing such a file but couldn't find any such file at this path or url. Creating an empty model card. >>> >>> nlp({ ... 'question': "Qui est Claude Monet?", ... 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme." ... 
}) convert squad examples to features: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3.25it/s] add example index and unique id: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4181.76it/s] Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/usr/local/lib/python3.7/site-packages/transformers/pipelines.py", line 815, in __call__ start, end = self.model(**fw_args) ValueError: too many values to unpack (expected 2) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2833", "html_url": "https://github.com/huggingface/transformers/pull/2833", "diff_url": "https://github.com/huggingface/transformers/pull/2833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2833.patch", "merged_at": 1581632354000 }
https://api.github.com/repos/huggingface/transformers/issues/2832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2832/comments
https://api.github.com/repos/huggingface/transformers/issues/2832/events
https://github.com/huggingface/transformers/issues/2832
564,287,261
MDU6SXNzdWU1NjQyODcyNjE=
2,832
'distilbert-base-cased-distilled-squad' was not found error
{ "login": "elronbandel", "id": 23455264, "node_id": "MDQ6VXNlcjIzNDU1MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/23455264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elronbandel", "html_url": "https://github.com/elronbandel", "followers_url": "https://api.github.com/users/elronbandel/followers", "following_url": "https://api.github.com/users/elronbandel/following{/other_user}", "gists_url": "https://api.github.com/users/elronbandel/gists{/gist_id}", "starred_url": "https://api.github.com/users/elronbandel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elronbandel/subscriptions", "organizations_url": "https://api.github.com/users/elronbandel/orgs", "repos_url": "https://api.github.com/users/elronbandel/repos", "events_url": "https://api.github.com/users/elronbandel/events{/privacy}", "received_events_url": "https://api.github.com/users/elronbandel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! This checkpoint was added six days ago but our latest release was 13 days ago, so you would need to install the repository from source to use that model:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nIt'll be available in a pip install once we do a new release." ]
1,581
1,581
1,581
NONE
null
# 🐛 Bug ## Information Model I am using: distilbert-base-cased-distilled-squad The problem arises when using: AutoTokenizer or AutoModelForQuestionAnswering Steps to reproduce the behavior: 0. make sure you have everything on colab installed and imported ``` !pip install transformers import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer ``` 1. run the code on google colab: ``` tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad") model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad") ``` and the error: ``` OSError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs) 224 if resolved_config_file is None: --> 225 raise EnvironmentError 226 config_dict = cls._dict_from_json_file(resolved_config_file) OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) 3 frames /usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs) 239 ) 240 ) --> 241 raise EnvironmentError(msg) 242 243 except json.JSONDecodeError: OSError: Model name 'distilbert-base-cased-distilled-squad' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url. ``` ## Environment info - `transformers` version: 2.4.1 - Platform: Google Colab - PyTorch version :1.4.0 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2832/timeline
completed
null
null
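As the maintainer's reply above notes, checkpoints published after the installed release are invisible to `from_pretrained` until the library is installed from source (`pip install git+https://github.com/huggingface/transformers`). A quick sketch for checking the installed version before loading such a checkpoint:

```python
# Sketch: confirm the installed library version before loading a recently
# added checkpoint. The issue above was hit on transformers 2.4.1, which
# predates the checkpoint's addition.
import transformers
from transformers import AutoTokenizer

print(transformers.__version__)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
```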
https://api.github.com/repos/huggingface/transformers/issues/2831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2831/comments
https://api.github.com/repos/huggingface/transformers/issues/2831/events
https://github.com/huggingface/transformers/issues/2831
564,230,995
MDU6SXNzdWU1NjQyMzA5OTU=
2,831
Installation Error - Failed building wheel for tokenizers
{ "login": "victorlongo", "id": 17074908, "node_id": "MDQ6VXNlcjE3MDc0OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17074908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/victorlongo", "html_url": "https://github.com/victorlongo", "followers_url": "https://api.github.com/users/victorlongo/followers", "following_url": "https://api.github.com/users/victorlongo/following{/other_user}", "gists_url": "https://api.github.com/users/victorlongo/gists{/gist_id}", "starred_url": "https://api.github.com/users/victorlongo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/victorlongo/subscriptions", "organizations_url": "https://api.github.com/users/victorlongo/orgs", "repos_url": "https://api.github.com/users/victorlongo/repos", "events_url": "https://api.github.com/users/victorlongo/events{/privacy}", "received_events_url": "https://api.github.com/users/victorlongo/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." }, { "id": 1843765959, "node_id": "MDU6TGFiZWwxODQzNzY1OTU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Installation", "name": "Installation", "color": "bfdadc", "default": false, "description": "" } ]
closed
false
null
[]
[ "Having the exact same issue on a Linux machine!", "Environment: macOS Mojave Ver 10.14.6\r\nTried installing both from pip and source. Same issue:\r\n> Successfully built transformers\r\n> Failed to build tokenizers \r\n\r\nResult was that Transformers was not installed (not listed in pip freeze)\r\n\r\nThis however should work - seems like you just won't get the the new tokenizers:\r\npip install transformers==2.4.1", "@GDBSD I had the same issue on the same OS version and also tried pip and source. Your version specification worked. ", "Had the same issue on MacOS Mojave when doing pip3 install. Tried pip2 install, it worked but I got another error when running my script telling me I should really be using python 3.\r\n\r\nI tried @GDBSD 's answer, but I got this error: \r\n\r\n```\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/py_compile.py\", line 143, in compile\r\n _optimize=optimize)\r\n File \"<frozen importlib._bootstrap_external>\", line 791, in source_to_code\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/private/var/folders/g0/5zwy4mtx7579v5x6rxqb083r0000gn/T/pip-unpacked-wheel-k410h9s0/sacremoses/sent_tokenize.py\", line 69\r\n if re.search(IS_EOS, token)\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py\", line 159, in compile_file\r\n invalidation_mode=invalidation_mode)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/py_compile.py\", line 147, in compile\r\n raise py_exc\r\npy_compile.PyCompileError: File \"/private/var/folders/g0/5zwy4mtx7579v5x6rxqb083r0000gn/T/pip-unpacked-wheel-k410h9s0/sacremoses/sent_tokenize.py\", line 69\r\n if re.search(IS_EOS, token)\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/cli/base_command.py\", line 186, in _main\r\n status = self.run(options, args)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/commands/install.py\", line 404, in run\r\n use_user_site=options.use_user_site,\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/req/__init__.py\", line 71, in install_given_reqs\r\n **kwargs\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/req/req_install.py\", line 815, in install\r\n warn_script_location=warn_script_location,\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/operations/install/wheel.py\", line 614, in install_wheel\r\n warn_script_location=warn_script_location,\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/operations/install/wheel.py\", line 338, in install_unpacked_wheel\r\n compileall.compile_dir(source, force=True, quiet=True)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py\", line 97, in compile_dir\r\n legacy, optimize, invalidation_mode):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/compileall.py\", line 169, in 
compile_file\r\n msg = err.msg.encode(sys.stdout.encoding,\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pip/_internal/utils/misc.py\", line 554, in encoding\r\n return self.orig_stream.encoding\r\n File \"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py\", line 409, in __getattr__\r\n return getattr(self.stream, name)\r\nAttributeError: '_io.BufferedWriter' object has no attribute 'encoding'\r\n```", "yes I had the same issue with `pip3.6 install`", "Can you all run `python transformers-cli env` and post the output here? It provides some useful information about your platform that might be helpful to debug.", "Hi, I had the same problem and resolved it by installing rust.\r\n\"error: Can not find Rust compiler\"\r\n\r\nFor MacOS, I used \"curl https://sh.rustup.rs -sSf | sh\". I also found that it needed a nightly version of rust, so you have to specify that in the install options. ", "Hi, I also had the same problem with my initial installation of the library. After some time, I realized that my anaconda version was on 32Bit. You can check your version with \r\n`python -c \"import struct;print( 8 * struct.calcsize('P'))\"`\r\nThe output should be 64.\r\nIf it is 32 then you have to reinstall your IDE\r\n", "@Wild3d I can confirm after running your snippet that I am on a 64bit version ", "@gardnerds after creating a new environment to try your solution that also worked for me. I didn't have rust installed before. It successfully built the wheel for tokenizers (PEP 517). ", "@gardnerds also worked for me. Using python 3.7 and built from source using a clean conda env", "Install Python 64-bit instead of 32-bit solved my same issue.", "I was having the same issue on virtualenv over Mac OS Mojave. Managed to solve it and install Transformers 2.5.1 by manually install the last version of tokenizers (0.6.0) instead of 0.5.2 that is required in the transformer package.\r\n\r\npip install tokenizers\r\n\r\nGit clone latest version of transformers:\r\n\r\ngit clone https://github.com/huggingface/transformers\r\n\r\nBefore running the installation edit transformers/setup.py and change requirement of tokenizers to 0.6.0\r\n\r\n Line 93: install_requires=[\r\n \"numpy\",\r\n \"tokenizers == 0.6.0\",\r\n\r\nThen run as usual: \r\n\r\ncd transformers\r\npip install .\r\n\r\nI assume that you could also skip the first step and just collect the package as you run the install. \r\nI'm quite new to this, so just wanted to share my take.", "@dafraile That solves mine! Thank you very much!", "@dafraile That helps, thanks a lot!", "I managed to solve the issue by installing Rust compiler\r\n\r\n- Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`\r\n- Restart the terminal\r\n- `pip install transformers==2.5.1`", "> Environment: macOS Mojave Ver 10.14.6\r\n> Tried installing both from pip and source. 
Same issue:\r\n> \r\n> > Successfully built transformers\r\n> > Failed to build tokenizers\r\n> \r\n> Result was that Transformers was not installed (not listed in pip freeze)\r\n> \r\n> This however should work - seems like you just won't get the the new tokenizers:\r\n> pip install transformers==2.4.1\r\n\r\nThis solution is working for me", "> I managed to solve the issue by installing Rust compiler\r\n> \r\n> * Install Rust [link](https://www.rust-lang.org/tools/install) `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`\r\n> * Restart the terminal\r\n> * `pip install transformers==2.5.1`\r\n\r\nIt works for me, thanks!\r\nYou can do `source $HOME/.cargo/env` instead of restarting the terminal.", "@gardnerds, adding `$HOME/.cargo/bin` to PATH after installing rust fixed my installation. Thank you. ", "@dafraile Thanks a lot. It solves my problem", "@dafraile Thanks! It works!", "@AvivNavon Thanks ! Solved my problem too. (MacOS Mojave)\r\nI install latest version of transformers though (2.8.0)\r\n`pip install transformers` instead of `pip install transformers==2.5.1`", "resolved this issue by installing Rust ", "I resolved this issue by installing Rust - I initially did forget to restart the terminal first.\r\nI'm using Mojave 10.14.5.\r\nThis thread is great! Btw I had no such issues on my Ubuntu 18.04 machine.", "@phihung recommendation works. ", "Just installing rust compiler works for me too (Thanks @phihung ) I'm on Mac Mojave 10.14.6. \r\nMay be conda installation should be able to over come this? (don't know if pip can force install a 3rd party compiler)?", "@dafraile Actually your solution is the closest one ! But now I saw that they just corrected that line in setup.py so it became tokenizers==0.7.0 now (and the newest tokenizers are 0.7.0).\r\nSo the real importance is that we should \r\n1. always update the transformers from the source \r\n2. (really important !) uninstall the old version before we reinstall the newest :p \r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I am facing a similar issue trying to build on a PowerPC with RedHat\r\nI am getting errors when trying to build tokenizers:\r\n```\r\nBuilding wheels for collected packages: tokenizers\r\n Building wheel for tokenizers (PEP 517) ... 
error\r\n ERROR: Command errored out with exit status 1:\r\n command: /home/aarbelle/.conda/envs/gbs/bin/python3.6 /home/aarbelle/.conda/envs/gbs/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpd6q9xccz\r\n cwd: /tmp/pip-install-ohxny31i/tokenizers\r\n Complete output (136 lines):\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n creating build\r\n creating build/lib\r\n creating build/lib/tokenizers\r\n copying tokenizers/__init__.py -> build/lib/tokenizers\r\n creating build/lib/tokenizers/models\r\n copying tokenizers/models/__init__.py -> build/lib/tokenizers/models\r\n creating build/lib/tokenizers/decoders\r\n copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders\r\n creating build/lib/tokenizers/normalizers\r\n copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers\r\n creating build/lib/tokenizers/pre_tokenizers\r\n copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers\r\n creating build/lib/tokenizers/processors\r\n copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors\r\n creating build/lib/tokenizers/trainers\r\n copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers\r\n creating build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/__init__.pyi -> build/lib/tokenizers\r\n copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models\r\n copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders\r\n copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers\r\n copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers\r\n copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors\r\n copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers\r\n running build_ext\r\n running build_rust\r\n Updating crates.io index\r\n Updating git repository `https://github.com/n1t0/rayon-cond`\r\n warning: unused manifest key: target.x86_64-apple-darwin.rustflags\r\n Compiling proc-macro2 v1.0.21\r\n Compiling unicode-xid v0.2.1\r\n Compiling autocfg v1.0.1\r\n Compiling syn v1.0.41\r\n Compiling libc v0.2.77\r\n Compiling lazy_static v1.4.0\r\n Compiling cfg-if v0.1.10\r\n Compiling memchr v2.3.3\r\n Compiling serde_derive v1.0.116\r\n Compiling scopeguard v1.1.0\r\n Compiling serde v1.0.116\r\n Compiling maybe-uninit v2.0.0\r\n Compiling regex-syntax v0.6.18\r\n Compiling ryu v1.0.5\r\n Compiling rayon-core v1.8.1\r\n Compiling getrandom v0.1.15\r\n Compiling serde_json v1.0.57\r\n Compiling smallvec v1.4.2\r\n Compiling itoa v0.4.6\r\n Compiling inventory v0.1.9\r\n Compiling pkg-config v0.3.18\r\n Compiling proc-macro-hack v0.5.18\r\n Compiling bitflags v1.2.1\r\n Compiling cc v1.0.60\r\n Compiling unicode-width v0.1.8\r\n Compiling either v1.6.1\r\n Running `rustc --crate-name build_script_build --edition=2018 
/home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.21/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"proc-macro\"' -C metadata=93385cb1e678e330 -C extra-filename=-93385cb1e678e330 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/proc-macro2-93385cb1e678e330 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unicode_xid /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' -C metadata=cac161967aa527e1 -C extra-filename=-cac161967aa527e1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name autocfg /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=ddb9624730d1e52a -C extra-filename=-ddb9624730d1e52a --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.41/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"clone-impls\"' --cfg 'feature=\"default\"' --cfg 'feature=\"derive\"' --cfg 'feature=\"extra-traits\"' --cfg 'feature=\"full\"' --cfg 'feature=\"parsing\"' --cfg 'feature=\"printing\"' --cfg 'feature=\"proc-macro\"' --cfg 'feature=\"quote\"' --cfg 'feature=\"visit\"' -C metadata=9988fc7a157e69c9 -C extra-filename=-9988fc7a157e69c9 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/syn-9988fc7a157e69c9 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.77/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=5a4798f2b06c36bd -C extra-filename=-5a4798f2b06c36bd --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/libc-5a4798f2b06c36bd -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name cfg_if --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a7dbefe7725970f6 -C extra-filename=-a7dbefe7725970f6 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name lazy_static /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs 
--error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=09f05f31cfc64306 -C extra-filename=-09f05f31cfc64306 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' --cfg 'feature=\"use_std\"' -C metadata=a8f56f28f9bbd928 -C extra-filename=-a8f56f28f9bbd928 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/memchr-a8f56f28f9bbd928 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.116/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' -C metadata=d850080603f4774e -C extra-filename=-d850080603f4774e --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_derive-d850080603f4774e -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name scopeguard /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=91afa33e60eb09b1 -C extra-filename=-91afa33e60eb09b1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.116/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"derive\"' --cfg 'feature=\"serde_derive\"' --cfg 'feature=\"std\"' -C metadata=1a02cab7c16e427d -C extra-filename=-1a02cab7c16e427d --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde-1a02cab7c16e427d -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=9f94ee50e1295f1f -C extra-filename=-9f94ee50e1295f1f --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/maybe-uninit-9f94ee50e1295f1f -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name regex_syntax /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"unicode\"' --cfg 'feature=\"unicode-age\"' --cfg 'feature=\"unicode-bool\"' --cfg 
'feature=\"unicode-case\"' --cfg 'feature=\"unicode-gencat\"' --cfg 'feature=\"unicode-perl\"' --cfg 'feature=\"unicode-script\"' --cfg 'feature=\"unicode-segment\"' -C metadata=604baccf8464f333 -C extra-filename=-604baccf8464f333 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.5/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a40cc9c191e07da8 -C extra-filename=-a40cc9c191e07da8 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/ryu-a40cc9c191e07da8 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"std\"' -C metadata=3134d02611660405 -C extra-filename=-3134d02611660405 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/getrandom-3134d02611660405 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.8.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=4f258883be84b941 -C extra-filename=-4f258883be84b941 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.57/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=9c7f2a71de758875 -C extra-filename=-9c7f2a71de758875 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name smallvec --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.4.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=af516ba081f6df94 -C extra-filename=-af516ba081f6df94 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name itoa /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=def6b42508610d1c -C extra-filename=-def6b42508610d1c --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps 
--cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-0.1.9/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no -C metadata=55eb92d7e72d18d1 -C extra-filename=-55eb92d7e72d18d1 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/inventory-55eb92d7e72d18d1 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name proc_macro_hack --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -Cembed-bitcode=no -C metadata=24f8c9a7698fc568 -C extra-filename=-24f8c9a7698fc568 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name pkg_config /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/pkg-config-0.3.18/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a729ffec8f42b1bf -C extra-filename=-a729ffec8f42b1bf --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' -C metadata=86d2212697398c07 -C extra-filename=-86d2212697398c07 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/build/bitflags-86d2212697398c07 -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name cc --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/cc-1.0.60/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=bd7ffcf8ae7a9c20 -C extra-filename=-bd7ffcf8ae7a9c20 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Compiling unindent v0.1.6\r\n Compiling version_check v0.9.2\r\n Compiling ppv-lite86 v0.2.9\r\n Compiling number_prefix v0.3.0\r\n Compiling strsim v0.8.0\r\n Compiling vec_map v0.8.2\r\n Compiling ansi_term v0.11.0\r\n Compiling unicode_categories v0.1.1\r\n Running `rustc --crate-name unicode_width /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' -C metadata=2ffe7097d8c6b666 -C extra-filename=-2ffe7097d8c6b666 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name either /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.6.1/src/lib.rs 
--error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"use_std\"' -C metadata=644a45e467402f81 -C extra-filename=-644a45e467402f81 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name version_check /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=aa50462cc4c9df50 -C extra-filename=-aa50462cc4c9df50 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unindent --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=fdeaf6996f560ff0 -C extra-filename=-fdeaf6996f560ff0 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name ppv_lite86 --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.9/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"simd\"' --cfg 'feature=\"std\"' -C metadata=e3e8e9d2c7899d24 -C extra-filename=-e3e8e9d2c7899d24 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name number_prefix /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=a640ea83003307f7 -C extra-filename=-a640ea83003307f7 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name strsim /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=816b20067865d64c -C extra-filename=-816b20067865d64c --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name vec_map /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=a7a30dfbdcea21f0 -C extra-filename=-a7a30dfbdcea21f0 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc 
--crate-name ansi_term /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=9c09db9f9cbc7749 -C extra-filename=-9c09db9f9cbc7749 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unicode_categories /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=f5d72f9ccd926082 -C extra-filename=-f5d72f9ccd926082 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --cap-lints allow`\r\n Compiling lock_api v0.3.4\r\n Running `rustc --crate-name lock_api --edition=2018 /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/lock_api-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg 'feature=\"nightly\"' -C metadata=54cc9296368f9d0e -C extra-filename=-54cc9296368f9d0e --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern scopeguard=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/libscopeguard-91afa33e60eb09b1.rmeta --cap-lints allow`\r\n Compiling thread_local v1.0.1\r\n Running `rustc --crate-name thread_local /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=44b3f6e675105288 -C extra-filename=-44b3f6e675105288 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/liblazy_static-09f05f31cfc64306.rmeta --cap-lints allow`\r\n Compiling textwrap v0.11.0\r\n Running `rustc --crate-name textwrap /home/aarbelle/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=05dca2f2bb6ce7b5 -C extra-filename=-05dca2f2bb6ce7b5 --out-dir /tmp/pip-install-ohxny31i/tokenizers/target/release/deps -L dependency=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-ohxny31i/tokenizers/target/release/deps/libunicode_width-2ffe7097d8c6b666.rmeta --cap-lints allow`\r\n Running `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875/build-script-build`\r\n Running `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941/build-script-build`\r\n error: failed to run custom build command for `serde_json v1.0.57`\r\n \r\n Caused by:\r\n could not execute process `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/serde_json-9c7f2a71de758875/build-script-build` (never executed)\r\n \r\n Caused by:\r\n No such file or directory (os error 2)\r\n warning: build failed, 
waiting for other jobs to finish...\r\n error: failed to run custom build command for `rayon-core v1.8.1`\r\n \r\n Caused by:\r\n could not execute process `/tmp/pip-install-ohxny31i/tokenizers/target/release/build/rayon-core-4f258883be84b941/build-script-build` (never executed)\r\n \r\n Caused by:\r\n No such file or directory (os error 2)\r\n warning: build failed, waiting for other jobs to finish...\r\n error: build failed\r\n /tmp/pip-build-env-7kdpvzfy/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'\r\n warnings.warn(tmpl.format(**locals()))\r\n cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib\r\n error: cargo failed with code: 101\r\n \r\n ----------------------------------------\r\n ERROR: Failed building wheel for tokenizers\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly\r\n```", "@arbellea Please make an issue on the tokenizer page. https://github.com/huggingface/tokenizers" ]
1,581
1,706
1,594
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): N/A Language I am using the model on (English, Chinese ...): N/A The problem arises when using: * [X] the official example scripts: (give details below) Problem arises in transformers installation on Microsoft Windows 10 Pro, version 10.0.17763 After creating and activating the virtual environment, installing transformers is not possible, because the following error occurs: "error: can not find Rust Compiler" "ERROR: Failed building wheel for tokenizers" Failed to build tokenizers ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed d The tasks I am working on is: [X ] transformers installation ## To reproduce Steps to reproduce the behavior: 1. From command line interface, create and activate a virtual environment by following the steps in this URL: https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/ 2. Install transformers from source, by following the example in the topic From Source on this URL: https://github.com/huggingface/transformers <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` -m pip --version -m pip install --upgrade pip -m pip install --user virtualenv -m venv env .\env\Scripts\activate pip install transformers ERROR: Command errored out with exit status 1: command: 'c:\users\vbrandao\env\scripts\python.exe' 'c:\users\vbrandao\env\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\vbrandao\AppData\Local\Temp\tmpj6evjmze' cwd: C:\Users\vbrandao\AppData\Local\Temp\pip-install-sza2_lmj\tokenizers Complete output (10 lines): running bdist_wheel running build running build_py creating build creating build\lib creating build\lib\tokenizers copying tokenizers\__init__.py -> build\lib\tokenizers running build_ext running build_rust error: Can not find Rust compiler ---------------------------------------- ERROR: Failed building wheel for tokenizers Failed to build tokenizers ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Installation of transformers should be complete. ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: N/A - installation step - Platform: Command Line Interface / Virtual Env - Python version: python 3.8 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): N/A - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A ![tokenizers_intallation_error](https://user-images.githubusercontent.com/17074908/74371705-06b3f680-4db8-11ea-8a2d-5f920cb3caab.PNG)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2831/reactions", "total_count": 76, "+1": 76, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2831/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2830/comments
https://api.github.com/repos/huggingface/transformers/issues/2830/events
https://github.com/huggingface/transformers/issues/2830
564,185,272
MDU6SXNzdWU1NjQxODUyNzI=
2,830
Reusing states for sequential decoding in BERTForMaskedLM
{ "login": "da03", "id": 5753959, "node_id": "MDQ6VXNlcjU3NTM5NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5753959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/da03", "html_url": "https://github.com/da03", "followers_url": "https://api.github.com/users/da03/followers", "following_url": "https://api.github.com/users/da03/following{/other_user}", "gists_url": "https://api.github.com/users/da03/gists{/gist_id}", "starred_url": "https://api.github.com/users/da03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/da03/subscriptions", "organizations_url": "https://api.github.com/users/da03/orgs", "repos_url": "https://api.github.com/users/da03/repos", "events_url": "https://api.github.com/users/da03/events{/privacy}", "received_events_url": "https://api.github.com/users/da03/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1843738573, "node_id": "MDU6TGFiZWwxODQzNzM4NTcz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder", "name": "Core: Encoder-Decoder", "color": "ef536d", "default": false, "description": "" }, { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
[ "That's a cool idea.", "Closed by #3059 " ]
1,581
1,583
1,583
NONE
null
# 🚀 Feature request I am using BERT as a decoder (by setting is_decoder=True). However, during sequential decoding there is no way of reusing the hidden states, so for every word to be generated we need to rerun the model on the ENTIRE decoded sequence, which renders decoding inefficient. Can you add something similar to the `past=` keyword of the GPT-2 model to BERT's forward function (https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L938)? ## Generalization of my Issue More generally, to the best of my knowledge, there is no model in this library that simultaneously supports 1) cross attention (by feeding `encoder_hidden_states=` or `memory=`), and 2) reusing decoder states during sequential decoding (by feeding `past=`). 1) rules out models like GPT-2 and XLNet, which only support language modeling (although in theory we could just use a decoder to do translation, I want to use a separate encoder and decoder); and 2) rules out models like BERT and T5, which support 1) but not 2). For example, the point of T5 is to use it for text-to-text translation problems, but since we cannot reuse hidden states, sequential decoding (beam search) would be extremely inefficient. ## Example In the provided summarization example, both 1) and 2) are supported. However, the decoder is defined in its own code (examples/summarization/modeling_bertabs.py) and cannot be used directly from the library. Besides, supporting incremental state updates is a basic function that every decoder should support. ## Relevant Issues I checked the suggested similar issues and did not find the same issue. Please let me know if my issue duplicates others'.
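For concreteness, here is a minimal sketch of the `past=` pattern this request asks to generalize, shown with GPT-2 where it already exists. It is written against the transformers 2.x tuple-returning API used at the time, so the exact keyword name and return positions should be treated as assumptions:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

generated = tokenizer.encode("Hello, my dog", return_tensors="pt")
past = None
with torch.no_grad():
    for _ in range(5):
        # With cached states, only the newest token is fed through the model;
        # without `past=`, the ENTIRE sequence would be re-encoded each step.
        inputs = generated if past is None else generated[:, -1:]
        logits, past = model(inputs, past=past)[:2]
        next_token = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.decode(generated[0].tolist()))
```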
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2830/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2830/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2829/comments
https://api.github.com/repos/huggingface/transformers/issues/2829/events
https://github.com/huggingface/transformers/issues/2829
564,152,367
MDU6SXNzdWU1NjQxNTIzNjc=
2,829
BERT takes ~120 seconds to generate prediction.json using SQuAD 2.0
{ "login": "tusharsh23", "id": 37954726, "node_id": "MDQ6VXNlcjM3OTU0NzI2", "avatar_url": "https://avatars.githubusercontent.com/u/37954726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tusharsh23", "html_url": "https://github.com/tusharsh23", "followers_url": "https://api.github.com/users/tusharsh23/followers", "following_url": "https://api.github.com/users/tusharsh23/following{/other_user}", "gists_url": "https://api.github.com/users/tusharsh23/gists{/gist_id}", "starred_url": "https://api.github.com/users/tusharsh23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tusharsh23/subscriptions", "organizations_url": "https://api.github.com/users/tusharsh23/orgs", "repos_url": "https://api.github.com/users/tusharsh23/repos", "events_url": "https://api.github.com/users/tusharsh23/events{/privacy}", "received_events_url": "https://api.github.com/users/tusharsh23/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
I am using the command below to predict question answers using BERT with SQuAD, but it is taking too long to generate prediction.json (approximately 120 seconds). I want to reduce this time to about 10 seconds. run_squad.py --vocab_file=uncased_L-12_H-768_A-12/vocab.txt --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json --init_checkpoint=model.ckpt-21899 --do_train=False --train_file=train-v1.1.json --do_predict=True --train_batch_size=32 --learning_rate=5e-5 --num_train_epochs=3.0 --max_seq_length=384 --doc_stride=128 --version_2_with_negative=True --output_dir=/ --predict_file=input.json --use_tpu=False Please suggest a solution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2829/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2828/comments
https://api.github.com/repos/huggingface/transformers/issues/2828/events
https://github.com/huggingface/transformers/pull/2828
564,106,241
MDExOlB1bGxSZXF1ZXN0Mzc0Mzk2MDE4
2,828
[WIP] Create a Trainer class to handle TF2 model training
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not 100% sure which ones of those methods need to live on the model vs. in the training framework\r\n\r\nFor instance, over in #2816, @srush is implementing support for `pytorch-lightning`, which over in PyTorch world, handles a lot of those tasks. In PyTorch we wouldn't want to implement these in to the model themselves.\r\n\r\nThoughts?", "This is a really good point indeed, because it is something I don't know myself and also why I wanted your opinion on this.\r\n\r\nHow I see the whole picture is like the following. Having a class that will handle the training, let's name it `Trainer` for example, we can imagine this class implementing:\r\n- An LR finder\r\n- A cyclic training\r\n- And maybe other things\r\nThen looks like:\r\n```\r\nclass Trainer(object):\r\n def __init__(self, model_path, training_data, eval_data, **kwargs):\r\n #kwargs will contain the the parameters of the TFPretrainedModel class\r\n # such as distributed=True, optimizer=\"adam\", etc...\r\n self.model = Automodel.from_pretrained(model_path, kwargs)\r\n self.tokenizer = AutoTokenizer.from_pretrained(model_path)\r\n self.training_data = training_data\r\n self.eval_data = eval_data\r\n\r\n def preprocess_data():\r\n # preprocessing the data with the tokenizer\r\n \r\n def lr_finder():\r\n #blabla implementation\r\n return best_lr_over_training_data\r\n\r\n def train(epochs):\r\n lr = self.lr_finder()\r\n self.model.create_optimizer(lr) # Certainly need to modify the signature of this method in the file above\r\n self.create_checkpoint_manager(\"save\")\r\n self.create_summary_writer(\"logs\")\r\n self.model.fit(training_data, epochs) # implementing the cycling training here instead of just model.fit()\r\n```\r\n\r\nThen the code of the external user would be maybe something like:\r\n```\r\nparameters = {....}\r\ntrainer = Trainer(\"bert-base-uncased\", [training, data], [eval, data], parameters)\r\ntrainer.preprocess_data()\r\ntrainer.train(4) # We can even imagine not giving the number of epochs, and use an EarlyStop callback and give a default number of epochs.\r\ntrainer.model.save_pretrained()\r\n```\r\n\r\nOf course this is just the first draft that comes threw my mind. There will be certainly several changes.", "I have checked what is `pytorch-lightning` and it blows my mind, this is really awesome! So convenient and group a lot of the things I want to add here indeed. Unfortunately, I don't know such lib over TF2. I will take some time to check if it exists in parallel of what I'm doing here :)", "Ok, I finally moved everything into a `Trainer` class. I think it was a bad idea to mix the pretrained model and the training features. I think it is much better now.\r\n\r\nAlso instead of the long list of keys in the `**kwargs` parameter we can imagine a config file specifically made for training, and one could custom the training just by updating the JSON file and not the code itself.", "You now have a working example in `examples/run_tf_glue_with_trainer.py`. You can now see how simple it becomes to train a model, if we put apart the config dictionary, training a model takes 4 lines of code.\r\n\r\nOf course there is still a lot of work to do, but now you can have a much better idea of where I wanna go. The next main focuses will be:\r\n- how to select such or such data processor in order to have a trainer more generic for the dataprocessing part\r\n- include metrics\r\n- run an evaluation", "This looks really great @jplu!", "Thanks!", "Close this PR to create a cleaner, and more on purpose one." ]
1,581
1,582
1,582
CONTRIBUTOR
null
**EDIT** Closing this PR to create a cleaner, more focused one. Hello, I'm opening the pull request I was talking about in issue #2783. Here are the proposed features in this PR: - [x] add a checkpoint manager in order to make training fault-tolerant - [x] add a custom fit method to take into account the specific training steps, in distributed mode or not - [x] add an optimizer creation method depending on its name - [ ] add a loss method in order to be able to customize the loss computation - [x] add a Tensorboard summary writer to make the logs available in Tensorboard For now I have created the definitions of the methods with their documentation but with a `raise NotImplementedError` body, as I would first like to have your opinion on the signatures of these methods. Also, I know that you, @julien-c, have recently worked on a `TFModelUtilsMixin` class. Do you think that some of these methods should go into it instead of directly into `TFPreTrainedModel`? ping also @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2828/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2828", "html_url": "https://github.com/huggingface/transformers/pull/2828", "diff_url": "https://github.com/huggingface/transformers/pull/2828.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2828.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2827/comments
https://api.github.com/repos/huggingface/transformers/issues/2827/events
https://github.com/huggingface/transformers/issues/2827
564,095,688
MDU6SXNzdWU1NjQwOTU2ODg=
2,827
OOM risk in RobertaTokenizer/GPT2Tokenizer
{ "login": "AlexDut", "id": 26843313, "node_id": "MDQ6VXNlcjI2ODQzMzEz", "avatar_url": "https://avatars.githubusercontent.com/u/26843313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexDut", "html_url": "https://github.com/AlexDut", "followers_url": "https://api.github.com/users/AlexDut/followers", "following_url": "https://api.github.com/users/AlexDut/following{/other_user}", "gists_url": "https://api.github.com/users/AlexDut/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexDut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexDut/subscriptions", "organizations_url": "https://api.github.com/users/AlexDut/orgs", "repos_url": "https://api.github.com/users/AlexDut/repos", "events_url": "https://api.github.com/users/AlexDut/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexDut/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "If lru_cache is used, the max size couldn't be configured at runtime or disabled completely. Trying to go around this with anonymous functions will cause pickling problems and is generally ugly.\r\n\r\nA more elegant and straightforward solution is to use a custom cache with ordered dict and a max size checked at each insertion.\r\n\r\nBoth have a considerable performance impact versus the current unbounded dict though, about 10% more processing time which can really add up in a high throughput objects like tokenizers.\r\n\r\nPretty much any more advanced cache will come with a performance hit. Thoughts?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
# 🐛 Bug ## Information Model I am using: Roberta (_roberta-base_) Language I am using the model on: English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts I am using a modified version of the [examples/distillation/scripts/binarized_data.py](https://github.com/huggingface/transformers/blob/master/examples/distillation/scripts/binarized_data.py) file. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset I am tokenizing a gzipped JSON file that contains 2B inputs, resulting in 250GB of uncompressed data. The tokenization function is divided across _n_ processes to make the tokenization part faster. The resulting _token_ids_ are written as a list of integers to an output file. While tokenization was being done batch by batch, I noticed that my RAM usage kept increasing. It caused an OOM error (I have 64GB of RAM) while only 1.5B inputs had been processed. I identified the problem to be the `cache` attribute of the `GPT2Tokenizer` ([link](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L191)), which is never flushed, so its size can potentially grow infinitely. Tokenizers inheriting from `GPT2Tokenizer` (such as `RobertaTokenizer`) are thus also impacted. ## To reproduce Steps to reproduce the behavior: Run tokenization (`tokenizer.encode()`) on a very big file using `GPT2Tokenizer` or `RobertaTokenizer`. ## Expected behavior The memory footprint of the tokenizer should be constant while processing an infinite stream of inputs. ## Suggestion I made a quick and dirty fix in my script by flushing the `cache` (tokenizer.cache.clear()) if its size reaches an arbitrarily set threshold (100k in my case), with no significant loss in performance. However, I think there are smarter solutions than flushing the whole cache content. One can use an LRU cache instead of a Python dict. You can also define a private method that checks whether the cache size has reached a threshold and performs the flushing in an "elegant" way. I know that for production purposes the [Tokenizers](https://github.com/huggingface/tokenizers) lib would be more appropriate, but I wanted to bring this behavior to your attention. ## Environment info - `transformers` version: 2.4.1 - Platform: Ubuntu 16.04 LTS - Python version: 3.6.9 - PyTorch version: 1.4.0 (with GPU) - Tensorflow version: 2.0.0 (with GPU) - Using GPU in script?: No - Using distributed or parallel set-up in script?: using the python 3.6 [multiprocessing](https://docs.python.org/3.6/library/multiprocessing.html) lib
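As a rough, standalone illustration of the bounded LRU idea suggested above (the `BoundedCache` class and its threshold are hypothetical names, not the library's actual code):

```python
from collections import OrderedDict

class BoundedCache(OrderedDict):
    """Dict with LRU eviction so memory stays constant on infinite streams."""

    def __init__(self, max_size=100_000):
        super().__init__()
        self.max_size = max_size

    def __getitem__(self, key):
        value = super().__getitem__(key)
        self.move_to_end(key)  # mark as recently used
        return value

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.max_size:
            self.popitem(last=False)  # evict the least recently used entry

cache = BoundedCache(max_size=2)
cache["a"], cache["b"] = 1, 2
_ = cache["a"]          # touch "a" so "b" becomes the eviction candidate
cache["c"] = 3
assert "b" not in cache and "a" in cache
```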
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2827/reactions", "total_count": 6, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2827/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2826/comments
https://api.github.com/repos/huggingface/transformers/issues/2826/events
https://github.com/huggingface/transformers/issues/2826
564,037,051
MDU6SXNzdWU1NjQwMzcwNTE=
2,826
Why is only the hidden state of the last token of the last layer used for predicting the next word?
{ "login": "mainulquraishi", "id": 14335238, "node_id": "MDQ6VXNlcjE0MzM1MjM4", "avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mainulquraishi", "html_url": "https://github.com/mainulquraishi", "followers_url": "https://api.github.com/users/mainulquraishi/followers", "following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}", "gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions", "organizations_url": "https://api.github.com/users/mainulquraishi/orgs", "repos_url": "https://api.github.com/users/mainulquraishi/repos", "events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}", "received_events_url": "https://api.github.com/users/mainulquraishi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
I was trying to generate text and also reading the code to understand how it works. I found that, after providing some text as context (first iteration), it goes through the transformer, and the output of the transformer (`output[0]` of `GPT2Model`) contains one vector for each token position. To my understanding, these vectors are the context-aware representations of each token position. Now, for generating the next word, the representation of the last token from the last layer is used. This is the case for the first iteration. Then, for each subsequent iteration, only the representation of the last predicted word is used to predict the next word. My question is: why is only the representation of the last word used to predict the next word? This raises another question: does the last token-position representation hold the context of the whole sequence (like an LSTM hidden state)?
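A minimal sketch of the mechanics behind the question: because GPT-2's self-attention is causal, the hidden state at the last position already attends to the whole prefix, so only that position's logits are needed to pick the next word (written against the tuple-returning transformers 2.x API; treat it as an illustration):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids)[0]        # (batch, seq_len, vocab_size)

# One row of logits per position; only the last one predicts the next word.
next_token = torch.argmax(logits[:, -1, :], dim=-1)
print(tokenizer.decode(next_token.tolist()))
```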
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2826/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2825/comments
https://api.github.com/repos/huggingface/transformers/issues/2825/events
https://github.com/huggingface/transformers/issues/2825
564,027,827
MDU6SXNzdWU1NjQwMjc4Mjc=
2,825
binarized_data.py in distillation uses incorrect type casting
{ "login": "Rexhaif", "id": 5154447, "node_id": "MDQ6VXNlcjUxNTQ0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rexhaif", "html_url": "https://github.com/Rexhaif", "followers_url": "https://api.github.com/users/Rexhaif/followers", "following_url": "https://api.github.com/users/Rexhaif/following{/other_user}", "gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions", "organizations_url": "https://api.github.com/users/Rexhaif/orgs", "repos_url": "https://api.github.com/users/Rexhaif/repos", "events_url": "https://api.github.com/users/Rexhaif/events{/privacy}", "received_events_url": "https://api.github.com/users/Rexhaif/received_events", "type": "User", "site_admin": false }
[ { "id": 1838876023, "node_id": "MDU6TGFiZWwxODM4ODc2MDIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation", "name": "Distillation", "color": "d4c5f9", "default": false, "description": "Related to model distillation" } ]
closed
false
null
[]
[ "Good catch @Rexhaif \r\nI'll fix that. Thanks for pointing that out.\r\nVictor" ]
1,581
1,581
1,581
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): possibly affected model is DistilBert(distilbert-base-multilingual-cased) Language I am using the model on (English, Chinese ...): multiple The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [x] my task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open line [84 in examples/distillation/scripts/binarized_data.py](https://github.com/huggingface/transformers/blob/21da895013a95e60df645b7d6b95f4a38f604759/examples/distillation/scripts/binarized_data.py#L84) 2. See typecast into np.uint16 (possibly added to produce smaller output file size) 3. Realize, that multilingual model has vocab size of 119547, so a large portion of tokens(54012, 45%), which has id > uint16 max value(65535), receives the wrong id after binarization ```python # Some code to demonstrate the thing import transformers as tr import numpy as np tok = tr.DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased") print("UInt16 max value", np.iinfo(np.uint16).max) ## 65535 print("Vocab size:", tok.vocab_size) ## 119547 ## code to produce table i've included into issue def table_row(tok_id): print(f"|{tok_id:^15}|{tok.decode([tok_id]):^18}|{np.uint16(tok_id):^18}|{tok.decode([np.uint16(tok_id)]):^30}|") print("|Actual token id|Actual token value|Token id in uint16|Token value by uint16 token id|") print("|---------------|------------------|------------------|------------------------------|") for i in range(65535, 65700): table_row(i) ``` ## Examples |Actual token id|Actual token value|Token id in uint16|Token value by uint16 token id| |---------------|------------------|------------------|------------------------------| | 65535 | PD | 65535 | PD | | 65536 | ##्ग | 0 | [PAD] | | 65537 | označava | 1 | [unused1] | | 65538 | ##gården | 2 | [unused2] | | 65539 | ##чном | 3 | [unused3] | | .... | .... | .... | .... | | 65635 | siege | 99 | [unused99] | | 65636 | ##lën | 100 | [UNK] | | 65637 | dotato | 101 | [CLS] | | 65638 | madeira | 102 | [SEP] | | 65639 | ##μίας | 103 | [MASK] | | 65640 | ##muggen | 104 | <S> | | 65641 | ##льним | 105 | <T> | | 65642 | Crimea | 106 | ! | | 65643 | altor | 107 | " | | 65644 | chefo | 108 | # | | 65645 | persoon | 109 | $ | | 65646 | ##зія | 110 | % | | 65647 | новое | 111 | & | | 65648 | ##šť | 112 | ' | | 65649 | ##황 | 113 | ( | | 65650 | fisica | 114 | ) | | 65651 | ##ținut | 115 | * | | 65652 | Woche | 116 | + | | 65653 | angesehen | 117 | , | | 65654 | Mach | 118 | - | | 65655 | TNT | 119 | . | | 65656 | obiettivo | 120 | / | | 65657 | ##ceno | 121 | 0 | | 65658 | ##מכון | 122 | 1 | | 65659 | Tallinnas | 123 | 2 | | 65660 | graet | 124 | 3 | | 65661 | straal | 125 | 4 | | 65662 | Pulitzer | 126 | 5 | | 65663 | прво | 127 | 6 | | 65664 | ##laska | 128 | 7 | | 65665 | Actors | 129 | 8 | | 65666 | Daimler | 130 | 9 | | 65667 | estadual | 131 | : | | 65668 | ##ಃ | 132 | ; | | 65669 | resultó | 133 | < | | 65670 | Tokom | 134 | = | | 65671 | Parliamentary | 135 | > | | 65672 | Phật | 136 | ? | | 65673 | liście | 137 | @ | | 65674 | ##ерна | 138 | A | ## Expected behavior binarize_data.py should use typecasting to int32 at least to avoid incorrect behavior ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.3.0 (distillation code from current master branch) - Platform: GNU/Linux Fedora 5.4.13-201.fc31.x86_64 - Python version: Python 3.6.9 :: Anaconda, Inc. - PyTorch version (GPU?): 1.4.0 GPU - Tensorflow version (GPU?): not applicable - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2825/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2825/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2824/comments
https://api.github.com/repos/huggingface/transformers/issues/2824/events
https://github.com/huggingface/transformers/issues/2824
563,996,039
MDU6SXNzdWU1NjM5OTYwMzk=
2,824
GPT-2 language model: multiplying decoder-transformer output with token embedding or another weight matrix
{ "login": "mainulquraishi", "id": 14335238, "node_id": "MDQ6VXNlcjE0MzM1MjM4", "avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mainulquraishi", "html_url": "https://github.com/mainulquraishi", "followers_url": "https://api.github.com/users/mainulquraishi/followers", "following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}", "gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions", "organizations_url": "https://api.github.com/users/mainulquraishi/orgs", "repos_url": "https://api.github.com/users/mainulquraishi/repos", "events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}", "received_events_url": "https://api.github.com/users/mainulquraishi/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "Hi, the input embeddings are tied to the output embeddings -> The `lm_head` attribute essentially shares its weights with the embedding layer. Passing the output of the transformer through that layer is the same as multiplying this output (the hidden states) with the token embedding matrix.", "@LysandreJik Where is the code that performs this weight tying?", "I think I found it:\r\n\r\n```\r\n output_embeddings = self.get_output_embeddings()\r\n if output_embeddings is not None:\r\n self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())\r\n\r\n```\r\nin `PreTrainedModel.tie_weights()` in `modeling_utils.py`." ]
1,581
1,594
1,581
NONE
null
I was reading the code of the GPT-2 language model. The transformation of hidden states into the probability distribution over the vocabulary is done in the following line: `lm_logits = self.lm_head(hidden_states)` Here, `self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)`. However, in the original paper they suggested multiplying the hidden states with the token embedding matrix, whereas the huggingface implementation appears to use another matrix. Is there any advantage to this? Am I missing something?
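As a quick sanity check of the tying described in the comments above, the sketch below verifies that `lm_head` shares its storage with the input embedding matrix, so the Linear layer really does multiply hidden states by the token embeddings (a small illustrative check, assuming the usual weight tying is active):

```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# tie_weights() points the LM head at the token embedding matrix, so no
# separate output matrix is learned; both names refer to the same Parameter.
print(model.lm_head.weight is model.transformer.wte.weight)  # expected: True
print(model.lm_head.weight.shape)  # (vocab_size, n_embd)
```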
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2823/comments
https://api.github.com/repos/huggingface/transformers/issues/2823/events
https://github.com/huggingface/transformers/pull/2823
563,952,413
MDExOlB1bGxSZXF1ZXN0Mzc0MjY4OTUz
2,823
Update run_tf_squad.py
{ "login": "Perseus14", "id": 8448630, "node_id": "MDQ6VXNlcjg0NDg2MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/8448630?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Perseus14", "html_url": "https://github.com/Perseus14", "followers_url": "https://api.github.com/users/Perseus14/followers", "following_url": "https://api.github.com/users/Perseus14/following{/other_user}", "gists_url": "https://api.github.com/users/Perseus14/gists{/gist_id}", "starred_url": "https://api.github.com/users/Perseus14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Perseus14/subscriptions", "organizations_url": "https://api.github.com/users/Perseus14/orgs", "repos_url": "https://api.github.com/users/Perseus14/repos", "events_url": "https://api.github.com/users/Perseus14/events{/privacy}", "received_events_url": "https://api.github.com/users/Perseus14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2823", "html_url": "https://github.com/huggingface/transformers/pull/2823", "diff_url": "https://github.com/huggingface/transformers/pull/2823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2823.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2822/comments
https://api.github.com/repos/huggingface/transformers/issues/2822/events
https://github.com/huggingface/transformers/issues/2822
563,939,822
MDU6SXNzdWU1NjM5Mzk4MjI=
2,822
Bugs in XLNet XLNetLMHeadModel
{ "login": "zhangjiekui", "id": 33198334, "node_id": "MDQ6VXNlcjMzMTk4MzM0", "avatar_url": "https://avatars.githubusercontent.com/u/33198334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangjiekui", "html_url": "https://github.com/zhangjiekui", "followers_url": "https://api.github.com/users/zhangjiekui/followers", "following_url": "https://api.github.com/users/zhangjiekui/following{/other_user}", "gists_url": "https://api.github.com/users/zhangjiekui/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangjiekui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangjiekui/subscriptions", "organizations_url": "https://api.github.com/users/zhangjiekui/orgs", "repos_url": "https://api.github.com/users/zhangjiekui/repos", "events_url": "https://api.github.com/users/zhangjiekui/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangjiekui/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1862634478, "node_id": "MDU6TGFiZWwxODYyNjM0NDc4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix", "name": "Should Fix", "color": "FF0000", "default": false, "description": "This has been identified as a bug and should be fixed." } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I took a closer look into the XLNet model. As I understand the [paper](https://arxiv.org/pdf/1906.08237.pdf) and how the input/target lm training data is created in the code base (look [here](https://github.com/zihangdai/xlnet/blob/bbaa3a6fa0b3a2ee694e8cf66167434f9eca9660/data_utils.py#L616)), the language modelling loss is calculated using a <mask> token for specific words similar to BERT. Opposite to BERT though, the model still performs some kind of auto-regressive training as the length input_ids and labels are regressively increased over time `T`. At each training step `t` though, the `output_ids` are equal to the `input_ids`, whereas some `input_ids` are masked and the `embeddings` corresponding to the different positions `[1,T]` only see the `input_ids` and `embeddings` of certain other positions according to a random permutation (I think Fig. 4 in the paper explains it quite well). \r\n\r\nLong story short, in my opinion the labels should not be a shifted version of the input_ids. \r\nRegarding the special tokens, it's true that the `XLNetTokenizer` add two special tokens by default. I changed that in the examples provided in the `modeling_xlnet.py` file for the `XLNetModel` and `XLNetWithLMHeadModel` as those tokens were only mainly used for the two sentence input lm pre-training (similar to BERT) and might be confusing for simpler examples. \r\n\r\nI added an examples in PR which gives a simple example how the `XLNetWithLMHeadModel` can be used for \"standard\" auto-regressive pretraining. Also when looking at the function `prepare_inputs_for_language_generation()` in `modeling_xlnet.py`, it can be seen that a <mask> token is added to the `input_ids` in order to perform language generation. This might make everything clearer as well.", "Maybe @thomwolf can confirm before closing the issue? ", "Thanks ! Clear, Cool!", "@patrickvonplaten As an aside, are there test available per-model that check that the output of a given model is identical to the output of the original model? Like the integration tests, but where the expected output is actually the same as the original implementation?", "@BramVanroy We are working on those at the moment. So far only the new models (bart & roberta) have real IntegrationTests. Most of the LMHead models have some form of Integration Test that check whether reasonable language is generated.", "@patrickvonplaten Reasonable output is indeed an important aspect, but comparing with original implementations might bring discrepancies to light quickly. I am not sure how feasible that is, so it's just a thought.", "I agree 100%. We compare to the original implementations as best as we can! ", "@BramVanroy we compare to the original implementations when we initially convert the models. We make sure that the output of our models is the same as the output from the official models, given a small margin error. You can find an example of this in the [`convert_pytorch_checkpoint_to_tensorflow.py` script.](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L351)", "> @BramVanroy we compare to the original implementations when we initially convert the models. We make sure that the output of our models is the same as the output from the official models, given a small margin error. 
You can find an example of this in the [`convert_pytorch_checkpoint_to_tensorflow.py` script.](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py#L351)\r\n\r\n@LysandreJik From the looks of this, it seems that you are comparing the output of the imported and/or mapped weights between pt and tf. But it seems that this does not cover architectural difference (correct me if I'm wrong). For instance, the recent issue of bias being counted twice wouldn't have been caught in this test, I think? But if you have some example input case and hard-code a slice of its output from the original implementation (be that tensorflow, pytorch, or something else), then you can test that the transformer implementation (architecture + weights) behave the same.", "Actually, the double bias would definitely have been caught with this! We load the original models' weights onto our models and compare the output of the two models given the same input. This usually results in a tensor of size `(batch_size, sequence_length, hidden_size)` for base models or `(batch_size, sequence_length, vocab_size)` for models with an LM head (that is a lot of values!) that we each compare individually to make sure the difference is under the defined threshold.\r\n\r\nWhere our tests failed us is that we did not have integration tests for this model at the time, which is something @patrickvonplaten is doing a great job at changing :).", "@BramVanroy I think you're describing what we have in e.g. https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L322 (and @patrickvonplaten indeed added others recently)\r\n\r\nHere `expected_slice` is the output of the original (here, fairseq) implementation. I agree that it's a good way to ensure correctness (except in cases where the original implem is \"incorrect\" in some way!)\r\n\r\nSee the recently merged https://github.com/huggingface/transformers/pull/3014\r\n\r\n" ]
1,581
1,582
1,582
NONE
null
# 🐛 Bug ## Information Model I am using: XLNet Language I am using the model on: English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. dig into the modeling_xlnet.py code; 2. browse https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L1057; 3. compare with other LMHead models, such as https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L603; 4. you'll find that inputs and labels are not shifted. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The code should be refactored to the same style as the other LMHead models, and because the XLNet tokenizer appends <sep> and <cls> at the end, the number of tokens shifted should be 2, so I think the code should be: if labels is not None: # Shift so that tokens < n predict n, dropping the trailing <sep> and <cls> shift_logits = logits[..., :-2, :].contiguous() shift_labels = labels[..., 1:-1].contiguous() # Flatten the tokens loss_fct = CrossEntropyLoss() loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) outputs = (loss,) + outputs ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.4.1 - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2822/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2821/comments
https://api.github.com/repos/huggingface/transformers/issues/2821/events
https://github.com/huggingface/transformers/issues/2821
563,856,552
MDU6SXNzdWU1NjM4NTY1NTI=
2,821
CUDA out of memory issue in the middle of training in run_language_modeling.py (say after 1000 steps).
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649070, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information", "name": "Need more information", "color": "d876e3", "default": false, "description": "Further information is requested" }, { "id": 1834052847, "node_id": "MDU6TGFiZWwxODM0MDUyODQ3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)", "name": "Ex: LM (Finetuning)", "color": "26FFF8", "default": false, "description": "Related to language modeling fine-tuning" } ]
closed
false
null
[]
[ "what is your GPU?", "> what is your GPU?\r\n\r\nTITAN Xp\r\n", "If I'm not mistaken the Titan XP has 12GB of VRAM? From my tests training RoBERTa-large with a batch size of 1 already requires 10GB of VRAM, so your GPU memory should be filled quickly. It is surprising that it crashes later though. Could you try with a smaller batch size and gradient accumulation?", "Working with a smaller batch size and gradient accumulation works better. Thank you !" ]
1,581
1,581
1,581
CONTRIBUTOR
null
# 🐛 Bug CUDA OOM in run_language_modeling.py after many steps. ## Information It seems strange to get OOM errors so late in the training procedure. Model I am using (Bert, XLNet ...): roberta-large Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: run_language_modeling.py, with the line_by_line parameter The task I am working on is: * [ ] an official GLUE/SQUaD task: Wikitext after some filtering of sentences (attached) [wiki.train.raw.time_filter.normalized.text.txt](https://github.com/huggingface/transformers/files/4191154/wiki.train.raw.time_filter.normalized.text.txt) ## To reproduce Just run the script with the line_by_line parameter. Steps to reproduce the behavior: - `transformers` version: latest - Platform: - Python version: 3.7.0 - PyTorch version (GPU?): latest - Tensorflow version (GPU?): - Using GPU in script?: yes - TITAN Xp - Using distributed or parallel set-up in script?: no
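For reference, below is a schematic of the smaller-batch-plus-gradient-accumulation workaround that resolved this thread. It assumes `model`, `optimizer`, and `dataloader` are already set up as in run_language_modeling.py and that the model returns the loss as the first tuple element; it is a sketch, not a drop-in patch:

```python
accumulation_steps = 8  # effective batch size = per-step batch * accumulation_steps

optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch)[0]                  # assumes loss is outputs[0]
    (loss / accumulation_steps).backward()    # scale so accumulated grads average out
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```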
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2821/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2820/comments
https://api.github.com/repos/huggingface/transformers/issues/2820/events
https://github.com/huggingface/transformers/issues/2820
563,815,966
MDU6SXNzdWU1NjM4MTU5NjY=
2,820
ImportError: cannot import name 'GradientAccumulator'
{ "login": "DenceChen", "id": 11643704, "node_id": "MDQ6VXNlcjExNjQzNzA0", "avatar_url": "https://avatars.githubusercontent.com/u/11643704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DenceChen", "html_url": "https://github.com/DenceChen", "followers_url": "https://api.github.com/users/DenceChen/followers", "following_url": "https://api.github.com/users/DenceChen/following{/other_user}", "gists_url": "https://api.github.com/users/DenceChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/DenceChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DenceChen/subscriptions", "organizations_url": "https://api.github.com/users/DenceChen/orgs", "repos_url": "https://api.github.com/users/DenceChen/repos", "events_url": "https://api.github.com/users/DenceChen/events{/privacy}", "received_events_url": "https://api.github.com/users/DenceChen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843377584, "node_id": "MDU6TGFiZWwxODQzMzc3NTg0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch", "name": "Version mismatch", "color": "ddea7c", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi,\r\nanyone found workaround for this issue?\r\nThanks", "This shouldn't happen with transformers v2.4.1 and tensorflow >= 2.0.0.\r\n\r\nI can't replicate this issue with the versions you mentioned.\r\n\r\nWould you mind telling me what gets printed out when you run the following snippet?\r\n\r\n```py\r\nfrom transformers import __version__\r\nimport tensorflow as tf\r\n\r\nprint(\"Transformers version\", __version__)\r\nprint(\"TensorFlow version\", tf.__version__)\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "well done" ]
1,581
1,586
1,586
NONE
null
transformers==2.4.1; tensorflow==2.1.0; torch==1.4.0; when I run the following code I get an error, and I don't have tensorflow-gpu installed. **code** --------------------------------------------------------------------------- from transformers import ( TF2_WEIGHTS_NAME, BertConfig, BertTokenizer, DistilBertConfig, DistilBertTokenizer, GradientAccumulator, RobertaConfig, RobertaTokenizer, TFBertForTokenClassification, TFDistilBertForTokenClassification, TFRobertaForTokenClassification, create_optimizer, ) **error** --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-2-4dd6bfff15e9> in <module>() 12 from seqeval import metrics 13 ---> 14 from transformers import ( 15 TF2_WEIGHTS_NAME, 16 BertConfig, ImportError: cannot import name 'GradientAccumulator'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2820/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2819/comments
https://api.github.com/repos/huggingface/transformers/issues/2819/events
https://github.com/huggingface/transformers/pull/2819
563,652,448
MDExOlB1bGxSZXF1ZXN0Mzc0MDIyMjk2
2,819
Create card for model bert-base-spanish-wwm-cased-finetuned-spa-squad2-es.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=h1) Report\n> Merging [#2819](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e0b6247cf749c5a6c7b9543f6c16935b58370ce0?src=pr&el=desc) will **decrease** coverage by `0.26%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2819/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2819 +/- ##\n==========================================\n- Coverage 75.02% 74.75% -0.27% \n==========================================\n Files 93 93 \n Lines 15275 15275 \n==========================================\n- Hits 11460 11419 -41 \n- Misses 3815 3856 +41\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.26% <0%> (-0.52%)` | :arrow_down: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2819/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=footer). Last update [e0b6247...9d4599b](https://codecov.io/gh/huggingface/transformers/pull/2819?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2819", "html_url": "https://github.com/huggingface/transformers/pull/2819", "diff_url": "https://github.com/huggingface/transformers/pull/2819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2819.patch", "merged_at": 1581469636000 }
https://api.github.com/repos/huggingface/transformers/issues/2818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2818/comments
https://api.github.com/repos/huggingface/transformers/issues/2818/events
https://github.com/huggingface/transformers/issues/2818
563,633,876
MDU6SXNzdWU1NjM2MzM4NzY=
2,818
Albert multilingual
{ "login": "nimning", "id": 7147016, "node_id": "MDQ6VXNlcjcxNDcwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/7147016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nimning", "html_url": "https://github.com/nimning", "followers_url": "https://api.github.com/users/nimning/followers", "following_url": "https://api.github.com/users/nimning/following{/other_user}", "gists_url": "https://api.github.com/users/nimning/gists{/gist_id}", "starred_url": "https://api.github.com/users/nimning/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nimning/subscriptions", "organizations_url": "https://api.github.com/users/nimning/orgs", "repos_url": "https://api.github.com/users/nimning/repos", "events_url": "https://api.github.com/users/nimning/events{/privacy}", "received_events_url": "https://api.github.com/users/nimning/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649070, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information", "name": "Need more information", "color": "d876e3", "default": false, "description": "Further information is requested" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "As far as I know (from following https://github.com/google-research/ALBERT/issues/5 and https://github.com/google-research/ALBERT/issues/91), ALBERT multilingual is not yet released.\r\n\r\nWe'll make sure to support it once it's released.", "https://github.com/google-research/ALBERT/pull/152/files\r\n\r\n😱😱😱", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
# 🚀 Feature request Provide a multilingual pre-trained ALBERT model. ## Motivation ALBERT is a lightweight BERT. It would be nice if it had a multilingual version. ## Your contribution
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2818/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2817/comments
https://api.github.com/repos/huggingface/transformers/issues/2817/events
https://github.com/huggingface/transformers/issues/2817
563,519,994
MDU6SXNzdWU1NjM1MTk5OTQ=
2,817
GPT2LMHeadModel with variable length batch input
{ "login": "plstory", "id": 7537477, "node_id": "MDQ6VXNlcjc1Mzc0Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7537477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/plstory", "html_url": "https://github.com/plstory", "followers_url": "https://api.github.com/users/plstory/followers", "following_url": "https://api.github.com/users/plstory/following{/other_user}", "gists_url": "https://api.github.com/users/plstory/gists{/gist_id}", "starred_url": "https://api.github.com/users/plstory/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plstory/subscriptions", "organizations_url": "https://api.github.com/users/plstory/orgs", "repos_url": "https://api.github.com/users/plstory/repos", "events_url": "https://api.github.com/users/plstory/events{/privacy}", "received_events_url": "https://api.github.com/users/plstory/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" } ]
closed
false
null
[]
[ "Have you tried concatenating the sequences into one long string and using a separator token without changing any of the code? You can then use a moving window of 1024 to train the model. You can make each step of the window start after an <|endoftext|> to ensure the primary sequence is not truncated.\r\n\r\nYou can then train on batches of these moving windows of 1024 (either moving a random # of tokens or to the next <|endoftext|> token)\r\n\r\nE.g., An example input for 1024 may then look something like this: \"Some sequence == Some other sequence <|endoftext|> Some sequence_2 == Some other sequence_2 <|endoftext|> Some sequence_3 == Some other sequence_3 <|endoftext|> Some sequence_4 == Some othe\" (clipped on purpose to illustrate a point.)\r\n\r\nThen you prompt with \"Some sequence ==\" and terminate generation/clip text on <|endoftext|>\r\n\r\nThe model is *very good* at learning like this. It is okay to have the window clip things off at the end.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,587
1,587
NONE
null
I'm trying to repurpose the GPT2LMHeadModel for a seq2seq-like task, where I have an input prompt sequence of length L and I'm asking the model to output a sequence that matches a target sequence/sentence. For a single input-output pair, I simply change the original code of `shift_logits = lm_logits[..., :-1, :].contiguous()` to `shift_logits = lm_logits[..., L-1:-1, :].contiguous()`. But I'm a bit lost on how I can do this for a batch of variable-length inputs. Even if I pad the shorter sequences, I would need to shift the logits by a different amount for each input. I'm also uncertain whether I need to do something about the attention mask. Any tip is appreciated!
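One common way to handle the per-example shift is to keep the fixed `[..., :-1, :]` shift and instead mask the prompt (and padding) positions out of the loss. A minimal sketch, assuming padded batches and a caller-supplied `prompt_lengths` tensor (both the function and that argument are illustrative, not part of the library); the attention mask would then typically only need to hide padding, as usual:

```python
import torch
import torch.nn.functional as F

def prompt_masked_lm_loss(lm_logits, input_ids, prompt_lengths, pad_token_id):
    """Keep the usual fixed shift, but only score tokens after each prompt.
    prompt_lengths: (batch,) tensor of prompt lengths L per example."""
    shift_logits = lm_logits[..., :-1, :].contiguous()
    shift_labels = input_ids[..., 1:].contiguous()

    # Shifted position t predicts original token t+1, so for a prompt of
    # length L the first scored position is L-1 (matching lm_logits[..., L-1:-1, :]).
    positions = torch.arange(shift_labels.size(1), device=input_ids.device)
    after_prompt = positions.unsqueeze(0) >= (prompt_lengths.unsqueeze(1) - 1)
    not_padding = shift_labels != pad_token_id
    keep = after_prompt & not_padding

    labels = shift_labels.masked_fill(~keep, -100)  # -100 is ignored below
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
```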
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2817/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2816/comments
https://api.github.com/repos/huggingface/transformers/issues/2816/events
https://github.com/huggingface/transformers/pull/2816
563,446,753
MDExOlB1bGxSZXF1ZXN0MzczODU1OTE1
2,816
Proposal: Update examples to utilize a new format.
{ "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "organizations_url": "https://api.github.com/users/srush/orgs", "repos_url": "https://api.github.com/users/srush/repos", "events_url": "https://api.github.com/users/srush/events{/privacy}", "received_events_url": "https://api.github.com/users/srush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @srush, thanks for this PR :heart: Can't wait to test it!\r\n\r\nOne suggestion/RFC: could we rename it to something like `token_classification` instead of `ner`. I know PoS tagging is not really covered in recent papers, but I always test new models for this task with the \"identical\" implementation 😅 This requires only a little modification in the code: we then should report accuracy as well.\r\n\r\nBut I will be totally fine with `ner` here!", "Token Classification sounds good to me. That is consistent with the internal naming. " ]
1,581
1,582
1,582
CONTRIBUTOR
null
This PR creates a new example coding style for the pytorch code. * Uses pytorch-lightning for the underlying training. * Separates out the base transformer loading from the individual training. * Moves each individual example to its own directory. * Moves the code in the readme to bash scripts. The only two new files are `run_pl_ner.py` and `transformers_base.py`. The goal is to keep the same format as the original command line. Most of the argument names are preserved. I have verified that for NER the results are the same on GPU. There are several nice benefits of lightning -> somewhat nicer logging and library integration (e.g. wandb), and auto-checkpointing. Mostly, though, the goal is code readability with identical functionality. Todo: * make sure that the output file format is identical. * print test results after training. * test multi-gpu and apex (in theory these should work)
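For readers unfamiliar with the pytorch-lightning structure the PR adopts, the general shape of such a base module looks roughly like the following. This is a generic sketch of the pattern, not the PR's actual `transformers_base.py`; class and argument names are illustrative:

```python
import torch
import pytorch_lightning as pl
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer

class BaseTransformer(pl.LightningModule):
    """Generic skeleton: load config/tokenizer/model once; task-specific
    scripts subclass this and add their own dataloaders."""

    def __init__(self, model_name_or_path: str, num_labels: int, learning_rate: float = 5e-5):
        super().__init__()
        self.config = AutoConfig.from_pretrained(model_name_or_path, num_labels=num_labels)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
        self.model = AutoModelForTokenClassification.from_pretrained(
            model_name_or_path, config=self.config
        )
        self.learning_rate = learning_rate

    def forward(self, **inputs):
        return self.model(**inputs)

    def training_step(self, batch, batch_idx):
        # batch is a dict with input_ids, attention_mask and labels;
        # when labels are supplied, the first model output is the loss.
        outputs = self(**batch)
        return outputs[0]

    def configure_optimizers(self):
        return torch.optim.AdamW(self.model.parameters(), lr=self.learning_rate)
```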
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2816/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2816", "html_url": "https://github.com/huggingface/transformers/pull/2816", "diff_url": "https://github.com/huggingface/transformers/pull/2816.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2816.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/2815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2815/comments
https://api.github.com/repos/huggingface/transformers/issues/2815/events
https://github.com/huggingface/transformers/pull/2815
563,432,757
MDExOlB1bGxSZXF1ZXN0MzczODQ0NDA3
2,815
Add more specific testing advice to Contributing.md
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=h1) Report\n> Merging [#2815](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bed38d3afec99ce99ef8610337cb279a8fb25033?src=pr&el=desc) will **increase** coverage by `0.55%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2815/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2815 +/- ##\n==========================================\n+ Coverage 75.02% 75.58% +0.55% \n==========================================\n Files 93 93 \n Lines 15275 15275 \n==========================================\n+ Hits 11460 11545 +85 \n+ Misses 3815 3730 -85\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.49% <0%> (+27.59%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=footer). Last update [bed38d3...4ae60cb](https://codecov.io/gh/huggingface/transformers/pull/2815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,581
1,581
1,581
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2815", "html_url": "https://github.com/huggingface/transformers/pull/2815", "diff_url": "https://github.com/huggingface/transformers/pull/2815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2815.patch", "merged_at": 1581459610000 }
https://api.github.com/repos/huggingface/transformers/issues/2814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2814/comments
https://api.github.com/repos/huggingface/transformers/issues/2814/events
https://github.com/huggingface/transformers/issues/2814
563,309,030
MDU6SXNzdWU1NjMzMDkwMzA=
2,814
Repository with recipes for how to pretrain a model from scratch on my own data
{ "login": "ksopyla", "id": 64201, "node_id": "MDQ6VXNlcjY0MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/64201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksopyla", "html_url": "https://github.com/ksopyla", "followers_url": "https://api.github.com/users/ksopyla/followers", "following_url": "https://api.github.com/users/ksopyla/following{/other_user}", "gists_url": "https://api.github.com/users/ksopyla/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksopyla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksopyla/subscriptions", "organizations_url": "https://api.github.com/users/ksopyla/orgs", "repos_url": "https://api.github.com/users/ksopyla/repos", "events_url": "https://api.github.com/users/ksopyla/events{/privacy}", "received_events_url": "https://api.github.com/users/ksopyla/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @ksopyla that's a great – but very broad – question.\r\n\r\nWe just wrote a blogpost that might be helpful: https://huggingface.co/blog/how-to-train\r\n\r\nThe post itself is on GitHub so feel free to improve/edit it too.", "Thank you @julien-c. It will help to add new models to transformer model repository :)", "Hi, \r\nthe blogpost is nice but it is NOT an end to end solution. I've been trying to learn how to use the huggingface \"ecosystem\" to build a LM model from scratch on a novel dataset, and the blogpost is not enough. Adding a jupyter notebook to the blog post would make it very easy for users to learn how to run things end to end. (VS \"put in a Dataset type here\" and \"then run one of the scripts\"). :) ", "@ddofer You are right, this is in process of being addressed at https://github.com/huggingface/blog/issues/3\r\n\r\nFeel free to help :)", "@julien-c Is it possible to do another example using bert to pretrain the LM instead of roberta? I followed the steps, but it doesn't seem to work when I changed the model_type to bert. ", "I am a new contributor and thought this might be a reasonable issue to start with. \r\n\r\nI'm happy to add an additional example of using bert rather than roberta to pretrain the LM.\r\n\r\nPlease let me know if this would be helpful and/or if starting elsewhere would be better ", "> I am a new contributor and thought this might be a reasonable issue to start with.\r\n> \r\n> I'm happy to add an additional example of using bert rather than roberta to pretrain the LM.\r\n> \r\n> Please let me know if this would be helpful and/or if starting elsewhere would be better\r\n\r\nGreat that you want to contribute!; any help is welcome! Fine-tuning and pretraining BERT seems to be already covered in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) though. So your contribution should differ significantly from this functionality. Perhaps it can be written in a more educational rather than production-ready way? That would definitely be useful - explaining all concepts from scratch and such. (But not an easy task.)", "First version of a notebook is up over at https://github.com/huggingface/blog/tree/master/notebooks\r\n(thanks @aditya-malte for the help)", "> > I am a new contributor and thought this might be a reasonable issue to start with.\r\n> > I'm happy to add an additional example of using bert rather than roberta to pretrain the LM.\r\n> > Please let me know if this would be helpful and/or if starting elsewhere would be better\r\n> \r\n> Great that you want to contribute!; any help is welcome! Fine-tuning and pretraining BERT seems to be already covered in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) though. So your contribution should differ significantly from this functionality. Perhaps it can be written in a more educational rather than production-ready way? That would definitely be useful - explaining all concepts from scratch and such. (But not an easy task.)\r\n\r\nI'll give it a shot :) ", "hey @laurenmoos, \r\nA general community request is to work on a keras like wrapper for Transformers. It would be great if you could do that.\r\n\r\nmodel=Roberta()\r\nmodel.pretrain(lm_data)\r\nmodel.finetune(final_data)\r\nmodel.predict(XYZ)", "@aditya-malte I'd love to! \r\n\r\nI will work on that and evaluate the request for additional documentation afterwards. 
Is there an issue to jump on?", "Let me know if you’re interested. I’d be excited to collaborate!", "@aditya-malte yes!", "Hi,\r\n\r\nDid we make any progress on the feature discussed above? A keras like wrapper sounds awesome for Transformers. I would like to contribute in the development.", "> First version of a notebook is up over at https://github.com/huggingface/blog/tree/master/notebooks\r\n> (thanks @aditya-malte for the help)\r\n\r\n@julien-c Thanks for this. I have a question regarding `special_tokens_map.json` file. When I just use the `vocab.json` and `merges.txt` from the tokenizer, the `run_language_modeling.py` shows the following info message\r\n\r\n```bash\r\n05/01/2020 17:44:01 - INFO - transformers.tokenization_utils - Didn't find file /<path-to-my-output-dir>/special_tokens_map.json. We won't load it.\r\n```\r\n\r\nIn the tutorial this has not been mentioned. Should we create this mapping file too?", "Hi @dashayushman,\r\nThe message you’ve shown is not an error/warning as such but is just an INFO message.\r\nAs far as I remember, the BPE model should work just fine with the vocab and merges file. You can ignore the message.\r\nThanks \r\n", "@julien-c @aditya-malte \r\nfrom blog post:\r\n> If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step.\r\n\r\nhow can I do that? Also, save the tokenized data?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @BramVanroy @julien-c \r\nContinuing #1999, it seems `run_language_modeling.py` is just for PyTorch and fine-tune a masked language model using Tensorflow doesn't have an example script yet. Any plan to make the Tensorflow version of the script or maybe how to modify the current`run_language_modeling.py` so it can be used for Tensorflow too? Thank you.", "I would also like to see an example, how to train a language model (like BERT) from scratch with tensorflow on my own dataset, so i can finetune it later on a specific task. ", "> I would also like to see an example, how to train a language model (like BERT) from scratch with tensorflow on my own dataset, so i can finetune it later on a specific task.\r\n\r\nping @jplu ;)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,581
1,607
1,607
NONE
null
# 🚀 Feature request It would be very useful to have documentation on how to train different models, not necessarily with transformers itself, but also with external libs (like the original BERT, fairseq, etc.). Maybe another repository with readmes or docs containing recipes from those who have already pretrained their models, so the procedure can be reproduced for other languages or domains. There are many external resources (blogs, articles on arXiv), but they lack details and are very often not reproducible. ## Motivation Have a proven recipe for training a model. Make it easy for others to train a custom model. The community could then easily train language- or domain-specific models, making more models available in the transformers library. There are many issues related to this: * https://github.com/huggingface/transformers/issues/1283 * https://github.com/huggingface/transformers/issues/2301 * https://github.com/huggingface/transformers/issues/1672 * https://github.com/huggingface/transformers/issues/1714 * https://github.com/huggingface/transformers/issues/1638 * https://github.com/huggingface/transformers/issues/2279 * https://github.com/huggingface/transformers/issues/1108 * https://github.com/huggingface/transformers/issues/1175 * https://github.com/huggingface/transformers/issues/1381 * https://github.com/huggingface/transformers/issues/1547 * https://github.com/huggingface/transformers/issues/1999 * #1908 * #417 * #170
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2814/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2814/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2813/comments
https://api.github.com/repos/huggingface/transformers/issues/2813/events
https://github.com/huggingface/transformers/issues/2813
563,283,501
MDU6SXNzdWU1NjMyODM1MDE=
2,813
PreTrainedModel.generate do_sample default argument is wrong in the documentation
{ "login": "aligirayhanozbay", "id": 44897017, "node_id": "MDQ6VXNlcjQ0ODk3MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/44897017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aligirayhanozbay", "html_url": "https://github.com/aligirayhanozbay", "followers_url": "https://api.github.com/users/aligirayhanozbay/followers", "following_url": "https://api.github.com/users/aligirayhanozbay/following{/other_user}", "gists_url": "https://api.github.com/users/aligirayhanozbay/gists{/gist_id}", "starred_url": "https://api.github.com/users/aligirayhanozbay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aligirayhanozbay/subscriptions", "organizations_url": "https://api.github.com/users/aligirayhanozbay/orgs", "repos_url": "https://api.github.com/users/aligirayhanozbay/repos", "events_url": "https://api.github.com/users/aligirayhanozbay/events{/privacy}", "received_events_url": "https://api.github.com/users/aligirayhanozbay/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "In the documentation for [version 2.4.1/2.4.0](https://huggingface.co/transformers/v2.4.0/main_classes/model.html#transformers.PreTrainedModel.generate), it does indicate it is `False` by default. In the [master documentation](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.generate) though, it is set to `True` by default because we've changed it on the current master.", "I see, however the page title for the master documentation clearly indicates 2.4.1 for the version, which was the source of my confusion. Thank you very much for the clarification", "Indeed, this is very misleading. I'll update it." ]
1,581
1,581
1,581
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2LMHeadModel Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create GPT2LMHeadModel and GPT2Tokenizer objects using the from_pretrained('gpt2') method 2. Use the generate function to generate sequences without any input argument multiple times, and then repeat with do_sample = True 3. When do_sample is set to False (or is not supplied at all), the generate method always generates the following string: '!\n\nThe first thing I did was to make a list of all the things I would!' When generating with do_sample set to True, varying results are produced. This is consistent with the behaviour described in the code, except for the default value of do_sample. Code sample from a python shell: ```python >>> model = transformers.GPT2LMHeadModel.from_pretrained('gpt2') >>> g2t = transformers.GPT2Tokenizer.from_pretrained('gpt2') >>> g2t.decode(model.generate()[0]) '!\n\nThe first thing I did was to make a list of all the things I would!' >>> g2t.decode(model.generate()[0]) '!\n\nThe first thing I did was to make a list of all the things I would!' >>> g2t.decode(model.generate()[0]) '!\n\nThe first thing I did was to make a list of all the things I would!' >>> g2t.decode(model.generate(do_sample=True)[0]) "!, I can't help but wonder how she's doing. I really have no idea.!" >>> g2t.decode(model.generate(do_sample=True)[0]) '!\n\nThe other guy is trying to take something away from the guy before you even start!' >>> g2t.decode(model.generate(do_sample=True)[0]) '! are you kidding me?\n\n\nBut maybe you should wait for his own "last act!' ``` Similarly, you can run print(transformers.GPT2LMHeadModel.from_pretrained('gpt2').config.do_sample) to verify that the 'default' argument is in fact False. ## Expected behavior The documentation should say do_sample is False by default OR the config should be updated to be in line with the documentation. ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.4.1 - Platform: Ubuntu GNU/Linux 18.04 - Python version: Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux - PyTorch version (GPU?): 1.4.0 GPU version with Nvidia RTX 2080Ti - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
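Whatever default a given release uses, the ambiguity is easy to sidestep by passing the flag explicitly. A minimal sketch built on the snippet above:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# The effective default lives on the model config and can be inspected directly:
print(model.config.do_sample)

# Passing do_sample explicitly makes generation independent of that default:
greedy = model.generate(do_sample=False)   # deterministic; identical text each call
sampled = model.generate(do_sample=True)   # stochastic; varies between calls

print(tokenizer.decode(greedy[0]))
print(tokenizer.decode(sampled[0]))
```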
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2813/timeline
completed
null
null