url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/1209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1209/comments | https://api.github.com/repos/huggingface/transformers/issues/1209/events | https://github.com/huggingface/transformers/issues/1209 | 490,093,302 | MDU6SXNzdWU0OTAwOTMzMDI= | 1,209 | run_squad.py predictions | {
"login": "Arjunsankarlal",
"id": 28828445,
"node_id": "MDQ6VXNlcjI4ODI4NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/28828445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arjunsankarlal",
"html_url": "https://github.com/Arjunsankarlal",
"followers_url": "https://api.github.com/users/Arjunsankarlal/followers",
"following_url": "https://api.github.com/users/Arjunsankarlal/following{/other_user}",
"gists_url": "https://api.github.com/users/Arjunsankarlal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arjunsankarlal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arjunsankarlal/subscriptions",
"organizations_url": "https://api.github.com/users/Arjunsankarlal/orgs",
"repos_url": "https://api.github.com/users/Arjunsankarlal/repos",
"events_url": "https://api.github.com/users/Arjunsankarlal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arjunsankarlal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The comments say this situation is when the answer is a single null , but I don't see any conditionals to filter for this. hmmm did you figure it out?"
] | 1,567 | 1,574 | 1,573 | NONE | null | ## ❓ Can anyone explain how the start_logit and end_logit values are determined? And how can they be used to measure the reliability of the answer span?
In some issues I saw that if start_logit and end_logit = -1, the question is concluded to be unanswerable.
There are some cases where I get the following prediction:
> "1": [
>     {
>         "text": "hover your mouse pointer",
>         "probability": 1.0,
>         "start_logit": -12.987695693969727,
>         "end_logit": -12.40383529663086
>     }
> ],
In this place, what do these values of start_logit and end_logit actually mean? Since a logit is the natural log of the odds, what is considered here as the odds?
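For intuition, here is a minimal sketch, assuming the usual SQuAD post-processing scheme (the real utils_squad code also handles length limits and null answers), of how these probabilities are typically derived from the raw logits:
```python
# Minimal sketch (not the actual utils_squad code): each candidate span
# is scored by start_logit + end_logit, and "probability" is a softmax
# over those scores across the nbest list.
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

nbest = [{"start_logit": -12.99, "end_logit": -12.40}]  # toy values
probs = softmax([p["start_logit"] + p["end_logit"] for p in nbest])
# With a single candidate the softmax is trivially [1.0], which is why a
# lone answer can show "probability": 1.0 despite very negative logits.
```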
In some cases, if the number of nbest predictions is 1, then [here in utils_squad.py](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_squad.py#L613-L614) the answer span is set to `empty`; does that mean it is the right answer?
    if len(nbest) == 1:
        nbest.insert(0,
            _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
But in the [same file](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_squad.py#L618-L620), when there are no nbest predictions, the answer span is again set to `empty`:
    if not nbest:
        nbest.append(
            _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
Why is the same answer set in these two different situations? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1209/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1208/comments | https://api.github.com/repos/huggingface/transformers/issues/1208/events | https://github.com/huggingface/transformers/issues/1208 | 489,816,991 | MDU6SXNzdWU0ODk4MTY5OTE= | 1,208 | How to set the token_type_ids in XLNet correctly? | {
"login": "Dongfeng-He",
"id": 26854181,
"node_id": "MDQ6VXNlcjI2ODU0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26854181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dongfeng-He",
"html_url": "https://github.com/Dongfeng-He",
"followers_url": "https://api.github.com/users/Dongfeng-He/followers",
"following_url": "https://api.github.com/users/Dongfeng-He/following{/other_user}",
"gists_url": "https://api.github.com/users/Dongfeng-He/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dongfeng-He/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dongfeng-He/subscriptions",
"organizations_url": "https://api.github.com/users/Dongfeng-He/orgs",
"repos_url": "https://api.github.com/users/Dongfeng-He/repos",
"events_url": "https://api.github.com/users/Dongfeng-He/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dongfeng-He/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! We have an example using `token_type_ids` in our `run_glue` script. You can look at how we build the features in the [`utils_glue`, especially concerning the `segment_ids`](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py#L456-L484) which are the `token_type_ids` that will be fed to the model. \r\n\r\nIf I recall correctly the XLNet model has `0` for the first sequence `token_type_ids`, `1` for the second sequence, and `2` for the last (cls) token.\r\n",
"> Hello! We have an example using `token_type_ids` in our `run_glue` script. You can look at how we build the features in the [`utils_glue`, especially concerning the `segment_ids`](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py#L456-L484) which are the `token_type_ids` that will be fed to the model.\r\n> \r\n> If I recall correctly the XLNet model has `0` for the first sequence `token_type_ids`, `1` for the second sequence, and `2` for the last (cls) token.\r\n\r\nThank you for your explanation!\r\n\r\nI still have a question, the code in `utils_glue` is for BERT. As far as I know, the token embeddings and type embeddings are selected from two embedding matrices in BERT, therefore, the type index `0` won't give you a `PAD` token embedding. In XLNet, the type indices are selected in the vocabulary, in which `0` index represents `UNK` token and `1` index represents `BOS` token. \r\n\r\nDo I misunderstand the meaning of `\"indices are selected in the vocabulary\"`, or we can freely use the `BOS` `EOP` `EOD` tokens for our type embeddings?\r\n",
"Oh, that's a typo in XLNet docstring that I thought we had corrected already.\r\nThanks for reminding us of that.\r\n\r\nThe type indices in XLNet are not selected in the vocabulary, they can be arbitrary.\r\nIn XLNet segment ids (what we call `token_type_ids in the repo) don't correspond to embeddings, they are just numbers and the only important thing is that they have to be different for tokens which belong to different segments, hence the flexibility in the exact values (XLNet is using relative segment difference with just two segment embeddings: 0 if the segment id of two tokens are the same, 1 if not). See [here](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_xlnet.py#L926-L928)."
] | 1,567 | 1,568 | 1,568 | NONE | null | Hi, I am fine-tuning an XLNet, and I want to use type embeddings to indicate different parts of a sequence. I am facing a difficulty with `“indices are selected in the vocabulary (unlike BERT which has a specific vocabulary for segment indices)”`, which is the description of `token_type_ids` in the official documentation. Does that mean the type embeddings and token embeddings share the same vocabulary? In that case, how can I select the right indices for the types? If I use `0` and `1` for types, is there a collision between the types and the special tokens (like `UNK`)?
Thanks in advance!
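For concreteness, a small sketch of the scheme described in the replies above; the word pieces here are made up, and the key point is that XLNet segment ids are arbitrary labels compared only for equality:
```python
# Hedged sketch (hypothetical tokens, not the exact utils_glue code):
# XLNet-style token_type_ids for a sequence pair.
tokens_a = ["who", "was", "jim"]      # hypothetical word pieces
tokens_b = ["a", "puppet", "eer"]

SEG_A, SEG_B, SEG_CLS = 0, 1, 2
# XLNet appends <sep> after each segment and <cls> at the very end.
tokens = tokens_a + ["<sep>"] + tokens_b + ["<sep>", "<cls>"]
token_type_ids = ([SEG_A] * (len(tokens_a) + 1)
                  + [SEG_B] * (len(tokens_b) + 1)
                  + [SEG_CLS])
# Only *equality* between positions matters: XLNet uses relative
# segment encoding (same segment -> 0, different -> 1) internally.
```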
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1208/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1207/comments | https://api.github.com/repos/huggingface/transformers/issues/1207/events | https://github.com/huggingface/transformers/issues/1207 | 489,742,623 | MDU6SXNzdWU0ODk3NDI2MjM= | 1,207 | convert_roberta_checkpoint_to_pytorch.py 514 max position? | {
"login": "rush86999",
"id": 16848240,
"node_id": "MDQ6VXNlcjE2ODQ4MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/16848240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rush86999",
"html_url": "https://github.com/rush86999",
"followers_url": "https://api.github.com/users/rush86999/followers",
"following_url": "https://api.github.com/users/rush86999/following{/other_user}",
"gists_url": "https://api.github.com/users/rush86999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rush86999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rush86999/subscriptions",
"organizations_url": "https://api.github.com/users/rush86999/orgs",
"repos_url": "https://api.github.com/users/rush86999/repos",
"events_url": "https://api.github.com/users/rush86999/events{/privacy}",
"received_events_url": "https://api.github.com/users/rush86999/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
I'm not sure if this is intentional, but why is the max position 514? I'm assuming the original RoBERTa model uses 512 like BERT, or is this incorrect? This is also the only place I find this reference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1207/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1206/comments | https://api.github.com/repos/huggingface/transformers/issues/1206/events | https://github.com/huggingface/transformers/issues/1206 | 489,672,228 | MDU6SXNzdWU0ODk2NzIyMjg= | 1,206 | the best way to cut the upper layers | {
"login": "cherepanovic",
"id": 10064548,
"node_id": "MDQ6VXNlcjEwMDY0NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10064548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cherepanovic",
"html_url": "https://github.com/cherepanovic",
"followers_url": "https://api.github.com/users/cherepanovic/followers",
"following_url": "https://api.github.com/users/cherepanovic/following{/other_user}",
"gists_url": "https://api.github.com/users/cherepanovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cherepanovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cherepanovic/subscriptions",
"organizations_url": "https://api.github.com/users/cherepanovic/orgs",
"repos_url": "https://api.github.com/users/cherepanovic/repos",
"events_url": "https://api.github.com/users/cherepanovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/cherepanovic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, don't know which model you are using so I can't answer precisely but here is the general workflow:\r\n1. load the relevant pretrained configuration with `config = config_class.from_pretrained('your-model-of-interest')`\r\n2. Reduce the number of layers in the configuration with for example: `config.num_hidden_layers = 5` (here you have to check the correct attribute for your model).\r\n3. Use the modified config to build and instantiate your model: `model = model_class.from_pretrained('your-model-of-interest', config=config)`.\r\n\r\nPretty easy, isn't it?",
">>Pretty easy, isn't it?\r\n\r\nindeed!\r\n",
"```\r\nconfig = XLNetConfig.from_pretrained('xlnet-base-cased')\r\nconfig.num_hidden_layers = 3\r\n```\r\nraised this error\r\n\r\n`AttributeError: can't set attribute`",
"config.n_layer = 3 does it work",
"> Hi, don't know which model you are using so I can't answer precisely but here is the general workflow:\r\n> \r\n> 1. load the relevant pretrained configuration with `config = config_class.from_pretrained('your-model-of-interest')`\r\n> \r\n> 2. Reduce the number of layers in the configuration with for example: `config.num_hidden_layers = 5` (here you have to check the correct attribute for your model).\r\n> \r\n> 3. Use the modified config to build and instantiate your model: `model = model_class.from_pretrained('your-model-of-interest', config=config)`.\r\n> \r\n> \r\n> Pretty easy, isn't it?\r\n\r\nI assume this gives the upper 5 layers. Is there a way to get the lower 5 layers ? ",
"it's kind of annoying and non-intuitive, but @cherepanovic, the reason why you're seeing this message is that there are several parameters in many of the configs that are not parameters, but properties. I suppose the authors did this to emphasize that they should not be changed after the model is initialized? Not sure. But here they are:\r\n```\r\n @property\r\n def max_position_embeddings(self):\r\n return self.n_positions\r\n\r\n @property\r\n def hidden_size(self):\r\n return self.n_embd\r\n\r\n @property\r\n def num_attention_heads(self):\r\n return self.n_head\r\n\r\n @property\r\n def num_hidden_layers(self):\r\n return self.n_layer\r\n```\r\n\r\nYou can change these, after initialization, by referring to the actual parameters that the property returns.\r\n\r\nIMO it certainly isn't \"pretty easy\", as doubly-named parameters/properties is kinda poor practice, and it would've been easy from a coding perspective to put getters and setters in there as well."
] | 1,567 | 1,626 | 1,570 | NONE | null | Hi,
what would be the best way to cut the upper layers of a transformer (e.g., starting from 12 layers and cutting the 5 upper layers to leave a 7-layer model for use)?
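For reference, a minimal sketch of the config-based approach described in the comments above (shown for BERT; other models use different attribute names such as `n_layer`):
```python
# Hedged sketch: shrink a pretrained BERT to its 7 lower layers by
# editing the configuration before loading the weights; weights for
# the dropped upper layers are simply not loaded.
from pytorch_transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased")
config.num_hidden_layers = 7  # drop the 5 upper layers
model = BertModel.from_pretrained("bert-base-uncased", config=config)
```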
Best regards | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1205/comments | https://api.github.com/repos/huggingface/transformers/issues/1205/events | https://github.com/huggingface/transformers/pull/1205 | 489,665,253 | MDExOlB1bGxSZXF1ZXN0MzE0NDMxNDI4 | 1,205 | Fix typo | {
"login": "tm4roon",
"id": 53220859,
"node_id": "MDQ6VXNlcjUzMjIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/53220859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tm4roon",
"html_url": "https://github.com/tm4roon",
"followers_url": "https://api.github.com/users/tm4roon/followers",
"following_url": "https://api.github.com/users/tm4roon/following{/other_user}",
"gists_url": "https://api.github.com/users/tm4roon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tm4roon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tm4roon/subscriptions",
"organizations_url": "https://api.github.com/users/tm4roon/orgs",
"repos_url": "https://api.github.com/users/tm4roon/repos",
"events_url": "https://api.github.com/users/tm4roon/events{/privacy}",
"received_events_url": "https://api.github.com/users/tm4roon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=h1) Report\n> Merging [#1205](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0b52642d379bed155e8aa4f4088588bfd8ceaa88?src=pr&el=desc) will **increase** coverage by `0.44%`.\n> The diff coverage is `95.65%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1205 +/- ##\n==========================================\n+ Coverage 80.83% 81.27% +0.44% \n==========================================\n Files 46 46 \n Lines 7878 7877 -1 \n==========================================\n+ Hits 6368 6402 +34 \n+ Misses 1510 1475 -35\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `89.37% <91.66%> (+8.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=footer). Last update [0b52642...5c6cac1](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1205?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for that!\r\nTook the occasion to add regression tests and clean up a bit the base class."
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1205",
"html_url": "https://github.com/huggingface/transformers/pull/1205",
"diff_url": "https://github.com/huggingface/transformers/pull/1205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1205.patch",
"merged_at": 1567712657000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1204/comments | https://api.github.com/repos/huggingface/transformers/issues/1204/events | https://github.com/huggingface/transformers/issues/1204 | 489,632,828 | MDU6SXNzdWU0ODk2MzI4Mjg= | 1,204 | Can't trace any model with pytorch-transformers 1.2 | {
"login": "Bycob",
"id": 15674552,
"node_id": "MDQ6VXNlcjE1Njc0NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/15674552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bycob",
"html_url": "https://github.com/Bycob",
"followers_url": "https://api.github.com/users/Bycob/followers",
"following_url": "https://api.github.com/users/Bycob/following{/other_user}",
"gists_url": "https://api.github.com/users/Bycob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bycob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bycob/subscriptions",
"organizations_url": "https://api.github.com/users/Bycob/orgs",
"repos_url": "https://api.github.com/users/Bycob/repos",
"events_url": "https://api.github.com/users/Bycob/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bycob/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, thank you for reporting this. I can reproduce it on my side. It seems to be a problem relative to the model being on `cuda`, as it doesn't fail if you don't put the model/ids on `cuda`.\r\n\r\nThis doesn't fail:\r\n\r\n```py\r\nfrom pytorch_transformers import BertModel\r\nimport torch\r\n\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\", torchscript=True)\r\nmodel.eval()\r\nids = torch.LongTensor([[1, 2, 3]])\r\ntok = torch.zeros_like(ids)\r\natt = torch.ones_like(ids)\r\ntorch.jit.trace(model, (ids, tok, att, ids))\r\n```",
"Traced models on cpu may not be convertible to cuda due to hard coded tensor creation in torchscript. I tried a while ago and it wasn't working, and I found an issue (#1010) referencing a similar problem.",
"Yes, with the merge of #1195, the jit tracing issue of #1010 should now be fixed on master.\r\n\r\nYou test installing from source and see if it solves your issue. ",
"I ran BERT on GPU after tracing it on CPU, and it works fine! Any information about when this fix will be available from the official distribution?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, while the model outputs has multi outputs like ``sequence_output`` and ``pooled_output``,how to get one of them in C++?"
] | 1,567 | 1,574 | 1,573 | NONE | null | ## 🐛 Bug
I get an error whenever I try to trace any model from pytorch-transformers 1.2.0.
When I roll back to 1.1, everything is fine.
## To Reproduce
```python
from pytorch_transformers import BertModel
import torch
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.to('cuda')
model.eval()
ids = torch.LongTensor([[1, 2, 3]]).cuda()
tok = torch.zeros_like(ids)
att = torch.ones_like(ids)
torch.jit.trace(model, (ids, tok, att, ids))
```
This script produces the following error:
```
Traceback (most recent call last):
File "/home/louisj/.local/lib/python3.5/site-packages/torch/jit/__init__.py", line 545, in run_mod_and_filter_tensor_outputs
outs = wrap_retval(mod(*_clone_inputs(inputs)))
File "/home/louisj/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
RuntimeError: r ASSERT FAILED at /pytorch/aten/src/ATen/core/jit_type.h:142, please report a bug to PyTorch. (expect at /pytorch/aten/src/ATen/core/jit_type.h:142)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f273aa91441 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f273aa90d7a in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libc10.so)
frame #2: std::shared_ptr<c10::DimensionedTensorType const> c10::Type::expect<c10::DimensionedTensorType const>() + 0x140 (0x7f27397ff810 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #3: torch::jit::fuser::compileKernel(torch::jit::fuser::KernelSpec const&, torch::jit::fuser::ArgSpec const&, std::vector<long, std::allocator<long> > const&, c10::Device) + 0xa5a (0x7f27397fbdca in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #4: torch::jit::fuser::runFusion(long, std::vector<c10::IValue, std::allocator<c10::IValue> >&, std::string*) + 0x5b0 (0x7f2739803c20 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #5: torch::jit::runFusion(long, std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x13 (0x7f2739733bc3 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #6: <unknown function> + 0xb2b066 (0x7f273973d066 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #7: <unknown function> + 0xa8ebe6 (0x7f27396a0be6 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #8: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x22 (0x7f273969c202 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #9: <unknown function> + 0xa7685d (0x7f273968885d in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #10: <unknown function> + 0x457617 (0x7f277a2cd617 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x130d0c (0x7f2779fa6d0c in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #14: python3() [0x4fbfce]
frame #16: python3() [0x574db6]
frame #20: python3() [0x4ec2e3]
frame #22: python3() [0x4fbfce]
frame #24: python3() [0x574db6]
frame #27: python3() [0x5401ef]
frame #30: python3() [0x4ec3f7]
frame #33: python3() [0x5401ef]
frame #35: python3() [0x53fc97]
frame #37: python3() [0x53fc97]
frame #39: python3() [0x60cb42]
frame #44: __libc_start_main + 0xf0 (0x7f279df7d830 in /lib/x86_64-linux-gnu/libc.so.6)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "session.py", line 10, in <module>
torch.jit.trace(model, (ids, tok, att, ids))
File "/home/louisj/.local/lib/python3.5/site-packages/torch/jit/__init__.py", line 702, in trace
_check_trace([example_inputs], func, executor_options, traced, check_tolerance, _force_outplace)
File "/home/louisj/.local/lib/python3.5/site-packages/torch/autograd/grad_mode.py", line 43, in decorate_no_grad
return func(*args, **kwargs)
File "/home/louisj/.local/lib/python3.5/site-packages/torch/jit/__init__.py", line 583, in _check_trace
traced_outs = run_mod_and_filter_tensor_outputs(module, inputs, 'trace')
File "/home/louisj/.local/lib/python3.5/site-packages/torch/jit/__init__.py", line 551, in run_mod_and_filter_tensor_outputs
' with test inputs.\nException:\n' + indent(str(e)))
torch.jit.TracingCheckError: Tracing failed sanity checks!
Encountered an exception while running the trace with test inputs.
Exception:
r ASSERT FAILED at /pytorch/aten/src/ATen/core/jit_type.h:142, please report a bug to PyTorch. (expect at /pytorch/aten/src/ATen/core/jit_type.h:142)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f273aa91441 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f273aa90d7a in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libc10.so)
frame #2: std::shared_ptr<c10::DimensionedTensorType const> c10::Type::expect<c10::DimensionedTensorType const>() + 0x140 (0x7f27397ff810 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #3: torch::jit::fuser::compileKernel(torch::jit::fuser::KernelSpec const&, torch::jit::fuser::ArgSpec const&, std::vector<long, std::allocator<long> > const&, c10::Device) + 0xa5a (0x7f27397fbdca in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #4: torch::jit::fuser::runFusion(long, std::vector<c10::IValue, std::allocator<c10::IValue> >&, std::string*) + 0x5b0 (0x7f2739803c20 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #5: torch::jit::runFusion(long, std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x13 (0x7f2739733bc3 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #6: <unknown function> + 0xb2b066 (0x7f273973d066 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #7: <unknown function> + 0xa8ebe6 (0x7f27396a0be6 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #8: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x22 (0x7f273969c202 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #9: <unknown function> + 0xa7685d (0x7f273968885d in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch.so.1)
frame #10: <unknown function> + 0x457617 (0x7f277a2cd617 in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x130d0c (0x7f2779fa6d0c in /home/louisj/.local/lib/python3.5/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #14: python3() [0x4fbfce]
frame #16: python3() [0x574db6]
frame #20: python3() [0x4ec2e3]
frame #22: python3() [0x4fbfce]
frame #24: python3() [0x574db6]
frame #27: python3() [0x5401ef]
frame #30: python3() [0x4ec3f7]
frame #33: python3() [0x5401ef]
frame #35: python3() [0x53fc97]
frame #37: python3() [0x53fc97]
frame #39: python3() [0x60cb42]
frame #44: __libc_start_main + 0xf0 (0x7f279df7d830 in /lib/x86_64-linux-gnu/libc.so.6)
```
## Environment
* OS: Ubuntu 16.04.6 LTS
* Python version: Python 3.5.2
* PyTorch version: '1.1.0'
* PyTorch Transformers version (or branch): '1.2.0'
* Using GPU ? yes
* Distributed or parallel setup? no
* Any other relevant information:
I installed pytorch-transformers with pip:
```
pip3 install --user pytorch-transformers==1.2
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1203/comments | https://api.github.com/repos/huggingface/transformers/issues/1203/events | https://github.com/huggingface/transformers/pull/1203 | 489,602,124 | MDExOlB1bGxSZXF1ZXN0MzE0MzgyMzE2 | 1,203 | [2.0] TF 2.0 support | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=h1) Report\n> Merging [#1203](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4a233e5b2c18f0cf508f6b917cd1e02954764699?src=pr&el=desc) will **increase** coverage by `4.27%`.\n> The diff coverage is `89.06%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1203 +/- ##\n==========================================\n+ Coverage 80.45% 84.73% +4.27% \n==========================================\n Files 57 84 +27 \n Lines 8090 12573 +4483 \n==========================================\n+ Hits 6509 10654 +4145 \n- Misses 1581 1919 +338\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `70.96% <ø> (ø)` | |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `70.8% <ø> (ø)` | |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `53.94% <ø> (ø)` | |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <ø> (ø)` | |\n| [transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fb3BlbmFpLnB5) | `89.13% <ø> (ø)` | |\n| [transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `53.89% <ø> (ø)` | |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.4% <ø> (ø)` | |\n| [transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL29wdGltaXphdGlvbi5weQ==) | `96.62% <ø> (ø)` | |\n| [transformers/configuration\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25feGxtLnB5) | `93.33% <ø> (ø)` | |\n| [transformers/tests/conftest.py](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL2NvbmZ0ZXN0LnB5) | `90% <ø> (ø)` | |\n| ... and [113 more](https://codecov.io/gh/huggingface/transformers/pull/1203/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=footer). Last update [4a233e5...80bf868](https://codecov.io/gh/huggingface/transformers/pull/1203?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok big merge"
] | 1,567 | 1,574 | 1,569 | MEMBER | null | Currently converted models:
- [x] BERT
- [x] GPT-2
- [x] XLNet
- [x] XLM
- [x] Transformer-XL
- [x] GPT
- [x] RoBERTa
- [x] DistilBert
With the TF 2.0 Keras imperative interface and eager execution, the workflow and models are surprisingly similar:
```python
import numpy
import torch
import tensorflow as tf
from pytorch_transformers import BertModel, TFBertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
pytorch_model = BertModel.from_pretrained('bert-base-uncased')
tf_model = TFBertModel.from_pretrained('bert-base-uncased')
text = "[CLS] Who was Jim Henson ? Jim [MASK] was a puppeteer [SEP]"
tokens = tokenizer.encode(text)
pytorch_inputs = torch.tensor([tokens])
tf_inputs = tf.constant([tokens])
with torch.no_grad():
pytorch_outputs = pytorch_model(pytorch_inputs)
tf_output = tf_model(tf_inputs, training=False)
numpy.amax(numpy.abs(pytorch_outputs[0].numpy() - tf_output[0].numpy()))
# >>> 2.861023e-06 => we are good, a few 1e-6 is the expected difference
# between TF and PT arising from internal computation ops
```
The convention is to use the same name for classes as the original PyTorch classes but prefixed with `TF`.
If you want to install and use this development branch, you should install from the `tf2` branch like this:
- install TF 2.0: `pip install tensorflow==2.0.0-rc0`
- install pytorch-transformers from the `tf2` branch: `pip install https://github.com/huggingface/pytorch-transformers/archive/tf2.zip`
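As a taste of the training workflow, here is a minimal eager fine-tuning sketch (toy classification head and placeholder data; an assumption-laden illustration, not code from this PR):
```python
# Hedged sketch: eager fine-tuning of the TF model with a toy
# classification head. Token ids and labels below are placeholders.
import tensorflow as tf
from pytorch_transformers import TFBertModel

bert = TFBertModel.from_pretrained("bert-base-uncased")
head = tf.keras.layers.Dense(2)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

ids = tf.constant([[101, 2040, 2001, 102]])  # toy token ids
labels = tf.constant([1])

with tf.GradientTape() as tape:
    pooled = bert(ids, training=True)[1]     # pooled [CLS] output
    logits = head(pooled)
    loss = loss_fn(labels, logits)

variables = bert.trainable_variables + head.trainable_variables
optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
```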
TO-DO / not forget:
- [ ] check weights initialization
- [x] add weights tying
- [x] add example with losses using `model.compile` /`model.fit`
- [ ] take care of having the two possible gelu implementations for Bert
- [ ] untangle Transfo-XL tokenizer from `torch.load` and `torch.save`
- [x] test that all dropout modules are deactivated when training=False (check determinism)
- [ ] clean up our FP16 support (for PyTorch as well) with (i) an adjustment of masking values and (ii) an adjustment of LayerNorm epsilon (add an attribute in configuration files). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1203/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1203",
"html_url": "https://github.com/huggingface/transformers/pull/1203",
"diff_url": "https://github.com/huggingface/transformers/pull/1203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1203.patch",
"merged_at": 1569492663000
} |
https://api.github.com/repos/huggingface/transformers/issues/1202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1202/comments | https://api.github.com/repos/huggingface/transformers/issues/1202/events | https://github.com/huggingface/transformers/issues/1202 | 489,557,326 | MDU6SXNzdWU0ODk1NTczMjY= | 1,202 | Learning word-pieces garble the predictions | {
"login": "chikubee",
"id": 25073753,
"node_id": "MDQ6VXNlcjI1MDczNzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25073753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chikubee",
"html_url": "https://github.com/chikubee",
"followers_url": "https://api.github.com/users/chikubee/followers",
"following_url": "https://api.github.com/users/chikubee/following{/other_user}",
"gists_url": "https://api.github.com/users/chikubee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chikubee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chikubee/subscriptions",
"organizations_url": "https://api.github.com/users/chikubee/orgs",
"repos_url": "https://api.github.com/users/chikubee/repos",
"events_url": "https://api.github.com/users/chikubee/events{/privacy}",
"received_events_url": "https://api.github.com/users/chikubee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The task I am working on is:
* [x] my own task or dataset: Sentiment Analysis on chatbot conversations
Word pieces change the prediction completely, and I fail to understand why.
If I also do a token-to-token similarity, taking the token vector to be the average of the word-piece vectors (or the first word-piece vector) to see whether they carry the same context, they don't.
If I had lemmatized the training set, these pieces would not really have learnt anything.
But that is not intuitive.
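To make the comparison concrete, a minimal sketch (hypothetical tensors, not tied to any particular model output) of the two word-piece pooling strategies mentioned above:
```python
# Hedged sketch: derive one vector for a word that was split into
# word pieces, by averaging the piece vectors or taking the first.
import torch

def pool_word_vector(piece_vectors, strategy="mean"):
    # piece_vectors: tensor of shape (num_pieces, hidden_size)
    if strategy == "mean":
        return piece_vectors.mean(dim=0)
    return piece_vectors[0]  # "first piece" strategy

pieces = torch.randn(3, 768)  # e.g. the pieces of one tokenized word
vec = pool_word_vector(pieces, "mean")
```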
<img width="571" alt="Screenshot 2019-09-05 at 12 01 26 PM" src="https://user-images.githubusercontent.com/25073753/64317678-07a95680-cfd6-11e9-8849-f9c11f8531ae.png">
<img width="574" alt="Screenshot 2019-09-05 at 11 59 48 AM" src="https://user-images.githubusercontent.com/25073753/64317679-07a95680-cfd6-11e9-900a-96ec5d318b12.png">
<img width="594" alt="Screenshot 2019-09-05 at 11 59 53 AM" src="https://user-images.githubusercontent.com/25073753/64317680-07a95680-cfd6-11e9-9950-2957c284e6e8.png">
The [CLS] token vector is fed to the classifier. Changes would need to be made internally.
How can I handle such cases?
Any leads would be helpful.
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1201/comments | https://api.github.com/repos/huggingface/transformers/issues/1201/events | https://github.com/huggingface/transformers/pull/1201 | 489,434,510 | MDExOlB1bGxSZXF1ZXN0MzE0MjUzNjIx | 1,201 | [2.0] - Split configuration and modeling files | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=h1) Report\n> Merging [#1201](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0b52642d379bed155e8aa4f4088588bfd8ceaa88?src=pr&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `89.57%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1201 +/- ##\n==========================================\n+ Coverage 80.83% 80.86% +0.03% \n==========================================\n Files 46 57 +11 \n Lines 7878 8016 +138 \n==========================================\n+ Hits 6368 6482 +114 \n- Misses 1510 1534 +24\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYXV0by5weQ==) | `53.94% <100%> (+3.94%)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.91% <100%> (+0.02%)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `73.19% <100%> (-4.83%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `93.06% <100%> (+0.06%)` | :arrow_up: |\n| [...rch\\_transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZGlzdGlsYmVydF90ZXN0LnB5) | `99.06% <100%> (-0.02%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `83.83% <100%> (-0.2%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfcm9iZXJ0YS5weQ==) | `75.22% <100%> (-0.67%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `77.84% <100%> (-0.99%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_openai\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfb3BlbmFpX3Rlc3QucHk=) | `93% <100%> (+0.07%)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `90.27% <100%> (+0.2%)` | :arrow_up: |\n| ... 
and [34 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=footer). Last update [0b52642...85df4f7](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1201?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,567 | 1,578 | 1,567 | MEMBER | null | Refactor to split configuration and modeling files so we can share configuration easily between various frameworks.
This PR is quite annoying to rebase so we should probably merge it pretty soon. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1201/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1201",
"html_url": "https://github.com/huggingface/transformers/pull/1201",
"diff_url": "https://github.com/huggingface/transformers/pull/1201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1201.patch",
"merged_at": 1567711019000
} |
https://api.github.com/repos/huggingface/transformers/issues/1200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1200/comments | https://api.github.com/repos/huggingface/transformers/issues/1200/events | https://github.com/huggingface/transformers/issues/1200 | 489,430,102 | MDU6SXNzdWU0ODk0MzAxMDI= | 1,200 | Distributed device ordinal question | {
"login": "Zhaofeng-Wu",
"id": 52263101,
"node_id": "MDQ6VXNlcjUyMjYzMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/52263101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhaofeng-Wu",
"html_url": "https://github.com/Zhaofeng-Wu",
"followers_url": "https://api.github.com/users/Zhaofeng-Wu/followers",
"following_url": "https://api.github.com/users/Zhaofeng-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhaofeng-Wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhaofeng-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhaofeng-Wu/subscriptions",
"organizations_url": "https://api.github.com/users/Zhaofeng-Wu/orgs",
"repos_url": "https://api.github.com/users/Zhaofeng-Wu/repos",
"events_url": "https://api.github.com/users/Zhaofeng-Wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhaofeng-Wu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"No, `local_rank` is only the local rank on each node.",
"Ah I see, thanks! A tangential question, considering the distributed setting only, would it be the same if we simply call `.cuda()` for the model and tensors instead of passing around the device?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
In the following line, we set the device ordinal to the local rank. However, suppose we have four independent nodes, each with only one GPU. Then the 4th node (rank 3) will execute this line with device ordinal 3, but it really only has 1 GPU, so it will be invalid to ask for GPU 3. So won't that break things? Shouldn't it be `torch.device("cuda", 0)`?
https://github.com/huggingface/pytorch-transformers/blob/0287d264e913e10018a95a2723115dc9121e5fc6/examples/run_glue.py#L403 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1199/comments | https://api.github.com/repos/huggingface/transformers/issues/1199/events | https://github.com/huggingface/transformers/pull/1199 | 489,386,911 | MDExOlB1bGxSZXF1ZXN0MzE0MjE4ODkw | 1,199 | Fixing TransformerXL bool issue #1169 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=h1) Report\n> Merging [#1199](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0b52642d379bed155e8aa4f4088588bfd8ceaa88?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1199 +/- ##\n==========================================\n- Coverage 80.83% 80.81% -0.02% \n==========================================\n Files 46 46 \n Lines 7878 7881 +3 \n==========================================\n+ Hits 6368 6369 +1 \n- Misses 1510 1512 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `56.9% <33.33%> (-0.1%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=footer). Last update [0b52642...38b79b5](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=h1) Report\n> Merging [#1199](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0b52642d379bed155e8aa4f4088588bfd8ceaa88?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `42.85%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1199 +/- ##\n==========================================\n- Coverage 80.83% 80.81% -0.02% \n==========================================\n Files 46 46 \n Lines 7878 7881 +3 \n==========================================\n+ Hits 6368 6369 +1 \n- Misses 1510 1512 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `56.9% <42.85%> (-0.1%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=footer). Last update [0b52642...0be6a2a](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks great to me!"
] | 1,567 | 1,578 | 1,567 | MEMBER | null | Fixing #1169 regarding using uint or bool masks in Transformer-XL and PyTorch 1.1.0 and 1.2.0.
Hopefully, this solution will be forward-compatible with future PyTorch releases. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1199",
"html_url": "https://github.com/huggingface/transformers/pull/1199",
"diff_url": "https://github.com/huggingface/transformers/pull/1199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1199.patch",
"merged_at": 1567711022000
} |
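A minimal sketch of the version-guarded mask dtype this PR is about, assuming only that `torch.bool` first appeared in PyTorch 1.2.0; the helper below is illustrative and is not the actual patch to `modeling_transfo_xl.py`.
```python
import torch

# torch.bool was introduced in PyTorch 1.2.0; older versions use uint8 masks.
MASK_DTYPE = torch.bool if hasattr(torch, "bool") else torch.uint8

def causal_mask(qlen, mlen=0):
    # Upper-triangular attention mask in whichever dtype the runtime supports.
    ones = torch.ones(qlen, qlen + mlen)
    return torch.triu(ones, diagonal=1 + mlen).to(MASK_DTYPE)
```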
https://api.github.com/repos/huggingface/transformers/issues/1198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1198/comments | https://api.github.com/repos/huggingface/transformers/issues/1198/events | https://github.com/huggingface/transformers/issues/1198 | 489,356,667 | MDU6SXNzdWU0ODkzNTY2Njc= | 1,198 | How to fine-tune xlnet on SQuAD with the parameter setting provided in the paper? | {
"login": "mralexis1",
"id": 53451708,
"node_id": "MDQ6VXNlcjUzNDUxNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/53451708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mralexis1",
"html_url": "https://github.com/mralexis1",
"followers_url": "https://api.github.com/users/mralexis1/followers",
"following_url": "https://api.github.com/users/mralexis1/following{/other_user}",
"gists_url": "https://api.github.com/users/mralexis1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mralexis1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mralexis1/subscriptions",
"organizations_url": "https://api.github.com/users/mralexis1/orgs",
"repos_url": "https://api.github.com/users/mralexis1/repos",
"events_url": "https://api.github.com/users/mralexis1/events{/privacy}",
"received_events_url": "https://api.github.com/users/mralexis1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Here is my attempt to do layer-wise lr decay. It didn't help with the model performance though. Fixing the preprocessing code helped a lot, but still a few points lower than what they reported in the paper and lower than BERT large WWM model. See my comment in #947 \r\n```\r\nlr_layer_decay = 0.75\r\nn_layers = 24\r\nno_lr_layer_decay_group = []\r\nlr_layer_decay_groups = {k:[] for k in range(n_layers)}\r\nfor n, p in model.named_parameters():\r\n\tname_split = n.split(\".\")\r\n\tif name_split[1] == \"layer\":\r\n\t\tlr_layer_decay_groups[int(name_split[2])].append(p) \r\n\telse:\r\n\t\tno_lr_layer_decay_group.append(p)\r\n\r\noptimizer_grouped_parameters = [{\"params\": no_lr_layer_decay_group, \"lr\": learning_rate}]\r\nfor i in range(n_layers):\r\n\tparameters_group = {\"params\": lr_layer_decay_groups[i], \"lr\": learning_rate * (lr_layer_decay ** (n_layers - i - 1))}\r\n\toptimizer_grouped_parameters.append(parameters_group)\r\n\r\noptimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=1e-6)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,575 | 1,575 | NONE | null | From [here](https://arxiv.org/pdf/1906.08237.pdf) on page 16, it seems we should set Layer-wise lr decay to 0.75. However, I didn't find a way to do so in `run_squad.py`. Could someone provide a sample command line that could run this fine-tune task with the given parameters?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1198/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1198/timeline | completed | null | null |
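For reference, a self-contained restatement of the grouping logic quoted in the comment above; the `"<prefix>.layer.<i>"` parameter-name pattern is an assumption about XLNet's module layout in this library.
```python
from pytorch_transformers import AdamW

def layerwise_lr_groups(model, base_lr, decay=0.75, n_layers=24):
    # Parameters named "<prefix>.layer.<i>..." get base_lr * decay**(n_layers - i - 1),
    # so lower layers receive smaller learning rates; everything else
    # (embeddings, task head, ...) keeps the base learning rate.
    layer_params = {i: [] for i in range(n_layers)}
    other_params = []
    for name, param in model.named_parameters():
        parts = name.split(".")
        if len(parts) > 2 and parts[1] == "layer":
            layer_params[int(parts[2])].append(param)
        else:
            other_params.append(param)
    groups = [{"params": other_params, "lr": base_lr}]
    groups += [{"params": layer_params[i],
                "lr": base_lr * decay ** (n_layers - i - 1)}
               for i in range(n_layers)]
    return groups

# optimizer = AdamW(layerwise_lr_groups(model, 3e-5), lr=3e-5, eps=1e-6)
```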
https://api.github.com/repos/huggingface/transformers/issues/1197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1197/comments | https://api.github.com/repos/huggingface/transformers/issues/1197/events | https://github.com/huggingface/transformers/pull/1197 | 489,158,606 | MDExOlB1bGxSZXF1ZXN0MzE0MDM0MjAz | 1,197 | Fix loading of question answering bert from tf weights. | {
"login": "Talmaj",
"id": 5983634,
"node_id": "MDQ6VXNlcjU5ODM2MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5983634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Talmaj",
"html_url": "https://github.com/Talmaj",
"followers_url": "https://api.github.com/users/Talmaj/followers",
"following_url": "https://api.github.com/users/Talmaj/following{/other_user}",
"gists_url": "https://api.github.com/users/Talmaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Talmaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Talmaj/subscriptions",
"organizations_url": "https://api.github.com/users/Talmaj/orgs",
"repos_url": "https://api.github.com/users/Talmaj/repos",
"events_url": "https://api.github.com/users/Talmaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Talmaj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=h1) Report\n> Merging [#1197](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/89fd3450a61b5efd76d2524df2454e0a0e4ca070?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1197 +/- ##\n=======================================\n Coverage 80.83% 80.83% \n=======================================\n Files 46 46 \n Lines 7878 7878 \n=======================================\n Hits 6368 6368 \n Misses 1510 1510\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `88.03% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=footer). Last update [89fd345...d6fb182](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1197?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,575 | 1,575 | NONE | null | I've got an attribute error when loading pretrained tf weights for question answering (bert) in `load_tf_weights_in_bert` at:
```
elif l[0] == 'squad':
    pointer = getattr(pointer, 'classifier')
```
since `BertForQuestionAnswering` does not have a 'classifier' attribute; its head is called `qa_outputs`. I've added a try/except, which resolves the error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1197/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1197",
"html_url": "https://github.com/huggingface/transformers/pull/1197",
"diff_url": "https://github.com/huggingface/transformers/pull/1197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1197.patch",
"merged_at": null
} |
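A hedged sketch of the fallback this PR describes (the actual diff may differ); the attribute names come from the PR text itself.
```python
def squad_pointer(pointer):
    # BertForQuestionAnswering exposes `qa_outputs` rather than `classifier`,
    # so fall back when the attribute is missing, per the PR description.
    try:
        return getattr(pointer, "classifier")
    except AttributeError:
        return getattr(pointer, "qa_outputs")
```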
https://api.github.com/repos/huggingface/transformers/issues/1196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1196/comments | https://api.github.com/repos/huggingface/transformers/issues/1196/events | https://github.com/huggingface/transformers/issues/1196 | 489,137,849 | MDU6SXNzdWU0ODkxMzc4NDk= | 1,196 | RoBERTa/GPT2 tokenization | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a more complex question than it may seem but in general, I think both will be pretty similar in practice.\r\n\r\nThis is related to the fact that the GPT-2 tokenizer (also used by RoBERTa) requires a space before all the words (see [this wise note](https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/hub_interface.py#L38-L56) in fairseq about it).\r\n\r\nNow at the beginning of a string you don't have a space which can result in strange behaviors.\r\n\r\nHere is an example of the resulting behavior on RoBERTa. You would expect that the strings `Berlin and Munich` and `Munich and Berlin` are tokenized similarly with only the order of the tokens modified but they are not:\r\n```\r\n>>> roberta.encode(\"Berlin and Munich\")\r\ntensor([ 0, 26795, 2614, 8, 10489, 2])\r\n>>> roberta.encode(\"Munich and Berlin\")\r\ntensor([ 0, 448, 879, 1725, 8, 5459, 2])\r\n```\r\nIn this example, the first word is split and not the second.\r\n\r\nIn our tokenizer, to avoid this behavior we decided to always add a space at the beginning of a string (multiple spaces doesn't have an effect so it's ok to always add one) so that the tokenization can be consistent.\r\n\r\nA side effect of this (indicated in the doc/docstring) is that the encoding/decoding process doesn't preserve the absence of a space at the beginning of a string but on the other hand the resulting behavior is more consistent.\r\n```\r\n>>> tokenizer.encode(\"Berlin and Munich\", add_special_tokens=True)\r\n[0, 5459, 8, 10489, 2]\r\n>>> tokenizer.encode(\"Munich and Berlin\", add_special_tokens=True)\r\n[0, 10489, 8, 5459, 2]\r\n```\r\n\r\nHere is a short discussion from my point of view but it would but nice, I think, to have @myleott inputs on this as well.",
"Thanks for your explanation :+1: \r\n\r\nI just ran an experiment for a downstream task (English NER) and F1-score decreased around 0.5% 😟\r\n\r\nI'll repeat that experiment with one commit before 0517e7a1cb4a70bdf32f8d11b56df8d3911d1792 (that introduced the whitespace rule) to find out where this performance drop comes from.",
"Update on that: I used 3bcbebd440c220adbaab657f2d13dac7c89f6453 and re-do my experiment on NER. Now the final F1-score is 92.26 (consistent with a prior result that was 92.31) - in contrast to 91.81 for the latest 1.2.0 version 🤔\r\n\r\nWould it possible to add a flag that uses the \"original\" tokenization 🤔",
"We'll see what we can do (cc @LysandreJik @julien-c).\r\n\r\nIs this difference significantly different with regards to seed run variability?",
"I made a few more experiments with the same dataset and different runs:\r\n\r\n| Version | Run 1 | Run 2 | Run 3 | Avg.\r\n| ------- | ----- | ----- | ----- | ----\r\n| 1.2.0 | 91.81 | 91.82 | 91.78 | 91.80\r\n| 3bcbebd | 92.31 | 92.26 | 92.38 | 92.32\r\n\r\nOn average, the difference is 0.52%.",
"Thanks a lot for the detailed experiments Stefan.\r\n\r\nThe comparison is pretty consistently in favor of the original tokenization so I guess we will switch back to the fairseq tokenization as default and add an option to use the \"consistent-tokenization\".\r\n\r\ncc @LysandreJik @julien-c "
] | 1,567 | 1,569 | 1,569 | COLLABORATOR | null | Hi,
I've one question regarding to the tokenization logic.
I'm using the RoBERTa tokenizer from `fairseq`:
```python
In [15]: tokens = roberta.encode("Berlin and Munich have a lot of puppeteer to see .")
In [16]: tokens
Out[16]:
tensor([ 0, 26795, 2614, 8, 10489, 33, 10, 319, 9, 32986,
9306, 254, 7, 192, 479, 2])
```
Interestingly, Berlin is split into two subwords (ids 26795 and 2614).
When I use the `pytorch-transformers` implementation:
```
In [21]: tokens = tokenizer.tokenize("<s>Berlin and Munich have a lot of puppeteer to see .</s>")
In [22]: indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
In [23]: indexed_tokens
Out[23]: [0, 5459, 8, 10489, 33, 10, 319, 9, 32986, 9306, 254, 7, 192, 479, 2]
```
Berlin is not split 😅
The `roberta.encode` method returns one subword for Berlin when I start the sentence with a space. Which tokenizer is correct here 🤔 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1196/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1196/timeline | completed | null | null |
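A minimal sketch of the space-prefix workaround discussed above; exact token ids depend on the installed pytorch-transformers version.
```python
from pytorch_transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# With byte-level BPE, a leading space makes the first word tokenize
# the same way as a mid-sentence word.
print(tokenizer.encode("Berlin and Munich"))   # first word may be split
print(tokenizer.encode(" Berlin and Munich"))  # space-prefixed, consistent
```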
https://api.github.com/repos/huggingface/transformers/issues/1195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1195/comments | https://api.github.com/repos/huggingface/transformers/issues/1195/events | https://github.com/huggingface/transformers/pull/1195 | 489,085,648 | MDExOlB1bGxSZXF1ZXN0MzEzOTc0MjEz | 1,195 | [2.0] Reordering arguments for torch jit #1010 and future TF2.0 compatibility | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=h1) Report\n> Merging [#1195](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0b52642d379bed155e8aa4f4088588bfd8ceaa88?src=pr&el=desc) will **decrease** coverage by `0.4%`.\n> The diff coverage is `86.59%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1195 +/- ##\n==========================================\n- Coverage 80.83% 80.42% -0.41% \n==========================================\n Files 46 46 \n Lines 7878 7892 +14 \n==========================================\n- Hits 6368 6347 -21 \n- Misses 1510 1545 +35\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `78.83% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_bert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfYmVydF90ZXN0LnB5) | `96.29% <100%> (ø)` | :arrow_up: |\n| [...rch\\_transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZGlzdGlsYmVydF90ZXN0LnB5) | `99.08% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZGlzdGlsYmVydC5weQ==) | `96.77% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `87.08% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `88.03% <100%> (ø)` | :arrow_up: |\n| [...ytorch\\_transformers/tests/modeling\\_roberta\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfcm9iZXJ0YV90ZXN0LnB5) | `78.81% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `56.9% <42.85%> (-0.1%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfb3BlbmFpLnB5) | `81.08% <76.47%> (-0.88%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `83.13% <76.47%> (-0.91%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=footer). Last update [0b52642...7fba47b](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1195?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok merging"
] | 1,567 | 1,576 | 1,568 | MEMBER | null | Torch jit (cf #1010) and TF 2.0 (cf #1104) are more strict than PyTorch on having a specific order of arguments for easy use.
This PR refactors the order of the keyword arguments to make it as natural as possible.
This will be a breaking change for people passing keyword arguments positionally to the models' forward pass, and is therefore delayed to the 2.0 release. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1195/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1195/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1195",
"html_url": "https://github.com/huggingface/transformers/pull/1195",
"diff_url": "https://github.com/huggingface/transformers/pull/1195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1195.patch",
"merged_at": 1568032971000
} |
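A minimal sketch of why positional argument order matters for `torch.jit.trace`, which feeds example inputs positionally; whether `from_pretrained` accepts `torchscript=True` in a given release is an assumption to verify.
```python
import torch
from pytorch_transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()
input_ids = torch.ones(1, 8, dtype=torch.long)
# trace() passes example inputs positionally, so the tensors you feed must
# line up with the first arguments of forward().
traced = torch.jit.trace(model, (input_ids,))
```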
https://api.github.com/repos/huggingface/transformers/issues/1194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1194/comments | https://api.github.com/repos/huggingface/transformers/issues/1194/events | https://github.com/huggingface/transformers/issues/1194 | 489,082,380 | MDU6SXNzdWU0ODkwODIzODA= | 1,194 | How to finetune DistilBERT on custom data? | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello @008karan,\r\nThere is the class `DistilBertForSequenceClassification` for classification tasks. Its used is really similar to `BertForSequenceClassification`: the main difference is that `DistilBertForSequenceClassification` does not need `token_type_ids` as inputs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
I want to build a classifier using DistilBERT. I would like to know how to fine-tune it on a custom dataset and build a classifier on top of it.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1194/timeline | completed | null | null |
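A minimal fine-tuning sketch following the answer above; `num_labels=2`, the sample text, and the label are placeholders, not part of the original discussion.
```python
import torch
from pytorch_transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

input_ids = torch.tensor([tokenizer.encode("a great movie")])
labels = torch.tensor([1])
loss, logits = model(input_ids, labels=labels)[:2]  # no token_type_ids needed
loss.backward()  # then step your optimizer as in run_glue.py
```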
https://api.github.com/repos/huggingface/transformers/issues/1193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1193/comments | https://api.github.com/repos/huggingface/transformers/issues/1193/events | https://github.com/huggingface/transformers/issues/1193 | 489,055,216 | MDU6SXNzdWU0ODkwNTUyMTY= | 1,193 | how to get distilbert-base-uncased-distilled-squad? | {
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello @RyanHuangNLP,\r\nI am not sure to get your question.\r\nDo you mean that you want to play with the model (do inferences)? If so, did you try `qa_model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')`?",
"@VictorSanh I wonder how to get the 'distilbert-base-uncased-distilled-squad' pretrain model, just use the first six layer of the base one or initialize a six layer bert?",
"If you do:\r\n```\r\nqa_model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')\r\n```\r\nYou will get the 'distilbert-base-uncased-distilled-squad' pretrain model.\r\nNothing more to do.",
"I am sorry not clearly expressed, my question is how to get the 'distilbert-base-uncased-distilled-squad' pretrain model, I know that use that code can get the six layer layer, but how to train the pretrain model is what I concern, there is no six layer bert release",
"Ok, I understand your question now @RyanHuangNLP.\r\nIt is finetuned from `distilbert-base-uncased`. More precisely, the model we release is finetuned AND distilled at the same time: the loss is computed from the classic qa loss (see `run_squad.py` and `DistilBertForQuestionAnswering`) plus the distillation supervision from a BERT SQuAD model (second loss).\r\nWe haven't released the script for doing that (and we plan to do it in the near future) but it is a simple adaptation of `run_squad.py` (mostly adding a second loss i.e. distillation).",
"do we have any release date of that run_squad_adapted.py @VictorSanh ? :D",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I think it works by initializing a six-layer BERT and distilling it from a 12-layer BERT, then saving the checkpoint file. Is that right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1193/timeline | completed | null | null |
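A hedged sketch of the "fine-tune and distill at the same time" recipe described above; the temperature `T` and mixing weight `alpha` are assumptions, not the values used for the released checkpoint.
```python
import torch.nn.functional as F

def squad_distil_loss(student, teacher, qa_loss, T=2.0, alpha=0.5):
    # student / teacher: (start_logits, end_logits) tuples from each model.
    # KL term between temperature-scaled student and teacher distributions.
    kl = sum(
        F.kl_div(F.log_softmax(s / T, dim=-1), F.softmax(t / T, dim=-1),
                 reduction="batchmean") * (T * T)
        for s, t in zip(student, teacher)
    )
    return alpha * qa_loss + (1.0 - alpha) * kl
```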
https://api.github.com/repos/huggingface/transformers/issues/1192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1192/comments | https://api.github.com/repos/huggingface/transformers/issues/1192/events | https://github.com/huggingface/transformers/issues/1192 | 489,047,515 | MDU6SXNzdWU0ODkwNDc1MTU= | 1,192 | Finetuning BertModel to extract textual features for VQA shows bad results | {
"login": "ggaemo",
"id": 8081512,
"node_id": "MDQ6VXNlcjgwODE1MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8081512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggaemo",
"html_url": "https://github.com/ggaemo",
"followers_url": "https://api.github.com/users/ggaemo/followers",
"following_url": "https://api.github.com/users/ggaemo/following{/other_user}",
"gists_url": "https://api.github.com/users/ggaemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggaemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggaemo/subscriptions",
"organizations_url": "https://api.github.com/users/ggaemo/orgs",
"repos_url": "https://api.github.com/users/ggaemo/repos",
"events_url": "https://api.github.com/users/ggaemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggaemo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It turned out the learning rate was too high (1e-4) using 3e-5 showed good results."
] | 1,567 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
I am trying to use Bert as a textual feature extractor for VQA.
This is the code for tokenizing question text in VQA.
```
self.tokenizer = pytorch_transformers.BertTokenizer.from_pretrained(
    'bert-base-uncased')
q = self.tokenizer.encode(q_text, add_special_tokens=True)
```
This code extracts features from BERT.
```
self.bert = pytorch_transformers.BertModel.from_pretrained(
    'bert-base-uncased', output_attentions=True)

# Build a float attention mask over the question tokens from their lengths.
question_mask_cls = torch.arange(lengths[0]).to(self.device)
lengths_cls = lengths
q_mask_cls = question_mask_cls[None, :] < lengths_cls[:, None]
q_mask_cls = q_mask_cls.to(torch.float32)

# Encode the question with BERT, then decode object features against it.
question_embed_t, _, src_attn_list = self.bert(question_padded,
                                               attention_mask=q_mask_cls)
output, tgt_attn_list, tgt_src_attn_list = self.q_decoder(
    tgt=obj_feature_bbox_cls.permute(1, 0, 2),
    memory=question_embed_t.permute(1, 0, 2),
    memory_key_padding_mask=memory_key_padding_mask,
    tgt_key_padding_mask=tgt_key_padding_mask)
```
If I freeze the parameters of BERT, it gives better results. But when I fine-tune the whole model, it does not seem to learn. I've tried PyTorch's Adam optimizer and the AdamW provided by this repository; neither works.
```
optimizer = pytorch_transformers.optimization.AdamW(model.parameters(), lr=3e-5)
```

The orange curve shows the model with BERT parameters frozen, while the pink and sky-blue curves show the models that train the BERT parameters.
Are there any potential issues I am missing? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1192/timeline | completed | null | null |
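A minimal sketch of the lower-learning-rate resolution above, using per-parameter-group learning rates; the attribute names `bert` and `q_decoder` are taken from the issue's own snippet.
```python
from pytorch_transformers import AdamW

def build_optimizer(model):
    # `model` is assumed to expose the attributes from the issue's snippet:
    # a pretrained `model.bert` and a freshly initialized `model.q_decoder`.
    return AdamW(
        [{"params": model.bert.parameters(), "lr": 3e-5},
         {"params": model.q_decoder.parameters(), "lr": 1e-4}],
        lr=3e-5)
```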
https://api.github.com/repos/huggingface/transformers/issues/1191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1191/comments | https://api.github.com/repos/huggingface/transformers/issues/1191/events | https://github.com/huggingface/transformers/issues/1191 | 488,984,793 | MDU6SXNzdWU0ODg5ODQ3OTM= | 1,191 | how to use 'spiece.model' to create the xlnet_tokenizer | {
"login": "yangzh9106",
"id": 52728728,
"node_id": "MDQ6VXNlcjUyNzI4NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/52728728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangzh9106",
"html_url": "https://github.com/yangzh9106",
"followers_url": "https://api.github.com/users/yangzh9106/followers",
"following_url": "https://api.github.com/users/yangzh9106/following{/other_user}",
"gists_url": "https://api.github.com/users/yangzh9106/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangzh9106/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangzh9106/subscriptions",
"organizations_url": "https://api.github.com/users/yangzh9106/orgs",
"repos_url": "https://api.github.com/users/yangzh9106/repos",
"events_url": "https://api.github.com/users/yangzh9106/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangzh9106/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"First, note that you can give proxies to the `from_pretrained` method if your connection problem comes from a proxy (see the doc/docstring for an example).\r\n\r\nYou can also download the sentence piece model from our S3 bucket (see the top of the `tokenization_xlnet.py` file for the url) and save it in a folder with the name `spiece.model`, then just give this folder path to the `from_pretrained` method.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
For some reason, my computer cannot connect to the Internet, which means I cannot use `tokenizer = tokenizer_class.from_pretrained('xlnet-base-cased', do_lower_case=True)` to create the tokenizer.
The SentencePiece model (spiece.model) is used for (de)tokenization; how can I use it to create the tokenizer?
I have tried:
```python
from os import path

from pytorch_transformers import XLNetTokenizer

# BASIC_DIR is defined elsewhere in the project.
config = {
    'vocab_path': path.sep.join([BASIC_DIR, 'pretrained/pytorch_xlnet_pretrained/spiece.model'])
}
tokenizer = XLNetTokenizer.from_pretrained(config['vocab_path'], do_lower_case=True)
```
It doesn't work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1191/timeline | completed | null | null |
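A minimal sketch of the offline workflow from the answer above; the local path is a placeholder.
```python
from pytorch_transformers import XLNetTokenizer

# Download the sentencepiece file from the S3 url listed at the top of
# tokenization_xlnet.py, save it as spiece.model inside a folder, then
# point from_pretrained at that folder.
tokenizer = XLNetTokenizer.from_pretrained(
    "/path/to/xlnet_vocab_dir/",  # folder containing spiece.model
    do_lower_case=True)
```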
https://api.github.com/repos/huggingface/transformers/issues/1190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1190/comments | https://api.github.com/repos/huggingface/transformers/issues/1190/events | https://github.com/huggingface/transformers/pull/1190 | 488,898,834 | MDExOlB1bGxSZXF1ZXN0MzEzODI2MzEy | 1,190 | Fix reference of import in XLM tokenization | {
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=h1) Report\n> Merging [#1190](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0287d264e913e10018a95a2723115dc9121e5fc6?src=pr&el=desc) will **decrease** coverage by `0.19%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1190 +/- ##\n=========================================\n- Coverage 80.85% 80.65% -0.2% \n=========================================\n Files 46 46 \n Lines 7876 7878 +2 \n=========================================\n- Hits 6368 6354 -14 \n- Misses 1508 1524 +16\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `82.7% <0%> (-0.71%)` | :arrow_down: |\n| [...orch\\_transformers/tests/tokenization\\_utils\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3V0aWxzX3Rlc3QucHk=) | `92% <0%> (-4%)` | :arrow_down: |\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `97.16% <0%> (-2.84%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `88.61% <0%> (-1.46%)` | :arrow_down: |\n| [pytorch\\_transformers/file\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `70.42% <0%> (-1.41%)` | :arrow_down: |\n| [pytorch\\_transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `95.86% <0%> (-0.83%)` | :arrow_down: |\n| [pytorch\\_transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `33.89% <0%> (-0.29%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=footer). Last update [0287d26...a15562e](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, thanks a lot @shijie-wu!"
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1190/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1190",
"html_url": "https://github.com/huggingface/transformers/pull/1190",
"diff_url": "https://github.com/huggingface/transformers/pull/1190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1190.patch",
"merged_at": 1567594250000
} |
https://api.github.com/repos/huggingface/transformers/issues/1189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1189/comments | https://api.github.com/repos/huggingface/transformers/issues/1189/events | https://github.com/huggingface/transformers/issues/1189 | 488,887,031 | MDU6SXNzdWU0ODg4ODcwMzE= | 1,189 | Roberta tokenizer fails on certain unicode characters | {
"login": "0xEdgar",
"id": 13364236,
"node_id": "MDQ6VXNlcjEzMzY0MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/13364236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0xEdgar",
"html_url": "https://github.com/0xEdgar",
"followers_url": "https://api.github.com/users/0xEdgar/followers",
"following_url": "https://api.github.com/users/0xEdgar/following{/other_user}",
"gists_url": "https://api.github.com/users/0xEdgar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0xEdgar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0xEdgar/subscriptions",
"organizations_url": "https://api.github.com/users/0xEdgar/orgs",
"repos_url": "https://api.github.com/users/0xEdgar/repos",
"events_url": "https://api.github.com/users/0xEdgar/events{/privacy}",
"received_events_url": "https://api.github.com/users/0xEdgar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Does `fairseq` exhibit the same behavior?\r\n\r\nIf it does, I would ask upstream. But in any case, I'm not sure it's a bug (it's just the internal encoding used by the neural net)",
"I think you need to tokenize by XLM-Roberta-Tokenizer.\r\nI tokenized for Korean."
] | 1,567 | 1,583 | 1,567 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using: `Roberta`
Language I am using the model on (English, Chinese....): English
The problem arises when using:
- The `roberta-base` tokenizer and tokenizing unicode accents
```
from pytorch_transformers import *
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
phrase = "I visited the Côte d'Azur"
for word in phrase.split():
    print(tokenizer.tokenize(word))
```
this outputs:
```
['I']
['vis', 'ited']
['the']
['C', 'ô', 'te']
['d', "'", 'Az', 'ur']
```
## Expected behavior
```
['I']
['vis', 'ited']
['the']
['C', 'ô', 'te']
['d', "'", 'Az', 'ur']
```
## Environment
* OS: MacOS 10.14.6
* Python version: 3.6
* PyTorch version: 1.1.0.post2
* PyTorch Transformers version (or branch): 1.1.0
* Using GPU ? no
* Distributed or parallel setup ? no
* Any other relevant information: xlnet and bert do not face this same tokenization issue. It appears that the issue comes from the gpt2-tokenizer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1189/timeline | completed | null | null |
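A small follow-up sketch: the odd-looking pieces above are byte-level BPE's internal byte-to-unicode encoding rather than data loss, and converting tokens back to a string should recover the accents (behavior assumed from the GPT-2 byte-level BPE design).
```python
from pytorch_transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokens = tokenizer.tokenize("Côte d'Azur")
# Decoding undoes the byte-to-unicode mapping and restores the accents.
print(tokenizer.convert_tokens_to_string(tokens))  # -> "Côte d'Azur"
```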
https://api.github.com/repos/huggingface/transformers/issues/1188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1188/comments | https://api.github.com/repos/huggingface/transformers/issues/1188/events | https://github.com/huggingface/transformers/issues/1188 | 488,875,048 | MDU6SXNzdWU0ODg4NzUwNDg= | 1,188 | BertEncoder head_mask not subscript-able error when not passed | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Got same issue. If no `head_mask` is given to `.forward` method of `BertEncoder`, then the following code will cause `TypeError: 'NoneType' object is not subscriptable`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a701c9b32126f1e6974d9fcb3a5c3700527d8559/transformers/modeling_bert.py#L348\r\n\r\nI wonder if there will be a case of using `BertEncoder` without any `head_mask`. Though, this issue should be addressed by checking whether `head_mask` is None, and expand them as size `[config.num_hidden_layers]`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"As bot closed the issue and it is still existing, I provided a fix it in the PR linked above."
] | 1,567 | 1,588 | 1,576 | CONTRIBUTOR | null | ## 🐛 Bug
BertEncoder takes a head_mask parameter with a default value of None, but at https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L431 the i-th index is accessed without first checking whether head_mask is None. If nothing is passed, this results in an error.
## Fix
Check that head_mask is not None at https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L431 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1188/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1188/timeline | completed | null | null |
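A minimal sketch of the None-check fix proposed above; the helper name is illustrative.
```python
def normalize_head_mask(head_mask, num_hidden_layers):
    # Expand a missing head_mask to a per-layer list of Nones so that
    # `head_mask[i]` is always subscriptable inside BertEncoder.forward.
    if head_mask is None:
        return [None] * num_hidden_layers
    return head_mask
```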
https://api.github.com/repos/huggingface/transformers/issues/1187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1187/comments | https://api.github.com/repos/huggingface/transformers/issues/1187/events | https://github.com/huggingface/transformers/issues/1187 | 488,669,256 | MDU6SXNzdWU0ODg2NjkyNTY= | 1,187 | Using do_eval from run_glue.py uses the cached result | {
"login": "wahlforss",
"id": 73305,
"node_id": "MDQ6VXNlcjczMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahlforss",
"html_url": "https://github.com/wahlforss",
"followers_url": "https://api.github.com/users/wahlforss/followers",
"following_url": "https://api.github.com/users/wahlforss/following{/other_user}",
"gists_url": "https://api.github.com/users/wahlforss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahlforss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahlforss/subscriptions",
"organizations_url": "https://api.github.com/users/wahlforss/orgs",
"repos_url": "https://api.github.com/users/wahlforss/repos",
"events_url": "https://api.github.com/users/wahlforss/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahlforss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you simply remove the cached file or move it elsewhere, it won't be used by run_glue.py. That's what I have been doing.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
Using do_eval from run_glue.py uses the cached result. I want to evaluate my fine-tuned models and I can't find any guide on how to do so. Can anybody point me in the right direction? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1187/timeline | completed | null | null |
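A hedged sketch of the remove-the-cache workaround from the comment above; the `cached_*` glob pattern is an assumption, so check the actual file names in your data directory.
```python
import glob
import os

# Delete run_glue.py's cached feature files so evaluation re-tokenizes the data.
for cached in glob.glob(os.path.join("path/to/data_dir", "cached_*")):
    os.remove(cached)
```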
https://api.github.com/repos/huggingface/transformers/issues/1186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1186/comments | https://api.github.com/repos/huggingface/transformers/issues/1186/events | https://github.com/huggingface/transformers/pull/1186 | 488,641,242 | MDExOlB1bGxSZXF1ZXN0MzEzNjIxMTEz | 1,186 | [README] link to Write With Transformer | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,567 | 1,567 | 1,567 | MEMBER | null | Tomorrow we'll release a new version of Write With Transformer that's gonna let you:
- experiment with different models (gpt2, xlnet) and/or model checkpoints (example for gpt2: small, large, arxiv)
- share links to your documents.
With those two changes transformer.huggingface.co is graduating to being an official demo for `pytorch-transformers`'s text generation capabilities. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1186/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1186",
"html_url": "https://github.com/huggingface/transformers/pull/1186",
"diff_url": "https://github.com/huggingface/transformers/pull/1186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1186.patch",
"merged_at": 1567701227000
} |
https://api.github.com/repos/huggingface/transformers/issues/1185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1185/comments | https://api.github.com/repos/huggingface/transformers/issues/1185/events | https://github.com/huggingface/transformers/issues/1185 | 488,550,636 | MDU6SXNzdWU0ODg1NTA2MzY= | 1,185 | XLnet output attentions doesn't work | {
"login": "aviclu",
"id": 13317450,
"node_id": "MDQ6VXNlcjEzMzE3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/13317450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aviclu",
"html_url": "https://github.com/aviclu",
"followers_url": "https://api.github.com/users/aviclu/followers",
"following_url": "https://api.github.com/users/aviclu/following{/other_user}",
"gists_url": "https://api.github.com/users/aviclu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aviclu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aviclu/subscriptions",
"organizations_url": "https://api.github.com/users/aviclu/orgs",
"repos_url": "https://api.github.com/users/aviclu/repos",
"events_url": "https://api.github.com/users/aviclu/events{/privacy}",
"received_events_url": "https://api.github.com/users/aviclu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"why did you close it?",
"I'm checking, it might be my own but."
] | 1,567 | 1,574 | 1,567 | NONE | null | ## 🐛 Bug
Model I am using - XLNet.
Language I am using the model on English.
The problem arise when using the flag `output_attentions=True`
## To Reproduce
## Expected behavior
When executing
` outputs= self.model(input_ids)
`
I would expect to have a tuple with outputs and attentions but it fails.
The problem probably roots from the lines:
` if self.output_attentions:
attentions.append(outputs[2])
`
I receive Nones instead of the attentions, or sometimes the error:
`IndexError: tuple index out of range`
For the following line in the forward function:
`attentions.append(outputs[2])`
## Environment
* OS: Windows
* Python version: 3.7.3
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): Master branch version from 31.08.19
* Using GPU: Yes | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1185/timeline | completed | null | null |
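A minimal sketch of reading attentions with `output_attentions=True`; indexing the output tuple from the end avoids hard-coding `outputs[2]`, on the assumption that the attention tensors are appended last.
```python
import torch
from pytorch_transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased", output_attentions=True)
model.eval()
outputs = model(torch.tensor([tokenizer.encode("Hello world")]))
attentions = outputs[-1]  # tuple with one attention tensor per layer
```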
https://api.github.com/repos/huggingface/transformers/issues/1184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1184/comments | https://api.github.com/repos/huggingface/transformers/issues/1184/events | https://github.com/huggingface/transformers/issues/1184 | 488,422,837 | MDU6SXNzdWU0ODg0MjI4Mzc= | 1,184 | Convert RoBERTa to TF checkpoint | {
"login": "YoPatapon",
"id": 17683649,
"node_id": "MDQ6VXNlcjE3NjgzNjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17683649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YoPatapon",
"html_url": "https://github.com/YoPatapon",
"followers_url": "https://api.github.com/users/YoPatapon/followers",
"following_url": "https://api.github.com/users/YoPatapon/following{/other_user}",
"gists_url": "https://api.github.com/users/YoPatapon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YoPatapon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YoPatapon/subscriptions",
"organizations_url": "https://api.github.com/users/YoPatapon/orgs",
"repos_url": "https://api.github.com/users/YoPatapon/repos",
"events_url": "https://api.github.com/users/YoPatapon/events{/privacy}",
"received_events_url": "https://api.github.com/users/YoPatapon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello, I believe that unfortunately the script currently only works for the `BertModel` base class. You would have to create a similar script for RoBERTa, it shouldn't be too different as both models have very similar architectures!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | Can we use the "convert_pytorch_checkpoint_to_tf" script to convert the RoBERTa checkpoint to a TensorFlow ckpt? Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1184/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1183/comments | https://api.github.com/repos/huggingface/transformers/issues/1183/events | https://github.com/huggingface/transformers/issues/1183 | 488,413,118 | MDU6SXNzdWU0ODg0MTMxMTg= | 1,183 | 'DistilBertModel' object has no attribute 'init_weights' | {
"login": "xeb",
"id": 7634,
"node_id": "MDQ6VXNlcjc2MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xeb",
"html_url": "https://github.com/xeb",
"followers_url": "https://api.github.com/users/xeb/followers",
"following_url": "https://api.github.com/users/xeb/following{/other_user}",
"gists_url": "https://api.github.com/users/xeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xeb/subscriptions",
"organizations_url": "https://api.github.com/users/xeb/orgs",
"repos_url": "https://api.github.com/users/xeb/repos",
"events_url": "https://api.github.com/users/xeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/xeb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, and thank you for the bug report! It would seem you are using an outdated version of the master branch. Could you update it to the latest and tell me if the error remains?",
"Looks good! I pulled the latest pip package and that worked as well. Thanks."
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): DistilBertModel
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: I am trying to run the sample in the given examples within a notebook. Specifically the example in [DistilBERT's Example README](https://github.com/huggingface/pytorch-transformers/tree/master/examples/distillation)
## To Reproduce
Steps to reproduce the behavior:
1. Create a new notebook with a Python 3.7 interpreter
2. Type in the following code:
```
!pip install pytorch-transformers
import torch
from pytorch_transformers.tokenization_distilbert import DistilBertTokenizer
from pytorch_transformers.modeling_distilbert import DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
3. Run & receive the error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-48-3aa5cae06e9c> in <module>
1 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
----> 2 model = DistilBertModel.from_pretrained('distilbert-base-uncased')
3
4 input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
5 outputs = model(input_ids)
/mnt/c/Users/.../venv/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
472 assert model.config.output_attention == True
473 # Loading from a TF checkpoint file instead of a PyTorch model (slower)
--> 474 config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json')
475 model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)
476
/mnt/c/Users/.../venv/lib/python3.7/site-packages/pytorch_transformers/modeling_distilbert.py in __init__(self, config)
486 self.transformer = Transformer(config) # Encoder
487
--> 488 self.init_weights()
489
490 def _resize_token_embeddings(self, new_num_tokens):
/mnt/c/Users/.../venv/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
589 return modules[name]
590 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 591 type(self).__name__, name))
592
593 def __setattr__(self, name, value):
AttributeError: 'DistilBertModel' object has no attribute 'init_weights'
```
## Expected behavior
Code would execute. Not sure if init_weights should be inherited or if it's a typo (there is [_init_weights](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_distilbert.py#L403) in modeling_distilbert.py)
## Environment
* OS: Win10 (with WSL)
* Python version: 3.7.4
* PyTorch version: 1.2
* PyTorch Transformers version (or branch): mainline
* Using GPU ? no
* Distributed or parallel setup? parallel
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1183/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1182/comments | https://api.github.com/repos/huggingface/transformers/issues/1182/events | https://github.com/huggingface/transformers/pull/1182 | 488,357,720 | MDExOlB1bGxSZXF1ZXN0MzEzMzk1MjQ3 | 1,182 | Updated GLUE script. New feature: Binary mask creation from the tokenizer's encoding. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=h1) Report\n> Merging [#1182](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/0d1dad6d5323cf627cb8d7ddd428856ab8475f6b?src=pr&el=desc) will **increase** coverage by `0.27%`.\n> The diff coverage is `94.57%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1182 +/- ##\n==========================================\n+ Coverage 80.77% 81.04% +0.27% \n==========================================\n Files 57 57 \n Lines 8092 8229 +137 \n==========================================\n+ Hits 6536 6669 +133 \n- Misses 1556 1560 +4\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbmV0LnB5) | `89.65% <100%> (+0.46%)` | :arrow_up: |\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2JlcnQucHk=) | `95.75% <100%> (+0.08%)` | :arrow_up: |\n| [...ytorch\\_transformers/tests/tokenization\\_xlm\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbV90ZXN0LnB5) | `97.72% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `82.98% <100%> (+0.28%)` | :arrow_up: |\n| [...ch\\_transformers/tests/tokenization\\_roberta\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3JvYmVydGFfdGVzdC5weQ==) | `92.45% <100%> (ø)` | :arrow_up: |\n| [...transformers/tests/tokenization\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `95.23% <100%> (ø)` | |\n| [pytorch\\_transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3JvYmVydGEucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [...torch\\_transformers/tests/tokenization\\_bert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: |\n| [...orch\\_transformers/tests/tokenization\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbmV0X3Rlc3QucHk=) | `97.91% <100%> (ø)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=footer). Last update [0d1dad6...72402d1](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1182?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I like it a lot! Could we also update `run_squad` similarly maybe?",
"Only need to adapt the SQuAD script with the new encode w/mask as well as DistilBERT and I think we're good to go @julien-c "
] | 1,567 | 1,578 | 1,569 | MEMBER | null | The new `tokenizer.encode(seq_0, seq_1, add_special_tokens=True)` method makes life easier when building sequences. However, it makes it harder to create binary masks as the different sequence lengths are unknown. As a feature, I have therefore added a flag to the encode function so that it can output binary masks.
Example:
```py
from pytorch_transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
seq_0 = "This is the one"
seq_1 = "This is the last"
input_ids, mask = tokenizer.encode(seq_0, seq_1, add_special_tokens=True, output_mask=True)
# input_ids: [ 101, 1188, 1110, 1103, 1141, 102, 1188, 1110, 1103, 1314, 102]
# mask: [ 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```
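For BERT, this binary mask can be fed straight back into the model as segment ids. A quick sketch of what I mean (the model call below is for illustration only, it is not part of this PR):
```py
import torch
from pytorch_transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased")
outputs = model(
    torch.tensor([input_ids]),            # batch of a single sequence pair
    token_type_ids=torch.tensor([mask]),  # the binary mask doubles as segment ids
)
```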
It works for BERT, RoBERTa, XLM, and XLNet. I have refactored the GLUE example with this method. It greatly simplifies input creation.
I have added an additional unit test to the `commontests` suite. Furthermore, to verify that the tokenization was correct, I compared against the original input creation of the GLUE script and confirmed that every encoded sequence remained the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1182",
"html_url": "https://github.com/huggingface/transformers/pull/1182",
"diff_url": "https://github.com/huggingface/transformers/pull/1182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1182.patch",
"merged_at": 1569492666000
} |
https://api.github.com/repos/huggingface/transformers/issues/1181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1181/comments | https://api.github.com/repos/huggingface/transformers/issues/1181/events | https://github.com/huggingface/transformers/issues/1181 | 488,227,590 | MDU6SXNzdWU0ODgyMjc1OTA= | 1,181 | DistilBERT Loss Function Choice and further query on extending to GPT2. | {
"login": "sai-prasanna",
"id": 3595526,
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sai-prasanna",
"html_url": "https://github.com/sai-prasanna",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @sai-prasanna,\r\nI believe that in the original implementation we release, the Knowledge Distillation loss is batch-averaged meaning that it should not be sensible to the sequence lenghts: `self.ce_loss_fct = nn.KLDivLoss(reduction='batchmean')`. But anyways, you should just make sure that at the end, if your true loss is batch-size-agnostic, then the knowledge distillation loss should be too.\r\n\r\nRegarding your 1st question, the `T**2` rescaling simply ensures that both the true loss and the distillation loss are of the same magnitude. You can refer to [the original paper](https://arxiv.org/abs/1503.02531), section 2: _\"Since the magnitudes of the gradients produced by the soft targets scale as 1/T^2 it is important to multiply them by T^2 when using both hard and soft targets.\"_\r\nVictor",
"Thanks!. I will recheck the loss function ranges more carefully. And I guess I jumped ahead without reading the literature carefully, will revisit the papers.\r\n\r\nI have a few queries with respect to pre-processing text for the student of GPT-2. (I pm'ed you on twitter, but I guess this place is more accessible to others).\r\n\r\nAny guesses on how GPT-2 sequences were sampled for training?\r\n\r\nDid they take any random point in the corpus and sampled from there, or would they select a random token (could be in the middle of a sentence) and continue to fill the sequence from that point?\r\n\r\nAnd what of sequence lengths, would they fill up tokens continuously (going across sentence boundaries) till max sequence length? Or would there be variation in sequence lengths and what would be an ideal way to sample the variations?\r\n",
"> Thanks!. I will recheck the loss function ranges more carefully. And I guess I jumped ahead without reading the literature carefully, will revisit the papers.\r\n> \r\n> I have a few queries with respect to pre-processing text for the student of GPT-2. (I pm'ed you on twitter, but I guess this place is more accessible to others).\r\n> \r\n> Any guesses on how GPT-2 sequences were sampled for training?\r\n\r\nYou should refer to the papers [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GPT2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) (section 2.1/2.2) for a detailed explanation of how the data are processed.\r\n\r\n> Did they take any random point in the corpus and sampled from there, or would they select a random token (could be in the middle of a sentence) and continue to fill the sequence from that point?\r\n\r\nIn auto-regressive LM (like GPT* for instance), each token (except the last one in the sequence) induce a training signal by having the model predicting the next token.\r\n\r\n> And what of sequence lengths, would they fill up tokens continuously (going across sentence boundaries) till max sequence length? Or would there be variation in sequence lengths and what would be an ideal way to sample the variations?\r\n\r\nMore generally, the longer the sequences, the better it is (that's one of the thing the RoBERTa paper showed). You want to train on as long dependencies as you can.",
"Thanks. Guess gpt 2 also follows gpt's preprocessing.\r\n\r\nI guess my second point was rather unclear. I understand that gpt 2 does traditional lm. I want to know whether inputs to lm while training, strictly start at sentence starts.\r\n\"The quick brown cyborg jumped over the lazy sapien. And the cyborg, ...\"\r\nOr can inputs be like \r\n\"cyborg jumped over the lazy sapien. and the cyborg, ...\"\r\n\"jumped over the lazy sapien. and the cyborg, ...\"\r\n\r\nAny hypothesis on how varying training data like that would affect generation? Say one always gives context that start properly, then would there be any gain in not giving sentences that start from middle.\r\n\r\n",
"@VictorSanh Experimented with KLDivLoss(reduction='batchmean'). I can confirm that the loss scales with the sequence length.\r\n\r\n``` python\r\ndef test_kl_div_loss(batch, timesteps, hidden, n=10000): \r\n loss_fn = nn.KLDivLoss(reduction='batchmean') \r\n student_logits = torch.randn(batch, timesteps, hidden) \r\n teacher_logits = torch.randn(batch, timesteps, hidden) \r\n mean_loss = 0.0 \r\n for _ in range(n): \r\n mean_loss += loss_fn(F.log_softmax(student_logits, dim=-1), F.softmax(teacher_logits, dim=-1)) \r\n mean_loss /= n \r\n return mean_loss \r\n```\r\n``` python\r\nIn [79]: test_kl_div_loss(batch=10, timesteps=10, hidden=10) \r\nOut[79]: tensor(8.4171)\r\nIn [79]: test_kl_div_loss(batch=10, timesteps=100, hidden=10) \r\nOut[79]: tensor(77.5201)\r\nIn [83]: test_kl_div_loss(batch=10, timesteps=1000, hidden=10) \r\nOut[83]: tensor(807.4752)\r\n```\r\nnn.KLDivLoss with batchmean is proportional to total timesteps. And `reduction=mean` is wrong as it averages by the number of classes.\r\n\r\nIn nn.CrossEntropyLoss we flatten the time dimension to batch and then compute cross entropy, this in effect averages the loss across timesteps as the default reduction is 'mean'.\r\n\r\nSo ideally, when computing the KL Div, should we ideally set the reduction='none' and scale the loss by ( 1 / total_actual_non_padding_tokens_in_batch ) ?\r\n\r\n",
"> Thanks. Guess gpt 2 also follows gpt's preprocessing.\r\n> \r\n> I guess my second point was rather unclear. I understand that gpt 2 does traditional lm. I want to know whether inputs to lm while training, strictly start at sentence starts.\r\n> \"The quick brown cyborg jumped over the lazy sapien. And the cyborg, ...\"\r\n> Or can inputs be like\r\n> \"cyborg jumped over the lazy sapien. and the cyborg, ...\"\r\n> \"jumped over the lazy sapien. and the cyborg, ...\"\r\n> \r\n> Any hypothesis on how varying training data like that would affect generation? Say one always gives context that start properly, then would there be any gain in not giving sentences that start from middle.\r\n\r\nYou could do the second option, I am just not sure whether it fundamentally brings significantly more training signal than the 1st option. Thus we usually do the 1st option.\r\nYou should have a look at how it is done in GPT/GPT2. Folks at Nvidia have released their pre-processing script for GPT2: see [here](https://github.com/NVIDIA/Megatron-LM/blob/a0368ddf4732bf5b86ab4260f6f4196fdd01d5fb/openwebtext/make_gpt2_dataset.py).\r\n\r\n\r\n\r\n> @VictorSanh Experimented with KLDivLoss(reduction='batchmean'). I can confirm that the loss scales with the sequence length.\r\n> \r\n> ```python\r\n> def test_kl_div_loss(batch, timesteps, hidden, n=10000): \r\n> loss_fn = nn.KLDivLoss(reduction='batchmean') \r\n> student_logits = torch.randn(batch, timesteps, hidden) \r\n> teacher_logits = torch.randn(batch, timesteps, hidden) \r\n> mean_loss = 0.0 \r\n> for _ in range(n): \r\n> mean_loss += loss_fn(F.log_softmax(student_logits, dim=-1), F.softmax(teacher_logits, dim=-1)) \r\n> mean_loss /= n \r\n> return mean_loss \r\n> ```\r\n> \r\n> ```python\r\n> In [79]: test_kl_div_loss(batch=10, timesteps=10, hidden=10) \r\n> Out[79]: tensor(8.4171)\r\n> In [79]: test_kl_div_loss(batch=10, timesteps=100, hidden=10) \r\n> Out[79]: tensor(77.5201)\r\n> In [83]: test_kl_div_loss(batch=10, timesteps=1000, hidden=10) \r\n> Out[83]: tensor(807.4752)\r\n> ```\r\n> \r\n> nn.KLDivLoss with batchmean is proportional to total timesteps. And `reduction=mean` is wrong as it averages by the number of classes.\r\n> \r\n> In nn.CrossEntropyLoss we flatten the time dimension to batch and then compute cross entropy, this in effect averages the loss across timesteps as the default reduction is 'mean'.\r\n> \r\n> So ideally, when computing the KL Div, should we ideally set the reduction='none' and scale the loss by ( 1 / total_actual_non_padding_tokens_in_batch ) ?\r\n\r\nWhat I simply do in the training code is a `student_logits.view(-1, hidden)` so that at the end, it is sequence-length and batch size agnostic (see [here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/distillation/distiller.py#L329) for instance)\r\n",
"Thanks for taking your time to answer all my queries."
] | 1,567 | 1,568 | 1,568 | NONE | null | ## ❓ Questions & Help
Can you describe the motivation behind scaling the KLDivLoss by the squared temperature?
https://github.com/huggingface/pytorch-transformers/blob/50792dbdcccd64f61483ec535ff23ee2e4f9e18d/examples/distillation/distiller.py#L331
When applying the same logic to GPT-2 distillation, I did the following:
``` python
def training_step(self, data_batch, batch_i):
"""
Lightning calls this inside the training loop
:param data_batch:
:return:
"""
# forward pass
token_ids, lengths = data_batch
orig_loss_ce, s_logits = self.student(input_ids=token_ids, labels=token_ids)[:2] # (bs, seq_length, voc_size)
self.teacher.eval() # Required to do this every time.
with torch.no_grad():
t_logits = self.teacher(input_ids=token_ids)[0] # (bs, seq_length, voc_size)
loss_kl = self.kld_loss_fct(F.log_softmax(s_logits/self.temperature, dim=-1),
F.softmax(t_logits/self.temperature, dim=-1)) * (self.temperature)**2
loss_kl /= s_logits.shape[1]
loss = self.alpha_kl * loss_kl
if self.alpha_orig_ce > 0.:
loss += self.alpha_orig_ce * orig_loss_ce
if self.alpha_mse > 0.:
loss_mse = self.mse_loss_fct(s_logits, t_logits)/s_logits.size(0) # Reproducing batchmean reduction
loss += self.alpha_mse * loss_mse
# in DP mode (default) make sure if result is scalar, there's another dim in the beginning
if self.trainer.use_dp:
loss = loss.unsqueeze(0)
output = OrderedDict({
'loss': loss
})
# can also return just a scalar instead of a dict (return loss_val)
return output
```
I found that the DistilBERT implementation led to a high initial KL loss (in the 130-180 range, depending on the average sequence length per batch), while the cross entropy was in the 4-5 range.
So I scaled the loss_kl by the total timesteps in the batch. (My batches don't have any masked tokens.) Training did converge to a similar perplexity as the teacher on the held-out set of Toronto books.
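To make the scaling concrete, here is a minimal sketch of the per-token normalization I described above (the `attention_mask` argument is hypothetical, since my batches have no padding; this is not code from the repo):
``` python
import torch.nn as nn
import torch.nn.functional as F

def length_normalized_kl(s_logits, t_logits, attention_mask, temperature=2.0):
    # per-position KL, summed over the vocabulary -> (bs, seq_len)
    kl = nn.KLDivLoss(reduction='none')(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
    ).sum(-1)
    kl = kl * attention_mask  # zero out padding positions
    # average over real tokens only, then apply the usual T**2 rescaling
    return kl.sum() / attention_mask.sum() * temperature ** 2
```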
Is my method justified, or am I applying the KL divergence incorrectly in the GPT-2 scenario, necessitating the scaling? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1180/comments | https://api.github.com/repos/huggingface/transformers/issues/1180/events | https://github.com/huggingface/transformers/issues/1180 | 488,216,197 | MDU6SXNzdWU0ODgyMTYxOTc= | 1,180 | DistilBERT baseline | {
"login": "jeremyasapp",
"id": 33673620,
"node_id": "MDQ6VXNlcjMzNjczNjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/33673620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremyasapp",
"html_url": "https://github.com/jeremyasapp",
"followers_url": "https://api.github.com/users/jeremyasapp/followers",
"following_url": "https://api.github.com/users/jeremyasapp/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremyasapp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremyasapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremyasapp/subscriptions",
"organizations_url": "https://api.github.com/users/jeremyasapp/orgs",
"repos_url": "https://api.github.com/users/jeremyasapp/repos",
"events_url": "https://api.github.com/users/jeremyasapp/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremyasapp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @jeremyasapp,\r\nThat's a good question!\r\n\r\nHere are the results on the pre-training solely using MLM training signal (the small model is initialized from the teacher though).\r\n\r\n\r\nThis is the same table presented in the blog post to which I added the last row. The drop in performance is consistent across the tasks (except for QNLI or QQP). I believe though that you could slightly improve these figures by continuing the pre-training for a few more epochs.\r\n\r\nI should also refer you to my answer to [this question](https://medium.com/@weadhsu_77395/can-you-provide-a-comparison-with-the-small-model-that-is-trained-from-scratch-d75001057e15#--responses) in the blogpost. It is also related to the influence of the knowledge distillation loss.\r\n\r\n(Btw, it's `DistilBERT` with a single `L` 🤣no worries though, I found your issue ;))\r\nVictor",
"Hey @VictorSanh, thanks for the response.\r\n\r\nDefinitely non trivial improvement! I actually attempted something similar to you guys this summer, but with less success. It's really great to see that this works :)\r\n\r\n"
] | 1,567 | 1,567 | 1,567 | NONE | null | Nice work on DistillBert! I was wondering if you had baseline numbers for the student model trained without the distillation objective? Seems like a very important baseline to understand if distillation was useful. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1180/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1180/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1179/comments | https://api.github.com/repos/huggingface/transformers/issues/1179/events | https://github.com/huggingface/transformers/issues/1179 | 488,198,509 | MDU6SXNzdWU0ODgxOTg1MDk= | 1,179 | DistilBERT training is killed because OOM | {
"login": "tomohideshibata",
"id": 16042472,
"node_id": "MDQ6VXNlcjE2MDQyNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16042472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomohideshibata",
"html_url": "https://github.com/tomohideshibata",
"followers_url": "https://api.github.com/users/tomohideshibata/followers",
"following_url": "https://api.github.com/users/tomohideshibata/following{/other_user}",
"gists_url": "https://api.github.com/users/tomohideshibata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomohideshibata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomohideshibata/subscriptions",
"organizations_url": "https://api.github.com/users/tomohideshibata/orgs",
"repos_url": "https://api.github.com/users/tomohideshibata/repos",
"events_url": "https://api.github.com/users/tomohideshibata/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomohideshibata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@tomohideshibata Could you paste the complete error message here 🤗\r\n\r\nI'm currently distilling a model and my RAM usage (system) is ~ 20 GB. GPU usage is ~8 GB on a V-100. If the OOM was caused for your GPU then I would recommend decreasing the batch size (which is 5 by default) :)",
"I'm trying to train distibert, but I cannot find the dump.txt which I assume is preprocessed wikipedia and torento corpus datasets. Could someone help? Thanks.",
"@stefan-it The error message was just `killed`.\r\n\r\nGPU memory has no problem. I can make the batch size larger (16). The problem is CPU memory.\r\n\r\nI have just suspected `tensorboard.add_scalar`. I will try to make the volume of outputted logs smaller. If I find something, I will let you know.",
"After 40h of training I could also confirm an increase from 20GB (~20 hours training) to 42GB 🤔\r\n\r\n@forjiuzhou When calling the `binarized_data.py` script you have to specify your input corpus via `--file_path`. It points to `data/dump.txt` by default. So just pass your training/preprocessed corpus to the `--file_path` option.\r\n",
"Yes, I do confirm this bug (?). I am actually scratching my head around this strange behaviour too... so if you actually find the reason, I would more than happy to push an update.\r\n\r\n@forjiuzhou, indeed @stefan-it is correct! Please replace the file `dump.txt` with you own text dataset. Then, I also recommend that you call `token_counts.py` before training (so that you just do it once).",
"I believe we found the bug.\r\nIt was related to some internal bug in PyTorch: see https://github.com/pytorch/pytorch/issues/24200.\r\n\r\nI installed PyTorch from source (it is a pretty recent fix, so it's not in the last release yet), tracked the RAM while distilling and the memory usage is more or less constant.\r\nI am launching a bigger training right now just to make sure this is really causing the memory leak, if so (and I'll get back to you here), it seems you'll have to compile PyTorch from source for now.\r\n\r\nVictor",
"hey guys, I think the reason is there have too many tensorboard log @tomohideshibata @stefan-it , I stop to save log and then i have more train time now ",
"I have suppressed tensorboard logs (the block `for param_name, param in self.student.named_parameters():` was commented out in the function `log_tensorboard`), but the CPU memory consumption seemed unchanged.\r\n\r\nSo, I will try the latest PyTorch.",
"> After 40h of training I could also confirm an increase from 20GB (~20 hours training) to 42GB 🤔\r\n> \r\n> @forjiuzhou When calling the `binarized_data.py` script you have to specify your input corpus via `--file_path`. It points to `data/dump.txt` by default. So just pass your training/preprocessed corpus to the `--file_path` option.\r\n\r\nSorry I seem ask the wrong question in this issue. But I actually don't have the access to wikipedia and \r\ntoronto corpus. And it seems unavailable on internet.",
"> I believe we found the bug.\r\n> It was related to some internal bug in PyTorch: see [pytorch/pytorch#24200](https://github.com/pytorch/pytorch/issues/24200).\r\n> \r\n> I installed PyTorch from source (it is a pretty recent fix, so it's not in the last release yet), tracked the RAM while distilling and the memory usage is more or less constant.\r\n> I am launching a bigger training right now just to make sure this is really causing the memory leak, if so (and I'll get back to you here), it seems you'll have to compile PyTorch from source for now.\r\n> \r\n> Victor\r\n\r\nSo I trained a model for ~16hours and observed no increase in RAM over the training.\r\n\r\nI will update the README to pinpoint this special setup (compiling from source for now) and leave the issue open until the next PyTorch release.",
"@VictorSanh I have installed PyTorch from source, and the training is fine. Thanks!",
"So PyTorch 1.3 was released yesterday 🔥🎉(and it includes new features I am extremely excited about)! \r\nThe release includes the bug fixing, so you should be able to use the stable version available on `pip`! \r\n(Of course, if you prefer, you can still compile PyTorch from source !)",
"> So PyTorch 1.3 was released yesterday 🔥🎉(and it includes new features I am extremely excited about)!\r\n> The release includes the bug fixing, so you should be able to use the stable version available on `pip`!\r\n> (Of course, if you prefer, you can still compile PyTorch from source !)\r\n\r\nI tried to install the PyTorch 1.3, but it's still leaking. ",
"@iamlxb3 do you mind sharing your exact PyTorch configuration? I re-launched the scipts a few days ago w/ `torch==1.4.0` and didn't see memory leak."
] | 1,567 | 1,580 | 1,570 | CONTRIBUTOR | null | ## ❓ Questions & Help
I am trying DistilBERT training. The training script (train.py) gradually consumed CPU memory, and the training was killed because of OOM after about one day (the available CPU memory is 96GB).
I used one GPU for the training.
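In case it helps to reproduce, this is roughly how I watch the resident memory of the training process (a sketch; `psutil` is an extra dependency, not something the distillation scripts use):
```
import os
import psutil

def log_rss(step):
    # resident set size of the current process, in GB
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 3
    print("step {}: RSS = {:.2f} GB".format(step, rss_gb))
```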
Do you have any idea? Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1178/comments | https://api.github.com/repos/huggingface/transformers/issues/1178/events | https://github.com/huggingface/transformers/pull/1178 | 488,189,298 | MDExOlB1bGxSZXF1ZXN0MzEzMjY0MjA4 | 1,178 | added tokens may split a normal token into halves | {
"login": "askerlee",
"id": 1575461,
"node_id": "MDQ6VXNlcjE1NzU0NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1575461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/askerlee",
"html_url": "https://github.com/askerlee",
"followers_url": "https://api.github.com/users/askerlee/followers",
"following_url": "https://api.github.com/users/askerlee/following{/other_user}",
"gists_url": "https://api.github.com/users/askerlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/askerlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/askerlee/subscriptions",
"organizations_url": "https://api.github.com/users/askerlee/orgs",
"repos_url": "https://api.github.com/users/askerlee/repos",
"events_url": "https://api.github.com/users/askerlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/askerlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks great, thanks a lot @askerlee.\r\n\r\nCan you add a docstring for the new argument?\r\n\r\nIdeally also a test in [tokenization_tests_commons.py](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/tests/tokenization_tests_commons.py)",
"Thanks @thomwolf . I updated my patch to make it neater. Will close this pull request and submit a new request soon."
] | 1,567 | 1,567 | 1,567 | NONE | null | In the tokenizer base class, split_on_token() attempts to split the input text by each of the added tokens. Because it uses text.split(tok), it may accidentally split a token in the middle.
For example, suppose a new token "ht" is added to the vocabulary. Then "light" will be split into "lig" and "" (an empty string). But as "light" is in the original vocabulary, it should be left intact to be processed by self._tokenize().
Hence I'd suggest replacing it with re.split, which splits only at word boundaries ([0-9a-zA-Z_]).
But in languages whose word boundaries differ from English's, this behavior may be undesirable, so the user can revert to the old text.split(). This is controlled by a newly added flag, split_added_on_word_boundary.
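A minimal sketch of the idea (not the exact patch; the helper name here is just for illustration):
```
import re

def split_on_added_token(text, tok):
    # split only where `tok` is not glued to other word characters,
    # so "light" is never broken apart by an added token "ht"
    pattern = r'(?<![0-9a-zA-Z_])' + re.escape(tok) + r'(?![0-9a-zA-Z_])'
    return re.split(pattern, text)
```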
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1178",
"html_url": "https://github.com/huggingface/transformers/pull/1178",
"diff_url": "https://github.com/huggingface/transformers/pull/1178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1178.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1177/comments | https://api.github.com/repos/huggingface/transformers/issues/1177/events | https://github.com/huggingface/transformers/issues/1177 | 488,114,298 | MDU6SXNzdWU0ODgxMTQyOTg= | 1,177 | How to install previous versions of pytorch-transformers | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can install an older version using the standard pypi procedure: `pip install pytorch-transformers==$VERSION`. The examples showcased on this repository probably won't work on older versions though.",
"HI\nThanks a lot, I need though a version which I think was called\npretrained_bert , thanks for your help.\nBest\nJulia\n\nOn Mon, Sep 2, 2019 at 5:25 PM Lysandre Debut <[email protected]>\nwrote:\n\n> You can install an older version using the standard pypi procedure: pip\n> install pytorch-transformers==$VERSION. The examples showcased on this\n> repository probably won't work on older versions though.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/1177?email_source=notifications&email_token=AM3GZM62DOTECUG7OSEG57TQHUV5FA5CNFSM4IS35RG2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD5WDCZY#issuecomment-527184231>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AM3GZMYYXIZA76ZWLGLSTK3QHUV5FANCNFSM4IS35RGQ>\n> .\n>\n",
"I believe you can still install it with `pip install pytorch-pretrained-BERT==$VERSION`!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | Hi
I am using some codes requiring old versions of your work, do you mind telling me
how to install old version of this repository?
thanks
Best
Julia | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1176/comments | https://api.github.com/repos/huggingface/transformers/issues/1176/events | https://github.com/huggingface/transformers/pull/1176 | 487,984,730 | MDExOlB1bGxSZXF1ZXN0MzEzMTAzMzIw | 1,176 | merge | {
"login": "Eurus-Holmes",
"id": 34226570,
"node_id": "MDQ6VXNlcjM0MjI2NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34226570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eurus-Holmes",
"html_url": "https://github.com/Eurus-Holmes",
"followers_url": "https://api.github.com/users/Eurus-Holmes/followers",
"following_url": "https://api.github.com/users/Eurus-Holmes/following{/other_user}",
"gists_url": "https://api.github.com/users/Eurus-Holmes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eurus-Holmes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eurus-Holmes/subscriptions",
"organizations_url": "https://api.github.com/users/Eurus-Holmes/orgs",
"repos_url": "https://api.github.com/users/Eurus-Holmes/repos",
"events_url": "https://api.github.com/users/Eurus-Holmes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eurus-Holmes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=h1) Report\n> Merging [#1176](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/b6cd856b08e3860e59cc126be86b901ccab4f193?src=pr&el=desc) will **decrease** coverage by `0.67%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1176 +/- ##\n==========================================\n- Coverage 80.67% 79.99% -0.68% \n==========================================\n Files 46 46 \n Lines 7859 7748 -111 \n==========================================\n- Hits 6340 6198 -142 \n- Misses 1519 1550 +31\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbmV0LnB5) | `81.98% <0%> (-7.21%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `83.54% <0%> (-5.03%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `73.07% <0%> (-4.95%)` | :arrow_down: |\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `93.26% <0%> (-3.85%)` | :arrow_down: |\n| [pytorch\\_transformers/file\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `69.71% <0%> (-0.71%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.73% <0%> (-0.35%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `83.84% <0%> (-0.2%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfb3BlbmFpLnB5) | `81.84% <0%> (-0.12%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <0%> (-0.05%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZGlzdGlsYmVydC5weQ==) | `96.73% <0%> (-0.04%)` | :arrow_down: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=footer). Last update [b6cd856...b190482](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1176?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,567 | 1,567 | 1,567 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1176",
"html_url": "https://github.com/huggingface/transformers/pull/1176",
"diff_url": "https://github.com/huggingface/transformers/pull/1176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1176.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1175/comments | https://api.github.com/repos/huggingface/transformers/issues/1175/events | https://github.com/huggingface/transformers/issues/1175 | 487,974,672 | MDU6SXNzdWU0ODc5NzQ2NzI= | 1,175 | May I get the details of Bert pre-train procedure? | {
"login": "zhuhaozhe",
"id": 54701539,
"node_id": "MDQ6VXNlcjU0NzAxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/54701539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuhaozhe",
"html_url": "https://github.com/zhuhaozhe",
"followers_url": "https://api.github.com/users/zhuhaozhe/followers",
"following_url": "https://api.github.com/users/zhuhaozhe/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuhaozhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuhaozhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuhaozhe/subscriptions",
"organizations_url": "https://api.github.com/users/zhuhaozhe/orgs",
"repos_url": "https://api.github.com/users/zhuhaozhe/repos",
"events_url": "https://api.github.com/users/zhuhaozhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuhaozhe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you should refer to the original Bert repository and papers for details on the pretraining: https://github.com/google-research/bert",
"Thank you! Get it."
] | 1,567 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
I want to measure the time consumed by the BERT pre-training procedure. May I ask some questions here:
1. What is the pre-training data? Is it the same as in the paper: BooksCorpus & English Wikipedia?
2. Is there some pre-training code I can utilize?
3. May I know how many epochs I should train for when pre-training the 'base' or the 'large' version of the model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1175/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1174/comments | https://api.github.com/repos/huggingface/transformers/issues/1174/events | https://github.com/huggingface/transformers/pull/1174 | 487,948,451 | MDExOlB1bGxSZXF1ZXN0MzEzMDc3MTA0 | 1,174 | Fix byte-level BPE decoding error when using added tokens | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great addition! There was just a small issue regarding the way the special tokens were joined if they were not at the beginning of the sentence. I fixed it with my commit.\r\n\r\nBefore, the following code:\r\n```py\r\ntok = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\ntok.add_tokens([\"there my\", \"name is\"])\r\nprint(tok.decode(tok.encode(\"Hi there my name is Lysandre\")))\r\n```\r\nwould output:\r\n```\r\n Hithere myname is Lysandre\r\n```\r\n\r\nNow it outputs:\r\n```\r\n Hi there my name is Lysandre\r\n```",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=h1) Report\n> Merging [#1174](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/b6cd856b08e3860e59cc126be86b901ccab4f193?src=pr&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `94.73%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1174 +/- ##\n==========================================\n+ Coverage 80.67% 80.84% +0.17% \n==========================================\n Files 46 46 \n Lines 7859 7874 +15 \n==========================================\n+ Hits 6340 6366 +26 \n+ Misses 1519 1508 -11\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <100%> (+2.88%)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `80.47% <93.33%> (-0.13%)` | :arrow_down: |\n| [pytorch\\_transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `34.17% <0%> (+0.28%)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `96.69% <0%> (+0.82%)` | :arrow_up: |\n| [pytorch\\_transformers/file\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZmlsZV91dGlscy5weQ==) | `71.83% <0%> (+1.4%)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `90.02% <0%> (+1.45%)` | :arrow_up: |\n| [...orch\\_transformers/tests/tokenization\\_utils\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3V0aWxzX3Rlc3QucHk=) | `96% <0%> (+4%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=footer). Last update [b6cd856...31d3373](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great thanks! Merging now."
] | 1,567 | 1,578 | 1,567 | MEMBER | null | This PR fixes a mismatch between regular unicode added tokens and byte-level BPE tokens when decoding.
Wrong behavior reported in #1133.
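For illustration, the kind of round-trip that was broken (a sketch based on the report, using an added token similar to the one in the tests):
```py
from pytorch_transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
tok.add_tokens(["there my"])
# before this fix, the added token was glued to its neighbours when decoding
print(tok.decode(tok.encode("Hi there my name is Lysandre")))
```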
Also adds regression tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1174/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1174",
"html_url": "https://github.com/huggingface/transformers/pull/1174",
"diff_url": "https://github.com/huggingface/transformers/pull/1174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1174.patch",
"merged_at": 1567407676000
} |
https://api.github.com/repos/huggingface/transformers/issues/1173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1173/comments | https://api.github.com/repos/huggingface/transformers/issues/1173/events | https://github.com/huggingface/transformers/issues/1173 | 487,892,350 | MDU6SXNzdWU0ODc4OTIzNTA= | 1,173 | Write with Transformer doesn't show 774M model? | {
"login": "zacharymacleod",
"id": 6412653,
"node_id": "MDQ6VXNlcjY0MTI2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6412653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zacharymacleod",
"html_url": "https://github.com/zacharymacleod",
"followers_url": "https://api.github.com/users/zacharymacleod/followers",
"following_url": "https://api.github.com/users/zacharymacleod/following{/other_user}",
"gists_url": "https://api.github.com/users/zacharymacleod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zacharymacleod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zacharymacleod/subscriptions",
"organizations_url": "https://api.github.com/users/zacharymacleod/orgs",
"repos_url": "https://api.github.com/users/zacharymacleod/repos",
"events_url": "https://api.github.com/users/zacharymacleod/events{/privacy}",
"received_events_url": "https://api.github.com/users/zacharymacleod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(nevermind: refreshing the page fixed it... it's weird though because I've been checking every day!)",
"Hi! Your browser cache was probably playing tricks on you :)",
"That's exactly what I figured! Forgot the term for it. Kinda sucks that it did so for several days! :(\r\n\r\nAnyways, I'm having fun with it now so it's all good!",
"Glad you like it!!"
] | 1,567 | 1,567 | 1,567 | NONE | null | According to [this twitter post](https://twitter.com/huggingface/status/1166368535221870592), Write With Transformer has been updated to include the Large model.
However, the Model & Decoder Settings only shows Small and Medium options for model size. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1172/comments | https://api.github.com/repos/huggingface/transformers/issues/1172/events | https://github.com/huggingface/transformers/issues/1172 | 487,876,148 | MDU6SXNzdWU0ODc4NzYxNDg= | 1,172 | apex fp16 FusedLayerNorm type issues | {
"login": "mksenzov",
"id": 1136043,
"node_id": "MDQ6VXNlcjExMzYwNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1136043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mksenzov",
"html_url": "https://github.com/mksenzov",
"followers_url": "https://api.github.com/users/mksenzov/followers",
"following_url": "https://api.github.com/users/mksenzov/following{/other_user}",
"gists_url": "https://api.github.com/users/mksenzov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mksenzov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mksenzov/subscriptions",
"organizations_url": "https://api.github.com/users/mksenzov/orgs",
"repos_url": "https://api.github.com/users/mksenzov/repos",
"events_url": "https://api.github.com/users/mksenzov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mksenzov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, that's what we do now on master since #1089 (switching back to `torch.nn.LayerNorm`).\r\n\r\nThanks for reporting",
"@thomwolf yes, thank you for your response! I wanted to clarify; if I do fp16 I still see that master is doing \r\n\r\n```\r\ntry:\r\n from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm\r\nexcept (ImportError, AttributeError) as e:\r\n logger.info(\"Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .\")\r\n BertLayerNorm = torch.nn.LayerNorm\r\n\r\n```\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/commit/bdb4409ed8de4d199907c75832398f2c49a564e1 \r\n\r\nand in my case `FusedLayerNorm` seem to cause the issue... so maybe we are talking about different things. Or did you mean that this is a work in progress and it was not merged to master yet?",
"Oh indeed, maybe it's a issue with `finetune_on_pregenerated.py`. The scripts in the `lm_finetuning` folder are in the process of being deprecated. You can try with the newly added `run_lm_finetuning.py` which is actively maintained.",
"setting `--fp16_opt_level` to O2 resolved that error for me.",
"@mksenzov I have the same exact issue. Was wondering if you figured it out?",
"I'm getting the same issue using an optimization level of \"O1\" while running `run_lm_finetuning`. is this expected? \"O2\" seems to work just fine.",
"The problem is that this model in O1 enters to `FusedLayerNorm.forward` with the input in half-precision but its parameters are still in single-precision, and apparently the kernel doesn't support different types (neither does PyTorch's `nn.LayerNorm`). In O2, in contrast, the parameters are changed to half so the issue doesn't occur.\r\n\r\nI believe there's no reason that `FusedLayerNorm` should be called if apex is available because the user may want to disable apex use O1, but it's incompatible with it. On the contrary, `nn.LayerNorm` [is blacklisted in the amp initialization](https://github.com/NVIDIA/apex/blob/656d14b0c9792a1bcdc255b473dc2d6145d026ff/apex/amp/lists/functional_overrides.py#L42), so its input will always be float32 in O1, while `FusedLayerNorm` is not blacklisted.\r\n\r\nPlus, `nn.LayerNorm` is probably fused and [proved to be faster on a V100 to me with both float32 and float16](https://github.com/NVIDIA/apex/issues/449#issuecomment-533926319).",
"Could we also remove the FusedLayerNorm call in modeling_xlnet? "
] | 1,567 | 1,570 | 1,567 | NONE | null | #564 🐛 Bug
I seem to be getting the following error each time I try to train with APEX/fp16 during BERT fine-tuning. It happened with my own scripts, and I also see it with the repository's standard `finetune_on_pregenerated.py`, which was recently updated. The error diagnostics seem to indicate an issue with `FusedLayerNorm`. To further confirm, I did a local mod where I replaced the definition of BertLayerNorm with
```BertLayerNorm = torch.nn.LayerNorm```
The change resolves this issue (while, in my case, not noticeably changing performance). The Apex docs are a bit raw, but the most recent set does not suggest manually manipulating optimizers or layer definitions, so perhaps we should just stick to the BertLayerNorm definition described above?
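For reference, here is a sketch of that local mod, assuming the pytorch-transformers 1.x module layout (the patch must run before any model is instantiated):

```python
# Hedged workaround sketch: swap apex's FusedLayerNorm for torch.nn.LayerNorm
# by patching the module-level name before building any model.
import torch
import pytorch_transformers.modeling_bert as modeling_bert

modeling_bert.BertLayerNorm = torch.nn.LayerNorm  # patch first...
model = modeling_bert.BertForPreTraining.from_pretrained('bert-base-uncased')  # ...then build
```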
```
Traceback (most recent call last):
File "ash3/tune_bert.py", line 101, in <module>
main(sys.argv[1:])
File "ash3/tune_bert.py", line 47, in main
pregenerate(init)
File "ash3/tune_bert.py", line 85, in pregenerate
finetune_on_pregenerated(tune_args)
File "/home/madvillain/gitlab/ai/ash3/ash3/finetuning/finetune_on_pregenerated.py", line 292, in main
outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 785, in forward
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 533, in forward
prediction_scores = self.predictions(sequence_output)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 501, in forward
hidden_states = self.transform(hidden_states)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 483, in forward
hidden_states = self.LayerNorm(hidden_states)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 159, in forward
input, self.weight, self.bias, self.normalized_shape,self.eps)
File "/home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 25, in forward
input_, ctx.normalized_shape, weight_, bias_, ctx.eps)
RuntimeError: expected scalar type Half but found Float (data<c10::Half> at /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/include/ATen/core/TensorMethods.h:1386)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f6af587edc5 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::Half* at::Tensor::data<c10::Half>() const + 0x2c6 (0x7f6abeb8aa36 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: cuda_layer_norm(at::Tensor*, at::Tensor*, at::Tensor*, at::Tensor*, int, int, c10::ArrayRef<long>, at::Tensor*, at::Tensor*, double) + 0x3ed (0x7f6abeb87dcd in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x27a (0x7f6abeb7985a in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x196c4 (0x7f6abeb866c4 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x16e0a (0x7f6abeb83e0a in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
<omitting python frames>
frame #12: THPFunction_apply(_object*, _object*) + 0x691 (0x7f6b24b0a081 in /home/madvillain/miniconda3/envs/ash3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
```
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [* ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [* ] an official GLUE/SQUaD task: (give the name) finetune_on_pregenerated.py
* [ ] my own task or dataset: (give details)
## Expected behavior
no failures
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6
* PyTorch version: 1.1.0, 1.2.0
* PyTorch Transformers version (or branch): 1.1.0
* Using GPU ? yes
* Distributed or parallel setup ? no
* Any other relevant information: cudatoolkit 10.0, APEX git hash code: 53eae1986320d016ee7b347d78839dd5e96e7e93
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1171/comments | https://api.github.com/repos/huggingface/transformers/issues/1171/events | https://github.com/huggingface/transformers/issues/1171 | 487,849,961 | MDU6SXNzdWU0ODc4NDk5NjE= | 1,171 | Can't get GPT2tokenizer to load correctly | {
"login": "buttchurch",
"id": 10161939,
"node_id": "MDQ6VXNlcjEwMTYxOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/10161939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buttchurch",
"html_url": "https://github.com/buttchurch",
"followers_url": "https://api.github.com/users/buttchurch/followers",
"following_url": "https://api.github.com/users/buttchurch/following{/other_user}",
"gists_url": "https://api.github.com/users/buttchurch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buttchurch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buttchurch/subscriptions",
"organizations_url": "https://api.github.com/users/buttchurch/orgs",
"repos_url": "https://api.github.com/users/buttchurch/repos",
"events_url": "https://api.github.com/users/buttchurch/events{/privacy}",
"received_events_url": "https://api.github.com/users/buttchurch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The `GPT2Tokenizer` attribute `max_len_single_sentence` is a very new attribute. If you have installed the library prior to [this commit](https://github.com/huggingface/pytorch-transformers/commit/3bcbebd440c220adbaab657f2d13dac7c89f6453#diff-b1c89c3ce1d15ed636ed89d250f8f26a), 9 days ago, then you indeed won't be able to access it.\r\n\r\nYou won't be able to access it either if you have installed it via pypi, as the last release was 1.1.0 and it was before that commit. We'll be releasing v1.2.0 very soon, with this addition! Until then, you can [install it from source](https://github.com/huggingface/pytorch-transformers#from-source) if you want to latest additions.",
"Thanks very much! That all makes sense :)"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
Hi, I'm a fairly new coder and I'm hitting a roadblock that I just don't understand – maybe someone here can help me, but I figure it's worth asking the community.
I'm trying to run run_lm_finetuning.py, and the tokenizer doesn't seem to be loading correctly. I'm getting this error: `AttributeError: 'GPT2Tokenizer' object has no attribute 'max_len_single_sentence'`
I've looked at the code, and there clearly is a `max_len_single_sentence` attribute in the init, but I can't get to it. I've even tried simply loading a GPT2-tokenizer into a jupyter notebook and trying to get the value, and it has the same error.
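Here is the minimal check I ran in the notebook (output is from my install):

```python
# Minimal reproduction sketch of the missing attribute.
from pytorch_transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(hasattr(tokenizer, 'max_len_single_sentence'))  # prints False for me
```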
I assume I've done something wrong; I just can't figure out what. In case it helps, I've put my entire traceback below.
Any ideas? Thanks!
```
python examples/run_lm_finetuning.py --train_data_file='HFlongs1000.txt' --output_dir='pytorch-transformers/HFOutput' --model_type='gpt2' --tokenizer_name='gpt2' --model_name_or_path='gpt2'
09/01/2019 06:44:36 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
09/01/2019 06:44:36 - INFO - pytorch_transformers.modeling_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/jupyter/.cache/torch/pytorch_transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80
09/01/2019 06:44:36 - INFO - pytorch_transformers.modeling_utils - Model config {
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"finetuning_task": null,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"num_labels": 1,
"output_attentions": false,
"output_hidden_states": false,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"torchscript": false,
"vocab_size": 50257
}
09/01/2019 06:44:36 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/jupyter/.cache/torch/pytorch_transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71
09/01/2019 06:44:36 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/jupyter/.cache/torch/pytorch_transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
Traceback (most recent call last):
File "examples/run_lm_finetuning.py", line 497, in <module>
main()
File "examples/run_lm_finetuning.py", line 431, in main
args.block_size = tokenizer.max_len_single_sentence # Our input block size will be the max possible for the model
AttributeError: 'GPT2Tokenizer' object has no attribute 'max_len_single_sentence'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1171/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1170/comments | https://api.github.com/repos/huggingface/transformers/issues/1170/events | https://github.com/huggingface/transformers/issues/1170 | 487,820,716 | MDU6SXNzdWU0ODc4MjA3MTY= | 1,170 | How to use BERT or word embedding for e-commerce product classification. | {
"login": "Raghavendra15",
"id": 7957331,
"node_id": "MDQ6VXNlcjc5NTczMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7957331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raghavendra15",
"html_url": "https://github.com/Raghavendra15",
"followers_url": "https://api.github.com/users/Raghavendra15/followers",
"following_url": "https://api.github.com/users/Raghavendra15/following{/other_user}",
"gists_url": "https://api.github.com/users/Raghavendra15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raghavendra15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raghavendra15/subscriptions",
"organizations_url": "https://api.github.com/users/Raghavendra15/orgs",
"repos_url": "https://api.github.com/users/Raghavendra15/repos",
"events_url": "https://api.github.com/users/Raghavendra15/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raghavendra15/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"you solve this problem?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I want to classify products on an e-commerce site. Take Amazon, for example: if a product name is iPhone XS, it should be categorized as Electronics -> Mobile, which is very straightforward. However, the problem comes when we train the model on clothes and many other sports items.
For example: "George - George Men's Cargo Short" found on Walmart is being classified as SPORTS & OUTDOOR, FASHION. However, it should be classified as Clothes.
Currently, we have tried TextCNN, but I'm fairly confident that BERT or other word embeddings can improve performance.
Base Code:
https://github.com/brightmart/text_classification
However, it appears that TextCNN is better than BERT, as per the author of this repository. Does anyone know the ideal way to approach this problem?

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1170/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1169/comments | https://api.github.com/repos/huggingface/transformers/issues/1169/events | https://github.com/huggingface/transformers/issues/1169 | 487,809,210 | MDU6SXNzdWU0ODc4MDkyMTA= | 1,169 | Attribute errors with pytorch_transformers tests | {
"login": "tobimichigan",
"id": 5084987,
"node_id": "MDQ6VXNlcjUwODQ5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5084987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobimichigan",
"html_url": "https://github.com/tobimichigan",
"followers_url": "https://api.github.com/users/tobimichigan/followers",
"following_url": "https://api.github.com/users/tobimichigan/following{/other_user}",
"gists_url": "https://api.github.com/users/tobimichigan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tobimichigan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobimichigan/subscriptions",
"organizations_url": "https://api.github.com/users/tobimichigan/orgs",
"repos_url": "https://api.github.com/users/tobimichigan/repos",
"events_url": "https://api.github.com/users/tobimichigan/events{/privacy}",
"received_events_url": "https://api.github.com/users/tobimichigan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, could you try updating your pytorch version to 1.2.0 ?",
"Same issue here. Pytorch==1.2.0, python==3.6.2",
"What exactly is your issue @ukliu ?",
"> What exactly is your issue @ukliu ?\r\n\r\nI was going through the pytorch-transformers tutorial at https://github.com/ukliu/pytorch-transformers \r\n\r\n<img width=\"870\" alt=\"Screen Shot 2019-09-30 at 3 48 10 PM\" src=\"https://user-images.githubusercontent.com/14615401/65910949-bfdddb00-e399-11e9-9970-f73d8e6f388b.png\">\r\n\r\nAll others seems fine, but TransfoXLModel gives an error of AttributeError: 'Tensor' object has no attribute 'bool'",
"That's mainly a pytorch version issue. You can upgrade your pytorch or change the type to torch.uint8 rather than call the .bool() function."
] | 1,567 | 1,575 | 1,575 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (from the official repo):
Language I am using the model on (English, Yoruba, Igbo, Hausa, etc.):
The problem arises when using:
* [ ] the official example scripts: (give details): the test run script
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)=>yes
## To Reproduce
Steps to reproduce the behavior:
1. python -m pytest -sv ./pytorch_transformers/tests/
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
pytorch_transformers/tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_transfo_xl_lm_head FAILED
pytorch_transformers/tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_transfo_xl_model FAILED
pytorch_transformers/tests/tokenization_xlnet_test.py::XLNetTokenizationTest::test_tokenizer_no_lower PASSED
=================================== FAILURES ===================================
__________________ TransfoXLModelTest.test_attention_outputs ___________________
self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_attention_outputs>
def test_attention_outputs(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
config.output_attentions = True
config.output_hidden_states = False
model = model_class(config)
model.eval()
> outputs = model(**inputs_dict)
pytorch_transformers/tests/modeling_common_test.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__
result = self.forward(*input, **kwargs)
pytorch_transformers/modeling_transfo_xl.py:1253: in forward
outputs = self._forward(input_ids, mems=mems, head_mask=head_mask)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = TransfoXLModel(
(word_emb): AdaptiveEmbedding(
(emb_layers): ModuleList(
(0): Embedding(10, 32)
(1):... LayerNorm(torch.Size([32]), eps=1e-05, elementwise_affine=True)
)
)
)
(pos_emb): PositionalEmbedding()
)
dec_inp = tensor([[19, 69, 72, 42, 32, 34, 52, 38, 81, 71, 81, 47, 44],
[22, 12, 3, 26, 63, 25, 64, 52, 79, 71, 17, 16,... [82, 26, 62, 95, 55, 79, 8, 90, 33, 83, 64, 53, 68],
[ 7, 57, 63, 40, 74, 77, 50, 77, 19, 7, 53, 38, 19]])
mems = [tensor([[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0.,... [0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]])]
head_mask = [None, None, None, None, None]
def _forward(self, dec_inp, mems=None, head_mask=None):
qlen, bsz = dec_inp.size()
# Prepare head mask if needed
# 1.0 in head_mask indicate we keep the head
# attention_probs has shape bsz x n_heads x N x N
# input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] (a head_mask for each layer)
# and head_mask is converted to shape [num_hidden_layers x qlen x klen x bsz x n_head]
if head_mask is not None:
if head_mask.dim() == 1:
head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0)
head_mask = head_mask.expand(self.n_layer, -1, -1, -1, -1)
elif head_mask.dim() == 2:
head_mask = head_mask.unsqueeze(1).unsqueeze(1).unsqueeze(1)
head_mask = head_mask.to(dtype=next(self.parameters()).dtype) # switch to fload if need + fp16 compatibility
else:
head_mask = [None] * self.n_layer
word_emb = self.word_emb(dec_inp)
mlen = mems[0].size(0) if mems is not None else 0
klen = mlen + qlen
if self.same_length:
all_ones = word_emb.new_ones(qlen, klen)
mask_len = klen - self.mem_len
if mask_len > 0:
mask_shift_len = qlen - mask_len
else:
mask_shift_len = qlen
dec_attn_mask = (torch.triu(all_ones, 1+mlen)
> + torch.tril(all_ones, -mask_shift_len)).bool()[:, :, None] # -1
E AttributeError: 'Tensor' object has no attribute 'bool'
pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError
> outputs = model(**inputs)
pytorch_transformers/tests/modeling_common_test.py:185:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__
result = self.forward(*input, **kwargs)
pytorch_transformers/modeling_transfo_xl.py:1253: in forward
outputs = self._forward(input_ids, mems=mems, head_mask=head_mask)
E AttributeError: 'Tensor' object has no attribute 'bool'
pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError
_________________ TransfoXLModelTest.test_hidden_states_output _________________
self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_hidden_states_output>
def test_hidden_states_output(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
config.output_hidden_states = True
config.output_attentions = False
model = model_class(config)
model.eval()
> outputs = model(**inputs_dict)
pytorch_transformers/tests/modeling_common_test.py:249:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__
result = self.forward(*input, **kwargs)
pytorch_transformers/modeling_transfo_xl.py:1253: in forward
outputs = self._forward(input_ids, mems=mems, head_mask=head_mask)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
AttributeError: 'Tensor' object has no attribute 'bool'
pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError
__________________ TransfoXLModelTest.test_transfo_xl_lm_head __________________
self = <pytorch_transformers.tests.modeling_transfo_xl_test.TransfoXLModelTest testMethod=test_transfo_xl_lm_head>
def test_transfo_xl_lm_head(self):
self.model_tester.set_seed()
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> output_result = self.model_tester.create_transfo_xl_lm_head(*config_and_inputs)
pytorch_transformers/tests/modeling_transfo_xl_test.py:201:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pytorch_transformers/tests/modeling_transfo_xl_test.py:142: in create_transfo_xl_lm_head
lm_logits_1, mems_1 = model(input_ids_1)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__
result = self.forward(*input, **kwargs)
pytorch_transformers/modeling_transfo_xl.py:1349: in forward
transformer_outputs = self.transformer(input_ids, mems=mems, head_mask=head_mask)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py:493: in __call__
result = self.forward(*input, **kwargs)
pytorch_transformers/modeling_transfo_xl.py:1253: in forward
E AttributeError: 'Tensor' object has no attribute 'bool'
pytorch_transformers/modeling_transfo_xl.py:1145: AttributeError
=============================== warnings summary ===============================
-- Docs: http://doc.pytest.org/en/latest/warnings.html
======= 5 failed, 206 passed, 10 skipped, 36 warnings in 171.71 seconds ========
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Seamless execution!!!
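For reference, a hedged, self-contained sketch of the dtype workaround suggested in the comments (shapes here are arbitrary):

```python
# On PyTorch < 1.2 Tensor.bool() does not exist; casting the mask to uint8
# is the suggested equivalent for masking purposes.
import torch

qlen, klen, mlen, mask_shift_len = 4, 6, 2, 4
all_ones = torch.ones(qlen, klen)
dec_attn_mask = (torch.triu(all_ones, 1 + mlen)
                 + torch.tril(all_ones, -mask_shift_len)).to(torch.uint8)[:, :, None]
print(dec_attn_mask.shape)  # torch.Size([4, 6, 1])
```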
## Environment
==============NVSMI LOG==============
Timestamp : Sat Aug 31 11:09:33 2019
Driver Version : 418.67
CUDA Version : 10.1
Attached GPUs : 1
GPU 00000000:00:04.0
Product Name : Tesla K80
Product Brand : Tesla
* OS: Ubuntu 18.04
* Python version: 3.6
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): https://github.com/huggingface/pytorch-transformers
* Using GPU? yes
* Distributed or parallel setup? no
* Any other relevant information:
Why do these attribute errors occur? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1169/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1168/comments | https://api.github.com/repos/huggingface/transformers/issues/1168/events | https://github.com/huggingface/transformers/issues/1168 | 487,807,049 | MDU6SXNzdWU0ODc4MDcwNDk= | 1,168 | How to add new pre-trained model pytorch-transformers | {
"login": "ksopyla",
"id": 64201,
"node_id": "MDQ6VXNlcjY0MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/64201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksopyla",
"html_url": "https://github.com/ksopyla",
"followers_url": "https://api.github.com/users/ksopyla/followers",
"following_url": "https://api.github.com/users/ksopyla/following{/other_user}",
"gists_url": "https://api.github.com/users/ksopyla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksopyla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksopyla/subscriptions",
"organizations_url": "https://api.github.com/users/ksopyla/orgs",
"repos_url": "https://api.github.com/users/ksopyla/repos",
"events_url": "https://api.github.com/users/ksopyla/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksopyla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
Pytorch-transformers is a great library. I like that it does one thing: give access to pre-trained SOTA models for NLP.
My team and I want to help and start contributing:
- as a first step, we want to add a Polish BERT model, like https://github.com/huggingface/pytorch-transformers/pull/688
But we do not know how to do this :(
Is there any guide or procedure that shows what should be changed in order to add a new model?
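As a hedged first step (the path below is a placeholder), a checkpoint converted to the pytorch-transformers format already loads directly from a local directory:

```python
# Hedged sketch: loading a locally converted model before any upstream changes.
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('/path/to/polish-bert')  # vocab.txt lives here
model = BertModel.from_pretrained('/path/to/polish-bert')          # config.json + pytorch_model.bin
```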
We would be grateful if someone could guide us. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1168/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1167/comments | https://api.github.com/repos/huggingface/transformers/issues/1167/events | https://github.com/huggingface/transformers/issues/1167 | 487,799,928 | MDU6SXNzdWU0ODc3OTk5Mjg= | 1,167 | ImportError: cannot import name 'DistilBertModel' | {
"login": "zbloss",
"id": 7165947,
"node_id": "MDQ6VXNlcjcxNjU5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7165947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zbloss",
"html_url": "https://github.com/zbloss",
"followers_url": "https://api.github.com/users/zbloss/followers",
"following_url": "https://api.github.com/users/zbloss/following{/other_user}",
"gists_url": "https://api.github.com/users/zbloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zbloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zbloss/subscriptions",
"organizations_url": "https://api.github.com/users/zbloss/orgs",
"repos_url": "https://api.github.com/users/zbloss/repos",
"events_url": "https://api.github.com/users/zbloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/zbloss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, you should install it from source right now if you want to use DistilBERT. We're planning a new release 1.2.0 that includes DistilBERT + GPT-2 Large + XLM 100/17 sometimes this week :)."
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🐛 Bug
Can you update the pypi package? I cannot import DistilBERT on pytorch_transformers==1.1.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1167/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1167/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1166/comments | https://api.github.com/repos/huggingface/transformers/issues/1166/events | https://github.com/huggingface/transformers/issues/1166 | 487,796,134 | MDU6SXNzdWU0ODc3OTYxMzQ= | 1,166 | Roberta for NER task | {
"login": "sl-victormazzeo",
"id": 3447921,
"node_id": "MDQ6VXNlcjM0NDc5MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3447921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sl-victormazzeo",
"html_url": "https://github.com/sl-victormazzeo",
"followers_url": "https://api.github.com/users/sl-victormazzeo/followers",
"following_url": "https://api.github.com/users/sl-victormazzeo/following{/other_user}",
"gists_url": "https://api.github.com/users/sl-victormazzeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sl-victormazzeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sl-victormazzeo/subscriptions",
"organizations_url": "https://api.github.com/users/sl-victormazzeo/orgs",
"repos_url": "https://api.github.com/users/sl-victormazzeo/repos",
"events_url": "https://api.github.com/users/sl-victormazzeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sl-victormazzeo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @militu you should take a look at this long thread discussing NER for BERT (should be the same for RoBERTa): https://github.com/huggingface/pytorch-transformers/issues/64",
"But there is not RobertaForTokenClassification and TFRobertaForTokenClassification like BertForTokenClassification and TFBertForTokenClassification. ",
"Not yet indeed, do you want to submit a PR copying these models from Bert?",
"@thomwolf is this something the team is open to reviewing? I can open a PR that (ambitiously?) adds both `RobertaForTokenClassification` and `TFRobertaForTokenClassification` in the next few days/week.",
"Yes, sure (though I won't commit to a specific delay for reviewing hahaha).\r\n\r\nAdding `RobertaForTokenClassification` and `TFRobertaForTokenClassification` should be very simple and basically kept as a copy-past from Bert similar models.\r\n\r\nThe most important here is actually to finish the PR adding token-to-string-character mappings (#1274 by @michaelrglass) so we can translate NER labels to token labels for training. Though I think there may also be a simpler way to do that by modifying RoBERTa/GPT-2 tokenizers to accept tokenized word but this require some knowledge of the internal functioning of GPT-2 tokenizer.",
"Created a PR and looking for feedback. Hoping to jump on the `run_ner.py` script as well tonight/tomorrow.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This was done and probably should be closed."
] | 1,567 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
Hello, is there a way to use the RoBERTa model for a NER task? Is there a script somewhere?
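For reference, a hedged sketch of what a token-classification head on top of RoBERTa might look like (the class name and label count are my own, not library API):

```python
# Hedged sketch mirroring BertForTokenClassification for RoBERTa.
import torch.nn as nn
from pytorch_transformers import RobertaModel

class RobertaForNER(nn.Module):
    def __init__(self, num_labels=9):
        super(RobertaForNER, self).__init__()
        self.roberta = RobertaModel.from_pretrained('roberta-base')
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.roberta.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        sequence_output = self.roberta(input_ids, attention_mask=attention_mask)[0]
        return self.classifier(self.dropout(sequence_output))  # per-token logits
```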
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1166/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1165/comments | https://api.github.com/repos/huggingface/transformers/issues/1165/events | https://github.com/huggingface/transformers/issues/1165 | 487,755,884 | MDU6SXNzdWU0ODc3NTU4ODQ= | 1,165 | Dependency errors when trying to use gpt2 using pytorch hub. | {
"login": "VictorAlbertos",
"id": 2614726,
"node_id": "MDQ6VXNlcjI2MTQ3MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2614726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorAlbertos",
"html_url": "https://github.com/VictorAlbertos",
"followers_url": "https://api.github.com/users/VictorAlbertos/followers",
"following_url": "https://api.github.com/users/VictorAlbertos/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorAlbertos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorAlbertos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorAlbertos/subscriptions",
"organizations_url": "https://api.github.com/users/VictorAlbertos/orgs",
"repos_url": "https://api.github.com/users/VictorAlbertos/repos",
"events_url": "https://api.github.com/users/VictorAlbertos/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorAlbertos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found that https://github.com/huggingface/pytorch-transformers/commit/256086bc6908448fc6aff9b1e19d95c4f6019bee is the source of the issue. Reading the changes I could guess that the new way for retrieving the tokenizer and model is as follows:\r\n\r\n```python\r\ntokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'gpt2')\r\nmodel = torch.hub.load('huggingface/pytorch-transformers', 'modelWithLMHead', 'gpt2')\r\n```\r\n\r\nBut I'm not sure if there is an issue with the docs in hub as they seem to not be updated. ",
"Yes, we are in the process of updating the hub",
"I'm reopening this issue because I'm getting the next error when trying to import the tokenizer:\r\n`ImportError: cannot import name 'add_start_docstrings'`\r\n\r\n<img width=\"1046\" alt=\"Screenshot 2019-09-06 at 20 15 36\" src=\"https://user-images.githubusercontent.com/2614726/64450882-4da01080-d0e3-11e9-94d0-10a0e57c0e80.png\">\r\n\r\n ",
"We can only help you if we have more information on the version/release you\nare using.\n\nOn Fri, 6 Sep 2019 at 21:17, Víctor Albertos <[email protected]>\nwrote:\n\n> Reopened #1165\n> <https://github.com/huggingface/pytorch-transformers/issues/1165>.\n>\n> —\n> You are receiving this because you commented.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/1165?email_source=notifications&email_token=ABYDIHJBMXLUIDWHL7OBEXTQIKNEXA5CNFSM4ISTI6HKYY3PNVWWK3TUL52HS4DFWZEXG43VMVCXMZLOORHG65DJMZUWGYLUNFXW5KTDN5WW2ZLOORPWSZGOTPQM65I#event-2615201653>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHMPKQVNPQFHVBGEDWLQIKNEXANCNFSM4ISTI6HA>\n> .\n>\n",
"I'm using the version from the hub, you don't specify a version there, as far as I know. ",
"It's the version of the master branch by default.\r\n\r\nI fixed the bug with ee027c8.\r\n\r\nNote that you can [specify a specific release](https://pytorch.org/docs/stable/hub.html#torch.hub.load) with torch hub, e.g. use release `1.2.0` with `torch.hub('huggingface/pytorch-transformers:1.2.0', 'model', 'bert-base-uncased')`.\r\n\r\nThat's what I would advise as it allows you to have clean versioning of your code (you will be sure, in 3 months from now, of the exact version of the model you were using to get your results).",
"Thanks for fixing the bug so quickly and for the additional information I was not aware of the versioning feature."
] | 1,567 | 1,567 | 1,567 | NONE | null | It started today; yesterday it was working fine. When I try to download the `gpt2` model from the PyTorch Hub repository, as follows:
```python
torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Tokenizer', 'gpt2')
```
I get the following error: `ModuleNotFoundError: No module named 'sacremoses'`. If I add that dependency manually, I get another error: `ModuleNotFoundError: No module named 'sentencepiece'`. Then I add the `sentencepiece` dependency manually, only to get another error: `RuntimeError: Cannot find callable gpt2Tokenizer in hubconf`. This last error seems to be related to API changes.
I'm using a Google Colab GPU instance.
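For reference, a hedged sketch of the updated entry points that resolve this (per the comments in this thread), with a pinned release for reproducibility:

```python
import torch

tokenizer = torch.hub.load('huggingface/pytorch-transformers:1.2.0', 'tokenizer', 'gpt2')
model = torch.hub.load('huggingface/pytorch-transformers:1.2.0', 'modelWithLMHead', 'gpt2')
```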
If this is not the right place to post this issue, please redirect me to the proper place. Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1165/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1164/comments | https://api.github.com/repos/huggingface/transformers/issues/1164/events | https://github.com/huggingface/transformers/pull/1164 | 487,755,616 | MDExOlB1bGxSZXF1ZXN0MzEyOTQ2NDEx | 1,164 | distillation: fix ModuleNotFoundError error in token counts script | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @stefan-it!"
] | 1,567 | 1,567 | 1,567 | COLLABORATOR | null | Hi,
I'm currently trying out "distillation" 😅
This PR fixes a `ModuleNotFoundError` in the `token_counts.py` script (the same error was recently fixed in 803c1cc4eacd38f1b854578d7d717b5e4a1ada47) 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1164/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1164",
"html_url": "https://github.com/huggingface/transformers/pull/1164",
"diff_url": "https://github.com/huggingface/transformers/pull/1164.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1164.patch",
"merged_at": 1567382408000
} |
https://api.github.com/repos/huggingface/transformers/issues/1163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1163/comments | https://api.github.com/repos/huggingface/transformers/issues/1163/events | https://github.com/huggingface/transformers/issues/1163 | 487,743,738 | MDU6SXNzdWU0ODc3NDM3Mzg= | 1,163 | [Help] how to make a constrained text generation | {
"login": "lexmen318",
"id": 43975514,
"node_id": "MDQ6VXNlcjQzOTc1NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/43975514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lexmen318",
"html_url": "https://github.com/lexmen318",
"followers_url": "https://api.github.com/users/lexmen318/followers",
"following_url": "https://api.github.com/users/lexmen318/following{/other_user}",
"gists_url": "https://api.github.com/users/lexmen318/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lexmen318/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lexmen318/subscriptions",
"organizations_url": "https://api.github.com/users/lexmen318/orgs",
"repos_url": "https://api.github.com/users/lexmen318/repos",
"events_url": "https://api.github.com/users/lexmen318/events{/privacy}",
"received_events_url": "https://api.github.com/users/lexmen318/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, I think the sockeye paper and code is the right place to start even if it may look complicated at first sight. Try to combine it with the `run_generation.py` example.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> ## ❓ Questions & Help\r\n> What I need is to make a constrained text generation via XLNet or GPT-2:\r\n> \r\n> Input: No one has the intention of building a wall.\r\n> Constraint: the output should include two pre-defined key words: 'No one' and 'construct'.\r\n> Expected output(e.g.): No one has the intention, a wall to construct.\r\n> (with a text length being predefined).\r\n> \r\n> I found some reference like followings,\r\n> https://awslabs.github.io/sockeye/inference.html#lexical-constraints\r\n> \r\n> but it is too complecate to transfer. Could u give me some advice?\r\n> \r\n> thx a log!\r\n\r\nHi have you successfully implemented this constrained generation method? Thanks a lot!"
] | 1,567 | 1,589 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
What I need is to do constrained text generation via XLNet or GPT-2:
Input: No one has the intention of building a wall.
Constraint: the output should include two pre-defined key words: 'No one' and 'construct'.
Expected output(e.g.): No one has the intention, a wall to construct.
(with a text length being predefined).
I found some references, like the following:
https://awslabs.github.io/sockeye/inference.html#lexical-constraints
but it is too complicated to transfer. Could you give me some advice?
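For concreteness, here is a very naive sketch of the direction I have in mind (nothing like the sockeye algorithm; the keyword handling is crude):

```python
# Hedged sketch: greedy GPT-2 generation that force-emits the remaining
# constraint tokens in the final steps of a fixed-length budget.
import torch
from pytorch_transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

ids = tokenizer.encode('No one has the intention')
constraints = [tokenizer.encode(' construct')]  # required key word(s)
total_steps = 15

with torch.no_grad():
    for step in range(total_steps):
        logits = model(torch.tensor([ids]))[0][0, -1]
        next_id = int(torch.argmax(logits))
        remaining = sum(len(c) for c in constraints)
        if constraints and step >= total_steps - remaining:
            next_id = constraints[0].pop(0)  # force the next constraint token
            if not constraints[0]:
                constraints.pop(0)
        ids.append(next_id)

print(tokenizer.decode(ids))
```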
thanks a lot!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1162/comments | https://api.github.com/repos/huggingface/transformers/issues/1162/events | https://github.com/huggingface/transformers/pull/1162 | 487,730,302 | MDExOlB1bGxSZXF1ZXN0MzEyOTI5ODY1 | 1,162 | XLNet bias fix on resize embeddings (cf #1124) | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,567 | 1,576 | 1,567 | MEMBER | null | Fixed an issue where the linear layer bias wouldn't be resized along with the weights when the embedding matrix is resized, as with XLNet (cf #1124).
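A hedged usage sketch of the path this covers: resizing embeddings after adding tokens, which must now resize the tied lm-head bias as well.

```python
from pytorch_transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased')

tokenizer.add_tokens(['<new_token>'])
model.resize_token_embeddings(len(tokenizer))  # the lm head bias is resized too
```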
This fix works for any model that needs to tie its weights between an embedding layer and a linear layer, provided that linear layer has a bias. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1162",
"html_url": "https://github.com/huggingface/transformers/pull/1162",
"diff_url": "https://github.com/huggingface/transformers/pull/1162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1162.patch",
"merged_at": 1567458844000
} |
https://api.github.com/repos/huggingface/transformers/issues/1161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1161/comments | https://api.github.com/repos/huggingface/transformers/issues/1161/events | https://github.com/huggingface/transformers/issues/1161 | 487,591,607 | MDU6SXNzdWU0ODc1OTE2MDc= | 1,161 | Large Memory Layers | {
"login": "Enumaris",
"id": 34777557,
"node_id": "MDQ6VXNlcjM0Nzc3NTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/34777557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Enumaris",
"html_url": "https://github.com/Enumaris",
"followers_url": "https://api.github.com/users/Enumaris/followers",
"following_url": "https://api.github.com/users/Enumaris/following{/other_user}",
"gists_url": "https://api.github.com/users/Enumaris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Enumaris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Enumaris/subscriptions",
"organizations_url": "https://api.github.com/users/Enumaris/orgs",
"repos_url": "https://api.github.com/users/Enumaris/repos",
"events_url": "https://api.github.com/users/Enumaris/events{/privacy}",
"received_events_url": "https://api.github.com/users/Enumaris/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,572 | 1,572 | NONE | null | ## 🚀 Feature
Implement models with Large Memory Layers from this paper: https://arxiv.org/pdf/1907.05242.pdf
## Motivation
These models seem very promising. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1160/comments | https://api.github.com/repos/huggingface/transformers/issues/1160/events | https://github.com/huggingface/transformers/issues/1160 | 487,586,422 | MDU6SXNzdWU0ODc1ODY0MjI= | 1,160 | --seed does not change the fintuning results of the xlnet model | {
"login": "zhaoguangxiang",
"id": 17742385,
"node_id": "MDQ6VXNlcjE3NzQyMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17742385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoguangxiang",
"html_url": "https://github.com/zhaoguangxiang",
"followers_url": "https://api.github.com/users/zhaoguangxiang/followers",
"following_url": "https://api.github.com/users/zhaoguangxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoguangxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoguangxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoguangxiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoguangxiang/orgs",
"repos_url": "https://api.github.com/users/zhaoguangxiang/repos",
"events_url": "https://api.github.com/users/zhaoguangxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoguangxiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the command line you are showing, you should add a `--seed ${seed}` argument to set the seed, otherwise, it will stay the same.",
"> In the command line you are showing, you should add a --seed ${seed} argument to set the seed, otherwise, it will stay the same.\r\n\r\nSorry, i forgot it. \r\n"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (XLNet):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts
```bash
gpu=3
seed=1
task=MRPC
bsz=32
learning_rate=5e-5
max_steps=800
warmup_steps=200
save_steps=400
export CUDA_VISIBLE_DEVICES=${gpu}
export GLUE_DIR=/home/zhaoguangxiang/bert/glue_data
python3 ./examples/run_glue.py \
  --model_type xlnet \
  --model_name_or_path xlnet-large-cased \
  --do_train \
  --do_eval \
  --task_name=${task} \
  --data_dir=${GLUE_DIR}/${task} \
  --output_dir=checkpoint/xl_${task}_seed${seed}/ \
  --max_seq_length=128 \
  --per_gpu_eval_batch_size=${bsz} \
  --per_gpu_train_batch_size=${bsz} \
  --gradient_accumulation_steps=1 \
  --max_steps=${max_steps} \
  --model_name=xlnet-large-cased \
  --overwrite_output_dir \
  --overwrite_cache \
  --save_steps ${save_steps} \
  --learning_rate ${learning_rate} \
  --warmup_steps=${warmup_steps}
```
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
## To Reproduce
Steps to reproduce the behavior:
1. change the seed
2. results do not change; accuracy stays at acc=0.8774509803921569 for every seed (see the note below)
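For context, and consistent with the replies quoted above: if `--seed` is not passed on the command line, the script falls back to its default seed (42 in run_glue.py at the time), so every run is identical. The seed is applied roughly like this (simplified from examples/run_glue.py; treat this as a sketch of that helper rather than the exact source):
```python
import random
import numpy as np
import torch

def set_seed(args):
    # Seeds Python, NumPy, and PyTorch (CPU and all GPUs) from args.seed.
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if args.n_gpu > 0:
        torch.cuda.manual_seed_all(args.seed)
```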
## Environment
* OS:linux
* Python version:3.6
* PyTorch version: 1.2
* PyTorch Transformers version (or branch): latest
* Using GPU ? 1 * TITAN RTX
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1159/comments | https://api.github.com/repos/huggingface/transformers/issues/1159/events | https://github.com/huggingface/transformers/issues/1159 | 487,530,208 | MDU6SXNzdWU0ODc1MzAyMDg= | 1,159 | Problem with optimizers after migration | {
"login": "ramild",
"id": 9999944,
"node_id": "MDQ6VXNlcjk5OTk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9999944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ramild",
"html_url": "https://github.com/ramild",
"followers_url": "https://api.github.com/users/ramild/followers",
"following_url": "https://api.github.com/users/ramild/following{/other_user}",
"gists_url": "https://api.github.com/users/ramild/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ramild/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ramild/subscriptions",
"organizations_url": "https://api.github.com/users/ramild/orgs",
"repos_url": "https://api.github.com/users/ramild/repos",
"events_url": "https://api.github.com/users/ramild/events{/privacy}",
"received_events_url": "https://api.github.com/users/ramild/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You should get the same behavior than `BertAdam` by setting `correct_bias=False` in `AdamW` and using the `WarmupLinearSchedule` together with it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,573 | 1,573 | NONE | null | ## 📚 Migration
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): Russian
The problem arises when using:
The optimizers. I tried the default parameters from the example with max_grad_norm = 1.0 and lr = 2e-5.
```
warmup_proportion = float(num_warmup_steps) / float(num_total_steps) # 0.1
### Previously BertAdam optimizer was instantiated like this:
optimizer = BertAdam(model.parameters(), lr=lr, schedule='warmup_linear', warmup=warmup_proportion, t_total=num_total_steps)
### and used like this:
for batch in train_data:
loss = model(batch)
loss.backward()
optimizer.step()
### In PyTorch-Transformers, optimizer and schedules are splitted and instantiated like this:
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_total_steps) # PyTorch scheduler
### and used like this:
for batch in train_data:
loss = model(batch)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
scheduler.step()
optimizer.step()
```
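One hedged observation about the second loop above (not confirmed as the cause of the accuracy gap): the conventional ordering calls optimizer.step() before scheduler.step(), and both snippets elide the gradient reset. A corrected sketch of the inner loop:
```python
for batch in train_data:
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()       # apply the update at the current learning rate
    scheduler.step()       # then advance the warmup/decay schedule
    optimizer.zero_grad()  # reset gradients before the next batch
```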
The task I am working on is:
* [ ] my own task or dataset: private dataset on comments from client support
Details of the issue:
Details of the issue: after migration, the model converges more slowly and fails to reach the accuracy it obtained before. On my multi-class classification dataset I get about 0.59 accuracy, while the previous version reached about 0.63 after convergence. Are the optimizers in the two versions equivalent? If not, how can I make them exactly the same?
## Environment
* OS: Ubuntu
* Python version: 3.6
* PyTorch version: 1.0
* PyTorch Transformers version (or branch): 1.0
* Using GPU ? Yes
* Distributed or parallel setup? No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1158/comments | https://api.github.com/repos/huggingface/transformers/issues/1158/events | https://github.com/huggingface/transformers/pull/1158 | 487,493,795 | MDExOlB1bGxSZXF1ZXN0MzEyNzQ4MTA3 | 1,158 | regarding #1026 pull request | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh I see what you mean, indeed that's a more general issue with saving and loading tokenizer with specific configuration parameters. This is actually also relevant to our work on XLM's tokenizer in #1092",
"Dear Thomas,\r\nThe pull request #1026 does not work unfortunately when using eval_all_check_points, and I was wondering if you could undo that merge, sorry for this, this new pull request here works for me.\r\nthanks. ",
"Ok let's do that for now and I'll think about a more general way to save tokenizer configurations.",
"awesome. thanks ",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=h1) Report\n> Merging [#1158](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e0caab0cf052c86e456bc4b4fdac5788433ed935?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1158 +/- ##\n======================================\n Coverage 80.7% 80.7% \n======================================\n Files 46 46 \n Lines 7411 7411 \n======================================\n Hits 5981 5981 \n Misses 1430 1430\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=footer). Last update [e0caab0...0a2fecd](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=h1) Report\n> Merging [#1158](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e0caab0cf052c86e456bc4b4fdac5788433ed935?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1158 +/- ##\n======================================\n Coverage 80.7% 80.7% \n======================================\n Files 46 46 \n Lines 7411 7411 \n======================================\n Hits 5981 5981 \n Misses 1430 1430\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=footer). Last update [e0caab0...0a2fecd](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Addressing this up-stream with #1092"
] | 1,567 | 1,567 | 1,567 | NONE | null | Dear Thomas,
This is regarding my #1026 pull request; here is my understanding of the reproducibility issue I was seeing:
- On line 451, the tokenizer is reloaded without setting do_lower_case. As a result, running do_train+do_eval gives different results than running do_eval alone on the same directory, because with do_eval alone the tokenizer is read at line 408, where do_lower_case is taken into account.
- The second issue I see: with both do_train and do_eval the tokenizer is read from output_dir, but with do_eval alone it is read from args.model_name_or_path, which can differ and lead to different results. It is therefore better to reload the tokenizer once from output_dir during evaluation and remove the reload from the training part (a sketch follows below).
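A minimal sketch of the proposed fix (variable names follow run_glue.py; this is illustrative rather than the exact patch):
```python
# During evaluation, always reload the tokenizer from the training output
# directory and keep the casing option, so that do_train+do_eval and
# do_eval-only runs tokenize identically.
tokenizer = tokenizer_class.from_pretrained(args.output_dir,
                                            do_lower_case=args.do_lower_case)
```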
thanks.
Best regards,
Rabeeh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1158",
"html_url": "https://github.com/huggingface/transformers/pull/1158",
"diff_url": "https://github.com/huggingface/transformers/pull/1158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1158.patch",
"merged_at": 1567175433000
} |
https://api.github.com/repos/huggingface/transformers/issues/1157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1157/comments | https://api.github.com/repos/huggingface/transformers/issues/1157/events | https://github.com/huggingface/transformers/issues/1157 | 487,491,602 | MDU6SXNzdWU0ODc0OTE2MDI= | 1,157 | How to load pretraind XLM model | {
"login": "ksopyla",
"id": 64201,
"node_id": "MDQ6VXNlcjY0MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/64201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksopyla",
"html_url": "https://github.com/ksopyla",
"followers_url": "https://api.github.com/users/ksopyla/followers",
"following_url": "https://api.github.com/users/ksopyla/following{/other_user}",
"gists_url": "https://api.github.com/users/ksopyla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksopyla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksopyla/subscriptions",
"organizations_url": "https://api.github.com/users/ksopyla/orgs",
"repos_url": "https://api.github.com/users/ksopyla/repos",
"events_url": "https://api.github.com/users/ksopyla/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksopyla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"~~Hello, we haven't yet converted those models and hosted them on our S3, but you indeed should be able to do it yourself; we used [this script](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/convert_xlm_checkpoint_to_pytorch.py) for the other XLM checkpoints, you could use it to convert this checkpoint.~~\r\n\r\nThe models are now available on our S3. You should upgrade your `pytorch-transformers` version to the current source (master), you can then load your model with:\r\n\r\n```py\r\nfrom pytorch_transformers import XLMModel\r\nmodel = XLMModel.from_pretrained(\"xlm-mlm-17-1280\")\r\n# or \r\nmodel = XLMModel.from_pretrained(\"xlm-mlm-100-1280\")\r\n```",
"Wow. You are fast :)\r\nThank you."
] | 1,567 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
Facebook recently released new pre-trained cross-lingual language models (17 and 100 languages) (https://github.com/facebookresearch/XLM#pretrained-cross-lingual-language-models)
I want to load the 17-language model.
Could someone guide me on how to achieve this? The straightforward way didn't work for me.
I have downloaded the 3 files listed on the Facebook GitHub:
- model - https://dl.fbaipublicfiles.com/XLM/mlm_17_1280.pth
- bpe codes - https://dl.fbaipublicfiles.com/XLM/codes_xnli_17
- vocabulary - https://dl.fbaipublicfiles.com/XLM/vocab_xnli_17
and saved them in folder '/home/ksopyla/xlm/mlm17l'. Then I tried to load the model with the from_pretrained function:
```python
model = XLMTokenizer.from_pretrained('/home/ksopyla/xlm/mlm17/')
```
got
```
Model name '/home/ksopyla/xlm/mlm17' was not found in model name list (xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024). We assumed '/home/ksopyla/xlm/mlm17/config.json' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/lib/python/ptvsd/__main__.py", line 432, in main
run()
File "/home/ksopyla/.vscode/extensions/ms-python.python-2019.8.30787/pythonFiles/lib/python/ptvsd/__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/ksopyla/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ksopyla/dev/document_embeddings/xlm_from_pretraind.py", line 34, in <module>
output_hidden_states=True,
File "/home/ksopyla/.local/share/virtualenvs/szrek-data-PaoX74GN/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py", line 430, in from_pretrained
**kwargs
TypeError: cannot unpack non-iterable NoneType object
```
I suspect that I should change the file names and adjust the vocab file format, but I can't find this in the documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1157/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1156/comments | https://api.github.com/repos/huggingface/transformers/issues/1156/events | https://github.com/huggingface/transformers/issues/1156 | 487,382,123 | MDU6SXNzdWU0ODczODIxMjM= | 1,156 | About distilling SQuAD? | {
"login": "renxingkai",
"id": 15783015,
"node_id": "MDQ6VXNlcjE1NzgzMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/15783015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renxingkai",
"html_url": "https://github.com/renxingkai",
"followers_url": "https://api.github.com/users/renxingkai/followers",
"following_url": "https://api.github.com/users/renxingkai/following{/other_user}",
"gists_url": "https://api.github.com/users/renxingkai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renxingkai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renxingkai/subscriptions",
"organizations_url": "https://api.github.com/users/renxingkai/orgs",
"repos_url": "https://api.github.com/users/renxingkai/repos",
"events_url": "https://api.github.com/users/renxingkai/events{/privacy}",
"received_events_url": "https://api.github.com/users/renxingkai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"i have the same question. how to deal with the max sequence length between teacher model and student model",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,572 | 1,572 | NONE | null | ## ❓ Questions & Help
Thank you for your excellent work. I want to know whether you have released the distillation code for the SQuAD dataset. Also, how should the max sequence length be set for the teacher and student models? Is it the same for both? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1156/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1155/comments | https://api.github.com/repos/huggingface/transformers/issues/1155/events | https://github.com/huggingface/transformers/pull/1155 | 487,288,106 | MDExOlB1bGxSZXF1ZXN0MzEyNTgyODgw | 1,155 | Update apex fp16 implementation | {
"login": "anhnt170489",
"id": 24732444,
"node_id": "MDQ6VXNlcjI0NzMyNDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24732444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anhnt170489",
"html_url": "https://github.com/anhnt170489",
"followers_url": "https://api.github.com/users/anhnt170489/followers",
"following_url": "https://api.github.com/users/anhnt170489/following{/other_user}",
"gists_url": "https://api.github.com/users/anhnt170489/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anhnt170489/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anhnt170489/subscriptions",
"organizations_url": "https://api.github.com/users/anhnt170489/orgs",
"repos_url": "https://api.github.com/users/anhnt170489/repos",
"events_url": "https://api.github.com/users/anhnt170489/events{/privacy}",
"received_events_url": "https://api.github.com/users/anhnt170489/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, these examples will probably be deprecated and replaced by the new more general `run_lm_finetuning` example which can train several models with normal and masked language modeling."
] | 1,567 | 1,567 | 1,567 | NONE | null | As described in the issue I raised here: https://github.com/huggingface/pytorch-transformers/issues/1143,
I updated the apex fp16 implementation to follow the latest Apex documentation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1155/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1155",
"html_url": "https://github.com/huggingface/transformers/pull/1155",
"diff_url": "https://github.com/huggingface/transformers/pull/1155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1155.patch",
"merged_at": 1567200552000
} |
https://api.github.com/repos/huggingface/transformers/issues/1154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1154/comments | https://api.github.com/repos/huggingface/transformers/issues/1154/events | https://github.com/huggingface/transformers/pull/1154 | 487,277,219 | MDExOlB1bGxSZXF1ZXN0MzEyNTc0MjI1 | 1,154 | fix: hard coding for max number | {
"login": "ziliwang",
"id": 13744942,
"node_id": "MDQ6VXNlcjEzNzQ0OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13744942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziliwang",
"html_url": "https://github.com/ziliwang",
"followers_url": "https://api.github.com/users/ziliwang/followers",
"following_url": "https://api.github.com/users/ziliwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ziliwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziliwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziliwang/subscriptions",
"organizations_url": "https://api.github.com/users/ziliwang/orgs",
"repos_url": "https://api.github.com/users/ziliwang/repos",
"events_url": "https://api.github.com/users/ziliwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziliwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, thanks @ziliwang "
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | The fp16 max value is 65504, so the original 1e30 will cause NaN in fp16. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1154",
"html_url": "https://github.com/huggingface/transformers/pull/1154",
"diff_url": "https://github.com/huggingface/transformers/pull/1154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1154.patch",
"merged_at": 1567200189000
} |
https://api.github.com/repos/huggingface/transformers/issues/1153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1153/comments | https://api.github.com/repos/huggingface/transformers/issues/1153/events | https://github.com/huggingface/transformers/pull/1153 | 487,191,641 | MDExOlB1bGxSZXF1ZXN0MzEyNTA3MjM1 | 1,153 | [WIP] Refactor Tokenizers creation to support in-memory initialization | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@thomwolf cc @honnibal Drafting this PR to have dedicated space for discussions.",
"@thomwolf @honnibal Can you have a look plz :) ?",
"Ok, I went through this PR and it looks nice.\r\nGreat job @mfuntowicz.\r\nNo problem for the slight code duplication in the tokenizer loading classes, as you've noticed, the repo's philosophy is rather pragmatic and we only add abstractions when they are needed for easier code maintenance and added functionalities.\r\nThanks for following the general organization of the repo as well.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,575 | 1,575 | MEMBER | null | As pointed out in #916, tokenizers currently ask for the path from which they'll load the required vocabulary files.
This PR allows tokenizers to take their vocab either from data living in memory or from cold storage.
Implementation details:
- All tokenizers now have a specific ${TokenizerName}Vocab dataclass holding all the required information to run the model.
- Every ${TokenizerName}Vocab dataclass provides a from_pretrained method in charge of reading the necessary files.
- All tokenizers now take as first argument vocabs, which has to be a ${TokenizerName}Vocab instance.
- All tokenizer classes now have a static member vocab_class which points to the desired ${TokenizerName}Vocab data class.
- Some ${TokenizerName}Vocab.from_pretrained methods share loading routines, so code is currently duplicated across them; it might be possible to refactor this into a generic loading method (see the sketch after this list).
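A condensed sketch of the pattern described above, using BERT as the example; the exact names and signatures in the PR may differ, so treat these as illustrative:
```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class BertVocab:
    """In-memory holder for everything the tokenizer needs to run."""
    vocab: Dict[str, int]

    @classmethod
    def from_pretrained(cls, vocab_file):
        # Reads the vocabulary from cold storage; callers can equally build
        # a BertVocab directly from data already living in memory.
        vocab = {}
        with open(vocab_file, encoding="utf-8") as reader:
            for index, token in enumerate(reader):
                vocab[token.rstrip("\n")] = index
        return cls(vocab=vocab)

class BertTokenizer:
    vocab_class = BertVocab  # static member pointing at the vocab data class

    def __init__(self, vocabs: BertVocab, **kwargs):
        self.vocab = vocabs.vocab
```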
- [x] Bert
- [x] Transformer XL
- [x] GPT
- [x] GPT-2
- [x] XLNet
- [x] XLM
- [x] RoBERTa
- [x] DistilBERT | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1153",
"html_url": "https://github.com/huggingface/transformers/pull/1153",
"diff_url": "https://github.com/huggingface/transformers/pull/1153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1153.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1152/comments | https://api.github.com/repos/huggingface/transformers/issues/1152/events | https://github.com/huggingface/transformers/pull/1152 | 487,169,075 | MDExOlB1bGxSZXF1ZXN0MzEyNDg4NTA0 | 1,152 | fix adding special tokens | {
"login": "epwalsh",
"id": 8812459,
"node_id": "MDQ6VXNlcjg4MTI0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8812459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/epwalsh",
"html_url": "https://github.com/epwalsh",
"followers_url": "https://api.github.com/users/epwalsh/followers",
"following_url": "https://api.github.com/users/epwalsh/following{/other_user}",
"gists_url": "https://api.github.com/users/epwalsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/epwalsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/epwalsh/subscriptions",
"organizations_url": "https://api.github.com/users/epwalsh/orgs",
"repos_url": "https://api.github.com/users/epwalsh/repos",
"events_url": "https://api.github.com/users/epwalsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/epwalsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Look good to me (the failing test on `head_masking` is not related to this PR).\r\nThanks @epwalsh!"
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | Currently there is a bug when adding `additional_special_tokens` in the form of a tuple, instead of a list. To reproduce:
```python
from pytorch_transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
tokenizer.add_special_tokens({"additional_special_tokens": ("@a@", "@b@")})
tokenizer.all_special_tokens
```
Results in:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-81549ce398a5> in <module>()
----> 1 tokenizer.all_special_tokens
~/GitHub/pytorch-transformers/pytorch_transformers/tokenization_utils.py in all_special_tokens(self)
677 set_attr = self.special_tokens_map
678 for attr_value in set_attr.values():
--> 679 all_toks = all_toks + (attr_value if isinstance(attr_value, (list, tuple)) else [attr_value])
680 all_toks = list(set(all_toks))
681 return all_toks
TypeError: can only concatenate list (not "tuple") to list
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1152",
"html_url": "https://github.com/huggingface/transformers/pull/1152",
"diff_url": "https://github.com/huggingface/transformers/pull/1152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1152.patch",
"merged_at": 1567200119000
} |
https://api.github.com/repos/huggingface/transformers/issues/1151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1151/comments | https://api.github.com/repos/huggingface/transformers/issues/1151/events | https://github.com/huggingface/transformers/issues/1151 | 487,157,527 | MDU6SXNzdWU0ODcxNTc1Mjc= | 1,151 | Idea to improve DistilBERT | {
"login": "tchaton",
"id": 12861981,
"node_id": "MDQ6VXNlcjEyODYxOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/12861981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchaton",
"html_url": "https://github.com/tchaton",
"followers_url": "https://api.github.com/users/tchaton/followers",
"following_url": "https://api.github.com/users/tchaton/following{/other_user}",
"gists_url": "https://api.github.com/users/tchaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tchaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tchaton/subscriptions",
"organizations_url": "https://api.github.com/users/tchaton/orgs",
"repos_url": "https://api.github.com/users/tchaton/repos",
"events_url": "https://api.github.com/users/tchaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/tchaton/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"good idea. i aggree with the second plan",
"If you give it a try, please tell me in the loop : [email protected]",
"There is Albert now https://arxiv.org/abs/1909.11942 which seems to be even better. It isn't based on KD.",
"anthor solution is here. [https://github.com/intersun/PKD-for-BERT-Model-Compression](url). Here is a awesome about distillaiton: [https://github.com/dkozlov/awesome-knowledge-distillation](url)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,577 | 1,577 | NONE | null | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
Based on https://medium.com/huggingface/distilbert-8cf3380435b5, you are using KL_loss to train the student from the teacher.
You can do something a little bit different there.
Loss = KL_loss(teacher, student) where the teacher is right, and CE(student, ground truth) where the teacher is wrong.
Therefore, the student should converge to be better than the teacher by not learning from its mistakes.
Or even better, you could correct the teacher by artificially injecting the ground truth:
replace the ground-truth class probability in the teacher prediction with 1 and then renormalize
by the sum of probabilities (a sort of artificial smoothing).
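A minimal PyTorch sketch of the first variant (the function name and the temperature value are illustrative assumptions, not the DistilBERT training code):
```python
import torch
import torch.nn.functional as F

def selective_distillation_loss(student_logits, teacher_logits, labels, T=2.0):
    # Distill from the teacher only where it is already correct;
    # fall back to cross-entropy on the gold label where it is wrong.
    teacher_correct = teacher_logits.argmax(dim=-1).eq(labels)          # (batch,)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="none",
    ).sum(dim=-1) * (T ** 2)                                            # (batch,)
    ce = F.cross_entropy(student_logits, labels, reduction="none")      # (batch,)
    return torch.where(teacher_correct, kl, ce).mean()
```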
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->
Improve DistilBERT performance.
## Additional context
<!-- Add any other context or screenshots about the feature request here. --> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1151/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1151/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1150/comments | https://api.github.com/repos/huggingface/transformers/issues/1150/events | https://github.com/huggingface/transformers/issues/1150 | 487,105,629 | MDU6SXNzdWU0ODcxMDU2Mjk= | 1,150 | What is the relationship between `run_lm_finetuning.py` and the scripts in `lm_finetuning`? | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! The folder `lm_finetuning` is especially targeted at BERT. It gives details on two different losses that were used to pre-train BERT: the masked language modeling objective (MLM) and the next sentence prediction objective (NSP). It gives several insights to BERT's fine-tuning.\r\n\r\nThe file `run_lm_finetuning`, on the other hand, showcases how to fine-tune language modeling on several models: BERT, GPT, GPT-2, and RoBERTa. It only uses a single objective; MLM for BERT and RoBERTa and CLM (causal language modeling) for GPT and GPT-2.",
"How can we fine-tune on the next sentence prediction task? I did not find the `lm_finetuning` files. Thank you.",
"Hi @JiajunBao, these scripts were community maintained and have since been removed. We do not have any script working on the next sentence prediction task. I believe the `lm_finetuning` files were last up to date in 1.1.0, so you may look [here](https://github.com/huggingface/transformers/tree/1.1.0/examples/lm_finetuning).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,577 | 1,577 | CONTRIBUTOR | null | ## ❓ Questions & Help
It looks like there are now two scripts for running LM fine-tuning. While `run_lm_finetuning` seems to be newer, the documentation in `lm_finetuning` seems to indicate that there is more subtlety to generating the right data for performing LM fine-tuning in the BERT format. Does the new script take this into account?
Sorry if I'm missing something obvious! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1149/comments | https://api.github.com/repos/huggingface/transformers/issues/1149/events | https://github.com/huggingface/transformers/issues/1149 | 487,058,960 | MDU6SXNzdWU0ODcwNTg5NjA= | 1,149 | Closing bracket is missing in token_counts.py for DistilBERT | {
"login": "tomohideshibata",
"id": 16042472,
"node_id": "MDQ6VXNlcjE2MDQyNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16042472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomohideshibata",
"html_url": "https://github.com/tomohideshibata",
"followers_url": "https://api.github.com/users/tomohideshibata/followers",
"following_url": "https://api.github.com/users/tomohideshibata/following{/other_user}",
"gists_url": "https://api.github.com/users/tomohideshibata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomohideshibata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomohideshibata/subscriptions",
"organizations_url": "https://api.github.com/users/tomohideshibata/orgs",
"repos_url": "https://api.github.com/users/tomohideshibata/repos",
"events_url": "https://api.github.com/users/tomohideshibata/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomohideshibata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Indeed, the closing bracket was missing, fixed it with caf1d11! Thanks for the bug report :)."
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): DistilBERT
The problem arises when using:
* [x] the official example scripts: (give details)
I'm trying DistilBERT, and get an "invalid syntax error" when I run `examples/distillation/scripts/token_counts.py`.
A closing bracket seems missing at line 27.
```
parser.add_argument("--data_file", type=str, default="data/dump.bert-base-uncased.pickle",
help="The binarized dataset."
```
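For reference, the fix (merged in caf1d11, per the reply above) presumably amounts to restoring the closing parenthesis:
```python
parser.add_argument("--data_file", type=str, default="data/dump.bert-base-uncased.pickle",
                    help="The binarized dataset.")
```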
## Environment
* OS: Linux
* Python version: 3.6.9
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch): 1.1.0
* Using GPU ? yes
* Distributed or parallel setup? no
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1149/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1148/comments | https://api.github.com/repos/huggingface/transformers/issues/1148/events | https://github.com/huggingface/transformers/pull/1148 | 487,055,088 | MDExOlB1bGxSZXF1ZXN0MzEyMzk1Mjc1 | 1,148 | Documentation auto-deploy | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @LysandreJik!"
] | 1,567 | 1,576 | 1,567 | MEMBER | null | Documentation is now deployed automatically. @thomwolf @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1148",
"html_url": "https://github.com/huggingface/transformers/pull/1148",
"diff_url": "https://github.com/huggingface/transformers/pull/1148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1148.patch",
"merged_at": 1567164497000
} |
https://api.github.com/repos/huggingface/transformers/issues/1147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1147/comments | https://api.github.com/repos/huggingface/transformers/issues/1147/events | https://github.com/huggingface/transformers/issues/1147 | 487,031,183 | MDU6SXNzdWU0ODcwMzExODM= | 1,147 | GPT2-large fails to load the tokenizer | {
"login": "pywirrarika",
"id": 457373,
"node_id": "MDQ6VXNlcjQ1NzM3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/457373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pywirrarika",
"html_url": "https://github.com/pywirrarika",
"followers_url": "https://api.github.com/users/pywirrarika/followers",
"following_url": "https://api.github.com/users/pywirrarika/following{/other_user}",
"gists_url": "https://api.github.com/users/pywirrarika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pywirrarika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pywirrarika/subscriptions",
"organizations_url": "https://api.github.com/users/pywirrarika/orgs",
"repos_url": "https://api.github.com/users/pywirrarika/repos",
"events_url": "https://api.github.com/users/pywirrarika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pywirrarika/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Could you try to install from source (1.2.0)? Is there any warning in your terminal such as \r\n```\r\nModel name 'gpt2-large' was not found in model name list (gpt2, gpt2-medium). We assumed 'gpt2-large' was a path or url but couldn't...\r\n```\r\nor something along those lines?",
"Thank you! Solved with the update. And yes, checking again I got your mentioned error message. \r\n"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🐛 Bug
Using: GPT2-Large
## To Reproduce
When I load the gpt2-large model in the same way as gpt2 and gpt2-medium, I get a NoneType when loading the tokenizer.
```
self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large", bos_token="_start_", unk_token='_unk_', eos_token="_eos_", sep_token="_delimiter_", cls_token="_classify_", pad_token='_pad_' )
self.model = GPT2LMHeadModel.from_pretrained("gpt2-large")
```
At this point in the debugger, `self.tokenizer == None` is True.
The issue is clear when I try to use the tokenizer.
```
File "gpt2_train.py", line 70, in load
num_added_toks = self.tokenizer.add_special_tokens(special_tokens_dict)
AttributeError: 'NoneType' object has no attribute 'add_special_tokens'
```
## Expected behavior
Get a tokenizer object
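As a stop-gap on older releases (where, as the traceback suggests, from_pretrained logged a warning and returned None for unknown names instead of raising), a defensive check makes the failure explicit; the error message here is only illustrative:
```python
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
if tokenizer is None:
    # gpt2-large was only added to the model name list in pytorch-transformers 1.2.0
    raise ValueError("Could not load 'gpt2-large'; upgrade pytorch-transformers to >= 1.2.0")
```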
## Environment
* OS: Linux
* Python version: 3.7.3
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): 1.1.0 (last github pull)
* Using GPU ? yes
* Distributed or parallel setup? no
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1146/comments | https://api.github.com/repos/huggingface/transformers/issues/1146/events | https://github.com/huggingface/transformers/issues/1146 | 486,965,614 | MDU6SXNzdWU0ODY5NjU2MTQ= | 1,146 | Attention values occasionally exceed 1 in BertModel | {
"login": "Akella17",
"id": 16236287,
"node_id": "MDQ6VXNlcjE2MjM2Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/16236287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Akella17",
"html_url": "https://github.com/Akella17",
"followers_url": "https://api.github.com/users/Akella17/followers",
"following_url": "https://api.github.com/users/Akella17/following{/other_user}",
"gists_url": "https://api.github.com/users/Akella17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Akella17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akella17/subscriptions",
"organizations_url": "https://api.github.com/users/Akella17/orgs",
"repos_url": "https://api.github.com/users/Akella17/repos",
"events_url": "https://api.github.com/users/Akella17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Akella17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thomwolf Did someone looking into this issue?",
"No and I'm afraid we don't really have the bandwidth for that at the moment.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think this is related to a feature of Dropout applied to the attention values. Dropout will scale up the values that are not zeroed out which causes the problem you described. See this [pytorch/issues/5752](https://github.com/pytorch/pytorch/issues/5752). \r\nSetting the model to eval mode should produce normal attention values."
] | 1,567 | 1,582 | 1,578 | NONE | null | ```Python
outputs = self.model(x, attention_mask = x_mask) # Models outputs are now tuples
print(outputs[2][-1].max())
print((outputs[2][-1]>1).sum().item()) # Number of attention values > 1
print((outputs[2][-1]>-1).sum().item()) # Total number of attention values
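# Note (per the replies): in train mode, attention dropout rescales the kept
# weights by 1/(1 - p), so individual attention values can exceed 1.
# Calling self.model.eval() first yields properly normalized probabilities.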
```
```
tensor(1.0750, device='cuda:7', grad_fn=<MaxBackward1>)
1545
480000
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1146/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1145/comments | https://api.github.com/repos/huggingface/transformers/issues/1145/events | https://github.com/huggingface/transformers/issues/1145 | 486,929,919 | MDU6SXNzdWU0ODY5Mjk5MTk= | 1,145 | How to finetune GPT2 | {
"login": "alecalma",
"id": 17485593,
"node_id": "MDQ6VXNlcjE3NDg1NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/17485593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alecalma",
"html_url": "https://github.com/alecalma",
"followers_url": "https://api.github.com/users/alecalma/followers",
"following_url": "https://api.github.com/users/alecalma/following{/other_user}",
"gists_url": "https://api.github.com/users/alecalma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alecalma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alecalma/subscriptions",
"organizations_url": "https://api.github.com/users/alecalma/orgs",
"repos_url": "https://api.github.com/users/alecalma/repos",
"events_url": "https://api.github.com/users/alecalma/events{/privacy}",
"received_events_url": "https://api.github.com/users/alecalma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, we have an example to fine-tune several models on [language modeling here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py).\r\nYou can look into GPT-2's training on the CLM task, which is done on WikiText-2 in this example.",
"@LysandreJik would you please provide an example of usage? \r\nIn the code you mentioned WikiText-2 only in doctoring.\r\nI believe this input file is a text file without any new line, right?\r\nCan't we pass an input file, with one sentence per line?",
"Good catch, it was initially made for WikiText-2 but it was generalized to be used with any text file. ~I'll add an example of usage shortly in our Documentation section.~ An example is now available in the [documentation](https://huggingface.co/pytorch-transformers/examples.html#causal-lm-fine-tuning-on-gpt-gpt-2-masked-lm-fine-tuning-on-bert-roberta).\r\n\r\nYou can run it like so:\r\n```bash\r\npython run_lm_finetuning.py \\\r\n --train_data_file=$TEXT_FILE \\\r\n --output_dir=$OUTPUT_DIRECTORY \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train\r\n```\r\n\r\nYou don't need to remove any newline in your text file, it all depends on what you're looking for. If you're keeping the line returns, the model will learn to generate line returns as well.\r\n\r\nYou can easily change the way the model inputs are built by changing the `TextDataset` class.\r\nRight now, with:\r\n\r\n```py\r\nwhile len(tokenized_text) >= block_size: # Truncate in block of block_size\r\n self.examples.append(tokenizer.add_special_tokens_single_sentence(tokenized_text[:block_size]))\r\n tokenized_text = tokenized_text[block_size:]\r\n```\r\n\r\nWe are simply creating token lists (of size `block_size`) that will then be fed to the model. We are not doing any special preprocessing (such as removing the line returns).",
"@LysandreJik Great thanks.\r\nThe current version of ```TextDataset``` class will concat text from different articles (if any) together, right? I mean there is no notion of separate documents (articles) and it's all a continious collection of tokens?\r\n",
"That's true. If you're looking to get the best prediction out of it, you should be careful that unrelated pieces of text are not concatenated in a single input. We didn't do it in that example for simplicity's sake.",
"@LysandreJik in Line 76 of the code:\r\n```\r\nself.examples.append(tokenizer.add_special_tokens_single_sentence(tokenized_text[:block_size]))\r\n```\r\n\r\nIf models other than Bert is used, then the tokenizer does not make use of special tokens, right? It is only applicable for Bert",
"Both BERT and RoBERTa use special tokens. For GPT and GPT-2, no special token will be added using this method, since, as you said, they do not make use of special tokens.",
"In the code you mentioned that we might want to add model specific padding. I wonder if got-2 has padding implemented? if yes, does it accept right-side zero padding similar to BERT?\r\nI want to finetune gpt-2 on a dataset which each instance length is generally less than 65 tokens, I want to make all the same length by adding 0 padding up to max_length of 128.\r\nany idea?",
"How we can add a [CLS] token to beginning of every inputs for gpt2 (and add it to vocabulary) and fine-tune it? \r\nI see an example of adding [CLS] in ```modeling_gpt2.py``` for the ```GPT2DoubleHeadsModel``` class. I wonder if we can finetune gpt2 with added [CLS] token?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> In the code you mentioned that we might want to add model specific padding. I wonder if got-2 has padding implemented? if yes, does it accept right-side zero padding similar to BERT?\r\n> I want to finetune gpt-2 on a dataset which each instance length is generally less than 65 tokens, I want to make all the same length by adding 0 padding up to max_length of 128.\r\n> any idea?\r\n\r\nI think you can use ANY tokens for padding as GPT-2 is causal. You just need to mask out these positions when calculating loss.",
"https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py : this link doesn't seem to exist anymore? How do I finetune a GPT-2 on my custom data?",
"@y12uc231 The examples folder was reorganized to group by framework and task. You can now find examples for finetuning pytorch models on language modeling tasks [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). As the README notes, legacy scripts can be found [here](https://github.com/huggingface/transformers/tree/main/examples/legacy). ",
"Sounds great, thanks!\r\n\r\nWhen I was trying to use the script above there is an option that says \"--model_type MODEL_TYPE If training from scratch..\", does it train the model from scratch or only finetunes it?",
"In the [legacy language modeling script](https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py), to finetune, pass the checkpoint you wish to use with the `model_name_or_path` option. To train from scratch use the `model_type` option and leave `model_name_or_path` as `None`."
] | 1,567 | 1,680 | 1,573 | NONE | null | ## ❓ Questions & Help
Hi all,
I would like to finetune the pretrained gpt2 model on a newspapers dataset. Do you know how that would be possible? I haven't found any training script for gpt2.
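(For reference, a minimal causal-LM training step with this library — a sketch only, with dataset handling omitted and the sentence below a stand-in for one batch of newspaper text:)
```python
import torch
from pytorch_transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

text = "An example newspaper sentence."       # stand-in for one training batch
input_ids = torch.tensor([tokenizer.encode(text)])

loss = model(input_ids, labels=input_ids)[0]  # LM loss; the model shifts labels internally
loss.backward()
optimizer.step()
```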
Thanks a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1145/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1144/comments | https://api.github.com/repos/huggingface/transformers/issues/1144/events | https://github.com/huggingface/transformers/issues/1144 | 486,882,532 | MDU6SXNzdWU0ODY4ODI1MzI= | 1,144 | where can i assign step in function lr_lambda of Class WramupLinearSchedule? | {
"login": "lsy641",
"id": 26696711,
"node_id": "MDQ6VXNlcjI2Njk2NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26696711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsy641",
"html_url": "https://github.com/lsy641",
"followers_url": "https://api.github.com/users/lsy641/followers",
"following_url": "https://api.github.com/users/lsy641/following{/other_user}",
"gists_url": "https://api.github.com/users/lsy641/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsy641/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsy641/subscriptions",
"organizations_url": "https://api.github.com/users/lsy641/orgs",
"repos_url": "https://api.github.com/users/lsy641/repos",
"events_url": "https://api.github.com/users/lsy641/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsy641/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, to use a scheduler you have to tell it when to perform an optimization step, as detailed on the [pytorch documentation](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate). It increases the step by one every time you call `scheduler.step()`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,567 | 1,572 | 1,572 | NONE | null | As far as I know, LambdaLR's get_lr function receives last_epoch as a parameter, so where does it get the step of the current training?
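For context, `LambdaLR` keeps an internal `last_epoch` counter that is incremented each time `scheduler.step()` is called, and that counter is what is passed to `lr_lambda` as `step`. A minimal usage sketch (`model` and `loader` are placeholders):
```python
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=1000)

for batch in loader:
    loss = model(batch)
    loss.backward()
    optimizer.step()
    scheduler.step()  # advances the internal counter; the new value reaches lr_lambda as `step`
```
The scheduler class in question: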
```python
class WarmupLinearSchedule(LambdaLR):
    """ Linear warmup and then linear decay.
        Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps.
        Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps.
    """
    def __init__(self, optimizer, warmup_steps, t_total, last_epoch=-1):
        self.warmup_steps = warmup_steps
        self.t_total = t_total
        super(WarmupLinearSchedule, self).__init__(optimizer, self.lr_lambda, last_epoch=last_epoch)

    def lr_lambda(self, step):
        if step < self.warmup_steps:
            return float(step) / float(max(1, self.warmup_steps))
        return max(0.0, float(self.t_total - step) / float(max(1.0, self.t_total - self.warmup_steps)))
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1143/comments | https://api.github.com/repos/huggingface/transformers/issues/1143/events | https://github.com/huggingface/transformers/issues/1143 | 486,882,075 | MDU6SXNzdWU0ODY4ODIwNzU= | 1,143 | Why still using old implementation of apex fp16 | {
"login": "anhnt170489",
"id": 24732444,
"node_id": "MDQ6VXNlcjI0NzMyNDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24732444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anhnt170489",
"html_url": "https://github.com/anhnt170489",
"followers_url": "https://api.github.com/users/anhnt170489/followers",
"following_url": "https://api.github.com/users/anhnt170489/following{/other_user}",
"gists_url": "https://api.github.com/users/anhnt170489/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anhnt170489/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anhnt170489/subscriptions",
"organizations_url": "https://api.github.com/users/anhnt170489/orgs",
"repos_url": "https://api.github.com/users/anhnt170489/repos",
"events_url": "https://api.github.com/users/anhnt170489/events{/privacy}",
"received_events_url": "https://api.github.com/users/anhnt170489/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"AFAIK no particular reason, feel free to open a PR"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🚀 Feature
According to the Nvidia apex fp16 documentation: https://nvidia.github.io/apex/amp.html
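(For reference, a minimal sketch of the newer amp-based API; `model`, `optimizer`, and `loader` are placeholders:)
```python
from apex import amp

model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for batch in loader:
    loss = model(batch)
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()  # scales the loss to avoid fp16 gradient underflow
    optimizer.step()
    optimizer.zero_grad()
```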
The new apex version's implementation doesn't require wrapping FusedAdam in FP16_Optimizer, so I wonder why the team still keeps the old implementation. Is there any special reason? If not, I will make a pull request updating the implementation to the new apex version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1142/comments | https://api.github.com/repos/huggingface/transformers/issues/1142/events | https://github.com/huggingface/transformers/issues/1142 | 486,862,438 | MDU6SXNzdWU0ODY4NjI0Mzg= | 1,142 | FP16_Optimizer is not an Optimizer when fp_16 | {
"login": "anhnt170489",
"id": 24732444,
"node_id": "MDQ6VXNlcjI0NzMyNDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24732444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anhnt170489",
"html_url": "https://github.com/anhnt170489",
"followers_url": "https://api.github.com/users/anhnt170489/followers",
"following_url": "https://api.github.com/users/anhnt170489/following{/other_user}",
"gists_url": "https://api.github.com/users/anhnt170489/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anhnt170489/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anhnt170489/subscriptions",
"organizations_url": "https://api.github.com/users/anhnt170489/orgs",
"repos_url": "https://api.github.com/users/anhnt170489/repos",
"events_url": "https://api.github.com/users/anhnt170489/events{/privacy}",
"received_events_url": "https://api.github.com/users/anhnt170489/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this issue should be fixed on master (even though I'm not exactly sure which script you are referring to).",
"I already fixed this in my pull request. This error is caused by FP16_Optimizer, by using new Apex implementation, we deprecated FP16_Optimizer, so this bug is no longer issued",
"Ok closing the issue then, thanks."
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🐛 Bug
When using fp16, the scheduler should be constructed on the inner (wrapped) optimizer:
```python
scheduler = WarmupLinearSchedule(optimizer.optimizer, warmup_steps=args.warmup_steps,
                                 t_total=num_train_optimization_steps)
```
If not, it fails with `TypeError: FP16_Optimizer is not an Optimizer`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1141/comments | https://api.github.com/repos/huggingface/transformers/issues/1141/events | https://github.com/huggingface/transformers/pull/1141 | 486,798,877 | MDExOlB1bGxSZXF1ZXN0MzEyMTg5MDQw | 1,141 | Small modification of comment in the run_glue.py example | {
"login": "Lawiss",
"id": 30115537,
"node_id": "MDQ6VXNlcjMwMTE1NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/30115537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lawiss",
"html_url": "https://github.com/Lawiss",
"followers_url": "https://api.github.com/users/Lawiss/followers",
"following_url": "https://api.github.com/users/Lawiss/following{/other_user}",
"gists_url": "https://api.github.com/users/Lawiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lawiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lawiss/subscriptions",
"organizations_url": "https://api.github.com/users/Lawiss/orgs",
"repos_url": "https://api.github.com/users/Lawiss/repos",
"events_url": "https://api.github.com/users/Lawiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lawiss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, missed that one, thank you."
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | Add RoBERTa to the comment, as it was not explicit that RoBERTa doesn't use token_type_ids. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1141",
"html_url": "https://github.com/huggingface/transformers/pull/1141",
"diff_url": "https://github.com/huggingface/transformers/pull/1141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1141.patch",
"merged_at": 1567082611000
} |
https://api.github.com/repos/huggingface/transformers/issues/1140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1140/comments | https://api.github.com/repos/huggingface/transformers/issues/1140/events | https://github.com/huggingface/transformers/issues/1140 | 486,783,525 | MDU6SXNzdWU0ODY3ODM1MjU= | 1,140 | Can't Using Binarization Script for DistilBERT | {
"login": "SreeramV181",
"id": 35638026,
"node_id": "MDQ6VXNlcjM1NjM4MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/35638026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SreeramV181",
"html_url": "https://github.com/SreeramV181",
"followers_url": "https://api.github.com/users/SreeramV181/followers",
"following_url": "https://api.github.com/users/SreeramV181/following{/other_user}",
"gists_url": "https://api.github.com/users/SreeramV181/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SreeramV181/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SreeramV181/subscriptions",
"organizations_url": "https://api.github.com/users/SreeramV181/orgs",
"repos_url": "https://api.github.com/users/SreeramV181/repos",
"events_url": "https://api.github.com/users/SreeramV181/events{/privacy}",
"received_events_url": "https://api.github.com/users/SreeramV181/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @SreeramV181,\r\nit should be fixed in commit 803c1cc4eacd38f1b854578d7d717b5e4a1ada47.\r\nThanks for pointing that out!\r\nVictor"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## 🐛 Bug
I'm currently using DistilBERT and running into issues when I run scripts/binarized_data.py. I get the following error:
```
Traceback (most recent call last):
  File "scripts/binarized_data.py", line 25, in <module>
    from ..utils import logger
ValueError: attempted relative import beyond top-level package
```
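For anyone hitting the same error: relative imports fail when a file inside a package is executed directly as a script. A common workaround — assuming the repository root is the working directory and `scripts` is importable as a package — is to run it as a module:
```bash
python -m scripts.binarized_data [args...]
```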
I haven't modified anything within the package. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1139/comments | https://api.github.com/repos/huggingface/transformers/issues/1139/events | https://github.com/huggingface/transformers/pull/1139 | 486,767,094 | MDExOlB1bGxSZXF1ZXN0MzEyMTYzMzMy | 1,139 | Need multiple capabilities | {
"login": "SreeramV181",
"id": 35638026,
"node_id": "MDQ6VXNlcjM1NjM4MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/35638026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SreeramV181",
"html_url": "https://github.com/SreeramV181",
"followers_url": "https://api.github.com/users/SreeramV181/followers",
"following_url": "https://api.github.com/users/SreeramV181/following{/other_user}",
"gists_url": "https://api.github.com/users/SreeramV181/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SreeramV181/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SreeramV181/subscriptions",
"organizations_url": "https://api.github.com/users/SreeramV181/orgs",
"repos_url": "https://api.github.com/users/SreeramV181/repos",
"events_url": "https://api.github.com/users/SreeramV181/events{/privacy}",
"received_events_url": "https://api.github.com/users/SreeramV181/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=h1) Report\n> Merging [#1139](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=desc) into [generative-finetuning](https://codecov.io/gh/huggingface/pytorch-transformers/commit/529a16dec6cc9bfcf8954a1b16546960f2fab6fa?src=pr&el=desc) will **increase** coverage by `0.87%`.\n> The diff coverage is `96.42%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## generative-finetuning #1139 +/- ##\n=========================================================\n+ Coverage 79.61% 80.48% +0.87% \n=========================================================\n Files 42 46 +4 \n Lines 6918 7411 +493 \n=========================================================\n+ Hits 5508 5965 +457 \n- Misses 1410 1446 +36\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.12% <ø> (-0.42%)` | :arrow_down: |\n| [pytorch\\_transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3JvYmVydGEucHk=) | `95.37% <ø> (-0.93%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfcm9iZXJ0YS5weQ==) | `75.89% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `82.14% <ø> (-1.28%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.66% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <100%> (ø)` | |\n| [pytorch\\_transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `79.01% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <100%> (ø)` | :arrow_up: |\n| [...torch\\_transformers/tests/tokenization\\_bert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: |\n| ... 
and [20 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=footer). Last update [529a16d...e0caab0](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1139?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,567 | 1,567 | 1,567 | NONE | null | Need both generative fine-tuning and distillation capabilities. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1139",
"html_url": "https://github.com/huggingface/transformers/pull/1139",
"diff_url": "https://github.com/huggingface/transformers/pull/1139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1139.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1138/comments | https://api.github.com/repos/huggingface/transformers/issues/1138/events | https://github.com/huggingface/transformers/issues/1138 | 486,734,521 | MDU6SXNzdWU0ODY3MzQ1MjE= | 1,138 | loss explosion | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"just because of a large learning rate"
] | 1,567 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
I am using a Bert-CRF model for a Named Entity Recognition task, feeding the average of the last four hidden layers into the CRF. But the loss increases and becomes NaN within a few batches. Has anyone met this problem before? Any suggestions would be appreciated!
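Roughly, the layer-averaging setup looks like this (a simplified sketch with illustrative names, not my exact code):
```python
import torch
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# output_hidden_states=True makes the model also return every layer's output
model = BertModel.from_pretrained('bert-base-cased', output_hidden_states=True)

input_ids = torch.tensor([tokenizer.encode("A short example sentence")])
outputs = model(input_ids)
hidden_states = outputs[-1]                               # embeddings + one tensor per layer
crf_input = torch.stack(hidden_states[-4:]).mean(dim=0)   # (batch, seq, hidden), fed to the CRF
```
 | {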
"url": "https://api.github.com/repos/huggingface/transformers/issues/1138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1137/comments | https://api.github.com/repos/huggingface/transformers/issues/1137/events | https://github.com/huggingface/transformers/issues/1137 | 486,546,360 | MDU6SXNzdWU0ODY1NDYzNjA= | 1,137 | Cannot import DistilBert classes | {
"login": "delip",
"id": 347398,
"node_id": "MDQ6VXNlcjM0NzM5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/347398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/delip",
"html_url": "https://github.com/delip",
"followers_url": "https://api.github.com/users/delip/followers",
"following_url": "https://api.github.com/users/delip/following{/other_user}",
"gists_url": "https://api.github.com/users/delip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/delip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/delip/subscriptions",
"organizations_url": "https://api.github.com/users/delip/orgs",
"repos_url": "https://api.github.com/users/delip/repos",
"events_url": "https://api.github.com/users/delip/events{/privacy}",
"received_events_url": "https://api.github.com/users/delip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My bad. Forgot to ignore pip cache\r\n```\r\npip install git+https://github.com/huggingface/pytorch-transformers --no-cache-dir\r\n```"
] | 1,567 | 1,567 | 1,567 | NONE | null | Tried installing from the master, and couldn't do it.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1136/comments | https://api.github.com/repos/huggingface/transformers/issues/1136/events | https://github.com/huggingface/transformers/pull/1136 | 486,509,172 | MDExOlB1bGxSZXF1ZXN0MzExOTU0MDg1 | 1,136 | swap order of optimizer.step() and scheduler.step() | {
"login": "adai183",
"id": 13679375,
"node_id": "MDQ6VXNlcjEzNjc5Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/13679375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adai183",
"html_url": "https://github.com/adai183",
"followers_url": "https://api.github.com/users/adai183/followers",
"following_url": "https://api.github.com/users/adai183/following{/other_user}",
"gists_url": "https://api.github.com/users/adai183/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adai183/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adai183/subscriptions",
"organizations_url": "https://api.github.com/users/adai183/orgs",
"repos_url": "https://api.github.com/users/adai183/repos",
"events_url": "https://api.github.com/users/adai183/events{/privacy}",
"received_events_url": "https://api.github.com/users/adai183/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"indeed, thanks @adai183 ",
"🤗"
] | 1,567 | 1,567 | 1,567 | CONTRIBUTOR | null | The current code results in the following warning:
```
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
```
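The fix, which this PR applies, is simply to call the two in the documented order — schematically:
```python
loss.backward()
optimizer.step()   # update the parameters first
scheduler.step()   # then advance the learning-rate schedule
```
 | {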
"url": "https://api.github.com/repos/huggingface/transformers/issues/1136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1136",
"html_url": "https://github.com/huggingface/transformers/pull/1136",
"diff_url": "https://github.com/huggingface/transformers/pull/1136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1136.patch",
"merged_at": 1567022453000
} |
https://api.github.com/repos/huggingface/transformers/issues/1135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1135/comments | https://api.github.com/repos/huggingface/transformers/issues/1135/events | https://github.com/huggingface/transformers/pull/1135 | 486,476,717 | MDExOlB1bGxSZXF1ZXN0MzExOTI3NzU2 | 1,135 | distilbert: fix number of hidden_size | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CI fails related or unrelated 🤔",
"Yes, good catch @stefan-it! Thanks"
] | 1,567 | 1,567 | 1,567 | COLLABORATOR | null | Hi,
this PR corrects the return value of the `hidden_size` function (which should be the dimension size, as it is used in all other models) :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1135",
"html_url": "https://github.com/huggingface/transformers/pull/1135",
"diff_url": "https://github.com/huggingface/transformers/pull/1135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1135.patch",
"merged_at": 1567022413000
} |
https://api.github.com/repos/huggingface/transformers/issues/1134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1134/comments | https://api.github.com/repos/huggingface/transformers/issues/1134/events | https://github.com/huggingface/transformers/issues/1134 | 486,463,871 | MDU6SXNzdWU0ODY0NjM4NzE= | 1,134 | Schedulers cause memory accumulation across folds in cross-validation? | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am facing the same issue.When I use the WarmupLinearSchedule and the 7th epoch training , I get a CUDA out of memory issue",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Running `import gc`, then`gc.collect()` and emptying the GPU’s cache should solve the issue temporarily. See #1742 "
] | 1,567 | 1,575 | 1,575 | CONTRIBUTOR | null | ## ❓ Questions & Help
I am facing a strange issue when using the schedulers available in this library within a cross-validation loop. Basically, in each fold, I initialize a new model, optimizer, and scheduler. GPU memory accumulates until I eventually get a CUDA out of memory issue.
The simplest example I could come up with to reproduce the error is:
```python
import torch
from pytorch_transformers import WarmupConstantSchedule, WarmupCosineSchedule, WarmupLinearSchedule, WarmupCosineWithHardRestartsSchedule
# In my actual project, this is a for loop over the k-folds of k-fold cross-validation.
# In this example I use a while just to demonstrate the OOM error.
while True:
    net = torch.nn.Linear(10000, 10000)
    net = net.cuda()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    scheduler = WarmupCosineWithHardRestartsSchedule(optimizer, 1, 1000)
    # I also tried all the other schedulers. Same issue.
    # scheduler = WarmupConstantSchedule(optimizer, 1)
    # scheduler = WarmupCosineSchedule(optimizer, 1, 1000)
    # scheduler = WarmupLinearSchedule(optimizer, 1, 1000)
    del net, optimizer, scheduler
```
This will run until it (very quickly) uses up all 12GB on my Titan XP GPU. To make sure it was truly the initialization of the scheduler, I also tested
```python
import torch
from pytorch_transformers import WarmupCosineWithHardRestartsSchedule
while True:
    net = torch.nn.Linear(10000, 10000)
    net = net.cuda()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    del net, optimizer
```
And did not see the memory accumulation or OOM error.
My questions are:
- Is this a known problem?
- Am I doing something dumb?
- How might I use a new scheduler for each fold of k-fold cross-validation in a way that doesn't lead to this issue? (A candidate workaround is sketched below.)
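Following the suggestion in the comments, a candidate workaround is to force garbage collection and clear the CUDA cache at the end of each fold (an illustrative sketch):
```python
import gc

import torch

# ... at the end of each fold ...
del net, optimizer, scheduler
gc.collect()               # collect objects kept alive by reference cycles
torch.cuda.empty_cache()   # release cached GPU blocks back to the allocator
```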
Thanks a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1134/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1133/comments | https://api.github.com/repos/huggingface/transformers/issues/1133/events | https://github.com/huggingface/transformers/issues/1133 | 486,378,197 | MDU6SXNzdWU0ODYzNzgxOTc= | 1,133 | GPT2 Tokenizer decoding fails when the added tokens include a space | {
"login": "harkous",
"id": 5602332,
"node_id": "MDQ6VXNlcjU2MDIzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5602332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harkous",
"html_url": "https://github.com/harkous",
"followers_url": "https://api.github.com/users/harkous/followers",
"following_url": "https://api.github.com/users/harkous/following{/other_user}",
"gists_url": "https://api.github.com/users/harkous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harkous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harkous/subscriptions",
"organizations_url": "https://api.github.com/users/harkous/orgs",
"repos_url": "https://api.github.com/users/harkous/repos",
"events_url": "https://api.github.com/users/harkous/events{/privacy}",
"received_events_url": "https://api.github.com/users/harkous/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can second this, I am seeing the same error",
"Indeed, there is a mismatch between added tokens and byte-level BPE tokens here. Fixing it with #1174.",
"Found that you can replace the space with `Ġ`. `Ċ` can replace `\\n`."
] | 1,566 | 1,648 | 1,567 | CONTRIBUTOR | null | ## 🐛 Bug
After adding a new token that contains a space to the GPT2 tokenizer, the tokenizer produces an error at decoding time (see example code below). My current workaround is to preprocess that token to remove spaces before adding it and to postprocess the token after decoding. But I thought I'd share this in case this is something that the library can warn against (e.g. added tokens should not include spaces) or even support.
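Roughly, that workaround looks like this (a simplified sketch; the underscore scheme is only illustrative):
```python
from pytorch_transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

special = "special token"
placeholder = special.replace(" ", "_")   # pre-process: drop the space before adding the token
tokenizer.add_tokens([placeholder])

encoded = tokenizer.encode(placeholder)
decoded = tokenizer.decode(encoded).replace(placeholder, special)  # post-process: restore the space
```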
Model I am using (Bert, XLNet....): GPT2
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the following code:
```python
from pytorch_transformers.tokenization_gpt2 import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.add_tokens(["special token"])
encoded = tokenizer.encode("special token")
tokenizer.decode(encoded)
```
2. Currently, I get the error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-f47101f92e14> in <module>
----> 1 tokenizer.decode(encoded)
~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_utils.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces)
665 token_ids, skip_special_tokens=skip_special_tokens
666 )
--> 667 text = self.convert_tokens_to_string(filtered_tokens)
668 if clean_up_tokenization_spaces:
669 text = self.clean_up_tokenization(text)
~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_gpt2.py in convert_tokens_to_string(self, tokens)
187 """ Converts a sequence of tokens (string) in a single string. """
188 text = ''.join(tokens)
--> 189 text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors)
190 return text
191
~/miniconda3/lib/python3.7/site-packages/pytorch_transformers/tokenization_gpt2.py in <listcomp>(.0)
187 """ Converts a sequence of tokens (string) in a single string. """
188 text = ''.join(tokens)
--> 189 text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors)
190 return text
191
KeyError: ' '
```
## Expected behavior
I expect the decoder to return the string `"special token"`
## Environment
* OS: OSX
* Python version: 3.7.3
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): master (d06c5a2a0acd8525d969a8f8f5b968ec0ec110b4)
* Using GPU? No
* Distributed or parallel setup? No
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1133/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1132/comments | https://api.github.com/repos/huggingface/transformers/issues/1132/events | https://github.com/huggingface/transformers/issues/1132 | 486,282,132 | MDU6SXNzdWU0ODYyODIxMzI= | 1,132 | How to split consecutive numbers? | {
"login": "ZacBi",
"id": 22130631,
"node_id": "MDQ6VXNlcjIyMTMwNjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/22130631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZacBi",
"html_url": "https://github.com/ZacBi",
"followers_url": "https://api.github.com/users/ZacBi/followers",
"following_url": "https://api.github.com/users/ZacBi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZacBi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZacBi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZacBi/subscriptions",
"organizations_url": "https://api.github.com/users/ZacBi/orgs",
"repos_url": "https://api.github.com/users/ZacBi/repos",
"events_url": "https://api.github.com/users/ZacBi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZacBi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,572 | 1,572 | NONE | null | ## ❓ Questions & Help
In some NER datasets in BIO(ES) format, each digit of a consecutive number string is labeled with its own tag. E.g., "All Jhon need is only 10 yuan" is labeled as "O, PER, O, O, O, O, O, O"; in this case, "10" is labeled as "O, O" (one tag per digit). But in **BertTokenizer** and **PreTrainedTokenizer** I can't find effective parameters to deal with this situation.
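One preprocessing workaround (an illustrative sketch, not an existing tokenizer parameter) is to split digit runs before tokenization, so that each digit can carry its own tag:
```python
import re

def split_digits(text):
    # insert a space between adjacent digits: "only 10 yuan" -> "only 1 0 yuan"
    return re.sub(r'(?<=\d)(?=\d)', ' ', text)

print(split_digits("All Jhon need is only 10 yuan"))
# All Jhon need is only 1 0 yuan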
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1132/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1131/comments | https://api.github.com/repos/huggingface/transformers/issues/1131/events | https://github.com/huggingface/transformers/issues/1131 | 486,262,580 | MDU6SXNzdWU0ODYyNjI1ODA= | 1,131 | mems output in XLNet | {
"login": "drasros",
"id": 16518885,
"node_id": "MDQ6VXNlcjE2NTE4ODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16518885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drasros",
"html_url": "https://github.com/drasros",
"followers_url": "https://api.github.com/users/drasros/followers",
"following_url": "https://api.github.com/users/drasros/following{/other_user}",
"gists_url": "https://api.github.com/users/drasros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drasros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drasros/subscriptions",
"organizations_url": "https://api.github.com/users/drasros/orgs",
"repos_url": "https://api.github.com/users/drasros/repos",
"events_url": "https://api.github.com/users/drasros/events{/privacy}",
"received_events_url": "https://api.github.com/users/drasros/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed I see, we have to add:\r\n- an explanation in XLM's docstring that you should set the model configuration `mem_len` parameter if you want to use the memory (the answer to your main question). You can do for instance `model = XLNetModel.from_pretrained('xlnet-large-cased', mem_len=1024)` if you want a max memory of 1024 tokens. By default the model doesn't use memory (`mem_len = None`)\r\n- a check in the code to avoid the error you are reporting.",
"Thanks for your reply and the new doc!"
] | 1,566 | 1,579 | 1,567 | NONE | null | ## 🐛 Bug
Hi,
I am trying to use the memory of the last forward pass (the `mems` argument) with XLNet. I am getting a tuple of None as the mems output instead of a tuple of tensors. The same code with TransformerXL runs fine. Am I doing anything wrong, or is this a bug? Below is a short code snippet to reproduce the error.
Many thanks,
A
```python
import torch
from pytorch_transformers import TransfoXLTokenizer, TransfoXLModel, XLNetTokenizer, XLNetModel
text = ['This is the first sentence. ', 'And this is another one']
# transformer-XL
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('transfo-xl-wt103')
mems = None
for i in range(2):
    input_ids = torch.tensor(tokenizer.encode(text[i])).unsqueeze(0)
    outputs = model(input_ids, mems=mems)
    mems = outputs[1]
# RUNS OK
# XLNet
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetModel.from_pretrained('xlnet-large-cased')
mems = None
for i in range(2):
    input_ids = torch.tensor(tokenizer.encode(text[i])).unsqueeze(0)
    outputs = model(input_ids, mems=mems)
    mems = outputs[1]
# We get tuple of None in first model output, second forward crashes.
# File "/home/asors/anaconda3/envs/psco/lib/python3.7/site-packages/pytorch_transformers/modeling_xlnet.py", line 858, in forward
# mlen = mems[0].shape[0] if mems is not None else 0
```
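As noted in the comments, the memory is disabled by default (`mem_len=None`), so it has to be enabled when loading the model — a one-line sketch following the maintainers' reply:
```python
model = XLNetModel.from_pretrained('xlnet-large-cased', mem_len=1024)  # keep up to 1024 tokens of memory
```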
Model I am using: XLNet
* OS: Ubuntu
* Python version: 3.7
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): 1.1.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1131/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1130/comments | https://api.github.com/repos/huggingface/transformers/issues/1130/events | https://github.com/huggingface/transformers/issues/1130 | 486,260,503 | MDU6SXNzdWU0ODYyNjA1MDM= | 1,130 | Output of BertModel does not match fixed feature vectors extracted from the last hidden layer | {
"login": "sysuzyx",
"id": 43226406,
"node_id": "MDQ6VXNlcjQzMjI2NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/43226406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sysuzyx",
"html_url": "https://github.com/sysuzyx",
"followers_url": "https://api.github.com/users/sysuzyx/followers",
"following_url": "https://api.github.com/users/sysuzyx/following{/other_user}",
"gists_url": "https://api.github.com/users/sysuzyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sysuzyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sysuzyx/subscriptions",
"organizations_url": "https://api.github.com/users/sysuzyx/orgs",
"repos_url": "https://api.github.com/users/sysuzyx/repos",
"events_url": "https://api.github.com/users/sysuzyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sysuzyx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, have you solved this problem?\r\nAnd does anyone know the order of all_encoder_layers?\r\nThanks.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,573 | 1,573 | NONE | null | I want to finetune BERT for my task. However, the BERT output does not match the fixed feature vectors.
I extracted the fixed feature vectors like this:
I use `extract_features.py` to extract the fixed feature vectors of last hidden layer (layer -1). The command line is below:
`python extract_features.py --input_file=input.txt --output_file=output.json --vocab_file=model_path/vocab.txt --bert_config_file=model_path/bert_config.json --init_checkpoint=model_path/bert_model.ckpt --layers=-1 --max_seq_length=128 --batch_size=8`
I extract the BERT model output as below.
The model-loading code is:
```python
model_dict = model.state_dict()
# load the pretrained released bert model
pretrained_dict = torch.load('pytorch_model.bin')
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# update params
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
```
The model-initialization code is:
```python
class A(nn.Module):
    def __init__(self):
        # (other init code, e.g. super().__init__(), omitted as noted below)
        self.config = BertConfig.from_json_file('config.json')
        self.bert = BertModel(self.config)

    def inference(self, input_ids):
        all_encoder_layers, _ = self.bert(input_ids, token_type_ids=None, attention_mask=input_mask)
        return all_encoder_layers[-1]
```
(I've omitted some irrelevant code.)
Then I output the tensor all_encoder_layers[-1].
all_encoder_layers[-1] doesn't match the feature vectors extracted by `extract_features.py`.
I checked the params in my model: the BERT params have been loaded and are consistent with the pretrained params. The input sequence is also consistent.
Can anybody help me? Are there any settings I forgot?
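(For comparison, the library's canonical loading path, which matches state-dict keys automatically — a minimal sketch, assuming `model_path` is a directory containing `pytorch_model.bin` and `config.json`:)
```python
from pytorch_transformers import BertModel

bert = BertModel.from_pretrained('model_path')
```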
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1130/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1129/comments | https://api.github.com/repos/huggingface/transformers/issues/1129/events | https://github.com/huggingface/transformers/issues/1129 | 486,256,964 | MDU6SXNzdWU0ODYyNTY5NjQ= | 1,129 | Fine-tuning (BERT & RoBERTa) base outperforms large | {
"login": "wahlforss",
"id": 73305,
"node_id": "MDQ6VXNlcjczMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahlforss",
"html_url": "https://github.com/wahlforss",
"followers_url": "https://api.github.com/users/wahlforss/followers",
"following_url": "https://api.github.com/users/wahlforss/following{/other_user}",
"gists_url": "https://api.github.com/users/wahlforss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahlforss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahlforss/subscriptions",
"organizations_url": "https://api.github.com/users/wahlforss/orgs",
"repos_url": "https://api.github.com/users/wahlforss/repos",
"events_url": "https://api.github.com/users/wahlforss/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahlforss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe you need more GPUs and a bigger batch size",
"I have 8 GPUs 2080 rtx, each with 10gb of data. But yeah I use a batch size of 4. However, I need the sequence length of 512 so it is impossible to increase the batch size.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
On all datasets I have used so far, base always outperforms large after fine-tuning. This is true for both BERT and RoBERTa.
Why is that? Am I doing something wrong? Does large require far more epochs to train or a different learning rate? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1129/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1128/comments | https://api.github.com/repos/huggingface/transformers/issues/1128/events | https://github.com/huggingface/transformers/issues/1128 | 486,231,173 | MDU6SXNzdWU0ODYyMzExNzM= | 1,128 | cannot import name 'RobertaConfig | {
"login": "wahlforss",
"id": 73305,
"node_id": "MDQ6VXNlcjczMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahlforss",
"html_url": "https://github.com/wahlforss",
"followers_url": "https://api.github.com/users/wahlforss/followers",
"following_url": "https://api.github.com/users/wahlforss/following{/other_user}",
"gists_url": "https://api.github.com/users/wahlforss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahlforss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahlforss/subscriptions",
"organizations_url": "https://api.github.com/users/wahlforss/orgs",
"repos_url": "https://api.github.com/users/wahlforss/repos",
"events_url": "https://api.github.com/users/wahlforss/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahlforss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Needed to update the pip package... ",
"which pip package did you have to update? Same problem here...\r\n",
"@stellaywu what is your current `transformers` or `pytorch-transformers` version?",
"@LysandreJik it's transformer 2.0.0 ",
"Thatś weird, transformers 2.0.0 works on a clean install in my environment. Could you please double check that the python code you´re running is in the same environment? Something like this:\r\n\r\n```py\r\nimport transformers\r\nprint(transformers.__version__)\r\n# ´2.0.0´\r\n\r\nprint(tranformers.RobertaConfig)\r\n# Does it crash with ´AttributeError: module ´transformers´ has no attribute RobertaConfig´ ?\r\n```",
"you are right, it doesn't. I have probably mixed up environment. Thanks!",
"> @stellaywu what is your current `transformers` or `pytorch-transformers` version?\r\n\r\nDear, I have an error:\r\n`ImportError: cannot import name 'RobertaForQuestionAnswering' from 'pytorch_transformers'`\r\n\r\nActually, I have installed the pytorch_transformers by:\r\n`pip install pytorch-transformers`\r\n\r\nhowever, the error is occurred.\r\n\r\nAny idea for this?\r\n ",
"You should upgrade your transformers version, `RobertaForQuestionAnswering` was probably not present in this early a version:\r\n\r\n```\r\n!pip install transformers torch\r\nfrom transformers import RobertaForQuestionAnswering\r\n```",
"> You should upgrade your transformers version, `RobertaForQuestionAnswering` was probably not present in this early a version:\r\n> \r\n> ```\r\n> !pip install transformers torch\r\n> from transformers import RobertaForQuestionAnswering\r\n> ```\r\n\r\nActually, I use pytorch_transformers not transformers. Could you have any suggests?",
"Installing version v1.1.0 or v1.2.0 of `pytorch-transformers`, I can also import `RobertaConfig`. RoBERTa was added in v1.1.0, so any version earlier than that will not have it.\r\n\r\nIs there a reason you're not using `transformers`? Most models are in `transformers`, as are most features, and a lot of bugs have been solved since `pytorch-transformers`."
] | 1,566 | 1,610 | 1,566 | NONE | null | ## 🐛 Bug
When I run run_glue.py with the RoBERTa model I get an ImportError: cannot import name 'RobertaConfig'.
In fact, I can't run run_glue.py with any model, since the import of RobertaConfig on line 34 fails.
Any ideas why? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1128/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1127/comments | https://api.github.com/repos/huggingface/transformers/issues/1127/events | https://github.com/huggingface/transformers/pull/1127 | 486,208,136 | MDExOlB1bGxSZXF1ZXN0MzExNzA2NjQ5 | 1,127 | DistilBERT | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@VictorSanh Thanks for adding this :heart: (I'm currently adding this model to Flair)\r\n\r\nOne question: the BERT model configuration has a key `hidden_size`. For DilBERT it is now `dim`. Is this change intended 🤔",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=h1) Report\n> Merging [#1127](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/d06c5a2a0acd8525d969a8f8f5b968ec0ec110b4?src=pr&el=desc) will **increase** coverage by `1.1%`.\n> The diff coverage is `96.79%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1127 +/- ##\n=========================================\n+ Coverage 79.61% 80.71% +1.1% \n=========================================\n Files 42 46 +4 \n Lines 6898 7391 +493 \n=========================================\n+ Hits 5492 5966 +474 \n- Misses 1406 1425 +19\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.12% <ø> (-0.42%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `79.01% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.66% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <100%> (ø)` | |\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.98% <100%> (ø)` | :arrow_up: |\n| [...torch\\_transformers/tests/tokenization\\_bert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2JlcnRfdGVzdC5weQ==) | `98.66% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYXV0by5weQ==) | `56.36% <71.42%> (+0.36%)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `94.73% <80%> (-0.21%)` | :arrow_down: |\n| [...ch\\_transformers/tests/tokenization\\_dilbert\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2RpbGJlcnRfdGVzdC5weQ==) | `95.23% <95.23%> (ø)` | |\n| [pytorch\\_transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZGlzdGlsYmVydC5weQ==) | `96.73% <96.73%> (ø)` | |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=footer). Last update [d06c5a2...e7706f5](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1127?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,566 | 1,578 | 1,567 | MEMBER | null | Preparing the release for DistilBERT (smaller, faster, lighter, cheaper version of BERT) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1127",
"html_url": "https://github.com/huggingface/transformers/pull/1127",
"diff_url": "https://github.com/huggingface/transformers/pull/1127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1127.patch",
"merged_at": 1567003389000
} |
https://api.github.com/repos/huggingface/transformers/issues/1126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1126/comments | https://api.github.com/repos/huggingface/transformers/issues/1126/events | https://github.com/huggingface/transformers/issues/1126 | 486,120,054 | MDU6SXNzdWU0ODYxMjAwNTQ= | 1,126 | Bert initialization | {
"login": "Albert-Ma",
"id": 7343136,
"node_id": "MDQ6VXNlcjczNDMxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7343136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Albert-Ma",
"html_url": "https://github.com/Albert-Ma",
"followers_url": "https://api.github.com/users/Albert-Ma/followers",
"following_url": "https://api.github.com/users/Albert-Ma/following{/other_user}",
"gists_url": "https://api.github.com/users/Albert-Ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Albert-Ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Albert-Ma/subscriptions",
"organizations_url": "https://api.github.com/users/Albert-Ma/orgs",
"repos_url": "https://api.github.com/users/Albert-Ma/repos",
"events_url": "https://api.github.com/users/Albert-Ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/Albert-Ma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I think you'll find this [particular issue interesting](https://github.com/huggingface/pytorch-transformers/issues/202).\r\n\r\n[Thomas Wolf's comment](https://github.com/huggingface/pytorch-transformers/issues/202#issuecomment-522613642) in particular may be of help.",
"> Hi, I think you'll find this [particular issue interesting](https://github.com/huggingface/pytorch-transformers/issues/202).\r\n> \r\n> [Thomas Wolf's comment](https://github.com/huggingface/pytorch-transformers/issues/202#issuecomment-522613642) in particular may be of help.\r\n\r\nThanks. But I am not saying to train a language model from scratch, I am saying to train a glue task from scratch. So I think there has much difference between this."
] | 1,566 | 1,567 | 1,567 | NONE | null | When I train a BERT model from scratch, it does not converge and the loss does not decrease, even after trying different learning rates many times.
But it works when I try the TF version. I checked the code and there is not much difference except the initialization; a sketch of the difference I mean is below.
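A sketch from my reading of the two code bases (treat the details as my assumption rather than fact): the PyTorch port draws initial weights from a plain normal distribution, while the TF implementation uses a truncated normal initializer.

```python
import torch.nn as nn

def init_bert_weights(module, initializer_range=0.02):
    # PyTorch port (sketch): plain normal with std = initializer_range
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=initializer_range)
    if isinstance(module, nn.Linear) and module.bias is not None:
        module.bias.data.zero_()
    # The TF implementation instead uses
    # tf.truncated_normal_initializer(stddev=0.02) for the same weights.
```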
So does anybody have any ideas about this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1126/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1125/comments | https://api.github.com/repos/huggingface/transformers/issues/1125/events | https://github.com/huggingface/transformers/issues/1125 | 486,021,648 | MDU6SXNzdWU0ODYwMjE2NDg= | 1,125 | UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1176: character maps to <undefined> | {
"login": "neonrights",
"id": 2925802,
"node_id": "MDQ6VXNlcjI5MjU4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2925802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neonrights",
"html_url": "https://github.com/neonrights",
"followers_url": "https://api.github.com/users/neonrights/followers",
"following_url": "https://api.github.com/users/neonrights/following{/other_user}",
"gists_url": "https://api.github.com/users/neonrights/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neonrights/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neonrights/subscriptions",
"organizations_url": "https://api.github.com/users/neonrights/orgs",
"repos_url": "https://api.github.com/users/neonrights/repos",
"events_url": "https://api.github.com/users/neonrights/events{/privacy}",
"received_events_url": "https://api.github.com/users/neonrights/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looking at [these lines](https://github.com/huggingface/pytorch-transformers/blob/07681b6b5859b630077b742b2f06d440869f17e3/pytorch_transformers/tokenization_gpt2.py#L108-L115), the issue seems to be the file is encoded in utf-8 but read using a different encoder.\r\n\r\nChanging line 112 to `self.encoder = json.load(open(vocab_file, 'r', encoding='utf-8'))` should fix this issue.",
"Thanks this is fixed on master now with #1074",
"Is there a test for which encoder should be used?",
"Encountered same error and had the same doubt. Used 'iso-8859-1' as it suits me almost anytime. Worked just fine. @ChebonRunner "
] | 1,566 | 1,604 | 1,567 | NONE | null | ## 🐛 Bug
```UnicodeDecodeError``` when using vocab file generated by ```GPT2Tokenizer```. Specifically, I created an instance of the ```GPT2Tokenizer``` by calling ```from_pretrained('gpt2')``` then saved the vocab and merges file for that instance to a local directory. When creating a new ```GPT2Tokenizer``` from the saved files I encounter a ```UnicodeDecodeError``` when reading from the vocab file.
Model I am using (Bert, XLNet....): GPT2 Tokenizer
Language I am using the model on (English, Chinese....): N/A
The problem arises when using:
* [x] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
```python
pretrained_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
vocab_file, merges_file = pretrained_tokenizer.save_vocabulary('.')
new_tokenizer = GPT2Tokenizer(vocab_file, merges_file) # <- UnicodeDecodeError occurs here
```
## Expected behavior
I expect ```new_tokenizer``` to initialize a tokenizer with the same behavior as ```pretrained_tokenizer```.
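As a workaround, forcing UTF-8 when reading the saved vocab parses it fine on my machine (a sketch; `vocab_file` is the path returned by `save_vocabulary` above, and I'm assuming the file is actually written as UTF-8):

```python
import json

with open(vocab_file, 'r', encoding='utf-8') as f:
    vocab = json.load(f)  # no UnicodeDecodeError once the encoding is explicit
```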
## Environment
* OS: Windows 10
* Python version: 3.6.8
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): 1.1.0
## Additional context
This seems likely to be a bug in the encoding handling of ```save_vocabulary``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1124/comments | https://api.github.com/repos/huggingface/transformers/issues/1124/events | https://github.com/huggingface/transformers/issues/1124 | 485,975,571 | MDU6SXNzdWU0ODU5NzU1NzE= | 1,124 | XLNet resize embedding size ERROR | {
"login": "Saner3",
"id": 30628796,
"node_id": "MDQ6VXNlcjMwNjI4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/30628796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saner3",
"html_url": "https://github.com/Saner3",
"followers_url": "https://api.github.com/users/Saner3/followers",
"following_url": "https://api.github.com/users/Saner3/following{/other_user}",
"gists_url": "https://api.github.com/users/Saner3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saner3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saner3/subscriptions",
"organizations_url": "https://api.github.com/users/Saner3/orgs",
"repos_url": "https://api.github.com/users/Saner3/repos",
"events_url": "https://api.github.com/users/Saner3/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saner3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should be fixed on master by @LysandreJik's PR!"
] | 1,566 | 1,567 | 1,567 | NONE | null | ## ❓ Questions & Help
I added new tokens to XLNetLMHeadModel and used the resize function:
```
tokenizer.add_tokens(["<token1>", "<token2>"])
model.resize_token_embeddings(len(tokenizer))
```
But when running, the following error occurs:
```
Traceback (most recent call last):
...
File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "../pytorch_transformers/modeling_xlnet.py", line 1059, in forward
logits = self.lm_loss(transformer_outputs[0])
File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "/nas/home/jsun/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1410, in linear
output += bias
RuntimeError: The size of tensor a (32003) must match the size of tensor b (32000) at non-singleton dimension 2
```
It is because `resize_token_embeddings` changes the embedding size and calls the `tie_weights` function to resize the LM head weight, but it forgets to also resize the `bias`. XLNet defines its LM head with `bias=True`:
```
self.lm_loss = nn.Linear(config.d_model, config.n_token, bias=True)
```
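Until that is fixed, a rough workaround that seems to work for me is to pad the bias manually after resizing (a sketch; `model` and `tokenizer` are the objects from the snippet at the top):

```python
import torch
import torch.nn as nn

old_bias = model.lm_loss.bias.data
num_new = len(tokenizer) - old_bias.size(0)
if num_new > 0:
    # Zero-init the bias entries for the newly added tokens (sketch)
    new_bias = torch.cat([old_bias, old_bias.new_zeros(num_new)])
    model.lm_loss.bias = nn.Parameter(new_bias)
    model.lm_loss.out_features = len(tokenizer)
```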
The other models use `bias=False` for their LM heads, which is why only XLNet is affected. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1124/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1123/comments | https://api.github.com/repos/huggingface/transformers/issues/1123/events | https://github.com/huggingface/transformers/issues/1123 | 485,876,617 | MDU6SXNzdWU0ODU4NzY2MTc= | 1,123 | Extracting Features Example | {
"login": "SaschaStenger",
"id": 29093487,
"node_id": "MDQ6VXNlcjI5MDkzNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/29093487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaschaStenger",
"html_url": "https://github.com/SaschaStenger",
"followers_url": "https://api.github.com/users/SaschaStenger/followers",
"following_url": "https://api.github.com/users/SaschaStenger/following{/other_user}",
"gists_url": "https://api.github.com/users/SaschaStenger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaschaStenger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaschaStenger/subscriptions",
"organizations_url": "https://api.github.com/users/SaschaStenger/orgs",
"repos_url": "https://api.github.com/users/SaschaStenger/repos",
"events_url": "https://api.github.com/users/SaschaStenger/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaschaStenger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes it was removed from the repo. I think we’ll add it again (and update it\nto pytorch-transformers) since several people have been missing it.\n\nCc @LysandreJik\n\nOn Tue, 27 Aug 2019 at 17:21, Sascha Stenger <[email protected]>\nwrote:\n\n> ❓ Questions & Help\n>\n> Hello.\n>\n> Sorry, if my question is out of date or i just didn't find it, but i'm\n> looking for the example/extract_features.py\n> that was supposed to be in in this repo (as mentioned in this\n> stackoverflow post\n> <https://stackoverflow.com/questions/55369821/how-to-train-a-neural-network-model-with-bert-embeddings-instead-of-static-embed>)\n> and can't find it anymore. Was it just in an earlier release and got\n> scrapped?\n>\n> Thank you in advance for any help\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/1123?email_source=notifications&email_token=ABYDIHI6ESI3KSNXXUQZFZDQGVA7BA5CNFSM4IQFZBAKYY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4HHV4OEQ>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHMWWEDMXVWDLZIX4NDQGVA7BANCNFSM4IQFZBAA>\n> .\n>\n",
"That's nice to hear. Thank you very much ",
"Any update on the extract_features script?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,576 | 1,576 | NONE | null | ## ❓ Questions & Help
Hello.
Sorry if my question is out of date or I just didn't find it, but I'm looking for the examples/extract_features.py script
that was supposed to be in this repo (as mentioned in this stackoverflow [post](https://stackoverflow.com/questions/55369821/how-to-train-a-neural-network-model-with-bert-embeddings-instead-of-static-embed)) and can't find it anymore. Was it just in an earlier release and then scrapped?
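For context, what I'm after is roughly this kind of feature extraction (my own sketch against the current pytorch-transformers API, not the old script itself):

```python
import torch
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # (batch, seq_len, hidden_size)
```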
Thank you in advance for any help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1123/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1123/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1122/comments | https://api.github.com/repos/huggingface/transformers/issues/1122/events | https://github.com/huggingface/transformers/issues/1122 | 485,864,446 | MDU6SXNzdWU0ODU4NjQ0NDY= | 1,122 | PyTorch library dependency | {
"login": "makcedward",
"id": 36614806,
"node_id": "MDQ6VXNlcjM2NjE0ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36614806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makcedward",
"html_url": "https://github.com/makcedward",
"followers_url": "https://api.github.com/users/makcedward/followers",
"following_url": "https://api.github.com/users/makcedward/following{/other_user}",
"gists_url": "https://api.github.com/users/makcedward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makcedward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makcedward/subscriptions",
"organizations_url": "https://api.github.com/users/makcedward/orgs",
"repos_url": "https://api.github.com/users/makcedward/repos",
"events_url": "https://api.github.com/users/makcedward/events{/privacy}",
"received_events_url": "https://api.github.com/users/makcedward/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have the same issue with `pytorch 1.0.1`. When pytorch version is upgraded (to `1.2.0` for instance), this error is removed, however I get an import error:\r\n```\r\nImportError: /opt/conda/lib/python3.7/site-packages/fused_layer_norm_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE\r\n```\r\n",
"> I have the same issue with `pytorch 1.0.1`. When pytorch version is upgraded (to `1.2.0` for instance), this error is removed, however I get an import error:\r\n> \r\n> ```\r\n> ImportError: /opt/conda/lib/python3.7/site-packages/fused_layer_norm_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE\r\n> ```\r\n\r\nMy issue is fixed after upgrading pytorch to 1.2.0. Which model/ function do you call ?",
"For some reason, I had to reinstall apex after upgrading pytorch. ",
"> For some reason, I had to reinstall apex after upgrading pytorch.\r\n\r\nSo have you fixed your ImportError? I met this error when initializing BertAdam. I will try to migrate my code from pytorch-pretrained-bert to pytorch-transformers.",
"Yes, upgraded pytorch, then reinstalled apex.",
"> Yes, upgraded pytorch, then reinstalled apex.\r\n\r\nThank you. I will try.",
"Just to confirm, I had the same issue (ImportError). As @tayciryahmed said, re-installing apex would do the trick as it did for me :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,574 | 1,574 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....): XLNet
Language I am using the model on (English, Chinese....): English
```
pytorch_transformers\modeling_xlnet.py in forward(self, input_ids, token_type_ids, input_mask, attention_mask, mems, perm_mask, target_mapping, head_mask)
925 # `1` indicates not in the same segment [qlen x klen x bsz]
926 seg_mat = (token_type_ids[:, None] != cat_ids[None, :]).long()
--> 927 seg_mat = F.one_hot(seg_mat, num_classes=2).to(dtype_float)
928 else:
929 seg_mat = None
AttributeError: module 'torch.nn.functional' has no attribute 'one_hot'
```
## To Reproduce
Steps to reproduce the behavior:
1. Install PyTorch version 1.0.1.
2. Use an XLNet model to run a prediction.
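In case it helps anyone pinned to torch 1.0.x in the meantime, a rough shim for the missing function (my own sketch, not part of the library):

```python
import torch

def one_hot_compat(indices, num_classes):
    # Behaves like torch.nn.functional.one_hot for integer index tensors (sketch)
    shape = indices.shape + (num_classes,)
    out = torch.zeros(shape, dtype=torch.long, device=indices.device)
    return out.scatter_(-1, indices.unsqueeze(-1), 1)
```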
`torch.nn.functional`'s `one_hot` function was only introduced in [1.1.0](https://pytorch.org/docs/1.1.0/nn.html#one-hot), while [requirements.txt](https://github.com/huggingface/pytorch-transformers/blob/master/requirements.txt#L2) requests 1.0.0+. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1122/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/1122/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1121/comments | https://api.github.com/repos/huggingface/transformers/issues/1121/events | https://github.com/huggingface/transformers/issues/1121 | 485,808,293 | MDU6SXNzdWU0ODU4MDgyOTM= | 1,121 | Using pretrained XLNET for long sentences | {
"login": "aviclu",
"id": 13317450,
"node_id": "MDQ6VXNlcjEzMzE3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/13317450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aviclu",
"html_url": "https://github.com/aviclu",
"followers_url": "https://api.github.com/users/aviclu/followers",
"following_url": "https://api.github.com/users/aviclu/following{/other_user}",
"gists_url": "https://api.github.com/users/aviclu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aviclu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aviclu/subscriptions",
"organizations_url": "https://api.github.com/users/aviclu/orgs",
"repos_url": "https://api.github.com/users/aviclu/repos",
"events_url": "https://api.github.com/users/aviclu/events{/privacy}",
"received_events_url": "https://api.github.com/users/aviclu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,572 | 1,572 | NONE | null | ## ❓ Questions & Help
Is it possible to feed the pretrained XLNet large model sentences longer than 512 tokens?
If not, is there any model which supports that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1121/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1120/comments | https://api.github.com/repos/huggingface/transformers/issues/1120/events | https://github.com/huggingface/transformers/pull/1120 | 485,748,696 | MDExOlB1bGxSZXF1ZXN0MzExMzMzNTYx | 1,120 | Change attention mask dtype to be bool. Fix #1119 | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes it's better, thanks also for that @CrafterKolyan!"
] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1120",
"html_url": "https://github.com/huggingface/transformers/pull/1120",
"diff_url": "https://github.com/huggingface/transformers/pull/1120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1120.patch",
"merged_at": 1566910861000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1119/comments | https://api.github.com/repos/huggingface/transformers/issues/1119/events | https://github.com/huggingface/transformers/issues/1119 | 485,745,480 | MDU6SXNzdWU0ODU3NDU0ODA= | 1,119 | Tons of warnings on use of TransfoXLModel. masked_fill_ input dtype torch.uint8 should be changed to torch.bool | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | ## 🕑 Usage of deprecated behaviour
Using example from documentation web page: https://huggingface.co/pytorch-transformers/model_doc/transformerxl.html#pytorch_transformers.TransfoXLModel
```
import torch
from pytorch_transformers import *
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('transfo-xl-wt103')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states, mems = outputs[:2]
```
I get tons of the same warning:
> /pytorch/aten/src/ATen/native/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.

Created #1120 to fix it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1119/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1118/comments | https://api.github.com/repos/huggingface/transformers/issues/1118/events | https://github.com/huggingface/transformers/pull/1118 | 485,691,554 | MDExOlB1bGxSZXF1ZXN0MzExMjg2Njg5 | 1,118 | Documentation fix #1117 | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=h1) Report\n> Merging [#1118](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1118 +/- ##\n=======================================\n Coverage 79.61% 79.61% \n=======================================\n Files 42 42 \n Lines 6898 6898 \n=======================================\n Hits 5492 5492 \n Misses 1406 1406\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=footer). Last update [e08c01a...26bda77](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1118?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed!"
] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | Rename parameter in documentation + Delete its second occurrence. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1118",
"html_url": "https://github.com/huggingface/transformers/pull/1118",
"diff_url": "https://github.com/huggingface/transformers/pull/1118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1118.patch",
"merged_at": 1566910731000
} |
https://api.github.com/repos/huggingface/transformers/issues/1117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1117/comments | https://api.github.com/repos/huggingface/transformers/issues/1117/events | https://github.com/huggingface/transformers/issues/1117 | 485,690,079 | MDU6SXNzdWU0ODU2OTAwNzk= | 1,117 | Wrong parameter name in documentation | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes!"
] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | Documentation web page: https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2DoubleHeadsModel
See `Inputs -> multiple_choice_labels`. There is actually no such parameter in the `GPT2DoubleHeadsModel.forward` method; it was renamed to `mc_labels`. It also appears twice in the documentation, which seems to be a copy-paste error.
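For reference, a call using the actual parameter name works; this is a sketch adapted from the library's own docstring example (the shapes are illustrative):

```python
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2')
tokenizer.add_special_tokens({'cls_token': '[CLS]'})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)
mc_token_ids = torch.tensor([input_ids.size(-1) - 1, input_ids.size(-1) - 1]).unsqueeze(0)

# `mc_labels`, not `multiple_choice_labels`
outputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels=torch.tensor([0]))
mc_loss = outputs[0]
```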
Please change `multiple_choice_labels` to `mc_labels` and delete the second occurrence of this parameter in the documentation.
Created #1118 to fix the documentation. Also, you may squash it with the fix for #1115.
"url": "https://api.github.com/repos/huggingface/transformers/issues/1117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1116/comments | https://api.github.com/repos/huggingface/transformers/issues/1116/events | https://github.com/huggingface/transformers/pull/1116 | 485,685,207 | MDExOlB1bGxSZXF1ZXN0MzExMjgxNTM4 | 1,116 | Delete nonexistent parameter from documentation fix #1115 | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=h1) Report\n> Merging [#1116](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1116 +/- ##\n=======================================\n Coverage 79.61% 79.61% \n=======================================\n Files 42 42 \n Lines 6898 6898 \n=======================================\n Hits 5492 5492 \n Misses 1406 1406\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.84% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=footer). Last update [e08c01a...c8933bb](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @CrafterKolyan!"
] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | Changed documentation of GPT2Model, GPT2LMHeadModel and GPT2DoubleHeadsModel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1116",
"html_url": "https://github.com/huggingface/transformers/pull/1116",
"diff_url": "https://github.com/huggingface/transformers/pull/1116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1116.patch",
"merged_at": 1566910604000
} |
https://api.github.com/repos/huggingface/transformers/issues/1115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1115/comments | https://api.github.com/repos/huggingface/transformers/issues/1115/events | https://github.com/huggingface/transformers/issues/1115 | 485,681,208 | MDU6SXNzdWU0ODU2ODEyMDg= | 1,115 | No parameter which is presented in documentation | {
"login": "CrafterKolyan",
"id": 9883873,
"node_id": "MDQ6VXNlcjk4ODM4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrafterKolyan",
"html_url": "https://github.com/CrafterKolyan",
"followers_url": "https://api.github.com/users/CrafterKolyan/followers",
"following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}",
"gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions",
"organizations_url": "https://api.github.com/users/CrafterKolyan/orgs",
"repos_url": "https://api.github.com/users/CrafterKolyan/repos",
"events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrafterKolyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks for that!",
"Sorry, but I still get this error, and I can see the parameter on the forward functions at the source code. Am I doing something wrong ?\r\n\r\nThanks for this amazing contribution!",
"Can you open a new issue with details on the error?\r\n`attention_mask` has been added to GPT2 now so it's not the same situation.",
"Sorry, my problem was that there was no attention_mask parameter on the forward function, but I can see it now. Thanks"
] | 1,566 | 1,570 | 1,566 | CONTRIBUTOR | null | Documentation web page: https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2Model
See `Inputs -> attention_mask`.
There is actually no parameter `attention_mask` in `GPT2Model.forward` method (see https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_gpt2.py#L473)
Of course trying to provide `attention_mask` parameter to model raises an exception:
> TypeError: forward() got an unexpected keyword argument 'attention_mask'
Please either add parameter `attention_mask` to `GPT2Model.forward` or delete it from documentation.
Same for https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2LMHeadModel
and for
https://huggingface.co/pytorch-transformers/model_doc/gpt2.html#pytorch_transformers.GPT2DoubleHeadsModel
I've created #1116 in case you want to just delete it from the documentation.
"url": "https://api.github.com/repos/huggingface/transformers/issues/1115/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1115/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1114/comments | https://api.github.com/repos/huggingface/transformers/issues/1114/events | https://github.com/huggingface/transformers/issues/1114 | 485,645,709 | MDU6SXNzdWU0ODU2NDU3MDk= | 1,114 | Does RoBERTa needs input_type_ids as Bert ? | {
"login": "Lawiss",
"id": 30115537,
"node_id": "MDQ6VXNlcjMwMTE1NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/30115537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lawiss",
"html_url": "https://github.com/Lawiss",
"followers_url": "https://api.github.com/users/Lawiss/followers",
"following_url": "https://api.github.com/users/Lawiss/following{/other_user}",
"gists_url": "https://api.github.com/users/Lawiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lawiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lawiss/subscriptions",
"organizations_url": "https://api.github.com/users/Lawiss/orgs",
"repos_url": "https://api.github.com/users/Lawiss/repos",
"events_url": "https://api.github.com/users/Lawiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lawiss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"RoBERTa does not use `token_type_ids`. We made a choice to still have an embedding layer (which is all zeros, so they don't contribute anything additively) so that we use the exact same implementation as BERT.",
"Understood, thanks for the quick answer ! :)"
] | 1,566 | 1,566 | 1,566 | CONTRIBUTOR | null | ## ❓ Questions & Help
Hello,
I'm trying to fine-tune RoBERTa for a sentence-pair classification task. With BERT, I used the token_type_ids to identify sentences A and B. But it seems that the RoBERTa "token_type" embedding is configured with a vocabulary of size 1, from what I understand of the model summary: (token_type_embeddings): Embedding(1, 768).
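For reference, this is how I printed that summary line (a small sketch):

```python
from pytorch_transformers import RobertaModel

model = RobertaModel.from_pretrained('roberta-base')
print(model.embeddings.token_type_embeddings)  # Embedding(1, 768)
```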
So, does RoBERTa need token_type_ids? If not, why is there an Embedding layer for token_type_ids?
The documentation of the RobertaModel class omits to mention the token_type_ids present among the parameters: [modeling_roberta.py](https://github.com/huggingface/pytorch-transformers/blob/e08c01aa1ad63efff83548ea69d5ba3ce4a75acc/pytorch_transformers/modeling_roberta.py#L97).
Thank you in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1113/comments | https://api.github.com/repos/huggingface/transformers/issues/1113/events | https://github.com/huggingface/transformers/issues/1113 | 485,566,500 | MDU6SXNzdWU0ODU1NjY1MDA= | 1,113 | [Help] How to do mean/max pooling to get sentence embedding? | {
"login": "brytjy",
"id": 46053996,
"node_id": "MDQ6VXNlcjQ2MDUzOTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46053996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brytjy",
"html_url": "https://github.com/brytjy",
"followers_url": "https://api.github.com/users/brytjy/followers",
"following_url": "https://api.github.com/users/brytjy/following{/other_user}",
"gists_url": "https://api.github.com/users/brytjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brytjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brytjy/subscriptions",
"organizations_url": "https://api.github.com/users/brytjy/orgs",
"repos_url": "https://api.github.com/users/brytjy/repos",
"events_url": "https://api.github.com/users/brytjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/brytjy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Yes this would work, but it would certainly be slower than using [the torch `mean` function](https://pytorch.org/docs/stable/torch.html#torch.mean)!",
"Understood thanks! :)"
] | 1,566 | 1,566 | 1,566 | NONE | null | Hi, I read a few questions raised before regarding sentence embedding and came across mean/max pooling suggestions.
I'm not too sure how to go about doing mean/max pooling.
Is my implementation correct for mean pooling? I simply took the sum of all the token vectors and divided it by the total sequence length.

Thanks :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1113/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1112/comments | https://api.github.com/repos/huggingface/transformers/issues/1112/events | https://github.com/huggingface/transformers/issues/1112 | 485,563,252 | MDU6SXNzdWU0ODU1NjMyNTI= | 1,112 | Implement the QuickStart but got an error when using BertForMaskedLM to predict a masked token | {
"login": "JJJJane",
"id": 36539347,
"node_id": "MDQ6VXNlcjM2NTM5MzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/36539347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JJJJane",
"html_url": "https://github.com/JJJJane",
"followers_url": "https://api.github.com/users/JJJJane/followers",
"following_url": "https://api.github.com/users/JJJJane/following{/other_user}",
"gists_url": "https://api.github.com/users/JJJJane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JJJJane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JJJJane/subscriptions",
"organizations_url": "https://api.github.com/users/JJJJane/orgs",
"repos_url": "https://api.github.com/users/JJJJane/repos",
"events_url": "https://api.github.com/users/JJJJane/events{/privacy}",
"received_events_url": "https://api.github.com/users/JJJJane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this is not an error, but a warning. This warning tells you that some of the weights that were in your pretrained model were not used by the model with which you loaded them. In this case, it concerns the classification layer weight/bias.",
"I see, thanks!"
] | 1,566 | 1,566 | 1,566 | NONE | null | ## ❓ Questions & Help
I was running the BERT example following the instructions in the pytorch-transformers docs, but when I was using BertForMaskedLM to predict a masked token, an error occurred:
"INFO:pytorch_transformers.modeling_utils:Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']"
Any idea how to fix this? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1111/comments | https://api.github.com/repos/huggingface/transformers/issues/1111/events | https://github.com/huggingface/transformers/issues/1111 | 485,492,158 | MDU6SXNzdWU0ODU0OTIxNTg= | 1,111 | Can we get a 1.1.1 release so that AutoRoberta is included? | {
"login": "matt-gardner",
"id": 3291951,
"node_id": "MDQ6VXNlcjMyOTE5NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3291951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt-gardner",
"html_url": "https://github.com/matt-gardner",
"followers_url": "https://api.github.com/users/matt-gardner/followers",
"following_url": "https://api.github.com/users/matt-gardner/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-gardner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt-gardner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-gardner/subscriptions",
"organizations_url": "https://api.github.com/users/matt-gardner/orgs",
"repos_url": "https://api.github.com/users/matt-gardner/repos",
"events_url": "https://api.github.com/users/matt-gardner/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt-gardner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes! @LysandreJik ",
"Sounds good, we'll release one soon. Probably around the end of the week or early next week.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,566 | 1,572 | 1,572 | NONE | null | See issue title. I'm about to open a PR to add roberta to allennlp; it'd be nice to have a released version to depend on, instead of a github commit. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1111/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1110/comments | https://api.github.com/repos/huggingface/transformers/issues/1110/events | https://github.com/huggingface/transformers/pull/1110 | 485,426,590 | MDExOlB1bGxSZXF1ZXN0MzExMDcyNTY1 | 1,110 | Torch.hub now based on AutoModels - Updating AutoModels with AutoModelWithLMHead, Sequence Classification and Question Answering | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=h1) Report\n> Merging [#1110](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/df9d6effae43e92761eb92540bc45fac846789ee?src=pr&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `73.33%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1110 +/- ##\n==========================================\n- Coverage 79.61% 79.56% -0.05% \n==========================================\n Files 42 42 \n Lines 6898 6965 +67 \n==========================================\n+ Hits 5492 5542 +50 \n- Misses 1406 1423 +17\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tests/modeling\\_auto\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfYXV0b190ZXN0LnB5) | `98.18% <100%> (+2.18%)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYXV0by5weQ==) | `51.72% <54.54%> (-4.28%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `84.18% <0%> (+0.76%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=footer). Last update [df9d6ef...84a3a96](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Not sure torch.hub will work directly. We should check with @VictorSanh.",
"Just checked the integration with Pytorch Hub, it works on my end. For example, you can try it out with: \r\n\r\n```python\r\nimport torch\r\ntorch.hub.load('huggingface/pytorch-transformers:automodels', 'autoModelWithLMHead', 'distilbert-base-uncased')\r\n```",
"Ok, I think we are all good with this. Happy to have `torch.hub` integration again."
] | 1,566 | 1,567 | 1,567 | MEMBER | null | Added new AutoModels, along with the accompanying tests. Updated the TorchHub configuration file to redirect to those AutoModels. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1110",
"html_url": "https://github.com/huggingface/transformers/pull/1110",
"diff_url": "https://github.com/huggingface/transformers/pull/1110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1110.patch",
"merged_at": 1567199338000
} |
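
As a complement to the `torch.hub` call shown in the comments above, a minimal sketch of loading one of the new AutoModel classes directly; the checkpoint name is taken from that comment, and the exact import path is assumed from the pytorch-transformers 1.x naming:

```python
from pytorch_transformers import AutoTokenizer, AutoModelWithLMHead

# AutoModelWithLMHead resolves the architecture from the checkpoint name,
# so the same two lines work for BERT, GPT-2, RoBERTa, DistilBERT, etc.
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
model = AutoModelWithLMHead.from_pretrained('distilbert-base-uncased')
model.eval()
```
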