| column | type | values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
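The column summary above follows the Hugging Face dataset-viewer layout: each feature name is followed by its kind and, where applicable, a min/max pair or a cardinality. As an illustrative sketch (the token list and per-kind stat counts below are assumptions about that layout, not part of the dataset itself), such a flat stream can be regrouped into a schema dict:

```python
def parse_schema(tokens):
    """Group a flat token stream of (name, kind, *stats) into a schema dict.

    Assumed convention: 'stringlengths' and 'int64' carry two stat tokens
    (min, max); 'stringclasses' and 'bool' carry one ('N values'/'N classes');
    'dict', 'list', and 'sequence' carry none.
    """
    stat_count = {"stringlengths": 2, "int64": 2, "stringclasses": 1,
                  "bool": 1, "dict": 0, "list": 0, "sequence": 0}
    schema, i = {}, 0
    while i < len(tokens):
        name, kind = tokens[i], tokens[i + 1]
        n = stat_count[kind]
        # Record the kind plus its trailing stat tokens, if any.
        schema[name] = (kind, *tokens[i + 2:i + 2 + n])
        i += 2 + n
    return schema

# A short excerpt of the flattened listing, for illustration only.
tokens = ["url", "stringlengths", "62", "66",
          "id", "int64", "377M", "2.15B",
          "state", "stringclasses", "2 values",
          "user", "dict"]
print(parse_schema(tokens))
```

Applied to the full listing, this recovers one entry per column, e.g. `body` maps to `('stringlengths', '0', '234k')`.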
https://api.github.com/repos/huggingface/transformers/issues/5320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5320/comments
https://api.github.com/repos/huggingface/transformers/issues/5320/events
https://github.com/huggingface/transformers/issues/5320
646,441,144
MDU6SXNzdWU2NDY0NDExNDQ=
5,320
Reformer model axial.position.shape config not working
{ "login": "as-stevens", "id": 61624036, "node_id": "MDQ6VXNlcjYxNjI0MDM2", "avatar_url": "https://avatars.githubusercontent.com/u/61624036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/as-stevens", "html_url": "https://github.com/as-stevens", "followers_url": "https://api.github.com/users/as-stevens/followers", "following_url": "https://api.github.com/users/as-stevens/following{/other_user}", "gists_url": "https://api.github.com/users/as-stevens/gists{/gist_id}", "starred_url": "https://api.github.com/users/as-stevens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/as-stevens/subscriptions", "organizations_url": "https://api.github.com/users/as-stevens/orgs", "repos_url": "https://api.github.com/users/as-stevens/repos", "events_url": "https://api.github.com/users/as-stevens/events{/privacy}", "received_events_url": "https://api.github.com/users/as-stevens/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Either my calculator and me are wrong or the max pos embedding is not matching the axial pos shape\n\n![Screenshot_20200626_211633_com.android.calculator2.jpg](https://user-images.githubusercontent.com/47894090/85893082-715b5100-b7f2-11ea-9d85-fd341fcc5340.jpg)", "> Either my calculator and me are wrong or the max pos embedding is not matching the axial pos shape\r\n> \r\n> ![Screenshot_20200626_211633_com.android.calculator2.jpg](https://user-images.githubusercontent.com/47894090/85893082-715b5100-b7f2-11ea-9d85-fd341fcc5340.jpg)\r\n\r\nDefinitely your calculator looks like in learning phase! :)\r\n\r\nI am wondering if I load the models;\r\n\r\nmodel = ReformerForSequenceClassification.from_pretrained(\"./cnp\", num_labels = 2, output_attentions = False, output_hidden_states = False,)\r\nI assume the config will not be overridden, I mean it will complain about the config not being default.\r\n\r\nNot sure but, I am trying;\r\nmodel = ReformerForSequenceClassification(ReformerConfig())\r\nmodel.load_state_dict(torch.load(\"./cnp\"))\r\n\r\nwhich will load the custom model.\r\n\r\n", "Could you upload the trained model ?\r\nThen it's easier to reproduce the error and play around with the code", "> Could you upload the trained model ?\r\n> Then it's easier to reproduce the error and play around with the code\r\n\r\nThe model is trained on a very few samples (~100) and in the office network, which unfortunately does not allow to upload to outside world.\r\n\r\nI will try to upload one model after training it on collab.", "\r\nconfig = ReformerConfig()\r\nconfig.max_position_embeddings = 8192\r\nconfig.axial_pos_shape=[64, 128]\r\n\r\nmodel = ReformerForSequenceClassification(config)\r\nmodel.load_state_dict(torch.load(\"./cnp/pytorch_model.bin\"))\r\n\r\nThis is how I am trying to load the config, I tried an other approach but that did not work as well.\r\n", "@as-stevens are you training a Reformer Model from scratch?", "Regarding your 
code:\r\n\r\n```python\r\nconfig = ReformerConfig()\r\nconfig.max_position_embeddings = 8192\r\nconfig.axial_pos_shape=[64, 128]\r\n\r\nmodel = ReformerForSequenceClassification(config)\r\nmodel.load_state_dict(torch.load(\"./cnp/pytorch_model.bin\"))\r\n```\r\n\r\nplease note that it is not recommended instantiating a config and then later changing already set attributes. It's better to do the following:\r\n```\r\nconfig = ReformerConfig(max_position_embeddings=8192, axial_pos_shape=[64,128])\r\n```\r\n\r\nThis code for example works:\r\n```python \r\nfrom transformers import ReformerModel, ReformerConfig\r\nconfig = ReformerConfig(max_position_embeddings=8192, axial_pos_shape=[64, 128])\r\nmodel = ReformerModel(config)\r\nmodel.eval()\r\n\r\ninput_ids = torch.tensor([10 * [2, 4]])\r\nmodel(input_ids)\r\n```", "If you load a model from your checkpoint: \"./cnp/pytorch_model.bin\" the axial position embedding weights have to have the same dimensions in order for this to work", "Closing this for now. Feel free to re-open if you encounter other problems. Also make sure to have read the corresponding documentation of Reformer: https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings .", "@patrickvonplaten Thank you! Let me try your suggestion and run the classifier. Update you with the findings." ]
1,593
1,593
1,593
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> I am trying to play around with the config parameters of the Reformer model; I change the axial position shape parameters to ; "axial_pos_shape": [ 64, 256 ] and the "max_position_embeddings": 8192 (multiplication of axial_pos_shape) But it get the following error; **RuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification: size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([512, 1, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 64]). size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 1024, 192]) from checkpoint, the shape in current model is torch.Size([1, 256, 192]).** The rest of parameters are unchanged. What's wrong with the config? ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5319/comments
https://api.github.com/repos/huggingface/transformers/issues/5319/events
https://github.com/huggingface/transformers/pull/5319
646,433,165
MDExOlB1bGxSZXF1ZXN0NDQwNzIzMjU2
5,319
Fix `xxx_length` behavior when using XLNet in pipeline
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=h1) Report\n> Merging [#5319](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bf0d12c220cfd19025736c488bdabda9efd20b9e&el=desc) will **increase** coverage by `1.33%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5319/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5319 +/- ##\n==========================================\n+ Coverage 77.91% 79.24% +1.33% \n==========================================\n Files 138 138 \n Lines 24284 24290 +6 \n==========================================\n+ Hits 18920 19249 +329 \n+ Misses 5364 5041 -323 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.45% <0.00%> (-0.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (-3.49%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-0.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.03% <0.00%> (-0.59%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.68% <0.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.33% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <0.00%> (ø)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=footer). Last update [bf0d12c...d3d5d50](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Warnings might scare the user that something went wrong, especially as there is no way to remove that behavior in the current API. 
Maybe documenting it properly is enough?\r\n\r\nMerging in the meantime because I need this to write model cards for XLNet." ]
1,593
1,593
1,593
COLLABORATOR
null
When using a `pipeline` for text generation with XLNet and Transformer-XL, the pipeline adds a long prompt at the beginning to help the model. This interferes with `min_length` and `max_length`, so I fixed that by adding the length of the prompt to those kwargs (if passed). Also made the prompt use the tokenizer eos token instead of putting the three possibilities at the end.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5319", "html_url": "https://github.com/huggingface/transformers/pull/5319", "diff_url": "https://github.com/huggingface/transformers/pull/5319.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5319.patch", "merged_at": 1593270592000 }
https://api.github.com/repos/huggingface/transformers/issues/5318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5318/comments
https://api.github.com/repos/huggingface/transformers/issues/5318/events
https://github.com/huggingface/transformers/pull/5318
646,424,432
MDExOlB1bGxSZXF1ZXN0NDQwNzE2MTkz
5,318
[CI] GH-runner stores artifacts like CircleCI
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=h1) Report\n> Merging [#5318](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **increase** coverage by `1.17%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5318/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5318 +/- ##\n==========================================\n+ Coverage 77.90% 79.07% +1.17% \n==========================================\n Files 140 138 -2 \n Lines 24334 24078 -256 \n==========================================\n+ Hits 18957 19040 +83 \n+ Misses 5377 5038 -339 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `87.50% <0.00%> (-6.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (-3.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `77.90% <0.00%> (-3.08%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-2.84%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-1.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <0.00%> (-1.02%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `40.67% <0.00%> (-0.99%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.16% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.19% <0.00%> (-0.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.63% <0.00%> (-0.36%)` | :arrow_down: |\n| ... and [45 more](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=footer). Last update [c4d4e8b...54a74d9](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Split this out to merge when GH runner is working.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5318", "html_url": "https://github.com/huggingface/transformers/pull/5318", "diff_url": "https://github.com/huggingface/transformers/pull/5318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5318.patch", "merged_at": 1593543714000 }
https://api.github.com/repos/huggingface/transformers/issues/5317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5317/comments
https://api.github.com/repos/huggingface/transformers/issues/5317/events
https://github.com/huggingface/transformers/pull/5317
646,391,778
MDExOlB1bGxSZXF1ZXN0NDQwNjkwMTYy
5,317
Clearer lr schedule math
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "looks neat", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5317/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5317", "html_url": "https://github.com/huggingface/transformers/pull/5317", "diff_url": "https://github.com/huggingface/transformers/pull/5317.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5317.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5316/comments
https://api.github.com/repos/huggingface/transformers/issues/5316/events
https://github.com/huggingface/transformers/pull/5316
646,389,882
MDExOlB1bGxSZXF1ZXN0NDQwNjg4NjQy
5,316
[pl_examples] default warmup steps=0
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=h1) Report\n> Merging [#5316](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5316/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5316 +/- ##\n=======================================\n Coverage 79.29% 79.30% \n=======================================\n Files 138 138 \n Lines 24282 24282 \n=======================================\n+ Hits 19254 19256 +2 \n+ Misses 5028 5026 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=footer). Last update [79a82cc...ee4cc93](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5316", "html_url": "https://github.com/huggingface/transformers/pull/5316", "diff_url": "https://github.com/huggingface/transformers/pull/5316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5316.patch", "merged_at": 1593198221000 }
https://api.github.com/repos/huggingface/transformers/issues/5315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5315/comments
https://api.github.com/repos/huggingface/transformers/issues/5315/events
https://github.com/huggingface/transformers/pull/5315
646,385,185
MDExOlB1bGxSZXF1ZXN0NDQwNjg0OTIz
5,315
Add BART-base modeling and configuration
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=h1) Report\n> Merging [#5315](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.88%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5315/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5315 +/- ##\n==========================================\n+ Coverage 77.08% 77.97% +0.88% \n==========================================\n Files 138 138 \n Lines 23841 23841 \n==========================================\n+ Hits 18379 18590 +211 \n+ Misses 5462 5251 -211 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.24% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=footer). Last update [9022ef0...b9cc4aa](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Fix 404 Not Found for BART-base
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5315", "html_url": "https://github.com/huggingface/transformers/pull/5315", "diff_url": "https://github.com/huggingface/transformers/pull/5315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5315.patch", "merged_at": 1593190390000 }
https://api.github.com/repos/huggingface/transformers/issues/5314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5314/comments
https://api.github.com/repos/huggingface/transformers/issues/5314/events
https://github.com/huggingface/transformers/issues/5314
646,378,977
MDU6SXNzdWU2NDYzNzg5Nzc=
5,314
Transformer-XL not working with DistributedDataParallel
{ "login": "RafaelWO", "id": 38643099, "node_id": "MDQ6VXNlcjM4NjQzMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RafaelWO", "html_url": "https://github.com/RafaelWO", "followers_url": "https://api.github.com/users/RafaelWO/followers", "following_url": "https://api.github.com/users/RafaelWO/following{/other_user}", "gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}", "starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions", "organizations_url": "https://api.github.com/users/RafaelWO/orgs", "repos_url": "https://api.github.com/users/RafaelWO/repos", "events_url": "https://api.github.com/users/RafaelWO/events{/privacy}", "received_events_url": "https://api.github.com/users/RafaelWO/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same kind of problem with transformers 3.3.1 and a BertModel, but the error is now an inplace operation on torch.cuda.LongTensor problem, something related to EmbeddingBackward.", "Could you please open a new issue with the full stacktrace + the information related to your environment?", "Done as issue #7848." ]
1,593
1,602
1,599
CONTRIBUTOR
null
# 🐛 Bug ## Information I'm using the pretrained `TransfoXLLMHeadModel` for finetuning but I get an error when trying to use it with pytorch's `DistributedDataParallel`: > self.reducer.prepare_for_backward(list(_find_tensors(output))) RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/distributed/c10d/reducer.cpp:514) I'm using the pytorch distributed launch utility. I've already gone through similar issues regarding this topic, but none of the solutions solved my problem. A guess from me is, that this is caused by the returned `mems` and `DistributedDataParallel` could think that they require a gradient? Model I am using: Transformer-XL Language I am using the model on: English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 - Platform: Linux-4.15.0-108-generic-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes `torch.distributed.launch`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5313/comments
https://api.github.com/repos/huggingface/transformers/issues/5313/events
https://github.com/huggingface/transformers/pull/5313
646,344,205
MDExOlB1bGxSZXF1ZXN0NDQwNjUxNjIw
5,313
Model cards for finance-koelectra models
{ "login": "krevas", "id": 27683515, "node_id": "MDQ6VXNlcjI3NjgzNTE1", "avatar_url": "https://avatars.githubusercontent.com/u/27683515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krevas", "html_url": "https://github.com/krevas", "followers_url": "https://api.github.com/users/krevas/followers", "following_url": "https://api.github.com/users/krevas/following{/other_user}", "gists_url": "https://api.github.com/users/krevas/gists{/gist_id}", "starred_url": "https://api.github.com/users/krevas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krevas/subscriptions", "organizations_url": "https://api.github.com/users/krevas/orgs", "repos_url": "https://api.github.com/users/krevas/repos", "events_url": "https://api.github.com/users/krevas/events{/privacy}", "received_events_url": "https://api.github.com/users/krevas/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=h1) Report\n> Merging [#5313](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/08c9607c3d025f9f1a0c40e6d124d5d5d446208e&el=desc) will **increase** coverage by `1.88%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5313/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5313 +/- ##\n==========================================\n+ Coverage 77.41% 79.29% +1.88% \n==========================================\n Files 138 138 \n Lines 24282 24282 \n==========================================\n+ Hits 18798 19255 +457 \n+ Misses 5484 5027 -457 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (+0.36%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.41%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.74% <0.00%> (+1.69%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+2.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.86% <0.00%> (+2.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.15% <0.00%> (+2.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (+6.97%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.41% <0.00%> (+9.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.43% <0.00%> (+42.03%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=footer). Last update [08c9607...f17b437](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Added new cards for the models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5313", "html_url": "https://github.com/huggingface/transformers/pull/5313", "diff_url": "https://github.com/huggingface/transformers/pull/5313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5313.patch", "merged_at": 1593409665000 }
https://api.github.com/repos/huggingface/transformers/issues/5312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5312/comments
https://api.github.com/repos/huggingface/transformers/issues/5312/events
https://github.com/huggingface/transformers/pull/5312
646,343,543
MDExOlB1bGxSZXF1ZXN0NDQwNjUxMTA2
5,312
Add benchmark notebook
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=h1) Report\n> Merging [#5312](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5312/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5312 +/- ##\n=======================================\n Coverage 79.30% 79.30% \n=======================================\n Files 138 138 \n Lines 24283 24283 \n=======================================\n Hits 19258 19258 \n Misses 5025 5025 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.93% <0.00%> (+0.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=footer). Last update [135791e...487fa69](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
This PR adds a general notebook on how to use the Benchmark tools
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5312", "html_url": "https://github.com/huggingface/transformers/pull/5312", "diff_url": "https://github.com/huggingface/transformers/pull/5312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5312.patch", "merged_at": 1593185893000 }
https://api.github.com/repos/huggingface/transformers/issues/5311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5311/comments
https://api.github.com/repos/huggingface/transformers/issues/5311/events
https://github.com/huggingface/transformers/issues/5311
646,326,159
MDU6SXNzdWU2NDYzMjYxNTk=
5,311
AllenNLP SPECTER model
{ "login": "ckald", "id": 204759, "node_id": "MDQ6VXNlcjIwNDc1OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ckald", "html_url": "https://github.com/ckald", "followers_url": "https://api.github.com/users/ckald/followers", "following_url": "https://api.github.com/users/ckald/following{/other_user}", "gists_url": "https://api.github.com/users/ckald/gists{/gist_id}", "starred_url": "https://api.github.com/users/ckald/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ckald/subscriptions", "organizations_url": "https://api.github.com/users/ckald/orgs", "repos_url": "https://api.github.com/users/ckald/repos", "events_url": "https://api.github.com/users/ckald/events{/privacy}", "received_events_url": "https://api.github.com/users/ckald/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Wonderful if we can use embeddings of scientific documents in transformers.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "SPECTER is a great work. Is there any plan to have the Huggingface version of SPECTER?", "This should be addressed now: https://huggingface.co/allenai/specter\r\nHow to use: https://github.com/allenai/specter" ]
1,593
1,611
1,599
NONE
null
# 🌟 New model addition SPECTER: Document-level Representation Learning using Citation-informed Transformers ## Model description [SPECTER](https://arxiv.org/pdf/2004.07180.pdf) by AllenAI is a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. It is based on SciBERT. This model is showing us promising results for scientific paper similarity tasks. However, the repo is quite rough around the edges and inference performance could be better. According to the AllenNLP repo issues, they do not plan to adapt the models to ONNX runtime. It would be great to have a plug'n'play implementation for Transformers. Instructions on how to do this are also welcome 🤝 ## Open source status * [x] the model implementation is available: https://github.com/allenai/specter * [x] the model weights are available: see repo above * [x] who are the authors: AllenAI, @armancohan, @sergeyf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5311/reactions", "total_count": 4, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/5311/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5310/comments
https://api.github.com/repos/huggingface/transformers/issues/5310/events
https://github.com/huggingface/transformers/pull/5310
646,313,738
MDExOlB1bGxSZXF1ZXN0NDQwNjI3NjIz
5,310
overflowing tokens are now always returned
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=h1) Report\n> Merging [#5310](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5310/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5310 +/- ##\n=======================================\n Coverage 79.29% 79.30% \n=======================================\n Files 138 138 \n Lines 24282 24282 \n=======================================\n+ Hits 19254 19256 +2 \n+ Misses 5028 5026 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=footer). Last update [79a82cc...5583df5](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, it's in #5308 :-)" ]
1,593
1,593
1,593
MEMBER
null
closes https://github.com/huggingface/transformers/issues/5293 @thomwolf may be superseded by https://github.com/huggingface/transformers/pull/5308/files
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5310", "html_url": "https://github.com/huggingface/transformers/pull/5310", "diff_url": "https://github.com/huggingface/transformers/pull/5310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5310.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5309/comments
https://api.github.com/repos/huggingface/transformers/issues/5309/events
https://github.com/huggingface/transformers/issues/5309
646,306,908
MDU6SXNzdWU2NDYzMDY5MDg=
5,309
Links out of date under transformers/examples/README.md
{ "login": "Khasir", "id": 12619339, "node_id": "MDQ6VXNlcjEyNjE5MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/12619339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Khasir", "html_url": "https://github.com/Khasir", "followers_url": "https://api.github.com/users/Khasir/followers", "following_url": "https://api.github.com/users/Khasir/following{/other_user}", "gists_url": "https://api.github.com/users/Khasir/gists{/gist_id}", "starred_url": "https://api.github.com/users/Khasir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Khasir/subscriptions", "organizations_url": "https://api.github.com/users/Khasir/orgs", "repos_url": "https://api.github.com/users/Khasir/repos", "events_url": "https://api.github.com/users/Khasir/events{/privacy}", "received_events_url": "https://api.github.com/users/Khasir/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
NONE
null
# 🐛 Typos ## Information Under the Big Table of Tasks in `transformers/examples/README.md`, the links for summarization and translation are out of date. They should lead to `examples/seq2seq` instead of `examples/summarization` and `examples/translation`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5308/comments
https://api.github.com/repos/huggingface/transformers/issues/5308/events
https://github.com/huggingface/transformers/pull/5308
646,300,946
MDExOlB1bGxSZXF1ZXN0NDQwNjE3MDU5
5,308
[tokenizers] Updates data processors, docstring, examples and model cards to the new API
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=h1) Report\n> Merging [#5308](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **decrease** coverage by `2.10%`.\n> The diff coverage is `58.82%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5308/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5308 +/- ##\n==========================================\n- Coverage 79.30% 77.20% -2.11% \n==========================================\n Files 138 138 \n Lines 24283 24285 +2 \n==========================================\n- Hits 19258 18749 -509 \n- Misses 5025 5536 +511 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.65% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.62% <ø> (-2.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.04% <ø> (-1.70%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.62% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.37% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.11% <ø> (ø)` | |\n| ... and [47 more](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=footer). Last update [135791e...5b1fbb2](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@sshleifer @patrickvonplaten and @yjernite I updated your examples (seq2seq and eli5). Maybe you want to check it.", "> @sshleifer @patrickvonplaten and @yjernite I updated your examples (seq2seq and eli5). Maybe you want to check it.\r\n\r\nlgtm!" ]
1,593
1,593
1,593
MEMBER
null
Updates the data-processors to the new recommended tokenizers' API instead of the old one. Also update the docstrings, examples, and model-cards which were using the old API. Supersede #5310 @sshleifer you have a couple of methods only your models use (bart and marian). I'm not sure about the consequences of updating those API so I'll let you update them. Here is the doc on the new tokenizer API if you need it: https://huggingface.co/transformers/master/preprocessing.html Recommended updates: - use `__call__` instead of `encode_plus` and `batch_encode_plus` - use `padding` and `truncation` instead of `max_length` only and `pad_to_max_length`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5308/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5308", "html_url": "https://github.com/huggingface/transformers/pull/5308", "diff_url": "https://github.com/huggingface/transformers/pull/5308.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5308.patch", "merged_at": 1593193694000 }
https://api.github.com/repos/huggingface/transformers/issues/5307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5307/comments
https://api.github.com/repos/huggingface/transformers/issues/5307/events
https://github.com/huggingface/transformers/pull/5307
646,289,424
MDExOlB1bGxSZXF1ZXN0NDQwNjA3NTcw
5,307
More model cards
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=h1) Report\n> Merging [#5307](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5307/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5307 +/- ##\n=======================================\n Coverage 79.30% 79.30% \n=======================================\n Files 138 138 \n Lines 24283 24283 \n=======================================\n+ Hits 19257 19258 +1 \n+ Misses 5026 5025 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=footer). Last update [88d7f96...0bf35c5](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks great!\r\n", "That's awesome, thanks!" ]
1,593
1,593
1,593
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5307", "html_url": "https://github.com/huggingface/transformers/pull/5307", "diff_url": "https://github.com/huggingface/transformers/pull/5307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5307.patch", "merged_at": 1593421136000 }
https://api.github.com/repos/huggingface/transformers/issues/5306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5306/comments
https://api.github.com/repos/huggingface/transformers/issues/5306/events
https://github.com/huggingface/transformers/pull/5306
646,221,381
MDExOlB1bGxSZXF1ZXN0NDQwNTUxNTU0
5,306
[Generation] fix docs for decoder_input_ids
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @sshleifer, @sgugger and @LysandreJik for notification.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=h1) Report\n> Merging [#5306](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **decrease** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5306/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5306 +/- ##\n==========================================\n- Coverage 79.30% 78.45% -0.85% \n==========================================\n Files 138 138 \n Lines 24283 24283 \n==========================================\n- Hits 19258 19052 -206 \n- Misses 5025 5231 +206 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <ø> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute 
<relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=footer). Last update [135791e...d4e9c3f](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
Fix docs for decoder input ids.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5306/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5306", "html_url": "https://github.com/huggingface/transformers/pull/5306", "diff_url": "https://github.com/huggingface/transformers/pull/5306.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5306.patch", "merged_at": 1593183492000 }
https://api.github.com/repos/huggingface/transformers/issues/5305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5305/comments
https://api.github.com/repos/huggingface/transformers/issues/5305/events
https://github.com/huggingface/transformers/issues/5305
646,146,066
MDU6SXNzdWU2NDYxNDYwNjY=
5,305
Does BERT public an embedding file like glove.840B.300d.txt?
{ "login": "14H034160212", "id": 23516191, "node_id": "MDQ6VXNlcjIzNTE2MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/23516191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/14H034160212", "html_url": "https://github.com/14H034160212", "followers_url": "https://api.github.com/users/14H034160212/followers", "following_url": "https://api.github.com/users/14H034160212/following{/other_user}", "gists_url": "https://api.github.com/users/14H034160212/gists{/gist_id}", "starred_url": "https://api.github.com/users/14H034160212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/14H034160212/subscriptions", "organizations_url": "https://api.github.com/users/14H034160212/orgs", "repos_url": "https://api.github.com/users/14H034160212/repos", "events_url": "https://api.github.com/users/14H034160212/events{/privacy}", "received_events_url": "https://api.github.com/users/14H034160212/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "BERT is a fine-tuning based model. As far as I'm aware, there are no 'embedding files' as such. You could generate embedding relevant to your case, by providing the model with words (see documentation) and generate outputs.", "> BERT is a fine-tuning based model. As far as I'm aware, there are no 'embedding files' as such. You could generate embedding relevant to your case, by providing the model with words (see documentation) and generate outputs.\r\n\r\nMany thanks!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,598
1,598
NONE
null
# ❓ Questions & Help Hi, I have seen BERT public the pre-training language model like BERT-based-uncased. Does BERT public an embedding file like glove.840B.300d.txt? Many thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5304/comments
https://api.github.com/repos/huggingface/transformers/issues/5304/events
https://github.com/huggingface/transformers/issues/5304
646,118,626
MDU6SXNzdWU2NDYxMTg2MjY=
5,304
Bert Abs not using GPU
{ "login": "MichaelJanz", "id": 66110831, "node_id": "MDQ6VXNlcjY2MTEwODMx", "avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelJanz", "html_url": "https://github.com/MichaelJanz", "followers_url": "https://api.github.com/users/MichaelJanz/followers", "following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions", "organizations_url": "https://api.github.com/users/MichaelJanz/orgs", "repos_url": "https://api.github.com/users/MichaelJanz/repos", "events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelJanz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This can be resolved by not using the remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization model but the https://huggingface.co/remi/bertabs-finetuned-extractive-abstractive-summarization. With cnndm prefix I would expect it as a gpu version, but the opposite is the case.\r\nChanging the model fixes the error" ]
1,593
1,593
1,593
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using: Bert abs (https://github.com/huggingface/transformers/tree/master/examples/seq2seq/bertabs) -> remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization model Language I am using the model on: English The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Install Conda Environment with proper pytorch installation 2. Verify that `torch.cuda.is_available()` return True 3. Start run_summarization.py as stated in the Readme ## Expected behavior I would expect, that the models uses GPU for Training / Generation. Instead I have a very high CPU usage, Gpu usage is nearly zero and GPU clock stays idle. Since I verified that the GPU and cuda is available, I expect it to use it. --no_cuda is not set, so the default of False is taken used ## Environment info - `transformers` version: 2.11.0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5303/comments
https://api.github.com/repos/huggingface/transformers/issues/5303/events
https://github.com/huggingface/transformers/issues/5303
646,086,012
MDU6SXNzdWU2NDYwODYwMTI=
5,303
Attempted relative import with no known parent package
{ "login": "MichaelJanz", "id": 66110831, "node_id": "MDQ6VXNlcjY2MTEwODMx", "avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelJanz", "html_url": "https://github.com/MichaelJanz", "followers_url": "https://api.github.com/users/MichaelJanz/followers", "following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions", "organizations_url": "https://api.github.com/users/MichaelJanz/orgs", "repos_url": "https://api.github.com/users/MichaelJanz/repos", "events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelJanz/received_events", "type": "User", "site_admin": false }
[ { "id": 1990944155, "node_id": "MDU6TGFiZWwxOTkwOTQ0MTU1", "url": "https://api.github.com/repos/huggingface/transformers/labels/bertabs", "name": "bertabs", "color": "9ab22e", "default": false, "description": "" } ]
closed
false
null
[]
[ "Changing line 15 in run_summarization.py from \r\n`from .utils_summarization import (` to `from utils_summarization import (`\r\nfixed that error", "@sshleifer - can you take a look here maybe? ", "I do have a similar error, when I changed from .utils_summarization import ( to from utils_summarization import ( I got another error which is: \r\n/content/transformers/examples/seq2seq/bertabs\r\nTraceback (most recent call last):\r\n File \"run_summarization.py\", line 12, in <module>\r\n from modeling_bertabs import BertAbs, build_predictor\r\n File \"/content/transformers/examples/seq2seq/bertabs/modeling_bertabs.py\", line 30, in <module>\r\n from configuration_bertabs import BertAbsConfig\r\n File \"/content/transformers/examples/seq2seq/bertabs/configuration_bertabs.py\", line 19, in <module>\r\n from transformers import PretrainedConfig\r\nModuleNotFoundError: No module named 'transformers'\r\n\r\nCan you help me please?", "@Hildweig did you install the dependencies with \r\n`pip install -r requirements.txt` ?", "Also to make the example run I had to change the used model in run_summarization.py to \r\nremi/bertabs-finetuned-extractive-abstractive-summarization", "I did install the requirements, thank you! changing the model in run_summarization.py fixed the issue! However the output is very bad and makes no sense, this is what I got for a good text:\r\n\r\nthe u.s. is on the way to meet with the u.s.. the u.s. has pledged to help ease ease ease the situation. the u.n. is the most most most likely to provide provide provide a detailed detailed detailed detail. the u..s. is due to the u.s. , and and the u.s. will provide provide some areas to help\r\n------------------------------------------------\r\nDid you get a good summary?", "Why are you guys using bertabs? 
\r\nIt seems to not work very well, according to #3647 .\r\n\r\n@MichaelJanz would you be willing to contribute a PR with your changes?", "@sshleifer what do you suggest using for abstractive summarization? ", "@Hildweig Depends what your goal is?\r\n\r\nFor getting good scores on the cnndm dataset, I'd recommend `sshleifer/distilbart-cnn-12-6` and the finetuning in `examples/seq2seq/finetune.py`.\r\n\r\nFor your own data, start with a cnn checkpoint if you want 3 sentence summaries and an xsum checkpoint if you want 1 sentence summaries.\r\n\r\nFor running experiments to see how summarization finetuning works, you can start from `bart-large`, but these experiments are slow.\r\n\r\nTo make a useful open source contribution, you could try modifying/changing hyperparameters in `./train_distilbart_cnn.sh` to try to get a high score. Bonus points if you use `--logger wandb_shared`.\r\n\r\nAlso I recently update the setup instructions and script. It's now on master [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md). There are some tips in there that try to cover common cases.\r\n\r\nSpeed/Rouge tradeoffs detailed of different bart variants: [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0&range=B2:G12)\r\n[tweet](https://twitter.com/sam_shleifer/status/1276160367853547522?s=20) describing the distilbart project from a high level.\r\n\r\n\r\n", "@Hildweig I tested different input data and it performed (qualitatively considered) well for a random book review: https://bookpage.com/reviews/25272-deb-caletti-girl-unframed-ya#.Xvmf-CgzaUm\r\n\r\nThe result was: \r\n\r\n_sydney has to spend the summer with her mother , the once - famous movie star , lila shore , at her sumptuous mansion in san francisco 's exclusive sea cliff neighborhood. 
sydney has a lot of freedom to explore the city on her own , \r\nwhich is how she meets nicco and begins a relationship that will unexpectedly change all of their lives forever. in the book , sydney 's story appears to suggest that \r\nthe book itself is her testimony about the lead - up to a terrible crime. as a woman in the world , it often means being looked at but not seen by it , she says_\r\n\r\nHowever, using larger texts like a Sherlock Holmes novel did not work well, without any metrics considered.\r\n\r\n@sshleifer \r\nSure, I can create a PR.\r\nAlso thank you for the hint, I am thankful for any advices I can get for my master thesis. \r\n\r\nMay I ask you about your advice on my goal?\r\nI want to build a system, that is capable of creating summaries of book reviews for instagram posts in the german language. I am thinking of using the german bert (or similar) and fine tune it on a dataset I still have to get. Do you have any advice for me, you would like to share?", "However, the generated text does not look like abstractive, rather extractive", "Pr created in [#5355](https://github.com/huggingface/transformers/pull/5355)", "@sshleifer thank you! My goal is to fine-tune it on duc dataset.\r\n@MichaelJanz for me I have to summarize huge documents (17 page or so).", "Hi @sshleifer and @MichaelJanz,\r\n\r\nThis seems to be a problem of python package structure. I am getting a similar error but with the token_classification/run_ner.py file. \r\n\r\nFile \"examples/token-classification/run_ner.py\", line 42, in <module>\r\n from modeling_auto import AutoModelForTokenClassification\r\n File \"transformers/src/transformers/modeling_auto.py\", line 22, in <module>\r\n from .configuration_auto import (\r\nImportError: attempted relative import with no known parent package\r\n\r\nI have not installed transformers library using pip because I want to use the local codes (cloned from transformers library). 
After reading various stackoverflow suggestions (https://stackoverflow.com/questions/16981921/relative-imports-in-python-3 and https://napuzba.com/a/import-error-relative-no-parent), I believe that when I am importing the transformer package locally from my own directory, then it is not able read+load transformer as a package.\r\n\r\nI am using python3.7 \r\n\r\nCan you please suggest how to read transformer as a package from local codes. \r\n\r\nThanks..." ]
1,593
1,595
1,594
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using: Bertabs (https://github.com/huggingface/transformers/tree/master/examples/seq2seq/bertabs) Language I am using the model on: English The problem arises when using: [x ] the official example scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Hi, I did as the Readme says but the following error is thrown, when I want to start the training via: `python run_summarization.py --documents_dir "data/stories" --summaries_output_dir "out" --no_cuda false --batch_size 4 --min_length 50 --max_length 200 --beam_size 5 --alpha 0.95 --block_trigram true `: ` File "run_summarization.py", line 15, in <module> from .utils_summarization import ( ImportError: attempted relative import with no known parent package` I am using Win10, and an Anaconda Env. Steps to reproduce the behavior: 1. Install a new Anaconda Env with torch 2. Do as the Readme says 3. Put the code into a file or start it via console. ## Expected behavior Start Training, or any error message, which I can resolve. ## Environment info Using Windows 10, and an Anaconda Env. Thank you
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5302/comments
https://api.github.com/repos/huggingface/transformers/issues/5302/events
https://github.com/huggingface/transformers/pull/5302
646,085,438
MDExOlB1bGxSZXF1ZXN0NDQwNDQxMTc2
5,302
Transformer-XL: Fixed tokenization of brackets, numbers etc.
{ "login": "RafaelWO", "id": 38643099, "node_id": "MDQ6VXNlcjM4NjQzMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RafaelWO", "html_url": "https://github.com/RafaelWO", "followers_url": "https://api.github.com/users/RafaelWO/followers", "following_url": "https://api.github.com/users/RafaelWO/following{/other_user}", "gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}", "starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions", "organizations_url": "https://api.github.com/users/RafaelWO/orgs", "repos_url": "https://api.github.com/users/RafaelWO/repos", "events_url": "https://api.github.com/users/RafaelWO/events{/privacy}", "received_events_url": "https://api.github.com/users/RafaelWO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Test fail due to the test in `test_tokenization_fast`, but I have not touched that file.\r\nFor me, the problem is in [test_tokenization_fast.py line 45](../tree/master/tests/test_tokenization_fast.py#L45). It works for me if I replace the path `tests/fixtures/...` with `./fixtures/...`\r\n\r\nShould I fix this?", "Hey! I'm interested in this as I have noticed the same phenomenon. Some of those files were moved recently; could you rebase from master to check whether that fixes the pathing issue?", "That's already looking better! If the tests still fail, it might be because the current expected outputs of the tests are incorrect. If your results are better than the expected outputs, that should be clear quickly.", "Is this fine if there are 229 files which have changed? If I do a compare from my repo with the master branch, it's only the 2 as before...", "For me, the reason for failure is still the same as mentioned above: `test_tokenization_fast.py:45: FileNotFoundError`\r\n```\r\n> with open(\"tests/fixtures/sample_text.txt\", encoding=\"utf-8\") as f_data:\r\nE FileNotFoundError: [Errno 2] No such file or directory: 'tests/fixtures/sample_text.txt'\r\n```", "That's weird, I'm seeing\r\n\r\n`AssertionError: Sequences differ: [100,[42 chars]832, 6193, 43, 24, 24, 24, 24, 24, 24, 29289, [4136 chars]1, 3] != [100,[42 chars]832, 24, 24, 29289, 476, 35, 24, 19, 5387, 812[3333 chars], 24]`\r\n\r\nin both CI tests, at `tests/test_tokenization_fast.py:62`", "Hmm strange... 
Maybe it's because I executed the tests in PyCharm.\r\n**Edit**: In PyCharm the root path for the tests was configured to be `.../tests/`, so that was the problem for me earlier.\r\n\r\nOk if I run it in the console I see the same result as you above!", "@TevenLeScao \r\n> If the tests still fail, it might be because the current expected outputs of the tests are incorrect.\r\n\r\nHow can I fix this?", "The issue seems to be that the Rust and Python tokenizers don't agree as the issue is also present in the Rust tokenizers. @n1t0 @Pierrci maybe could you take a look at this?", "I have no idea what changed unfortunately, but yes the rust tokenizers and the python one should have the same output for the tests to pass.", "Does it make sense if the `TransfoXLTokenizerFast` would also use the tokenizer from `sacremoses`?", "Not really. The fast tokenizers are implemented in Rust, not Python, so it's not possible to use sacremoses. Also, the fast tokenizers keep track of the offsets/alignment information, and I guess this is not something that sacremoses provides, does it? I'm really not familiar with it so I have no idea if it would be easy to re-implement otherwise.", "What exactly do you mean with offsets/alignment information? I'm not familiar with Rust nor the fast tokenizers in general. ", "Hi again, is there any update on the issue with the fast tokenizers?\r\n\r\nShould I maybe open a new PR which is up to date with the master branch so that we get rid of the messy commit history and lots of \"changes\"?", "Hi @RafaelWO , sorry for the delay.\r\n\r\n - A PR without the extra commits would definitely help\r\n - In the end we couldn't really decide on a solution: I think the consensus with @n1t0 and @thomwolf was that deleting the Rust TransfoXL tokenizer was the best but it's pretty harsh in terms of BC-breaking. 
I think the best would be to give it a deprecation warning for a bit and remove the Rust = Python tokenizer equality test for TransfoXL in the meantime.", "- I will reopen one\r\n- Sounds good, I will post my changes in a new PR and then we can discuss the deletion of the Rust tokenizer", "Reopened in PR #6322 " ]
1,593
1,596
1,596
CONTRIBUTOR
null
Fixes #5136 As explained in the above issue, this PR fixes the tokenization of the `TransfoXLTokenizer` by using the `sacremoses` library with an extended feature of tokenizing comma-separated and floating point numbers. That way the input text is tokenized the same way as in the WikiText-103 dateset used for pretraining. Open questions: - [ ] Should the method `prepare_for_tokenization` be removed, since it just issues a warning for no whitespace before punctuation? This is solved by the `MosesTokenizer` anyway.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5302/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5302", "html_url": "https://github.com/huggingface/transformers/pull/5302", "diff_url": "https://github.com/huggingface/transformers/pull/5302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5302.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5301/comments
https://api.github.com/repos/huggingface/transformers/issues/5301/events
https://github.com/huggingface/transformers/issues/5301
646,080,877
MDU6SXNzdWU2NDYwODA4Nzc=
5,301
Request for inclusion of PEGASUS for text summarization by Google.
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We are currently working on it as far as I know :-) @sshleifer ", "Duplicate of #4918 , but yes it is WIP!" ]
1,593
1,593
1,593
NONE
null
# 🚀 Feature request Google has recently released PEGASUS, a new model for text summarization: https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html. I think It would be interesting to have it included in Transformers library. The code and checkpoints are here: https://github.com/google-research/pegasus. This model seems to accomplish human-like summarization.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5301/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5300/comments
https://api.github.com/repos/huggingface/transformers/issues/5300/events
https://github.com/huggingface/transformers/issues/5300
646,027,884
MDU6SXNzdWU2NDYwMjc4ODQ=
5,300
T5ForConditionalGeneration fp16 nan loss
{ "login": "xwhan", "id": 10139074, "node_id": "MDQ6VXNlcjEwMTM5MDc0", "avatar_url": "https://avatars.githubusercontent.com/u/10139074?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xwhan", "html_url": "https://github.com/xwhan", "followers_url": "https://api.github.com/users/xwhan/followers", "following_url": "https://api.github.com/users/xwhan/following{/other_user}", "gists_url": "https://api.github.com/users/xwhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xwhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xwhan/subscriptions", "organizations_url": "https://api.github.com/users/xwhan/orgs", "repos_url": "https://api.github.com/users/xwhan/repos", "events_url": "https://api.github.com/users/xwhan/events{/privacy}", "received_events_url": "https://api.github.com/users/xwhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "See #4586." ]
1,593
1,593
1,593
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: (Pdb) net_inputs["attention_mask"] tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0') (Pdb) net_inputs["lm_labels"] tensor([[ 3, 1489, 89, -100, -100, -100], [ 5441, 1511, 13, 3, 10314, 152]], device='cuda:0') (Pdb) net_inputs["lm_labels"] tensor([[ 3, 1489, 89, -100, -100, -100], [ 5441, 1511, 13, 3, 10314, 152]], device='cuda:0') (Pdb) 1. model = T5ForConditionalGeneration.from_pretrained("t5-large") 2. outputs = model(input_ids=net_inputs["input_ids"], attention_mask=net_inputs["attention_mask"], lm_labels=net_inputs["lm_labels"]) 3. loss, lm_logits = outputs[0], outputs[1]; print(loss) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Seems like a masking issue. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: V100 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?):N/A - Using GPU in script?: yes - Using distributed or parallel set-up in script?: parallel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5300/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5299/comments
https://api.github.com/repos/huggingface/transformers/issues/5299/events
https://github.com/huggingface/transformers/issues/5299
646,011,892
MDU6SXNzdWU2NDYwMTE4OTI=
5,299
No documentation for MMBT on official docs
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I also cannot figure out how to use this model.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,599
1,599
CONTRIBUTOR
null
I tried finding MMBT on the official documentation: https://huggingface.co/transformers/ but I could not find any references to it, even though there is an implementation for it in the source code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5299/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5298/comments
https://api.github.com/repos/huggingface/transformers/issues/5298/events
https://github.com/huggingface/transformers/issues/5298
645,996,245
MDU6SXNzdWU2NDU5OTYyNDU=
5,298
The start and end position of BertForQuestionAnswering
{ "login": "hi-weiyuan", "id": 34810978, "node_id": "MDQ6VXNlcjM0ODEwOTc4", "avatar_url": "https://avatars.githubusercontent.com/u/34810978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-weiyuan", "html_url": "https://github.com/hi-weiyuan", "followers_url": "https://api.github.com/users/hi-weiyuan/followers", "following_url": "https://api.github.com/users/hi-weiyuan/following{/other_user}", "gists_url": "https://api.github.com/users/hi-weiyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-weiyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-weiyuan/subscriptions", "organizations_url": "https://api.github.com/users/hi-weiyuan/orgs", "repos_url": "https://api.github.com/users/hi-weiyuan/repos", "events_url": "https://api.github.com/users/hi-weiyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-weiyuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The position is based on the tokens. So this means that if you have a context: \r\n\r\n\"The dog runs in the park.\" and a question \"Who runs in the park?\" and the corresponding tokens (=`tokenizer(\"Who runs in the park?\", \"The dog runs in the park.\",).input_ids`) are \r\n\r\n`[101, 2040, 3216, 1999, 1996, 2380, 1029, 102, 1996, 3899, 3216, 1999, 1996, 2380, 1012, 102]`, then the model will tell you at what start and end position **of the input_ids** the answer to the question will be located. To better understand how to correctly use QA you might want to take a look at the example under this model in the docs: https://huggingface.co/transformers/master/model_doc/bert.html#bertforquestionanswering" ]
1,593
1,593
1,593
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I am confused with the start and end position of Bert for QA model because of wordpiece. My question is: what is the value of position based on? For example, if start_position=10, it means the 10th word of the input or the 10th subword of input after wordpiece? Also, is the position value contains the length of question? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5298/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5297/comments
https://api.github.com/repos/huggingface/transformers/issues/5297/events
https://github.com/huggingface/transformers/issues/5297
645,939,677
MDU6SXNzdWU2NDU5Mzk2Nzc=
5,297
Can we have a way for a tokenizer to transform word level or character level annotations?
{ "login": "brian8128", "id": 10691563, "node_id": "MDQ6VXNlcjEwNjkxNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/10691563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian8128", "html_url": "https://github.com/brian8128", "followers_url": "https://api.github.com/users/brian8128/followers", "following_url": "https://api.github.com/users/brian8128/following{/other_user}", "gists_url": "https://api.github.com/users/brian8128/gists{/gist_id}", "starred_url": "https://api.github.com/users/brian8128/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brian8128/subscriptions", "organizations_url": "https://api.github.com/users/brian8128/orgs", "repos_url": "https://api.github.com/users/brian8128/repos", "events_url": "https://api.github.com/users/brian8128/events{/privacy}", "received_events_url": "https://api.github.com/users/brian8128/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "I just found out about the `return_offsets_mapping` functionality in the `PreTrainedTokenizerFast` tokenizers. I think I can use that functionality to solve my problem. ", "@brian8128 could you share a code example of how you do that?", "Hey Avijit - I didn't test this because it's sort of copied from various places in my codebase but here's the general idea. There may be multiple different char level labels within one token so here I take the most common one using the scipy.stats.mode function. \r\n\r\n```\r\nfrom scipy.stats import mode\r\nfrom transformers import BertTokenizerFast\r\n\r\ntexts = ... # list of strings\r\nchar_level_labels = ... # list of 1-d numpy arrays corresponding in length to texts\r\n\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\nbatch = tokenizer.batch_encode_plus(texts, return_offsets_mapping=True)\r\n\r\noffset_mappings = batch['offset_mapping']\r\n\r\ntoken_level_labels = []\r\nfor offset_mapping, char_level_label in zip(offset_mappings, char_level_labels):\r\n token_level_label = []\r\n for so, eo in offset_mapping:\r\n # Huggingface adds a start and end token that don't correspond to any\r\n # chars, we'll label these tokens with -1\r\n label = mode(char_level_label[so:eo]).mode[0] if eo - so > 0 else -1\r\n token_level_label.append(label)\r\n\r\n token_level_labels.append(np.array(token_level_labels))\r\n```" ]
1,593
1,594
1,593
NONE
null
# 🚀 Feature request If I have a string with some annotations and I want to tokenize it I'd like the tokenizer to be able to transform the annotations as well. For example suppose I have `s = "therein lies the problem."` and I'm interested in the substring `"the problem"`. So I have a string `s` and I know that the substring I'm interested in is at index 13:24. But then I tokenize `s` so that I can put it into a huggingface model and get out `['there', '##in', 'lies', 'the', 'problem', '.']` and it doesn't match up with my annotation anymore. Could we add an annotation as an additional argument to the tokenizer.tokenize function so that I could get something like the following: ``` tokenizer.tokenize("therein lies the problem.", selection=(13,24)) > ['there', '##in', 'lies', 'the', 'problem', '.'], [0, 0, 0, 1, 1, 0] ``` Substrings of interest are almost always going to line up with token boundaries so it doesn't matter too much what happens to a token that is partially in and partially outside of the selected region. Is there a way of doing something like this now?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5296/comments
https://api.github.com/repos/huggingface/transformers/issues/5296/events
https://github.com/huggingface/transformers/pull/5296
645,920,119
MDExOlB1bGxSZXF1ZXN0NDQwMzE0NDkw
5,296
Update outdated TensorFlow -> PyTorch model transfer CLI example
{ "login": "xuhdev", "id": 325476, "node_id": "MDQ6VXNlcjMyNTQ3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/325476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xuhdev", "html_url": "https://github.com/xuhdev", "followers_url": "https://api.github.com/users/xuhdev/followers", "following_url": "https://api.github.com/users/xuhdev/following{/other_user}", "gists_url": "https://api.github.com/users/xuhdev/gists{/gist_id}", "starred_url": "https://api.github.com/users/xuhdev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xuhdev/subscriptions", "organizations_url": "https://api.github.com/users/xuhdev/orgs", "repos_url": "https://api.github.com/users/xuhdev/repos", "events_url": "https://api.github.com/users/xuhdev/events{/privacy}", "received_events_url": "https://api.github.com/users/xuhdev/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=h1) Report\n> Merging [#5296](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cc15bdd9675d1cec9186a8963c1f59be899ee68&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5296/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5296 +/- ##\n==========================================\n- Coverage 79.29% 79.28% -0.01% \n==========================================\n Files 138 138 \n Lines 24280 24280 \n==========================================\n- Hits 19252 19251 -1 \n- Misses 5028 5029 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5296/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.74% <0.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5296/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=footer). Last update [7cc15bd...65cabe0](https://codecov.io/gh/huggingface/transformers/pull/5296?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Could someone review this PR?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,604
1,604
CONTRIBUTOR
null
The example is outdated and reports an error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5296", "html_url": "https://github.com/huggingface/transformers/pull/5296", "diff_url": "https://github.com/huggingface/transformers/pull/5296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5296.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5295/comments
https://api.github.com/repos/huggingface/transformers/issues/5295/events
https://github.com/huggingface/transformers/issues/5295
645,877,261
MDU6SXNzdWU2NDU4NzcyNjE=
5,295
Is summing of attention_mask intended?
{ "login": "shaoyent", "id": 8154586, "node_id": "MDQ6VXNlcjgxNTQ1ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8154586?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaoyent", "html_url": "https://github.com/shaoyent", "followers_url": "https://api.github.com/users/shaoyent/followers", "following_url": "https://api.github.com/users/shaoyent/following{/other_user}", "gists_url": "https://api.github.com/users/shaoyent/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaoyent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaoyent/subscriptions", "organizations_url": "https://api.github.com/users/shaoyent/orgs", "repos_url": "https://api.github.com/users/shaoyent/repos", "events_url": "https://api.github.com/users/shaoyent/events{/privacy}", "received_events_url": "https://api.github.com/users/shaoyent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, it's intended. The attention mask has values of 0 where its attending to the tokens, and has a value of `-10000` for tokens it's not attending. By summing this attention mask, it zeros out the attentions that should not be kept. See [here](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/modeling_utils.py#L228) for the implementation.", "@LysandreJik I found that `attention_mask` is simply computed by `encoded_inputs[\"attention_mask\"] = [1] * len(encoded_inputs[\"input_ids\"])` in [tokenization_utils_base.py](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/tokenization_utils_base.py#L1944). Should `attention_mask` be synchronized with `encoded_inputs[\"input_ids\"]` when a [MASK] appears in the input? For example,\r\n\r\nInput: This is [MASK] pen.\r\n\r\nShould the `attention_mask` be `1 1 0 1` rather than `1 1 1 1`?", "The attention mask is computed according to the padding tokens, not masking tokens. See [here](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/tokenization_utils_base.py#L1932) for a sequence that requires padding." ]
1,593
1,593
1,593
CONTRIBUTOR
null
# ❓ Questions & Help Is summing of `attention_mask` intended? ## Details Documents describe `attention_mask` as a mask to avoid performing attention on padding token indices. However from the [code](https://github.com/huggingface/transformers/blob/2ffef0d0c7a6cfa5a59c4b883849321f66c79d62/src/transformers/modeling_bert.py#L243) in `BertSelfAttention` the attention mask is added to the scores. ``` attention_scores = attention_scores + attention_mask ``` Is this intended? Does this fully mask out the context?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5294/comments
https://api.github.com/repos/huggingface/transformers/issues/5294/events
https://github.com/huggingface/transformers/issues/5294
645,837,143
MDU6SXNzdWU2NDU4MzcxNDM=
5,294
Slow Integration Test for examples/seq2seq/finetune.py
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "ok cool", "I will work on this.\r\n\r\n@sshleifer, is it supposed to be w/o a tokenizer?\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"sshleifer/student_xsum_12_3\")\r\n```\r\n\r\n```\r\nOSError: Model name 'sshleifer/student_xsum_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n```\r\nIt's broken online too (Compute button):\r\nhttps://huggingface.co/sshleifer/student_xsum_12_3?text=My+name+is+Thomas+and+my+main", "### Tokenizer issue\r\n@stas00 No that's just me being lazy, fixed that model, let me know if you need others.\r\n\r\nseparately: Can you write to S3?\r\nI just ran \r\n```bash\r\ncp_bart_tok () {\r\n\texport ss=s3://models.huggingface.co/bert/sshleifer\r\n\taw3 cp $ss/distilbart-xsum-1-1/merges.txt $1\r\n\taw3 cp $ss/distilbart-xsum-1-1/tokenizer_config.json $1\r\n\taw3 cp $ss/distilbart-xsum-1-1/vocab.json $1\r\n}\r\ncp_bart_tok $ss/student_xsum_12_1/\r\n```\r\nso easy for me to copy the bart tokenizer to other places.\r\n\r\n\r\n### Status of this issue\r\nthis issue is moved forward for **translation** with `examples/seq2seq/test_bash_script.py`\r\nIt's run by `.github/self-scheduled.yml`.\r\n\r\n\r\nThere are many possible axes for improvement: \r\n- testing summarization\r\n- may not need to set `do_predict=False` [here]:\r\n(https://github.com/huggingface/transformers/blob/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c/examples/seq2seq/test_bash_script.py#L77). 
I made a PL issue where do_predict was breaking in fp16, but then I turned off fp16 here.\r\n- we could use a larger model like `sshleifer/student_mbart_en_ro_1_1/` and actually learn something (and wait longer).\r\n- we could run add a new github workflow against torch 1.6 (add a new `.github/next_torch_version.yml`)\r\n- understand current [failures](https://github.com/huggingface/transformers/runs/910644181?check_suite_focus=true)\r\n\r\nThanks so much for your help and let me know where/how I can support!\r\n", "> @stas00 No that's just me being lazy, fixed that model, let me know if you need others.\r\n\r\nThe problem is still there - you can quickly test [here](https://huggingface.co/sshleifer/student_xsum_12_3?text=My+name+is+Thomas+and+my+main)\r\n\r\n> separately: Can you write to S3?\r\n\r\nI don't think I can - at least nobody gave me permissions to do so.\r\n", "> ### Status of this issue\r\n> this issue is moved forward for translation with examples/seq2seq/test_bash_script.py\r\n> [...]\r\n> Thanks so much for your help and let me know where/how I can support!\r\n\r\nLet me study it first and I will ask follow up questions once I did so.\r\n\r\nI'm learning this library and while I have spare resources I'd be happy to help fixing problems - feel free to delegate some issue to me, so I won't need to dig through past issues looking for what I could work on.\r\n\r\nI am not sure github issue comments is the most efficient way to collaborate - perhaps you're on skype/email/some IM? My email is [email protected].", "Closing this. Will make new issues that are not started." ]
1,593
1,595
1,595
CONTRIBUTOR
null
Train `sshleifer/student-xsum-12-3` for N batches on xsum data. Ensure that loss goes down. and is below ~2. This test should ONLY run on GPU and takes between 30s and 3 mins. python code to fetch xsum data: ``` import wget import tarfile wget.download('https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz') tarball = tarfile.open('xsum.tar.gz') tarball.extractall() data_dir='xsum' ``` cc @williamFalcon
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5294/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5294/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5293/comments
https://api.github.com/repos/huggingface/transformers/issues/5293/events
https://github.com/huggingface/transformers/issues/5293
645,826,420
MDU6SXNzdWU2NDU4MjY0MjA=
5,293
run_squad.py :: ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
{ "login": "SarangSanjayGujar-lilly", "id": 63612087, "node_id": "MDQ6VXNlcjYzNjEyMDg3", "avatar_url": "https://avatars.githubusercontent.com/u/63612087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SarangSanjayGujar-lilly", "html_url": "https://github.com/SarangSanjayGujar-lilly", "followers_url": "https://api.github.com/users/SarangSanjayGujar-lilly/followers", "following_url": "https://api.github.com/users/SarangSanjayGujar-lilly/following{/other_user}", "gists_url": "https://api.github.com/users/SarangSanjayGujar-lilly/gists{/gist_id}", "starred_url": "https://api.github.com/users/SarangSanjayGujar-lilly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SarangSanjayGujar-lilly/subscriptions", "organizations_url": "https://api.github.com/users/SarangSanjayGujar-lilly/orgs", "repos_url": "https://api.github.com/users/SarangSanjayGujar-lilly/repos", "events_url": "https://api.github.com/users/SarangSanjayGujar-lilly/events{/privacy}", "received_events_url": "https://api.github.com/users/SarangSanjayGujar-lilly/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Hi! Do you mind pasting the command you use to run the script?", "I'm facing the same error while training Electra and MiniLM on Squad:\r\nValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.\r\n\r\nThis error is thrown up after reading of about 2% of the examples.\r\nMy env is the same as above. The command I use is:\r\n\r\npython examples/question-answering/run_squad.py \\\r\n --model_type bert \\\r\n --model_name_or_path microsoft/MiniLM-L12-H384-uncased \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --train_file \"/content/transformers/dev-v1.1.json\" \\\r\n --predict_file \"/content/transformers/dev-v1.1.json\" \\\r\n --per_gpu_train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir \"/content/drive/My Drive/bert/newdir5\"\r\n\r\n", "Thanks, looking into it.", "I works now!", "Yes, it should be fixed in v3.0.0!" ]
1,593
1,593
1,593
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: [*] the official example scripts: (give details below) [ ] my own modified scripts: (give details below) The tasks I am working on is: [*] an official GLUE/SQUaD task: (give the name) [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. while running run_squad.py 2. Training & testing with https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Error File "/usr/lib/python3.6/multiprocessing/pool.py", line 320, in <genexpr> return (item for chunk in result for item in chunk) File "/usr/lib/python3.6/multiprocessing/pool.py", line 735, in next raise value ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. ## Expected behavior To work ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-5.3.0-1017-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5293/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5292/comments
https://api.github.com/repos/huggingface/transformers/issues/5292/events
https://github.com/huggingface/transformers/issues/5292
645,820,391
MDU6SXNzdWU2NDU4MjAzOTE=
5,292
Saving and loading tokenizers with torch.save fails
{ "login": "mittalsuraj18", "id": 5629517, "node_id": "MDQ6VXNlcjU2Mjk1MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5629517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mittalsuraj18", "html_url": "https://github.com/mittalsuraj18", "followers_url": "https://api.github.com/users/mittalsuraj18/followers", "following_url": "https://api.github.com/users/mittalsuraj18/following{/other_user}", "gists_url": "https://api.github.com/users/mittalsuraj18/gists{/gist_id}", "starred_url": "https://api.github.com/users/mittalsuraj18/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mittalsuraj18/subscriptions", "organizations_url": "https://api.github.com/users/mittalsuraj18/orgs", "repos_url": "https://api.github.com/users/mittalsuraj18/repos", "events_url": "https://api.github.com/users/mittalsuraj18/events{/privacy}", "received_events_url": "https://api.github.com/users/mittalsuraj18/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I met save issue before. Based on my experience, it is more related to PyTorch. \r\nSee https://pytorch.org/tutorials/beginner/saving_loading_models.html, you should use state_dict to save and load model.", "But @FacingBugs, this is an issue with the tokenizers and not the models ", "For tokenizer, using Transformer package provided API: \r\n```\r\ntokenizer.save_pretrained(your_output_model_dir)\r\n```", "@FacingBugs actually I have raised this bug because it was causing an issue in another library which uses this package\r\nhttps://github.com/flairNLP/flair/issues/1712\r\nAnd since `torch.save` is mostly used to persist the models and dependencies for pytorch based learning, I believe the fix should be implemented in the transformers library itself rather than other dependent libraries which may add on top of transformers to provide their custom pytorch models in which case `torch.save` would mostly be used to save the models.", "I think according to the PyTorch documentation, ```torch.save()``` is not recommended. \r\n\r\nI cite this from the documentation:\r\n\"However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors.\"\r\n\r\n\r\nFor my personal experience with the Transformer package, the ```xx.save_pretrained()``` works for most of the cases (models, tokenizers, configs). For the tokenizer, I think the package actually saved several other files besides the vocab file. I think using the save_pretrained method should be the best practice. \r\n\r\nHope this can help you.", "Wouldn’t this make persisting other models on top of transformers difficult because now we have to save and track multiple files instead of a single file?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,599
1,599
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Albert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load albert base tokenizer using `AutoTokenizer.from_pretrained` 2. Save it to a file using `torch.save` 3. Delete `~/.cache/torch/transformers` directory 4. Now try to load from the file using `torch.load` 5. Loading fails as the cached file does not exist ``` import transformers import torch token = transformers.AutoTokenizer.from_pretrained("albert-base-v2") torch.save({"token":token}, "./token.pt") ``` Delete `~/.cache/torch/` directory Then Run ``` import torch torch.load("./token.pt") ``` ## Expected behavior Tokenizer should load successfully. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104-microsoft-standard-x86_64-with-debian-bullseye-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.3.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5292/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5291/comments
https://api.github.com/repos/huggingface/transformers/issues/5291/events
https://github.com/huggingface/transformers/pull/5291
645,809,426
MDExOlB1bGxSZXF1ZXN0NDQwMjIzMDc3
5,291
CircleCI stores cleaner output at test_outputs.txt
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=h1) Report\n> Merging [#5291](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **decrease** coverage by `1.39%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5291/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5291 +/- ##\n==========================================\n- Coverage 79.29% 77.90% -1.40% \n==========================================\n Files 138 138 \n Lines 24282 24282 \n==========================================\n- Hits 19254 18916 -338 \n- Misses 5028 5366 +338 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (-0.15%)` | :arrow_down: |\n| 
[src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=footer). Last update [79a82cc...73e5da8](https://codecov.io/gh/huggingface/transformers/pull/5291?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I wonder how it looks when it fails. Is it the same output you would usually see? If that's the case, I'm all for that change!", "Yes the tracebacks are completely unchanged. You just only see a `.` if the tests passes. And you don't have to scroll through `logger.info` (if you don't want to, it's still in the default circleci page).", "Merging. Tag me if any issues." ]
1,593
1,593
1,593
CONTRIBUTOR
null
Changes: - Don't run pytest with `-v` or `--cov`. I left run_tests_torch_and_tf using `--cov` but would love to delete that. I have never used circleci to determine code coverage. - the run* jobs create artifact files called test_output.txt that are easier to read than scrolling in the circleci gui - self scheduled runner also attempts to make a test_output.txt Before: ![image](https://user-images.githubusercontent.com/6045025/85790787-d33a8e80-b6fe-11ea-9e10-d4c48dfe713f.png) After: [artifact](https://53278-155220641-gh.circle-artifacts.com/0/test_output.txt) is very manageable and [ui](https://app.circleci.com/pipelines/github/huggingface/transformers/8144/workflows/816b1f60-bda9-4456-9f51-92d6cff7b266/jobs/53278/steps) is less noisy.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5291", "html_url": "https://github.com/huggingface/transformers/pull/5291", "diff_url": "https://github.com/huggingface/transformers/pull/5291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5291.patch", "merged_at": 1593194372000 }
https://api.github.com/repos/huggingface/transformers/issues/5290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5290/comments
https://api.github.com/repos/huggingface/transformers/issues/5290/events
https://github.com/huggingface/transformers/issues/5290
645,808,170
MDU6SXNzdWU2NDU4MDgxNzA=
5,290
Model with fastest inference?
{ "login": "tqdo", "id": 53948469, "node_id": "MDQ6VXNlcjUzOTQ4NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/53948469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tqdo", "html_url": "https://github.com/tqdo", "followers_url": "https://api.github.com/users/tqdo/followers", "following_url": "https://api.github.com/users/tqdo/following{/other_user}", "gists_url": "https://api.github.com/users/tqdo/gists{/gist_id}", "starred_url": "https://api.github.com/users/tqdo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tqdo/subscriptions", "organizations_url": "https://api.github.com/users/tqdo/orgs", "repos_url": "https://api.github.com/users/tqdo/repos", "events_url": "https://api.github.com/users/tqdo/events{/privacy}", "received_events_url": "https://api.github.com/users/tqdo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@tqdo There is extensive info and a spreadsheet [here](https://huggingface.co/transformers/benchmarks.html?highlight=benchmark).", "Thanks a lot" ]
1,593
1,593
1,593
NONE
null
Just curious whether there is any benchmark for inference speed of models in the transformers library? I am interested in the question-answer task but I think a benchmark on any tasks that BERT can do should be good. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5289/comments
https://api.github.com/repos/huggingface/transformers/issues/5289/events
https://github.com/huggingface/transformers/pull/5289
645,801,057
MDExOlB1bGxSZXF1ZXN0NDQwMjE2MTc1
5,289
[pipelines] Change summarization default to distilbart-cnn-12-6
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." } ]
closed
false
null
[]
[ "CI Failure is spurious.", "Could you please tell me in **distilbart-cnn-12-6** what does **12 & 6** stands for?", "12 Encoder layers and 6 decoder layers I would suggest", "Thank you Sir, @patrickvonplaten " ]
1,593
1,593
1,593
CONTRIBUTOR
null
- Also adds an integration test that runs on GPU if available. - Other pipelines could do the same if that would be helpful.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5289", "html_url": "https://github.com/huggingface/transformers/pull/5289", "diff_url": "https://github.com/huggingface/transformers/pull/5289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5289.patch", "merged_at": 1593186204000 }
https://api.github.com/repos/huggingface/transformers/issues/5288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5288/comments
https://api.github.com/repos/huggingface/transformers/issues/5288/events
https://github.com/huggingface/transformers/issues/5288
645,800,009
MDU6SXNzdWU2NDU4MDAwMDk=
5,288
Is there a Longformer For Sequence Classification?
{ "login": "Weilin37", "id": 5770543, "node_id": "MDQ6VXNlcjU3NzA1NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/5770543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Weilin37", "html_url": "https://github.com/Weilin37", "followers_url": "https://api.github.com/users/Weilin37/followers", "following_url": "https://api.github.com/users/Weilin37/following{/other_user}", "gists_url": "https://api.github.com/users/Weilin37/gists{/gist_id}", "starred_url": "https://api.github.com/users/Weilin37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Weilin37/subscriptions", "organizations_url": "https://api.github.com/users/Weilin37/orgs", "repos_url": "https://api.github.com/users/Weilin37/repos", "events_url": "https://api.github.com/users/Weilin37/events{/privacy}", "received_events_url": "https://api.github.com/users/Weilin37/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "It is [there](https://github.com/huggingface/transformers/blob/24f46ea3f3e5006ca38735306753a846a0823174/src/transformers/modeling_longformer.py#L796). \r\n\r\nYou may need to use a source install, I'm not sure it was already there in the last release.", "Thanks. It was in the release but it just wasnt in the documentation." ]
1,593
1,595
1,595
NONE
null
I noticed that Longformer does not have a "LongformerForSequenceClassification". Is there a reason for this and is this something that would be added in the near future?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5287/comments
https://api.github.com/repos/huggingface/transformers/issues/5287/events
https://github.com/huggingface/transformers/pull/5287
645,791,903
MDExOlB1bGxSZXF1ZXN0NDQwMjA4NzI3
5,287
[tokenizers] Several small improvements and bug fixes
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=h1) Report\n> Merging [#5287](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `97.14%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5287/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5287 +/- ##\n=======================================\n Coverage 79.08% 79.08% \n=======================================\n Files 138 138 \n Lines 24078 24093 +15 \n=======================================\n+ Hits 19041 19054 +13 \n- Misses 5037 5039 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.15% <94.44%> (-0.01%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <100.00%> (-0.09%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5287/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.16% <0.00%> (-0.32%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=footer). Last update [24f46ea...209dcc7](https://codecov.io/gh/huggingface/transformers/pull/5287?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
Various improvements for tokenizers: - Avoid recursion loop for special tokens id look-up in Fast tokenizers - Fix #5232 by removing the unsupported method `convert_tokens_to_string` for Fast tokenizers - Fix #5256 by aligning the behavior of the slow tokenizer on the behavior of the fast tokenizer for special tokens inside the input. A little bit of background on the modifications in Roberta tokenizer: We now align the behavior of the byte-level BPE tokenizer to the Fast version which is the most consistent with the way the original tokenizer behaved: all the special tokens are assumed to not have a prefix space so the user can control whether he wants to have a space or not in the string. We do an exception for the mask token in Roberta which is assumed to represent a word and thus has a prefix space by default (can be overided at initialization). This is necessary to be able to use Roberta in filled-mask completion easily. This is already built-in for the Fast tokenizer. Here I update the slow tokenizer to have this behavior using the newly introduced `AddedToken` which lets you control the space behaviors of the special tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5287", "html_url": "https://github.com/huggingface/transformers/pull/5287", "diff_url": "https://github.com/huggingface/transformers/pull/5287.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5287.patch", "merged_at": 1593116235000 }
https://api.github.com/repos/huggingface/transformers/issues/5286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5286/comments
https://api.github.com/repos/huggingface/transformers/issues/5286/events
https://github.com/huggingface/transformers/issues/5286
645,775,923
MDU6SXNzdWU2NDU3NzU5MjM=
5,286
save_pretrained on master results in tokenizers that cannot be loaded in v2.11
{ "login": "vladislavkoz", "id": 40685761, "node_id": "MDQ6VXNlcjQwNjg1NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/40685761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vladislavkoz", "html_url": "https://github.com/vladislavkoz", "followers_url": "https://api.github.com/users/vladislavkoz/followers", "following_url": "https://api.github.com/users/vladislavkoz/following{/other_user}", "gists_url": "https://api.github.com/users/vladislavkoz/gists{/gist_id}", "starred_url": "https://api.github.com/users/vladislavkoz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vladislavkoz/subscriptions", "organizations_url": "https://api.github.com/users/vladislavkoz/orgs", "repos_url": "https://api.github.com/users/vladislavkoz/repos", "events_url": "https://api.github.com/users/vladislavkoz/events{/privacy}", "received_events_url": "https://api.github.com/users/vladislavkoz/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Would it be possible to run \r\n```\r\npip install transformers --upgrade\r\n``` \r\nand try again? We have fixed a lot of bugs since 2.8.0\r\n\r\nPasted tracebacks are much easier to read than screenshots.\r\n", "I belive that i was runing it. Let me try one more time.\n\nчт, 25 июня 2020 г., 21:50 Sam Shleifer <[email protected]>:\n\n> Would it be possible to run\n>\n> pip install transformers --upgrade\n>\n> and try again? We have fixed a lot of bugs since 2.8.0\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/5286#issuecomment-649756239>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJWNBQJVBSPKL52ZGPRPJ4LRYOL6XANCNFSM4OIUNXTQ>\n> .\n>\n", "Just checked it twice. Looks like I've run it in another conda env. Here is an another error message(with transformers==2.11.0). \r\n![image](https://user-images.githubusercontent.com/40685761/85783301-dc901f00-b72f-11ea-98ae-cbbdbf37695f.png)\r\n", "Would you like me to create another issue?", "I can reproduce now, thanks. Will fix.", "Issue is that code on master saves `special_tokens_map.json` as \r\n```\r\n{\"bos_token\": \"<s>\", \"eos_token\": \"</s>\", \"unk_token\": \"<unk>\", \"sep_token\": \"</s>\", \"pad_token\": \"<pad>\", \"cls_token\": \"<s>\", \"mask_token\": {\"content\": \"<mask>\", \"single_word\": false, \"lstrip\": true, \"rstrip\": false, \"normalized\": true}}\r\n```\r\n\r\nand `v2.11` cannot load this format (where `mask_token` is a dict).\r\n\r\n\r\nI deleted `special_tokens_mask.json`, which seems to fix things. (the original `facebook/bart-large-cnn/` doesn't have a `special_tokens_mask.json`).\r\n\r\ncc @thomwolf", "I'm able to create tokenizer only for 'distilbart-xsum-12-1' and 'distilbart-xsum-9-6' (I still see 'special token mask_token... error for all other distilbart tokenizers')\r\nThe model can be uploaded only with these tokenizers. 
Then on the summarization step, I'm getting the following error:\r\n![image](https://user-images.githubusercontent.com/40685761/85829662-cddb5380-b793-11ea-9971-09f58fe517b3.png)\r\nReproducible with both PyTorch versions: 1.5.1 and https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl", "Could I see the command you ran + more traceback/like what the ids were?\r\nOr could you try to reproduce the issue in google colab?\r\n", "1. **When I'm trying to create a tokenizer with the following command:**\r\ntokenizer = AutoTokenizer.from_pretrained(\"sshleifer/distilbart-cnn-12-6\") it fails with:\r\n\r\n\"special token {} has to be either str or AddedTokenFast but got: {}\".format(key, type(value))\r\nTypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'>\r\n\r\n-----------------------------------------\r\n\r\n\r\n2. **And here is the code snippet for another error message:**\r\ntokenizer = AutoTokenizer.from_pretrained(\"sshleifer/distilbart-xsum-9-6\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"sshleifer/distilbart-xsum-9-6\")\r\nself.summarizer = pipeline(\"summarization\", model=model, tokenizer=tokenizer)\r\nself.summarizer(text) # It fails here. 
\r\n\r\nThe \"text\" variable contains the following(It was working with the simple text part from Wikipedia but fails with the following one): \r\n\r\nJune 29, 2020 | Primary Care Collaborative\r\nJuly 22, 2020 | National Hispanic Medical Association\r\nJuly 29, 2020 | Business Health Coalition\r\nJune 23, 2020 | The Hill\r\nJune 25, 2020\r\nJune 24, 2020 | Primary Care Collaborative\r\nNews Room\r\nTopic\r\nJune 25, 2020\r\nPrimary care practices are projected to lose more than $65,000 in revenue per full-time physician in 2020, following drastic declines in office visits and fees for services from March to May during the COVID-19 pandemic, according to a...\r\nJune 24, 2020 | Primary Care Collaborative\r\nIn the wake of police brutality and pervasive racial injustice, which has spurred numerous, ongoing demonstrations across the country, the Primary Care Collaborative (PCC) reaffirms its commitment to racial equality.\r\nPCC underscores this...\r\nJune 24, 2020 | Primary Care Collaborative\r\nOn June 18, PCC joined many other leading organizations in the primary care community in an hour-long chat on Twitter about the current and future state of primary care during the coronavirus pandemic.\r\nIf you missed the conversation, you...\r\nJune 23, 2020 | The Hill\r\nAnthony Fauci, the nation's top infectious disease expert, said Tuesday that he thinks institutional racism has played a role in the disproportionate impact the coronavirus outbreak has had on the Black community in the U.S.\r\n\"...\r\nJune 20, 2020\r\nWASHINGTON  —  Even as hospitals and physicians’ offices nationwide struggle to stay afloat amid the downturn caused by coronavirus, a small group of clinics is thriving, sustained by a model of care that many experts hope could reshape...\r\nJune 18, 2020 | Primary Care Collaborative\r\nCheck back weekly for the latest survey results and updates.\r\nFor last week's data, see Week 13 Results.\r\nWho replied to the survey in Week 14?\r\nThe Larry A. 
Green Center, the Primary Care Collaborative and 3rd Conversation are partnering...\r\nJune 18, 2020 | PCPCC Press Release\r\nWASHINGTON (June 18, 2020) – The Larry A. Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, today released new data showing that more than 80 percent of primary care clinicians say professional...\r\nJune 12, 2020 | The Commonwealth Fund\r\nOn this episode of The Dose podcast, health policy expert Farzad Mostashari, M.D., who advises and supports hundreds of primary care practices across the country, explains what it will take to ensure doctors can continue caring for...\r\nJune 12, 2020 | Primary Care Collaborative\r\nSix former leaders of the Centers for Medicare and Medicaid Services sent a joint letter June 10 to congressional leaders about the role of payment and regulatory flexibility in responding to the COVID-19 pandemic and addressing serious...\r\nJune 12, 2020 | PR Newswire\r\nSAN FRANCISCO, June 12, 2020 -- Innovaccer, Inc., a leading healthcare technology company [and a PCC Executive Member] released its research-based report, titled \"What COVID-19 Means to American Healthcare: Trends, Impacts, Predictions,...\r\nJune 10, 2020 | Primary Care Collaborative\r\nCheck back weekly for the latest survey results and updates.\r\nFor last week's data, see Week 12 Results.\r\nWho replied to the survey in Week 13?\r\nA primary care clinician survey (weekly) and a patient survey (generally every other week) are...\r\nJune 10, 2020 | PCPCC Press Release\r\nWASHINGTON (June 10, 2020) – The Larry A. Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, today released new data showing that a staggering 86 percent of Americans believe racism is impacting...\r\nJune 4, 2020 | PCPCC Press Release\r\nWASHINGTON (June 4, 2020) – New survey data released today by the Larry A. 
Green Center, in collaboration with the Primary Care Collaborative (PCC) and 3rd Conversation, shows that over 70% of primary care patients are comfortable using...\r\nJune 3, 2020 | Primary Care Collaborative\r\nCheck back weekly for the latest survey results and updates.\r\nFor last week's data, see Week 11 Results.\r\nWho replied to the survey in Week 12?\r\nA primary care clinician survey (weekly), and a patient survey (generally every other week) are...\r\nJune 1, 2020 | The Hill\r\nThe COVID-19 pandemic has unmasked many weaknesses in our public health and health care systems. But the outbreak also has accelerated, within weeks, useful health care innovations that would have normally taken years to develop. A strong...\r\nJune 1, 2020\r\nThe week of June 1 is a time of national advocacy for primary care. The PCC and many other organizations are part of this campaign, called #saveprimarycare. We are reaching out to Congress and the administration to call for dedicated...\r\nMay 27, 2020 | Primary Care Collaborative\r\nCheck back weekly for the latest survey results and updates.\r\nFor last week's data, see Week 10 Results.\r\nWho replied to the surveys?\r\nThe Larry A. Green Center is now fielding two separate surveys: one to primary care clinicians, and a...\r\nMay 27, 2020\r\nWASHINGTON (May 27, 2020) – In new data released today by the Larry A. 
Green Center, in collaboration with 3rd Conversation and the Primary Care Collaborative (PCC), Americans report feeling “panicked, upset, or heartbroken” at the...\r\nMay 21, 2020\r\nWASHINGTON, May 21, 2020—In a new survey of primary care clinicians and their response to the COVID-19 pandemic, conducted May 15-18, more than half (55%) fear they are unprepared for the next wave of the pandemic due to high stress among...\r\nMay 21, 2020 | Primary Care Collaborative\r\nCheck back weekly for the latest survey results and updates.\r\nFor last week's data, see Week 9 Results.\r\nWho replied to the survey in Week 10?\r\nThe week 10 sample was much smaller (736) than last week’s sample and of relatively different...\r\nPages\r\n\r\n ", "Any updates? ", "Looks like I've found an issue with:\r\n\"special token {} has to be either str or AddedTokenFast but got: {}\".format(key, type(value))\r\nTypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'>\r\n\r\nThe issue is fixed. The problem was in my local cache so now it works. But it still fails for the summarization using the text above." ]
1,593
1,595
1,593
NONE
null
# 🐛 Bug ## Information Model I am using sshleifer/distilbart- * The problem arises when using: tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3") -it fails here ## To reproduce Steps to reproduce the behavior: tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3") -it fails here ![image](https://user-images.githubusercontent.com/40685761/85780041-de0c1800-b72c-11ea-840e-ed4402da3a01.png) ## Environment info - `transformers` version:2.8.0 - Platform: - Python version: 3.6 - PyTorch version (GPU?): https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp37-cp37m-linux_x86_64.whl - Tensorflow version (GPU?): - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5286/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/5286/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5285/comments
https://api.github.com/repos/huggingface/transformers/issues/5285/events
https://github.com/huggingface/transformers/issues/5285
645,765,840
MDU6SXNzdWU2NDU3NjU4NDA=
5,285
Roberta's Positional Embedding Offset
{ "login": "h324yang", "id": 6326212, "node_id": "MDQ6VXNlcjYzMjYyMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6326212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h324yang", "html_url": "https://github.com/h324yang", "followers_url": "https://api.github.com/users/h324yang/followers", "following_url": "https://api.github.com/users/h324yang/following{/other_user}", "gists_url": "https://api.github.com/users/h324yang/gists{/gist_id}", "starred_url": "https://api.github.com/users/h324yang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h324yang/subscriptions", "organizations_url": "https://api.github.com/users/h324yang/orgs", "repos_url": "https://api.github.com/users/h324yang/repos", "events_url": "https://api.github.com/users/h324yang/events{/privacy}", "received_events_url": "https://api.github.com/users/h324yang/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "That's certainly possible. As you can see from my comment, and PR #5188 , I don't fully understand the motivation for the offset. It is very tricky.", "I figured out why. See here https://github.com/pytorch/fairseq/issues/1177\r\nSo basically the purpose is to make positional embedding = 0 on padding positions (positions where token is padding token), using the `padding_idx` parameter in torch.nn.Embedding. \r\n\r\nI think we can simply use masked_fill() to make positional embedding = 0 on padding positions, so the code is easier to understand (no need for the offset).", "Exactly! \r\nWould love to do that, but the migration of the existing bart state dicts is non trivial, since they already store the extra position embedding. Even if we tracked down all bart models with `config.static_position_embeddings=False` and resized their positional embeddings, we would break code that is not up to date w master (lots of code).\r\n\r\nSo I think we must settle for documenting what is going on better in `LearnedPositionalEmbedding` and accept the unfortunate reality that we are stuck with the offset forever (or until we have some futuristic model hub tooling to version state dicts).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,604
1,604
NONE
null
https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L754 https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L763 So this offset is added because the function `create_position_ids_from_input_ids` shifts the position ids by padding_idx + 1. However, I wonder if other models should also include this? https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_roberta.py#L54 For instance, when I am using `Longformer`, it looks like the offset is not added to `Roberta`, so I need to add such a offset to config.max_position_embeddings
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5284/comments
https://api.github.com/repos/huggingface/transformers/issues/5284/events
https://github.com/huggingface/transformers/issues/5284
645,753,135
MDU6SXNzdWU2NDU3NTMxMzU=
5,284
Tokenizer batch_encode_plus unexpected behavior
{ "login": "vrdn-23", "id": 16606656, "node_id": "MDQ6VXNlcjE2NjA2NjU2", "avatar_url": "https://avatars.githubusercontent.com/u/16606656?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrdn-23", "html_url": "https://github.com/vrdn-23", "followers_url": "https://api.github.com/users/vrdn-23/followers", "following_url": "https://api.github.com/users/vrdn-23/following{/other_user}", "gists_url": "https://api.github.com/users/vrdn-23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrdn-23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrdn-23/subscriptions", "organizations_url": "https://api.github.com/users/vrdn-23/orgs", "repos_url": "https://api.github.com/users/vrdn-23/repos", "events_url": "https://api.github.com/users/vrdn-23/events{/privacy}", "received_events_url": "https://api.github.com/users/vrdn-23/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Yes, this was fixed on master today with https://github.com/huggingface/transformers/pull/5252\r\n" ]
1,593
1,593
1,593
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): bert-base-multilingual-cased Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) A script to tokenize sentences into inputs for the model The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Standard text that is used for intent and domain classification ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased") tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=True)['input_ids'] >>>tensor([[ 101, 61694, 10133, 15127, 11324, 10124, 14268, 102]]) tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=False)['input_ids'] >>>tensor([[61694, 10133, 15127, 11324, 10124, 14268, 0, 0]]) tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=True)['input_ids'] >>>tensor([[ 0, 33600, 31, 759, 9351, 83, 3362, 2]]) tokenizer.batch_encode_plus(["hello my name is Sam"], return_tensors="pt", pad_to_max_length=True, add_special_tokens=False)['input_ids'] >>>tensor([[33600, 31, 759, 9351, 83, 3362, 1, 1]]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior When running batch_encode_plus for a single example without adding special tokens and pad_to_max_length set to True, I would not expect any pad tokens. I did not see any mention in the documentation as to why this behavior is the expected norm. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: MacOS - Python version: 3.7.3 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5283/comments
https://api.github.com/repos/huggingface/transformers/issues/5283/events
https://github.com/huggingface/transformers/pull/5283
645,752,587
MDExOlB1bGxSZXF1ZXN0NDQwMTc2MTAy
5,283
Gpt2 model card
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=h1) Report\n> Merging [#5283](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5283/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5283 +/- ##\n==========================================\n- Coverage 79.11% 79.09% -0.02% \n==========================================\n Files 138 138 \n Lines 24080 24080 \n==========================================\n- Hits 19050 19046 -4 \n- Misses 5030 5034 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=footer). Last update [0e1fce3...a1c3ed8](https://codecov.io/gh/huggingface/transformers/pull/5283?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "(@julien-c approved offline)" ]
1,593
1,593
1,593
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5283", "html_url": "https://github.com/huggingface/transformers/pull/5283", "diff_url": "https://github.com/huggingface/transformers/pull/5283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5283.patch", "merged_at": 1593173312000 }
https://api.github.com/repos/huggingface/transformers/issues/5282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5282/comments
https://api.github.com/repos/huggingface/transformers/issues/5282/events
https://github.com/huggingface/transformers/issues/5282
645,669,414
MDU6SXNzdWU2NDU2Njk0MTQ=
5,282
Bart: Instatiate lm_head once without wasting memory
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "this will happen as part of TPU issue.", "Can you link the issue? Is there an open PR for this already? ", "This one: https://github.com/huggingface/transformers/pull/5960 , but it's broken afaict.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,607
1,607
CONTRIBUTOR
null
Marking this down since I agreed to do it https://github.com/huggingface/transformers/pull/4803#discussion_r443381438 cc @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5282/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5282/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5281/comments
https://api.github.com/repos/huggingface/transformers/issues/5281/events
https://github.com/huggingface/transformers/issues/5281
645,669,237
MDU6SXNzdWU2NDU2NjkyMzc=
5,281
Segmentation fault when trying to load models
{ "login": "michaelcapizzi", "id": 8990766, "node_id": "MDQ6VXNlcjg5OTA3NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/8990766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelcapizzi", "html_url": "https://github.com/michaelcapizzi", "followers_url": "https://api.github.com/users/michaelcapizzi/followers", "following_url": "https://api.github.com/users/michaelcapizzi/following{/other_user}", "gists_url": "https://api.github.com/users/michaelcapizzi/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelcapizzi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelcapizzi/subscriptions", "organizations_url": "https://api.github.com/users/michaelcapizzi/orgs", "repos_url": "https://api.github.com/users/michaelcapizzi/repos", "events_url": "https://api.github.com/users/michaelcapizzi/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelcapizzi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Bumping to `torch==1.5.1` fixes this issue. But it's still unclear why.", "I have also met the same issue and upgrading to torch1.5.1 also solves my problem.", "Possibly related to https://github.com/huggingface/transformers/issues/4857", "**Downgrade to sentencepiece==0.1.91 solve it.** \r\nI am using PyTorch 1.2.0 + transformers3.0.0", "> **Downgrade to sentencepiece==0.1.91 solve it.**\r\n> I am using PyTorch 1.2.0 + transformers3.0.0\r\n\r\nAlso PyTorch 1.4.0 + transformers 3.0.2", "Closing this as solved by #5418. Feel free to re-open if you still face an issue.", "For me either adding `sentencepiece==0.1.91 + torch==1.3.1 + transformers==2.4.1`\r\nor `torch==1.5.1 + transformers==2.4.1` worked.", "I come across the same problem too. \r\nMy solution is just to import torch before import the transformers", "> I come across the same problem too. My solution is just to import torch before import the transformers\r\n\r\nI followed your solution and it worked🤣. Before doing in this way, I downloaded almost every version of sentencepiece from 0.1.91 to 0.1.97. Although I do not know why, but it's something to happy about." ]
1,593
1,676
1,596
NONE
null
We are using `Azure ML` pipelines to train our `transformers` models. We have had it working for a few weeks, and then recently (just noticed it a few days ago), when trying to initialize a model, we are getting `Segmentation fault`. I tried just loading the models locally this morning and have the same issues. See snippet below. ``` config = config_class.from_pretrained(model_name, num_labels=10) tokenizer = tokenizer_class.from_pretrained(model_name, do_lower_case=False) model = model_class.from_pretrained("distilroberta-base", from_tf=False, config=config) ``` I also tried to download the `*_model.bin` and pass a *local path* instead of the model *name* and also got a `Segmentation fault`. I also tried to use `bert-base-uncased` instead of `distilroberta-base` and had the same issue. I am running on Ubuntu, with the following package versions: ``` torch==1.3.0 tokenizers=0.0.11 transformers==2.4.1 ``` **UPDATE**: I hacked some example scripts and had success, so I *think* the issue is that our code uses... ``` "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer), "mroberta": (RobertaConfig, RobertaForMultiLabelTokenClassification, RobertaTokenizer), # our custom multilabel class ``` instead of what the example scripts use... ``` AutoConfig, AutoModelForTokenClassification, AutoTokenizer, ``` Was there a breaking change to model files recently that would mean that our use of the "non-auto" classes are no longer usable? **UPDATE 2**: Our original code does *not* cause a `Segmentation fault` on Windows.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5281/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5281/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5280/comments
https://api.github.com/repos/huggingface/transformers/issues/5280/events
https://github.com/huggingface/transformers/pull/5280
645,638,800
MDExOlB1bGxSZXF1ZXN0NDQwMDgyMjUw
5,280
Remove links for all docs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=h1) Report\n> Merging [#5280](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5280/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5280 +/- ##\n==========================================\n- Coverage 79.11% 79.08% -0.03% \n==========================================\n Files 138 138 \n Lines 24080 24080 \n==========================================\n- Hits 19050 19043 -7 \n- Misses 5030 5037 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5280/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = 
not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=footer). Last update [0e1fce3...49327ba](https://codecov.io/gh/huggingface/transformers/pull/5280?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
COLLABORATOR
null
Now that someone has done an awesome version selector for the docs, there's no need to list all versions in the README.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5280/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 2, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5280", "html_url": "https://github.com/huggingface/transformers/pull/5280", "diff_url": "https://github.com/huggingface/transformers/pull/5280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5280.patch", "merged_at": 1593099906000 }
https://api.github.com/repos/huggingface/transformers/issues/5279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5279/comments
https://api.github.com/repos/huggingface/transformers/issues/5279/events
https://github.com/huggingface/transformers/pull/5279
645,638,461
MDExOlB1bGxSZXF1ZXN0NDQwMDgxOTUx
5,279
Add DPR model
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=h1) Report\n> Merging [#5279](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7c41057d5090f5e665f2404878369ecb13939def&el=desc) will **decrease** coverage by `0.80%`.\n> The diff coverage is `37.29%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5279/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5279 +/- ##\n==========================================\n- Coverage 78.34% 77.54% -0.81% \n==========================================\n Files 138 141 +3 \n Lines 23841 24085 +244 \n==========================================\n- Hits 18679 18676 -3 \n- Misses 5162 5409 +247 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `28.29% <28.29%> (ø)` | |\n| [src/transformers/configuration\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `62.50% <62.50%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.23% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `100.00% <100.00%> (ø)` | |\n| 
[src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.12% <0.00%> (-0.89%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5279/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=footer). Last update [7c41057...7a90958](https://codecov.io/gh/huggingface/transformers/pull/5279?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks very cool! 
\r\nI would have three general things:\r\n\r\n1) For all parameter names I would try to stick to the ones we already have `config` instead of `cfg`, `pad_token_id` instead of `pad_id`. For me `Bert` is always the gold standard and I would try to name everything as it is named in Bert. All these `hidden_states`, `output_embeddings`, ...\r\n\r\n2) I don't like single letter or very short variable names, it makes it very hard to understand it and impossible sometimes to run select and replace commands in such files. I do not mind having long parameter names at all. But also not sure what other think here @LysandreJik \r\n\r\n3) I would always favor composition over inheritance (there was only one case which I would have changed) \r\n\r\n4) IMO, we should not introduce new class methods and in general new design choices that the user doesn't know from other models. The class method `init_encoder` is not needed here IMO. Also, all these `make it super easy for the user ` methods are not always good if it makes the model less easy to understand. Would always try to keep models as \"thin\" as possible and not add any \"magic\" methods if not needed", "What do you think of having a special tokenizer for the `DPRReader` ? I find the current way to tokenize the inputs a bit weird.\r\n\r\nI can have a custom `DPRReaderTokenizer` with a new `__call__` method like\r\n```python\r\ndef __call__(self, question: str, titles: List[str], texts: List[str], ...):\r\n```", "Ok I think this one is ready to merge. Could you do a final pass to make sure everything is ok @LysandreJik @thomwolf ?\r\nI'll do another PR about the tokenization stuff.", "I did some changes @LysandreJik :\r\n- have 1:1 correspondances betwen model + tokenizers \r\n- change `base_model_prefix` to be the attribute of the model the classes are wrapping\r\n- remove the wrong `model_input_names `\r\n\r\nNot sure why the CI doesn't pass. 
It seems related to `test_tokenization_bert_japanese ` though :/", "I changed the tokenizers config names to match the pretrained model names.\r\nAbout the `.generate` method that could be in the tokenizer I totally agree. But as it is linked to the way I'm going to change the reader's tokenizer, I will do the change at the same time in the next PR if it's good for you.\r\n\r\nIs there anything else that needs to be improved ?", "Feel free to do another pass on the PR @thomwolf @LysandreJik to make sure that all the model+tokenizers aspect are all good now :)\r\nThere is still room for some improvements but I keep them for the next PR:\r\n- have a custom __call__ for the tokenizer of the reader\r\n- move the deocing stuff of `.generate` to the tokenizer or the reader", "Thanks for your comments @LysandreJik :)\r\nIf there is a 3.0.1 that's going to be shipped I'd rather have everything in this PR then as there will be big breaking changes", "Ok @LysandreJik I added the tests for the models and the tokenizers :)\r\nI couldn't use all the tests of `ModelTesterMixin` as some of them don't apply here, but I used those that are relevant.", "I merged your PR and updated the docs @LysandreJik \r\nThanks for your help ;)", "Very cool! Let's merge, thanks for iteration @lhoestq :)", "Is there a short working example on the full end-to-end retrieval to reader pipeline?", "If I may add to @weilin37 comment, fine tuning included" ]
1,593
1,600
1,594
MEMBER
null
# Dense Passage Retrieval ## Intro The Dense Passage Retrieval (DPR) model from facebook ([github](https://github.com/facebookresearch/DPR), [arxiv](https://arxiv.org/abs/2004.04906)). It is used to do Open Domain Question Answering by extracting answer spans from a set of documents. This model actually comes in three different parts: - a context encoder - a question encoder - a reader for span prediction I did a schema to show the roles and the pipeline that one could build with those parts. The components in RED are the one in transformers. You can use whatever you want for the retrieval part, but there will be a new [retrieval feature](https://github.com/huggingface/nlp/pull/298) in the 🤗nlp library soon that will make the use of models like DPR easier. <img src="https://user-images.githubusercontent.com/42851186/85740169-b05da980-b701-11ea-975d-fbff4f368a5e.png" height="300"> ## Implementation - All three components share an encoding part with bert so I factorized this into the class `DprBertEncoder `. - The reader has a `.generate` method that finds the best spans and return them, while the `.forward` method only returns the logits. - In the config I allow to specify to load the weights from [files provided in the official repo](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py). I've already added one pretrained weight file per component in S3. ## Things I'd like to improve: - I think we can probably improve the tokenization step. Right now the reader inputs are currently two sets of input_ids. One for the question with text_pair=context_title, and one for the context_text (i.e. the content in which we are looking for answer spans). This is because they all need to be combined like ``` [CLS] <question_input_ids> [SEP] <context_title_input_ids> [SEP] <context_text_input_ids> ``` I was thinking of making a custom tokenizer just for the reader, let me know if it sounds reasonable. 
## Example of usage (outdated) Provided we have a retrieval module (here named`wiki`) we can do: ```python tokenizer = DprTokenizer.from_pretrained('dpr-model-base') ctx_encoder = DprContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base') question_encoder = DprQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base') reader = DprReader.from_pretrained('facebook/dpr-reader-single-nq-base') # First step: retrieve the best wikipedia passages question = 'Who created the Pokemon games ?' question_emb = question_encoder(tokenizer.encode(question, return_tensors="pt")).numpy() scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=10) # Second step: Feed the reader with the question and the retrieved snippets encoded_question_and_titles = [ tokenizer.encode(question, text_pair=passage["title"], return_tensors="pt") for passage in passages] encoded_texts = [ tokenizer.encode(passage["text"], return_tensors="pt", add_special_tokens=False) for passage in passages] predicted_spans = reader.generate(encoded_question_and_titles, encoded_texts) # Last step: print the result best_span = predicted_spans[0] best_span_ids = encoded_texts[best_span.doc_id].numpy().flatten() best_span_ids = best_span_ids[best_span.start_index:best_span.end_index + 1] print(tokenizer.decode(best_span_ids)) # >>> satoshi tajiri ``` ----------------- I'd be very happy to have some feedbacks on this one, as it is my first contribution to the library :D
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5279/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5279", "html_url": "https://github.com/huggingface/transformers/pull/5279", "diff_url": "https://github.com/huggingface/transformers/pull/5279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5279.patch", "merged_at": 1594126573000 }
https://api.github.com/repos/huggingface/transformers/issues/5278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5278/comments
https://api.github.com/repos/huggingface/transformers/issues/5278/events
https://github.com/huggingface/transformers/pull/5278
645,628,142
MDExOlB1bGxSZXF1ZXN0NDQwMDczMzE2
5,278
[dbart] push picture
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=h1) Report\n> Merging [#5278](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/24f46ea3f3e5006ca38735306753a846a0823174&el=desc) will **increase** coverage by `0.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5278/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5278 +/- ##\n==========================================\n+ Coverage 79.08% 79.46% +0.38% \n==========================================\n Files 138 138 \n Lines 24078 24080 +2 \n==========================================\n+ Hits 19041 19135 +94 \n+ Misses 5037 4945 -92 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.82% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.76% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.78% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=footer). Last update [24f46ea...6f28be5](https://codecov.io/gh/huggingface/transformers/pull/5278?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This looks obnoxiously large in the diff viewer. Should we do some fancy markdown instead of\r\n```markdown\r\n![DBART](https://github.com/sshleifer/transformers_fork/raw/add-distilbart-pic/examples/seq2seq/distilbart_w_logos.png)\r\n```\r\n", "The README view is fixed-width on GitHub so you should be fine.\r\n\r\nHowever, I would try to find a more permanent host/URL for your image, I suspect your fork's branch will get deleted at some point.", "#5394" ]
1,593
1,593
1,593
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5278", "html_url": "https://github.com/huggingface/transformers/pull/5278", "diff_url": "https://github.com/huggingface/transformers/pull/5278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5278.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5277/comments
https://api.github.com/repos/huggingface/transformers/issues/5277/events
https://github.com/huggingface/transformers/issues/5277
645,627,754
MDU6SXNzdWU2NDU2Mjc3NTQ=
5,277
can't open file 'transformers-cli'
{ "login": "krannnn", "id": 66248879, "node_id": "MDQ6VXNlcjY2MjQ4ODc5", "avatar_url": "https://avatars.githubusercontent.com/u/66248879?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krannnn", "html_url": "https://github.com/krannnn", "followers_url": "https://api.github.com/users/krannnn/followers", "following_url": "https://api.github.com/users/krannnn/following{/other_user}", "gists_url": "https://api.github.com/users/krannnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/krannnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krannnn/subscriptions", "organizations_url": "https://api.github.com/users/krannnn/orgs", "repos_url": "https://api.github.com/users/krannnn/repos", "events_url": "https://api.github.com/users/krannnn/events{/privacy}", "received_events_url": "https://api.github.com/users/krannnn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
NONE
null
when running ```!python transformers-cli convert --model_type xlnet``` I'm getting the following error : ```python: can't open file 'transformers-cli': [Errno 2] No such file or directory``` Any possible fix to this problem? thank you
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5276/comments
https://api.github.com/repos/huggingface/transformers/issues/5276/events
https://github.com/huggingface/transformers/pull/5276
645,624,309
MDExOlB1bGxSZXF1ZXN0NDQwMDcwMTYz
5,276
Bert base model card
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=h1) Report\n> Merging [#5276](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `1.46%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5276/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5276 +/- ##\n==========================================\n- Coverage 79.11% 77.64% -1.47% \n==========================================\n Files 138 138 \n Lines 24080 24080 \n==========================================\n- Hits 19050 18698 -352 \n- Misses 5030 5382 +352 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.17% <0.00%> (-81.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `66.72% <0.00%> (-9.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <0.00%> (-6.98%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.58% <0.00%> (-2.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.97% <0.00%> (-1.73%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `94.92% <0.00%> (-1.45%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.20% <0.00%> (-1.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5276/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=footer). Last update [0e1fce3...de13069](https://codecov.io/gh/huggingface/transformers/pull/5276?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5276", "html_url": "https://github.com/huggingface/transformers/pull/5276", "diff_url": "https://github.com/huggingface/transformers/pull/5276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5276.patch", "merged_at": 1593172879000 }
https://api.github.com/repos/huggingface/transformers/issues/5275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5275/comments
https://api.github.com/repos/huggingface/transformers/issues/5275/events
https://github.com/huggingface/transformers/issues/5275
645,614,783
MDU6SXNzdWU2NDU2MTQ3ODM=
5,275
Description of how to preprocess text corpus for roBERTa LM training
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Well - this issue is still open IMO...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Maybe better to ask to the original RoBERTa authors or on the forum at https://discuss.huggingface.co?\r\n\r\nWe are trying to keep the issues for bug/features reports now.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,610
1,610
CONTRIBUTOR
null
# 🚀 Feature request There are different locations where you describe how to train roBERTa models. For example here: - https://huggingface.co/blog/how-to-train - https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb - https://gist.github.com/aditya-malte/2d4f896f471be9c38eb4d723a710768b But nobody says how to preprocess the text corpus. I think it must be one sentence per row. But does it need empty lines between documents? Is it ok to shuffle the text line by line? Could you please clarify this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5275/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5274/comments
https://api.github.com/repos/huggingface/transformers/issues/5274/events
https://github.com/huggingface/transformers/pull/5274
645,596,801
MDExOlB1bGxSZXF1ZXN0NDQwMDQ3ODE1
5,274
[examples/seq2seq] more README improvements
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5274", "html_url": "https://github.com/huggingface/transformers/pull/5274", "diff_url": "https://github.com/huggingface/transformers/pull/5274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5274.patch", "merged_at": 1593094382000 }
https://api.github.com/repos/huggingface/transformers/issues/5273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5273/comments
https://api.github.com/repos/huggingface/transformers/issues/5273/events
https://github.com/huggingface/transformers/issues/5273
645,595,704
MDU6SXNzdWU2NDU1OTU3MDQ=
5,273
I have some problems with the "bert-large-uncased" model
{ "login": "superwars", "id": 53506929, "node_id": "MDQ6VXNlcjUzNTA2OTI5", "avatar_url": "https://avatars.githubusercontent.com/u/53506929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/superwars", "html_url": "https://github.com/superwars", "followers_url": "https://api.github.com/users/superwars/followers", "following_url": "https://api.github.com/users/superwars/following{/other_user}", "gists_url": "https://api.github.com/users/superwars/gists{/gist_id}", "starred_url": "https://api.github.com/users/superwars/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/superwars/subscriptions", "organizations_url": "https://api.github.com/users/superwars/orgs", "repos_url": "https://api.github.com/users/superwars/repos", "events_url": "https://api.github.com/users/superwars/events{/privacy}", "received_events_url": "https://api.github.com/users/superwars/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I tried reproducing with the following code, but couldn't get it to crash:\r\n\r\n```py\r\nfrom transformers import BertTokenizer, BertModel, BertConfig\r\n\r\nclass Args:\r\n bert_model = \"bert-large-uncased\"\r\n\r\nargs = Args()\r\n\r\nconfig = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True)\r\ntokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True, output_hidden_states=True)\r\nbert = BertModel.from_pretrained(args.bert_model, config = config) # args.bert_model is 'bert-large-uncased'\r\n```\r\n\r\nCan you specify your environment? Are you sure your `args.bert_model` is `bert-large-uncased`?", "I made a stupid mistake. The bert model is used in two places in my code, and the second place is\r\n`\r\n config = BertConfig(args.bert_model, output_hidden_states = True, output_attentions = True)\r\n bert = BertModel.from_pretrained(args.bert_model, config = config)\r\n`\r\nwhere I forgot to use the from_pretrained function for config, the right code should be\r\n`\r\n config = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True)\r\n bert = BertModel.from_pretrained(args.bert_model, config = config)\r\n`" ]
1,593
1,593
1,593
NONE
null
This is the wrong code segment: from transformers import BertTokenizer, BertModel, BertConfig config = BertConfig.from_pretrained(args.bert_model, output_hidden_states = True, output_attentions = True) tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True, output_hidden_states=True) bert = BertModel.from_pretrained(args.bert_model, config = config) # args.bert_model is 'bert-large-uncased' And below is the bug report: Traceback (most recent call last): File "main.py", line 119, in <module> main(args) File "main.py", line 64, in main net = BertProber(rel_vec_representation, args) File "/home/jzhao/program/interpret_bert/re/model.py", line 62, in __init__ self.bert = BertModel.from_pretrained(args.bert_model, config = config) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 466, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 615, in __init__ self.embeddings = BertEmbeddings(config) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 149, in __init__ self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0) File "/home/jzhao/anaconda3/envs/python36/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 97, in __init__ self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) TypeError: new() received an invalid combination of arguments - got (str, int), but expected one of: * (torch.device device) * (torch.Storage storage) * (Tensor other) * (tuple of ints size, torch.device device) didn't match because some of the arguments have invalid types: (str, int) * (object data, torch.device device) didn't match because some of the arguments have invalid types: (str, int)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5272/comments
https://api.github.com/repos/huggingface/transformers/issues/5272/events
https://github.com/huggingface/transformers/issues/5272
645,536,259
MDU6SXNzdWU2NDU1MzYyNTk=
5,272
A question about the test accuracy of BERT-based-uncased model on the MNLI dataset
{ "login": "14H034160212", "id": 23516191, "node_id": "MDQ6VXNlcjIzNTE2MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/23516191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/14H034160212", "html_url": "https://github.com/14H034160212", "followers_url": "https://api.github.com/users/14H034160212/followers", "following_url": "https://api.github.com/users/14H034160212/following{/other_user}", "gists_url": "https://api.github.com/users/14H034160212/gists{/gist_id}", "starred_url": "https://api.github.com/users/14H034160212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/14H034160212/subscriptions", "organizations_url": "https://api.github.com/users/14H034160212/orgs", "repos_url": "https://api.github.com/users/14H034160212/repos", "events_url": "https://api.github.com/users/14H034160212/events{/privacy}", "received_events_url": "https://api.github.com/users/14H034160212/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Same, I can't replicate the results in the README file.", "> Same, I can't replicate the results in the README file.\r\n\r\nI also try to use BERT-large-uncased model and get 0.6209 eval_mnli/acc and 0.6281 eval_mnli-mm/acc.", "FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. . \r\nSpecifically, adding these lines in `trainer.py`:\r\n```\r\nwarmup_steps = float(num_training_steps)*0.1\r\nscheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps)\r\n```", "> FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. .\r\n> Specifically, adding these lines in `trainer.py`:\r\n> \r\n> ```\r\n> warmup_steps = float(num_training_steps)*0.1\r\n> scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps)\r\n> ```\r\n\r\nMany thanks for your sharing. Cheers!", "> FYI, I can get 84.25 eval_mnli/acc by using a non-zero number of warmup steps. .\r\n> Specifically, adding these lines in `trainer.py`:\r\n> \r\n> ```\r\n> warmup_steps = float(num_training_steps)*0.1\r\n> scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps)\r\n> ```\r\nHi, Chaitanya. I try to fix the code, but I got an error when running the new program. Here is a double check that does the path of the ``trainer.py`` is ``src/transformers/trainer.py``? And the lines you added are beginning from the line 312 in the ``trainer.py``? Does the original code is ``scheduler = get_linear_schedule_with_warmup(\r\n optimizer, num_warmup_steps=self.args.warmup_steps, num_training_steps=num_training_steps\r\n )``? Many Thanks.\r\n![微信截图_20200704002130](https://user-images.githubusercontent.com/23516191/86468851-5fe2ed80-bd8c-11ea-9ae7-5fdf26b77919.png)\r\n", "Adding a non-zero number of warmup steps in `src/transformers/trainer.py` is the only change I made. 
This error seems to occur because you have an output label that is outside the range [0, n_classes) while computing the loss. \r\nDid you make any other changes to the trainer/dataset? And are you training with the original MNLI dataset?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,599
1,599
NONE
null
# 🐛 Bug I am curious about how accuracy you guys can reach by using the BERT-based-uncased model on the MNLI task? I got 61% test accuracy on the MNLI-matched dataset and 62% accuracy on the MNLI-unmatched dataset. ## Information Model I am using Bert-based-uncased: Language I am using the model on English: The problem arises when using: * I use the exactly the same official example scripts: (give details below) https://github.com/huggingface/transformers#quick-tour https://github.com/huggingface/transformers#run_gluepy-fine-tuning-on-glue-tasks-for-sequence-classification The tasks I am working on is: * An official GLUE task: (MNLI) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: v2.11.0 - Platform: Linux - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Using GPU in script?: 2 Tesla V100 GPUs - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5272/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5272/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5271/comments
https://api.github.com/repos/huggingface/transformers/issues/5271/events
https://github.com/huggingface/transformers/issues/5271
645,502,947
MDU6SXNzdWU2NDU1MDI5NDc=
5,271
BART finetune.py: model not learning anything
{ "login": "alexgaskell10", "id": 51463426, "node_id": "MDQ6VXNlcjUxNDYzNDI2", "avatar_url": "https://avatars.githubusercontent.com/u/51463426?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexgaskell10", "html_url": "https://github.com/alexgaskell10", "followers_url": "https://api.github.com/users/alexgaskell10/followers", "following_url": "https://api.github.com/users/alexgaskell10/following{/other_user}", "gists_url": "https://api.github.com/users/alexgaskell10/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexgaskell10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexgaskell10/subscriptions", "organizations_url": "https://api.github.com/users/alexgaskell10/orgs", "repos_url": "https://api.github.com/users/alexgaskell10/repos", "events_url": "https://api.github.com/users/alexgaskell10/events{/privacy}", "received_events_url": "https://api.github.com/users/alexgaskell10/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Seems to be coming from the --warmup_steps arg. If I set to 0, the model seems to learn as expected. If it is > 0 the model doesn't learn anything.", "Interesting. Could you link me to that dataset/instructions?\r\nI just reran cnn_dm with warmup_steps=0 and my loss curve looks very similar in wandb. pink is no warmup:\r\n\r\n![image](https://user-images.githubusercontent.com/6045025/85876097-38e05680-b7a3-11ea-9c39-7d9c572cd4be.png)\r\n\r\nAnyways, I will set default warmup_steps=0.\r\n", "Hmmm interesting, I agree yours both look normal. I am using Pubmed dataset and truncating input docs at 850 input tokens. I'm pretty sure my model wasn't learning anything when warmuo_steps>0 for the following reasons:\r\n- The generated summaries were identical across training\r\n- The target summaries begin with '\\<S\\>' token but I believe the CNN data the model was finetuned on didn't. As a result, the model learns to begin output summaries with '\\<S\\>' very quickly. However, for each case with warmup_steps > 0 the generated val summaries didn't begin with this token\r\n- Loss curves below (orange: warmup_steps>0, grey: warmup_steps=0):\r\n- When warmup_steps>0 model followed identical loss curves even after a changed hparams like learning rate\r\n\r\n<img width=\"561\" alt=\"Screenshot 2020-06-26 at 16 56 37\" src=\"https://user-images.githubusercontent.com/51463426/85877481-22e88b00-b7cf-11ea-8b21-11320c0737b4.png\">\r\n\r\nI played around with it quite a bit and this pattern repeated for any warmp_steps>0 (including =1).", "That is pretty compelling evidence.\r\nI will change the default to 0.\r\nNothing else changed between runs?\r\n\r\n", "Yes- I have tried across runs changing only this hparam.", "Interesting. I fixed master. Would love to hear your experimental results/developer experience as you continue!", "For sure, I'll pop in this library lots over the coming weeks I'm sure. Great library btw" ]
1,593
1,593
1,593
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) ./examples/summarization/finetune.sh * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Finetuning on PubMed dataset. ## To reproduce Steps to reproduce the behavior: 1. set DATA_DIR & OUT_DIR 2. Run command: python finetune.py \ --model_name_or_path=facebook/bart-large-cnn \ --learning_rate=3e-5 \ --gpus 1 \ --do_predict \ --do_train \ --n_val 1000 \ --val_check_interval 0.1 \ --sortish_sampler \ --max_target_length=80 \ --val_max_target_length=200 \ --test_max_target_length=200 \ --max_source_length=850 \ --train_batch_size=1 \ --eval_batch_size=1 \ --data_dir=$DATA_DIR \ --output_dir=$OUT_DIR \ --logger=wandb \ $@ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model is not learning anything. The generated eval summaries are identical throughout training and identical across training instances with different hyperparams. Appears as though backprop is not happening but there is not error message. (Probably I am missing something simple) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 - Platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5270/comments
https://api.github.com/repos/huggingface/transformers/issues/5270/events
https://github.com/huggingface/transformers/pull/5270
645,397,277
MDExOlB1bGxSZXF1ZXN0NDM5ODg0NjMz
5,270
Create README.md
{ "login": "krevas", "id": 27683515, "node_id": "MDQ6VXNlcjI3NjgzNTE1", "avatar_url": "https://avatars.githubusercontent.com/u/27683515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krevas", "html_url": "https://github.com/krevas", "followers_url": "https://api.github.com/users/krevas/followers", "following_url": "https://api.github.com/users/krevas/following{/other_user}", "gists_url": "https://api.github.com/users/krevas/gists{/gist_id}", "starred_url": "https://api.github.com/users/krevas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krevas/subscriptions", "organizations_url": "https://api.github.com/users/krevas/orgs", "repos_url": "https://api.github.com/users/krevas/repos", "events_url": "https://api.github.com/users/krevas/events{/privacy}", "received_events_url": "https://api.github.com/users/krevas/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=h1) Report\n> Merging [#5270](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5270/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5270 +/- ##\n==========================================\n- Coverage 79.11% 79.08% -0.03% \n==========================================\n Files 138 138 \n Lines 24080 24080 \n==========================================\n- Hits 19050 19043 -7 \n- Misses 5030 5037 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `77.18% <0.00%> (+0.76%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=footer). Last update [0e1fce3...67157e9](https://codecov.io/gh/huggingface/transformers/pull/5270?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Create README.md for finance-koelectra-base-discriminator model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5270", "html_url": "https://github.com/huggingface/transformers/pull/5270", "diff_url": "https://github.com/huggingface/transformers/pull/5270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5270.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5269/comments
https://api.github.com/repos/huggingface/transformers/issues/5269/events
https://github.com/huggingface/transformers/pull/5269
645,396,193
MDExOlB1bGxSZXF1ZXN0NDM5ODgzNjQ0
5,269
Fix LR decay in TF Trainer
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Sorry for the inconvenience :(", "Will merge once the code quality passes", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=h1) Report\n> Merging [#5269](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e1fce3c0129d05b65a83cdd89e8eadded553f2e&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `6.25%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5269/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5269 +/- ##\n==========================================\n- Coverage 79.11% 79.03% -0.08% \n==========================================\n Files 138 138 \n Lines 24080 24102 +22 \n==========================================\n- Hits 19050 19049 -1 \n- Misses 5030 5053 +23 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.85% <6.25%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.92% <0.00%> (+0.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=footer). Last update [0e1fce3...78562e2](https://codecov.io/gh/huggingface/transformers/pull/5269?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "All tests are ok!" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Revival of #5051 I have accidentally deleted the previous branch so I recreated this same one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5269/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5269", "html_url": "https://github.com/huggingface/transformers/pull/5269", "diff_url": "https://github.com/huggingface/transformers/pull/5269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5269.patch", "merged_at": 1593412713000 }
https://api.github.com/repos/huggingface/transformers/issues/5268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5268/comments
https://api.github.com/repos/huggingface/transformers/issues/5268/events
https://github.com/huggingface/transformers/pull/5268
645,377,215
MDExOlB1bGxSZXF1ZXN0NDM5ODY2Mzg4
5,268
Fix LR decay in TF Trainer
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
CONTRIBUTOR
null
Revival of #5051 I have accidentally deleted the previous branch so I recreated this same one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5268", "html_url": "https://github.com/huggingface/transformers/pull/5268", "diff_url": "https://github.com/huggingface/transformers/pull/5268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5268.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5267/comments
https://api.github.com/repos/huggingface/transformers/issues/5267/events
https://github.com/huggingface/transformers/issues/5267
645,150,768
MDU6SXNzdWU2NDUxNTA3Njg=
5,267
ValueError in T5 community colab notebook.
{ "login": "y-rokutan", "id": 24562381, "node_id": "MDQ6VXNlcjI0NTYyMzgx", "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/y-rokutan", "html_url": "https://github.com/y-rokutan", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "repos_url": "https://api.github.com/users/y-rokutan/repos", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @y-rokutan, you are right, `DataCollator` is not a `Class` anymore, its a `callable` now, so remove subclassing and change `collate_batch` method to ` __call__` \r\n\r\nI'll update the notebook once new version of transformers is released", "hi @patil-suraj, thx for your quick reply. I appreciate your contributions.\r\nDo you have any idea about ValueError issue? I'm googling for fix but still have the same error.", "What is your nlp version ? Try using nlp==0.2.0", "`!pip install -U nlp==0.2.0` worked! nlp0.3.0 seems not for this notebook. Thx!", "hi i am running the notebook without any changes . But its very slow i think its not using TPU at all" ]
1,593
1,593
1,593
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): T5 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the community notebook https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb: * It raises subclassing error while defining `class T2TDataCollator(DataCollator)`. DataCollator is not a class anymore on the latest master branch, just `class T2TDataCollator()` fixes the error. * Even that bug is fixed, the notebook raises another ValueError while importing dataset.pt: ``` Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "<ipython-input-18-4f8aea5d9d8b>", line 191, in _mp_fn main() File "<ipython-input-18-4f8aea5d9d8b>", line 145, in main train_dataset = torch.load(data_args.train_file_path) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 589, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 847, in _load result = unpickler.load() File "/usr/local/lib/python3.6/dist-packages/nlp/splits.py", line 493, in __setitem__ raise ValueError("Cannot add elem. Use .add() instead.") ValueError: Cannot add elem. Use .add() instead. ``` ## To reproduce Steps to reproduce the behavior: 1. Run the community colab notebook: https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb 2. You will get the error. ## Expected behavior * Not applicable ## Environment info * It's running on colab.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5267/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5266/comments
https://api.github.com/repos/huggingface/transformers/issues/5266/events
https://github.com/huggingface/transformers/issues/5266
645,145,563
MDU6SXNzdWU2NDUxNDU1NjM=
5,266
Finetune T5 on other Dataset, AssertionError: assert tokenized.input_ids.shape[1] == max_length
{ "login": "ShoubhikBanerjee", "id": 44529417, "node_id": "MDQ6VXNlcjQ0NTI5NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/44529417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShoubhikBanerjee", "html_url": "https://github.com/ShoubhikBanerjee", "followers_url": "https://api.github.com/users/ShoubhikBanerjee/followers", "following_url": "https://api.github.com/users/ShoubhikBanerjee/following{/other_user}", "gists_url": "https://api.github.com/users/ShoubhikBanerjee/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShoubhikBanerjee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShoubhikBanerjee/subscriptions", "organizations_url": "https://api.github.com/users/ShoubhikBanerjee/orgs", "repos_url": "https://api.github.com/users/ShoubhikBanerjee/repos", "events_url": "https://api.github.com/users/ShoubhikBanerjee/events{/privacy}", "received_events_url": "https://api.github.com/users/ShoubhikBanerjee/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,598
1,598
NONE
null
# ❓ Questions & Help ## Details I was trying to finetune Amazon Food Review Dataset on T5 using **Latest code from master**. I formatted the data, as .source and .target. When running [finetune_t5.sh](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py), First it gives error : **Keyword arguments {'add_prefix_space': True} not recognized.** And then : `File "/content/transformers/examples/summarization/utils.py", line 51, in encode_file assert tokenized.input_ids.shape[1] == max_length AssertionError` Could you please say, whether it is a bug or I am doing something wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5266/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5265/comments
https://api.github.com/repos/huggingface/transformers/issues/5265/events
https://github.com/huggingface/transformers/issues/5265
645,121,297
MDU6SXNzdWU2NDUxMjEyOTc=
5,265
test_torch_fillmask failing on GPU
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "note to self: enro also fails on cpu, and rerunning on commit where it was green now fails. So change is something S3 related.\r\n", "```python\r\ntests/test_pipelines.py:148: in _test_mono_column_pipeline\r\n set([o[key] for o in result]), set([o[key] for o in expect]),\r\nE AssertionError: Items in the first set but not the second:\r\nE '<s>My name is Chris</s>'\r\nE '<s>My name is John</s>'\r\nE Items in the second set but not the first:\r\nE '<s> My name is John</s>'\r\nE '<s> My name is:</s>'\r\n```" ]
1,593
1,593
1,593
CONTRIBUTOR
null
```bash FAILED tests/test_modeling_bart.py::MBartIntegrationTests::test_enro_forward expected_slice = torch.tensor([9.0078, 10.1113, 14.4787], device=torch_device, dtype=model.dtype) result_slice = logits[0][0][:3] > self.assertTrue(torch.allclose(expected_slice, result_slice, atol=TOLERANCE)) E AssertionError: False is not true tests/test_modeling_bart.py:258: AssertionError FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_results tests/test_pipelines.py:147: in _test_mono_column_pipeline set([o[key] for o in result]), set([o[key] for o in expect]), E AssertionError: Items in the first set but not the second: E '<s>My name is John</s>' E '<s>My name is Chris</s>' E Items in the second set but not the first: E '<s> My name is John</s>' E '<s> My name is:</s>' ``` cc @julien-c I'll figure it out.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5265/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5264/comments
https://api.github.com/repos/huggingface/transformers/issues/5264/events
https://github.com/huggingface/transformers/issues/5264
645,097,325
MDU6SXNzdWU2NDUwOTczMjU=
5,264
Set the number of times to evaluate per epoch when using Trainer
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,598
1,598
CONTRIBUTOR
null
Basically, I want to see how the training is going against the evaluation dataset twice per epoch when using `Trainer`. To calculate `logging_steps` so that `evaluate()` is called a certain number of times per epoch when training, will this hold-up to single and multi-device training? ``` evals_per_epoch = 2 logging_steps = len(train_dataset) / train_batch_size / gradient_accumulation_steps // evals_per_epoch ``` I'm getting myself confused over the implementation of `global_steps` and `logging_steps` inside of `Trainer.train`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5264/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5264/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5263/comments
https://api.github.com/repos/huggingface/transformers/issues/5263/events
https://github.com/huggingface/transformers/issues/5263
645,079,591
MDU6SXNzdWU2NDUwNzk1OTE=
5,263
[examples] Verify marian and mbart BLEU scores with examples/seq2seq/run_eval.py
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" }, { "id": 2009457320, "node_id": "MDU6TGFiZWwyMDA5NDU3MzIw", "url": "https://api.github.com/repos/huggingface/transformers/labels/translation", "name": "translation", "color": "b2d2f4", "default": false, "description": "machine translation utilities and models" }, { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Surprised. Maybe the opus data overlaps the wmt test set?", "![image](https://user-images.githubusercontent.com/6045025/90946657-faf75b80-e3fc-11ea-856a-dc598d992112.png)\r\n", "Opus #s are legit!", "@sshleifer do you using the default configuration in Readme ? I didn't get 37.1, and I have tried the configuration mentioned as original paper, I have got a 36.6 after postprocessing wmt_en_ro enro.\r\n\r\nIn addition, I have a confusion about the code. If I don't add extra parameters, there is a warning in the output log. I try to solve it. I can't set a breakpoint at the place where the warning is generated. It's difficult for me to locate the reason. Can you help me look at it?\r\n\r\n`Keyword arguments {'add_prefix_space': False} not recognized.\r\n`", "(1) Which split, val or test?\r\n(2) What was the score before post-processing?\r\n\r\nThe warning is from [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L261), you can safely ignore it.\r\n", "### Results on wmt_en_ro/test\r\n```bash\r\npython run_eval.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro/test.source enro_test_translations.txt --reference_path wmt_en_ro/test.target -\r\n-task translation --score_path mar_test_bleu.json --fp16 --bs 64\r\n{'bleu': 27.6865, 'n_obs': 1999, 'runtime': 85, 'seconds_per_sample': 0.0425}\r\n```\r\n```bash\r\nro_post_process enro_finetune/test_generations.txt wmt_en_ro/test.target\r\n# 37.4\r\n```\r\n\r\n### Postprocessing Setup\r\n```bash\r\ncd $HOME\r\ngit clone [email protected]:moses-smt/mosesdecoder.git\r\ncd mosesdecoder \r\ngit clone [email protected]:rsennrich/wmt16-scripts.git\r\nro_post_process () {\r\n sys=$1\r\n ref=$2\r\n export MOSES_PATH=$HOME/mosesdecoder\r\n REPLACE_UNICODE_PUNCT=$MOSES_PATH/scripts/tokenizer/replace-unicode-punctuation.perl\r\n NORM_PUNC=$MOSES_PATH/scripts/tokenizer/normalize-punctuation.perl\r\n REM_NON_PRINT_CHAR=$MOSES_PATH/scripts/tokenizer/remove-non-printing-char.perl\r\n 
REMOVE_DIACRITICS=$MOSES_PATH/wmt16-scripts/preprocess/remove-diacritics.py\r\n NORMALIZE_ROMANIAN=$MOSES_PATH/wmt16-scripts/preprocess/normalise-romanian.py\r\n TOKENIZER=$MOSES_PATH/scripts/tokenizer/tokenizer.perl\r\n\r\n\r\n\r\n lang=ro\r\n for file in $sys $ref; do\r\n cat $file \\\r\n | $REPLACE_UNICODE_PUNCT \\\r\n | $NORM_PUNC -l $lang \\\r\n | $REM_NON_PRINT_CHAR \\\r\n | $NORMALIZE_ROMANIAN \\\r\n | $REMOVE_DIACRITICS \\\r\n | $TOKENIZER -no-escape -l $lang \\\r\n > $(basename $file).tok\r\n done\r\n # compute BLEU\r\n cat $(basename $sys).tok | sacrebleu -tok none -s none -b $(basename $ref).tok\r\n}\r\n```\r\n\r\n\r\n```bash\r\nro_post_process enro_test_translations.txt wmt_en_ro/test.target\r\n```\r\n", "@sshleifer That's warning can be sefaly ignore it, that is great, I have try to figure out the reason for occuring.\r\n\r\nI using the test set, I have got 21.9 before post-processing. I'm using this postprocessing setup and run_eval.py, My analysis is that my model performance is not enough, maybe the fine-tuning parameters are inaccurate, or there are too many training steps during fine-tuning. \r\n\r\nI am using this version with following code link.\r\nhttps://github.com/huggingface/transformers/blob/b9772897ec9f54c1a83263b059bfd37acda936d5/examples/seq2seq/finetune.py#L371\r\n\r\nthis has been changed in lastest version. 
\r\nhttps://github.com/huggingface/transformers/blob/9e89390ce1e785e72452207139a334cd3bf745ff/examples/seq2seq/finetune.py#L396\r\n\r\nI am not a native English speaker, If my words is unclear, you can let me repeat it in time.\r\n", "I don't understand what code you ran, what you expected to happen, and what happened.\r\n\r\nThe change you highlight should not affect your results.", "my last comment has two question.\r\nthe first question is my results can not be 37.7.\r\nthe second question is I set save_top_k ==-1, I have solved it \r\n\r\nthe follows is my runing parameters for the first question.\r\n`bash train_mbart_cc25_enro_pap.sh --output_dir $OUTPUT_DIR --gpus 1 --sortish_sampler`\r\n\r\nwhere train_mbart_cc25_enro_pap.sh as follows :\r\n\r\n```\r\nBS=16\r\nMAX_LEN=128\r\npython finetune.py \\\r\n --learning_rate=3e-5 \\\r\n --do_train \\\r\n --val_check_interval=0.25 \\\r\n --adam_eps 1e-06 \\\r\n --num_train_epochs 4 --src_lang en_XX --tgt_lang ro_RO \\\r\n --data_dir $ENRO_DIR \\\r\n --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \\\r\n --train_batch_size=$BS --eval_batch_size=$BS \\\r\n --task translation \\\r\n --warmup_steps 2500 \\\r\n --freeze_embeds \\\r\n --model_name_or_path=$MODEL_PATH \\\r\n --label_smoothing 0.2 \\\r\n --dropout 0.3 \\\r\n \"$@\"\r\n```", "that's my transformers report bleu as follows, 21.9 is using sacrebleu before postprocessing. @sshleifer \r\n```{'bleu':` 26.0506, 'n_obs': 1999, 'runtime': 681, 'seconds_per_sample': 0.3407}```\r\n\r\nand I have try to using the configuration as README with using `--fp16`, and I get a error log as follows :\r\n```\r\n2020-09-15 15:58:46.000 [INFO] [Driver] RuntimeError: CUDA out of memory. 
Tried to allocate 490.00 MiB (GPU 0; 31.75 GiB total capacity; 29.17 GiB already allocated; 241.44 MiB free; 30.35 GiB reserved in total by PyTorch)\r\n```\r\n\r\nand I have see the README mentioned as follows\r\n\r\n> This should take < 6h/epoch on a 16GB v100 and achieve test BLEU above 26 To get results in line with fairseq, you need to do some postprocessing. \r\n", "+ 26.06 is definitely expected behavior, as the README indicates. \r\n+ The best I've ever scored from finetuning is 26.8 after training for 6 epochs. (Took 24h).\r\n+ `--save_top-k=-1`, I can look into.\r\n+ What command did you run with `sacrebleu` to get 21.9? The HF run_eval.py uses the `sacrebleu.corpus_bleu` python function.\r\n+ I would recommend playing with the Helsinki-NLP/ models for faster MT finetuning.", "> * 26.06 is definitely expected behavior, as the README indicates.\r\n> * The best I've ever scored from finetuning is 26.8 after training for 6 epochs. (Took 24h).\r\n> * `--save_top-k=-1`, I can look into.\r\n> * What command did you run with `sacrebleu` to get 21.9? The HF run_eval.py uses the `sacrebleu.corpus_bleu` python function.\r\n> * I would recommend playing with the Helsinki-NLP/ models for faster MT finetuning.\r\n\r\n@sshleifer Hi , Thank you for your kindness help. I have got a closer score 37.2 to 37.7, I have a curious question, why is `Helsinki-NLP/ models` faster ?\r\n\r\n", "They are way smaller: 74 million parameters (Marian) vs 610M (mBART)" ]
1,593
1,602
1,602
CONTRIBUTOR
null
Got MBART 26.8 before postprocessing on wmt_en_ro, 37.1 after. 6 minutes. (first # same as fairseq, second should be also). Marian: 27.7/37.4 90 Seconds
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5262/comments
https://api.github.com/repos/huggingface/transformers/issues/5262/events
https://github.com/huggingface/transformers/issues/5262
645,078,924
MDU6SXNzdWU2NDUwNzg5MjQ=
5,262
Cannot control wandb metadata when running examples/seq2seq/finetune.py
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" }, { "id": 2159774240, "node_id": "MDU6TGFiZWwyMTU5Nzc0MjQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/wandb", "name": "wandb", "color": "f9e05e", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "You should be able to do it with environment variables. See [these docs](https://docs.wandb.com/library/environment-variables)\r\nLet me know if that solves the issue.\r\n\r\nThe reason of the current implementation (and auto init of wandb) was to be able to instrument all example scripts without having to add custom wandb code.\r\n\r\nThere is [documentation on W&B related to transformers](https://docs.wandb.com/library/integrations/huggingface) that probably need to be updated to add this detail. We should probably also find a place to add documentation related to W&B integration in transformers repo. Let me know if I can be of any help.", "You could make `examples/wandb.md` with information and link to it from `examples/README.md`? Or just link to [this](https://docs.wandb.com/library/integrations/huggingface) in `examples/README.md`\r\n\r\nFor the lightning `WandbLogger` integration, does `$WANDB_PROJECT` take precedence over passing `WandbLogger(project='project_name')`?", "Thanks, that's a great idea!\r\n\r\nFor the lightning integration, any parameter you pass explicitly in `WandbLogger` should take precedence.\r\n", "@sshleifer it seems that the pytorch-lightning integration is commonly used so I also added a reference to it in my PR.\r\nJust curious, is it because you cannot do distributed computing with `Trainer` on multiple machines?\r\n\r\nWould it make sense to add an easier integration on `lightning_base` with a similar logic to `Trainer` and `TFTrainer`, ie by using `WandbLogger` on `generic_train` whenever wandb is installed and logged in (and ignore it otherwise)?", "As long as we can avoid\r\n(1) generating a lot of logs when I run the unittests on my local machine\r\n(2) enabling gradient watching by default\r\n\r\nThat sounds like a good change to me. @patil-suraj what do you think? do you use wandb?", "Yes, adding wandb integration on `lightning_base` makes sense to me given wandb is enabled by default in `Trainer` and `TFTraner` , this will enable wandb logging for `run_pl_ner` and `run_pl_glue`\r\n\r\nWith `Trainer` I can disable logging when I'm testing using `WANDB_DISABLED` env variable and gradient watching by setting \r\n`WANDB_WATCH` . \r\n\r\nenv variables should avoid excessive command line args", "Thanks for the feedback @sshleifer and @patil-suraj \r\nI can try to propose something similar for `lightning_base`. I'll wait for PR #5607 to be closed (feel free to comment if you think it's missing details) and I'll update the README accordingly when adding this functionality.", "Awesome. I had no idea that PR existed. Tag me next time and sorry for being so slow." ]
1,593
1,594
1,594
CONTRIBUTOR
null
There is no easy way to control the wandb project_name kwarg. How can we facilitate this without massive number of command line args?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5262/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5261/comments
https://api.github.com/repos/huggingface/transformers/issues/5261/events
https://github.com/huggingface/transformers/issues/5261
645,073,457
MDU6SXNzdWU2NDUwNzM0NTc=
5,261
[proposal] Move tests/utils.py to src/transformers/testing_utils.py so that examples can import
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2139563322, "node_id": "MDU6TGFiZWwyMTM5NTYzMzIy", "url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup", "name": "cleanup", "color": "e7fc49", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I think that would be cool. Since we're testing the examples, it makes sense to not duplicate the exact same code.", "Sounds good to me" ]
1,593
1,593
1,593
CONTRIBUTOR
null
for both groups of tests, the import would be ```python from transformers.testing_utils import slow ``` Motivation: I was about to rewrite the @slow decorator today and felt that this was cleaner. Any objections? @julien-c @LysandreJik @thomwolf @sgugger @patrickvonplaten @mfuntowicz @anyoneelseimforgetting
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5260/comments
https://api.github.com/repos/huggingface/transformers/issues/5260/events
https://github.com/huggingface/transformers/issues/5260
645,059,403
MDU6SXNzdWU2NDUwNTk0MDM=
5,260
BertTokenizerFast does not support `pad_to_max_length` argument
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Hi @jarednielsen, if you installed from source then padding is handled in a different way. You'll need to use the newly added `padding` argument. According to the docs \r\n\r\n`padding` (:obj:`Union[bool, str]`, `optional`, defaults to :obj:`False`):\r\n Activate and control padding. Accepts the following values:\r\n \r\n * `True` or `'longest'`: pad to the longest sequence in the batch (or no padding if only a single sequence if provided),\r\n * `'max_length'`: pad to a max length specified in `max_length` or to the max acceptable input length for the model if no length is provided (`max_length=None`)\r\n * `False` or `'do_not_pad'` (default): No padding (i.e. can output batch with sequences of uneven lengths)\r\n", "Yes, this works on master (both the old and new tokenizer API) and should work in the new release that will be out very soon.", "Thank you for the quick response! Reading https://github.com/huggingface/transformers/pull/4510 makes it much clearer.", "Yes, we even have a nice tutorial on the new tokenizer API now thanks to the amazing @sgugger:\r\nhttps://huggingface.co/transformers/master/preprocessing.html" ]
1,593
1,593
1,593
CONTRIBUTOR
null
# 🐛 Bug The fast tokenizer has different behavior from the normal tokenizer. ```python from transformers import BertTokenizer, BertTokenizerFast BertTokenizer.from_pretrained("bert-base-uncased").encode("hello world", max_length=128, pad_to_max_length="right") # succeeds BertTokenizerFast.from_pretrained("bert-base-uncased").encode("hello world", max_length=128, pad_to_max_length="right") *** TypeError: enable_padding() got an unexpected keyword argument 'max_length' ``` ## Environment info - `transformers` version: 2.11.0 - `tokenizers` version: 0.8.0rc3 - Platform: Ubuntu 18.04 - Python version: 3.7
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5259/comments
https://api.github.com/repos/huggingface/transformers/issues/5259/events
https://github.com/huggingface/transformers/pull/5259
645,044,647
MDExOlB1bGxSZXF1ZXN0NDM5NTczOTk0
5,259
Create README.md
{ "login": "Moumeneb1", "id": 25756717, "node_id": "MDQ6VXNlcjI1NzU2NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/25756717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moumeneb1", "html_url": "https://github.com/Moumeneb1", "followers_url": "https://api.github.com/users/Moumeneb1/followers", "following_url": "https://api.github.com/users/Moumeneb1/following{/other_user}", "gists_url": "https://api.github.com/users/Moumeneb1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Moumeneb1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moumeneb1/subscriptions", "organizations_url": "https://api.github.com/users/Moumeneb1/orgs", "repos_url": "https://api.github.com/users/Moumeneb1/repos", "events_url": "https://api.github.com/users/Moumeneb1/events{/privacy}", "received_events_url": "https://api.github.com/users/Moumeneb1/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=h1) Report\n> Merging [#5259](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d12ceb48bad126768e44d2bd958fa7638abd0f16&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5259/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5259 +/- ##\n==========================================\n- Coverage 79.10% 79.07% -0.04% \n==========================================\n Files 138 138 \n Lines 24073 24073 \n==========================================\n- Hits 19043 19035 -8 \n- Misses 5030 5038 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5259/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5259/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.89% <0.00%> (-0.89%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=footer). Last update [d12ceb4...39fef05](https://codecov.io/gh/huggingface/transformers/pull/5259?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing 🤗 \r\n\r\n[model page](https://huggingface.co/moumeneb1/flaubert-base-cased-ecology_crisis)" ]
1,593
1,593
1,593
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5259", "html_url": "https://github.com/huggingface/transformers/pull/5259", "diff_url": "https://github.com/huggingface/transformers/pull/5259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5259.patch", "merged_at": 1593064568000 }
https://api.github.com/repos/huggingface/transformers/issues/5258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5258/comments
https://api.github.com/repos/huggingface/transformers/issues/5258/events
https://github.com/huggingface/transformers/pull/5258
645,015,286
MDExOlB1bGxSZXF1ZXN0NDM5NTQ5Nzkx
5,258
save_pretrained: mkdir(exist_ok=True)
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=h1) Report\n> Merging [#5258](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **decrease** coverage by `0.51%`.\n> The diff coverage is `50.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5258/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5258 +/- ##\n==========================================\n- Coverage 77.69% 77.17% -0.52% \n==========================================\n Files 138 138 \n Lines 24291 24300 +9 \n==========================================\n- Hits 18872 18754 -118 \n- Misses 5419 5546 +127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.13% <0.00%> (-0.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <50.00%> (-0.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.93% <50.00%> (-0.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.39% <66.66%> (-1.32%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5258/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=footer). Last update [1af58c0...ac579a6](https://codecov.io/gh/huggingface/transformers/pull/5258?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I'm favorable to this (I think I wanted to do it some time ago)\r\n\r\nIf we end up merging this, we should also clean up a lot of `os.makedir` calls \"upstream\" to this (example scripts, etc.)", "Awesome, I'll grep for makedirs and mkdir and see what I can delete tomorrow.", "Great change! ", "@thomwolf why did you guys put `logger.error` instead of the raising normal exceptions in the python tokenizer file?  Should I raise a `NotADirectoryException` if a save path is mis-specified as a file or keep the `logger.error`, `return None` logic?\r\n\r\nThis will be for calls like `tokenizer.save_pretrained(\"tokenizer.json\")`", "You should also clean up some calls to `os.makedir` in the Trainer, I think?" ]
1,593
1,593
1,593
CONTRIBUTOR
null
Old Logic: if you pass something that is not a pre-existing directory -> Error New Logic: if you pass a file that exists -> Error. if you pass a path that doesn't exist -> we call `mkdir path`, no error. if you pass an existing directory -> no error. This is not a breaking change, since no calls that previously succeeded produce different results. Costs: - you might occasionally make a directory called `pytorch_model.bin` for a confused user. - a little bit of error checking code Benefits: - fewer late failures during training because you forgot to mkdir. This happens to me a lot. - It feels like the spirit of the lib is to make NLP easier for the user, and I think this change is a small step in that direction. Feedback much appreciated, I want to see if people like this before I add tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5258/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5258", "html_url": "https://github.com/huggingface/transformers/pull/5258", "diff_url": "https://github.com/huggingface/transformers/pull/5258.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5258.patch", "merged_at": 1593370428000 }
https://api.github.com/repos/huggingface/transformers/issues/5257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5257/comments
https://api.github.com/repos/huggingface/transformers/issues/5257/events
https://github.com/huggingface/transformers/pull/5257
644,982,096
MDExOlB1bGxSZXF1ZXN0NDM5NTIxNjY2
5,257
Tokenization tutorial
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=h1) Report\n> Merging [#5257](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0148c262e79f5ca12140d7fc35a6d3e0d80d5d3b&el=desc) will **decrease** coverage by `0.90%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5257/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5257 +/- ##\n==========================================\n- Coverage 79.01% 78.10% -0.91% \n==========================================\n Files 138 138 \n Lines 24064 24064 \n==========================================\n- Hits 19013 18795 -218 \n- Misses 5051 5269 +218 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.30% <0.00%> (-42.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-2.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (+0.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=footer). Last update [0148c26...7bf3c46](https://codecov.io/gh/huggingface/transformers/pull/5257?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
COLLABORATOR
null
This takes most of the description in #4510 and organizes it as a tokenizer tutorial for section 2 of the documentation. Preview is [here](https://52645-155220641-gh.circle-artifacts.com/0/docs/_build/html/preprocessing.html).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5257/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5257", "html_url": "https://github.com/huggingface/transformers/pull/5257", "diff_url": "https://github.com/huggingface/transformers/pull/5257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5257.patch", "merged_at": 1593038600000 }
https://api.github.com/repos/huggingface/transformers/issues/5256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5256/comments
https://api.github.com/repos/huggingface/transformers/issues/5256/events
https://github.com/huggingface/transformers/issues/5256
644,956,043
MDU6SXNzdWU2NDQ5NTYwNDM=
5,256
RobertaTokenizerFast produces a different output than RobertaTokenizer
{ "login": "HHousen", "id": 11785397, "node_id": "MDQ6VXNlcjExNzg1Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/11785397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HHousen", "html_url": "https://github.com/HHousen", "followers_url": "https://api.github.com/users/HHousen/followers", "following_url": "https://api.github.com/users/HHousen/following{/other_user}", "gists_url": "https://api.github.com/users/HHousen/gists{/gist_id}", "starred_url": "https://api.github.com/users/HHousen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HHousen/subscriptions", "organizations_url": "https://api.github.com/users/HHousen/orgs", "repos_url": "https://api.github.com/users/HHousen/repos", "events_url": "https://api.github.com/users/HHousen/events{/privacy}", "received_events_url": "https://api.github.com/users/HHousen/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "For records for the new users: Now, it seemed to have been resolved. I tried to reproduce this in my notebook but getting same results for both of them:\r\n\r\n<img width=\"1383\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/104596164/bde951ac-8b9c-4996-966b-1d11f4c12d35\">\r\n" ]
1,593
1,684
1,593
CONTRIBUTOR
null
# 🐛 Bug `RobertaTokenizerFast.tokenize()` produces a different output than `RobertaTokenizer.tokenize()`. I am not sure if this is an issue that will impact model performance. Is this intended? I assumed the fast tokenizers should be consistent with the normal ones in terms of outputs. ## Information Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python from transformers import RobertaTokenizer, RobertaTokenizerFast tokenizer = RobertaTokenizer.from_pretrained("roberta-base") tokens = tokenizer.tokenize("This is a test. </s> <s> Another one. </s> <s> Yet another one.") print("Normal Tokens: " + str(tokens)) ids = tokenizer.convert_tokens_to_ids(tokens) print("Normal IDs: " + str(ids)) tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base") tokens = tokenizer.tokenize("This is a test. </s> <s> Another one. 
</s> <s> Yet another one.") print("Fast Tokens: " + str(tokens)) ids = tokenizer.convert_tokens_to_ids(tokens) print("Fast IDs: " + str(ids)) ``` Output: ``` Normal Tokens: ['This', 'Ġis', 'Ġa', 'Ġtest', '.', '</s>', '<s>', 'ĠAnother', 'Ġone', '.', '</s>', '<s>', 'ĠYet', 'Ġanother', 'Ġone', '.'] Normal IDs: [713, 16, 10, 1296, 4, 2, 0, 2044, 65, 4, 2, 0, 3507, 277, 65, 4] Fast Tokens: ['ĠThis', 'Ġis', 'Ġa', 'Ġtest', '.', 'Ġ', '</s>', 'Ġ', '<s>', 'ĠAnother', 'Ġone', '.', 'Ġ', '</s>', 'Ġ', '<s>', 'ĠYet', 'Ġanother', 'Ġone', '.'] Fast IDs: [152, 16, 10, 1296, 4, 1437, 2, 1437, 0, 2044, 65, 4, 1437, 2, 1437, 0, 3507, 277, 65, 4] ``` Using `tokenizer.encode()` instead of `tokenizer.convert_tokens_to_ids(tokenizer.tokenize())` solves the discrepancy with the first token but still inserts token id `1437` between `</s>` and `<s>`. ## Expected behavior `RobertaTokenizerFast` produces the same output as `RobertaTokenizer`. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5256/timeline
completed
null
null
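The discrepancy reported in the issue record above comes down to the GPT-2/RoBERTa byte-level BPE convention: the space *before* a word belongs to the word, rendered as `Ġ` in the vocabulary. The following toy sketch (plain Python; `pretokenize` is a hypothetical helper, not the real `tokenizers` implementation, although `add_prefix_space` is a real option on the RoBERTa tokenizers) illustrates why `"This"` at the start of a string and `"ĠThis"` after a space map to different vocabulary entries:

```python
# Toy sketch of GPT-2/RoBERTa-style byte-level pre-tokenization, where the
# space *before* a word belongs to the word (shown as "Ġ" in the vocab).
# `pretokenize` is a hypothetical helper, not the real `tokenizers` code.

def pretokenize(text, add_prefix_space=False):
    """Split on spaces, keeping each leading space attached to the next word."""
    if add_prefix_space and not text.startswith(" "):
        text = " " + text
    words, current = [], ""
    for ch in text:
        if ch == " ":
            if current:
                words.append(current)
            current = " "  # the space opens the next word
        else:
            current += ch
    if current:
        words.append(current)
    return [w.replace(" ", "\u0120") for w in words]  # render the space as Ġ

print(pretokenize("This is a test."))
# -> ['This', 'Ġis', 'Ġa', 'Ġtest.']  (first word has no Ġ prefix)
print(pretokenize("This is a test.", add_prefix_space=True))
# -> ['ĠThis', 'Ġis', 'Ġa', 'Ġtest.']
```

Note also that a run of spaces yields a lone `Ġ` token in this sketch, which loosely mirrors the stray `Ġ` (id 1437) the fast tokenizer inserts around `</s>`/`<s>` in the report above.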
https://api.github.com/repos/huggingface/transformers/issues/5255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5255/comments
https://api.github.com/repos/huggingface/transformers/issues/5255/events
https://github.com/huggingface/transformers/pull/5255
644,860,276
MDExOlB1bGxSZXF1ZXN0NDM5NDExMzAy
5,255
Fix first test
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5255/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5255", "html_url": "https://github.com/huggingface/transformers/pull/5255", "diff_url": "https://github.com/huggingface/transformers/pull/5255.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5255.patch", "merged_at": 1593026165000 }
https://api.github.com/repos/huggingface/transformers/issues/5254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5254/comments
https://api.github.com/repos/huggingface/transformers/issues/5254/events
https://github.com/huggingface/transformers/pull/5254
644,856,787
MDExOlB1bGxSZXF1ZXN0NDM5NDA4MTY2
5,254
Move GenerationMixin to separate file
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=h1) Report\n> Merging [#5254](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.52%`.\n> The diff coverage is `77.79%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5254/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5254 +/- ##\n==========================================\n+ Coverage 77.01% 77.53% +0.52% \n==========================================\n Files 128 140 +12 \n Lines 21615 24334 +2719 \n==========================================\n+ Hits 16646 18868 +2222 \n- Misses 4969 5466 +497 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| 
[src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| ... and [159 more](https://codecov.io/gh/huggingface/transformers/pull/5254/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=footer). Last update [482a599...356e825](https://codecov.io/gh/huggingface/transformers/pull/5254?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome, very much in favor of this change! \r\n\r\n", "I changed the name and mentioned the intended child class in the comment as suggested by @sshleifer \r\n\r\nI tried playing around with `importlib` to import `shape_list` dynamically from `modeling_tf_utils.py` but I'm getting stumped making it work with relative imports. Any suggestions / pointers @patrickvonplaten?\r\n\r\nI also didn't see `importlib` used anywhere else for similar purposes so I feel a little uneasy about bringing in additional machinery to the lib :) ", "> This is great! That cleans up the `modeling_(tf_)utils` a lot!\r\n> \r\n> I'm thinking that `modeling_generation_utils.py` and `modeling_tf_generation_utils.py` would probably be better names. When I see this I'm thinking that `generation` is another model, rather than a utility file.\r\n> \r\n> Pinging @thomwolf so he can give his opinion on introducing this mixin.\r\n\r\nChanged the names :) ", "I moved `shape_list` back to the main `modeling_tf_utils.py` (duplicated in `generation_tf_utils.py`) and renamed the files to Thom's suggestion. Should be ready to merge @LysandreJik !" ]
1,593
1,593
1,593
MEMBER
null
This PR splits the `modeling_utils.py` and `modeling_tf_utils.py` by moving the code and methods related to generation to `modeling_generation.py` and `modeling_tf_generation.py` respectively. Both of these files were getting pretty long, with the code dedicated to generation taking about 1000 LOC in each while being completely disjoint from the rest. This re-organization should make the code easier to read and contribute to. There are no functional changes, I literally just created a new Mixin class for each and moved the functions as is.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5254", "html_url": "https://github.com/huggingface/transformers/pull/5254", "diff_url": "https://github.com/huggingface/transformers/pull/5254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5254.patch", "merged_at": 1593528128000 }
https://api.github.com/repos/huggingface/transformers/issues/5253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5253/comments
https://api.github.com/repos/huggingface/transformers/issues/5253/events
https://github.com/huggingface/transformers/pull/5253
644,847,563
MDExOlB1bGxSZXF1ZXN0NDM5NDAwMDEx
5,253
Use master _static
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
Make all doc versions use the _static from master to make sure they all have the same version controller, link to the Hugging Face logo, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5253", "html_url": "https://github.com/huggingface/transformers/pull/5253", "diff_url": "https://github.com/huggingface/transformers/pull/5253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5253.patch", "merged_at": 1593025575000 }
https://api.github.com/repos/huggingface/transformers/issues/5252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5252/comments
https://api.github.com/repos/huggingface/transformers/issues/5252/events
https://github.com/huggingface/transformers/pull/5252
644,761,355
MDExOlB1bGxSZXF1ZXN0NDM5MzI3MDQ3
5,252
[Tokenization] Fix #5181 - make #5155 more explicit - move back the default logging level in tests to WARNING
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=h1) Report\n> Merging [#5252](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac91107119f95a9034e5404bd5af34355d0ffa5&el=desc) will **increase** coverage by `1.60%`.\n> The diff coverage is `86.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5252/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5252 +/- ##\n==========================================\n+ Coverage 77.48% 79.09% +1.60% \n==========================================\n Files 138 138 \n Lines 24073 24071 -2 \n==========================================\n+ Hits 18653 19038 +385 \n+ Misses 5420 5033 -387 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <85.71%> (-1.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.16% <100.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.35% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.41%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.56%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+3.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.41% <0.00%> (+42.11%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5252/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=footer). Last update [7ac9110...230551f](https://codecov.io/gh/huggingface/transformers/pull/5252?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ok, this one is ready for review/merge.\r\nIt fixes an important bug in the interaction of padding and truncation for slow tokenizers in the new backend.\r\nIt also adds a lot of tests and tweaks a bit the tests to make them faster and less verbose." ]
1,593
1,593
1,593
MEMBER
null
Padding to max sequence length while truncating to another length did not behave as expected on slow tokenizers, as raised in #5181 by @sshleifer (it was truncating and then padding back to the original length...). This PR adds more tests to cover various combinations of padding + truncation strategies. Fix #5181 This PR also: - makes #5155 clearer by changing the assertion into a cleaner error message (until the data processors are refactored) - moves back the default logging level in tests to `logging.WARNING` - switches some really slow tokenization tests on CPU (when the inputs go into the full models) to `@slow` and speeds up testing by limiting the max sequence length used for testing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5252/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5252", "html_url": "https://github.com/huggingface/transformers/pull/5252", "diff_url": "https://github.com/huggingface/transformers/pull/5252.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5252.patch", "merged_at": 1593098669000 }
https://api.github.com/repos/huggingface/transformers/issues/5251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5251/comments
https://api.github.com/repos/huggingface/transformers/issues/5251/events
https://github.com/huggingface/transformers/pull/5251
644,739,240
MDExOlB1bGxSZXF1ZXN0NDM5MzA4OTIz
5,251
Fix version controller links (for realsies)
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
This time tested on pretty much all situations, adding proper links as a result. In particular: - not sure whether `location.toString()` will end with a '/' or not when on the index; this works with both - when nested, use the previous value and not an absolute one - the base url needs to be sliced up to the version, including it for stable
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5251", "html_url": "https://github.com/huggingface/transformers/pull/5251", "diff_url": "https://github.com/huggingface/transformers/pull/5251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5251.patch", "merged_at": 1593015224000 }
https://api.github.com/repos/huggingface/transformers/issues/5250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5250/comments
https://api.github.com/repos/huggingface/transformers/issues/5250/events
https://github.com/huggingface/transformers/pull/5250
644,727,873
MDExOlB1bGxSZXF1ZXN0NDM5Mjk5NjMw
5,250
Fix tensor label type inference in default collator
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What's the perf impact of a try/catch?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=h1) Report\n> Merging [#5250](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49f6e7a3c6729025e0d412ee19786c71811a6390&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5250/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5250 +/- ##\n=======================================\n Coverage 77.96% 77.96% \n=======================================\n Files 138 138 \n Lines 23886 23887 +1 \n=======================================\n+ Hits 18622 18623 +1 \n Misses 5264 5264 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.36% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5250/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, 
`? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=footer). Last update [49f6e7a...513dcb5](https://codecov.io/gh/huggingface/transformers/pull/5250?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Is it supposed to be slower? I ran a quick profile iterating over IMDb train set and saw no difference, but can manually check for `torch.tensor` instead if that's preferred." ]
1,593
1,598
1,593
CONTRIBUTOR
null
Quick fix to `default_data_collator` allowing it to recognize the correct type of PT tensor label inputs rather than always casting them to float (#5060)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5250", "html_url": "https://github.com/huggingface/transformers/pull/5250", "diff_url": "https://github.com/huggingface/transformers/pull/5250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5250.patch", "merged_at": 1593621614000 }
https://api.github.com/repos/huggingface/transformers/issues/5249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5249/comments
https://api.github.com/repos/huggingface/transformers/issues/5249/events
https://github.com/huggingface/transformers/issues/5249
644,727,499
MDU6SXNzdWU2NDQ3Mjc0OTk=
5,249
Distilroberta Tokenizer and Encoder not aligning
{ "login": "dhairyadalal", "id": 22040959, "node_id": "MDQ6VXNlcjIyMDQwOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/22040959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhairyadalal", "html_url": "https://github.com/dhairyadalal", "followers_url": "https://api.github.com/users/dhairyadalal/followers", "following_url": "https://api.github.com/users/dhairyadalal/following{/other_user}", "gists_url": "https://api.github.com/users/dhairyadalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhairyadalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhairyadalal/subscriptions", "organizations_url": "https://api.github.com/users/dhairyadalal/orgs", "repos_url": "https://api.github.com/users/dhairyadalal/repos", "events_url": "https://api.github.com/users/dhairyadalal/events{/privacy}", "received_events_url": "https://api.github.com/users/dhairyadalal/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "Dug a bit deeper. It seems encode doesn't deterministically tokenize on white-spaces. In the cat example: 16101 -> \"super\" and 2422 -> \" super\". Is there an option in encode to force white space splitting. I guess the hack is to tokenize each word separately but that rather inefficient ", "For the GPT2/Roberta tokenizers, the space before a word is part of the word which explain the discrepancy you see.\r\n\r\nYou can set `add_prefix_space` at initialization, e.g. `tokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=True)` to always add a space before the text. Performances will be slightly lower as showed in https://github.com/huggingface/transformers/issues/3788 but you will get a consistent behavior when encoding a text and a subspan separately.", "I'm not seeing the behavior you described @thomwolf . Perhaps I'm misunderstanding what add_prefix_space does. \r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=True)\r\ntext = \"Cats are super coolio\"\r\nsubtext = \"super coolio\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\nprint(tokenizer.encode(subtext, add_special_tokens=False))\r\n#[0, 20913, 32, **2422, 3035, 1020**, 2]\r\n#[16101, 3035, 1020]\r\n```\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=False)\r\ntext = \"Cats are super coolio\"\r\nsubtext = \"super coolio\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\nprint(tokenizer.encode(subtext, add_special_tokens=False))\r\n# [0, 20913, 32, **2422, 3035, 1020**, 2]\r\n# [16101, 3035, 1020]\r\n```\r\nHowever, I add special tokens on the subtext, that does seem to work. It just requires extra line parse out the special characters during insertion but I can work with that. 
This behavior work regardless of whether add_prefix_space is enabled or not.\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\")\r\ntext = \"Cats are super coolio\"\r\nsubtext = \"super coolio\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\nprint(tokenizer.encode(subtext, add_special_tokens=True))\r\n\r\n# [0, 20913, 32, 2422, 3035, 1020, 2]\r\n# [0, 2422, 3035, 1020, 2]\r\n```\r\n\r\n", "I see. This is now fixed on master and will be in the next release.\r\nHere is the current behavior with your examples:\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=True)\r\ntext = \"Cats are super coolio\"\r\nsubtext = \"super coolio\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\n[0, 20913, 32, 2422, 3035, 1020, 2]\r\nprint(tokenizer.encode(subtext, add_special_tokens=False))\r\n[2422, 3035, 1020]\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=False)\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\n[0, 347, 2923, 32, 2422, 3035, 1020, 2]\r\nprint(tokenizer.encode(subtext, add_special_tokens=False))\r\n[16101, 3035, 1020]\r\n```\r\n\r\nYou can also confirm the behavior with the tokens (the prefix space of the word is this `Ġ` in GPT2/Roberta tokenizers):\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=True)\r\ntokenizer.tokenize(text)\r\n['ĠCats', 'Ġare', 'Ġsuper', 'Ġcool', 'io']\r\ntokenizer.tokenize(subtext)\r\n['Ġsuper', 'Ġcool', 'io']\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\", add_prefix_space=False)\r\ntokenizer.tokenize(text)\r\n['C', 'ats', 'Ġare', 'Ġsuper', 'Ġcool', 'io']\r\ntokenizer.tokenize(subtext)\r\n['super', 'Ġcool', 'io']\r\n```", "@thomwolf Thanks! I'll go ahead close this issue. Appreciate the quick turnaround and fix. " ]
1,593
1,593
1,593
NONE
null
I'm working on a sequence tagging task where I need to extract cause and effect sub-spans from a given text. For example, extract <e1> and <e2> for the text below. `"<e2>The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older.</e2> <e1>It is consistently one of the most popular destinations for retirees due to affordability and low taxes.</e1> Florida's $17.7 billion in net AGI dwarves the remaining 19 states that saw a positive net influx of income - which combined for a total of $19.4 billion."` For the model, I'm trying to generate BIO-style tags that align with the tokenized input from the DistilRoBERTa tokenizer. What I'm finding is that after encoding the tokenized input, there is a misalignment with the expected tags. ``` text = """The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older. It is consistently one of the most popular destinations for retirees due to affordability and low taxes. Florida's $17.7 billion in net AGI dwarves the remaining 19 states that saw a positive net influx of income - which combined for a total of $19.4 billion.""" cause = 'It is consistently one of the most popular destinations for retirees due to affordability and low taxes.' effect = 'The Sunshine State drew in a net influx of about $17.7 billion in adjusted gross income (AGI) - most of which (72 percent) came from those aged 55 and older.' 
# Test that cause and effect are valid substring in text print(text.find(cause)) # 160 print(text.find(effect)) # 0 from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilroberta-base") # Convert cause into Tags cause_toks = tokenizer.tokenize(cause) cause_tags = ["B-cause"] + ["I-cause"] * (len(cause_toks) -1) # Convert effect into Tags effect_toks = tokenizer.tokenize(effect) effect_tags = ["B-cause"] + ["I-cause"] * (len(cause_toks) -1) # Convert text tokinzed string text_toks = tokenizer.tokenize(text) text_toks_string = " ".join(text_toks) text_toks_string = text_toks_string.replace(" ".join(cause_toks), " ".join(cause_tags)) text_toks_string = text_toks_string.replace(" ".join(effect_toks)," ".join(effect_tags)) text_toks = [tok if tok in ["B-cause", "I-cause", "B-effect", "I-effect"] else "O" for tok in text_toks_string.split()] #['B-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'O', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'I-cause', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] print("text toks len: ", len(text_toks) + 2) # include start and end tokens print("encoded text len: ", len(tokenizer.encode(text))) # text toks len: 77 # encoded text len: 100 ``` I also found I don't seem to get consistent behavior encoding a text and subspan seperately either. ``` text = "Cats are super coolio" subtext = "super coolio" print(tokenizer.encode(text)) # [0, 20913, 32, 2422, 3035, 1020, 2] print(tokenizer.encode(subtext, add_special_tokens=False))` # [16101, 3035, 1020] ``` Is my understanding on encode wrong? 
I thought encode converts the BPE tokens to numerical values and adds the cls and sep tokens at the beginning and end. But it seems like something else is going on.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5249/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5248/comments
https://api.github.com/repos/huggingface/transformers/issues/5248/events
https://github.com/huggingface/transformers/pull/5248
644,710,068
MDExOlB1bGxSZXF1ZXN0NDM5Mjg1MTg2
5,248
Fix links in version selector
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
This fixes the links provided by the version selector.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5248", "html_url": "https://github.com/huggingface/transformers/pull/5248", "diff_url": "https://github.com/huggingface/transformers/pull/5248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5248.patch", "merged_at": 1593012955000 }
https://api.github.com/repos/huggingface/transformers/issues/5247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5247/comments
https://api.github.com/repos/huggingface/transformers/issues/5247/events
https://github.com/huggingface/transformers/pull/5247
644,695,518
MDExOlB1bGxSZXF1ZXN0NDM5MjczMzk5
5,247
[WIP] Support label_smoothed_cross_entropy
{ "login": "ieBoytsov", "id": 61888740, "node_id": "MDQ6VXNlcjYxODg4NzQw", "avatar_url": "https://avatars.githubusercontent.com/u/61888740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ieBoytsov", "html_url": "https://github.com/ieBoytsov", "followers_url": "https://api.github.com/users/ieBoytsov/followers", "following_url": "https://api.github.com/users/ieBoytsov/following{/other_user}", "gists_url": "https://api.github.com/users/ieBoytsov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ieBoytsov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ieBoytsov/subscriptions", "organizations_url": "https://api.github.com/users/ieBoytsov/orgs", "repos_url": "https://api.github.com/users/ieBoytsov/repos", "events_url": "https://api.github.com/users/ieBoytsov/events{/privacy}", "received_events_url": "https://api.github.com/users/ieBoytsov/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=h1) Report\n> Merging [#5247](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aa6a29bc25b663e1311c5c4fb96b004cf8a6d2b6&el=desc) will **increase** coverage by `0.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5247/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5247 +/- ##\n==========================================\n+ Coverage 77.92% 78.30% +0.38% \n==========================================\n Files 137 137 \n Lines 23475 23475 \n==========================================\n+ Hits 18292 18383 +91 \n+ Misses 5183 5092 -91 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `95.18% <0.00%> (+0.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.28% <0.00%> (+0.82%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not 
affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=footer). Last update [aa6a29b...4d63b64](https://codecov.io/gh/huggingface/transformers/pull/5247?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "The one thing I don't understand is where to initialise this loss in `finetune.py`.\r\n\r\nAs I see it, there are currently no loss instances in `finetune.py`; the loss function comes from the given model and is initialised in `modeling_*.py`", "This PR seems outdated. The issue has been resolved with PR #5919", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,603
1,603
CONTRIBUTOR
null
By default there is no implementation of cross entropy with soft labels in PyTorch, as discussed in #5168. I found a feature request in PyTorch for it, but it is still not done. https://github.com/pytorch/pytorch/issues/7455 There is also a discussion where I found an implemented version of this loss. I checked that it performs the same as nn.CrossEntropyLoss given smoothing=0.0 and accurately smooths labels given nonzero smoothing. I ported it with small refactoring. Key improvements planned in this PR: 1) Add label-smoothed cross entropy loss 2) support and parametrise the loss choice in `finetune.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5247", "html_url": "https://github.com/huggingface/transformers/pull/5247", "diff_url": "https://github.com/huggingface/transformers/pull/5247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5247.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5246/comments
https://api.github.com/repos/huggingface/transformers/issues/5246/events
https://github.com/huggingface/transformers/pull/5246
644,684,576
MDExOlB1bGxSZXF1ZXN0NDM5MjY0NDg5
5,246
Fix deploy doc
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
Try to update the master doc like the stable docs to see if this fixes the problem of things not being copied over.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5246/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5246", "html_url": "https://github.com/huggingface/transformers/pull/5246", "diff_url": "https://github.com/huggingface/transformers/pull/5246.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5246.patch", "merged_at": 1593010746000 }
https://api.github.com/repos/huggingface/transformers/issues/5245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5245/comments
https://api.github.com/repos/huggingface/transformers/issues/5245/events
https://github.com/huggingface/transformers/pull/5245
644,672,255
MDExOlB1bGxSZXF1ZXN0NDM5MjU0MzAy
5,245
[Benchmarks] improve Example Plotter
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=h1) Report\n> Merging [#5245](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5245/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5245 +/- ##\n==========================================\n- Coverage 77.93% 77.92% -0.01% \n==========================================\n Files 138 138 \n Lines 23860 23859 -1 \n==========================================\n- Hits 18595 18593 -2 \n- Misses 5265 5266 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=footer). 
Last update [9fe09ce...882b09d](https://codecov.io/gh/huggingface/transformers/pull/5245?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
This PR makes it possible to plot CSV files that have "N/A" values.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5245/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5245", "html_url": "https://github.com/huggingface/transformers/pull/5245", "diff_url": "https://github.com/huggingface/transformers/pull/5245.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5245.patch", "merged_at": 1593176415000 }
https://api.github.com/repos/huggingface/transformers/issues/5244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5244/comments
https://api.github.com/repos/huggingface/transformers/issues/5244/events
https://github.com/huggingface/transformers/pull/5244
644,665,106
MDExOlB1bGxSZXF1ZXN0NDM5MjQ4NDI0
5,244
Add some prints to debug deploy script
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,593
1,593
1,593
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5244", "html_url": "https://github.com/huggingface/transformers/pull/5244", "diff_url": "https://github.com/huggingface/transformers/pull/5244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5244.patch", "merged_at": 1593009422000 }
https://api.github.com/repos/huggingface/transformers/issues/5243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5243/comments
https://api.github.com/repos/huggingface/transformers/issues/5243/events
https://github.com/huggingface/transformers/pull/5243
644,622,335
MDExOlB1bGxSZXF1ZXN0NDM5MjEzNDg2
5,243
Don't recreate old docs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=h1) Report\n> Merging [#5243](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/173528e3685bc4321630b7f979d01896c57a5c15&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5243/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5243 +/- ##\n==========================================\n+ Coverage 77.96% 77.99% +0.02% \n==========================================\n Files 138 138 \n Lines 23839 23839 \n==========================================\n+ Hits 18586 18593 +7 \n+ Misses 5253 5246 -7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.15% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=footer). Last update [173528e...80b87e8](https://codecov.io/gh/huggingface/transformers/pull/5243?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
COLLABORATOR
null
Change the check to look at a directory on the doc hosts instead of CircleCI to avoid creating old docs at each commit.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5243", "html_url": "https://github.com/huggingface/transformers/pull/5243", "diff_url": "https://github.com/huggingface/transformers/pull/5243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5243.patch", "merged_at": 1593007147000 }
https://api.github.com/repos/huggingface/transformers/issues/5242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5242/comments
https://api.github.com/repos/huggingface/transformers/issues/5242/events
https://github.com/huggingface/transformers/pull/5242
644,619,250
MDExOlB1bGxSZXF1ZXN0NDM5MjEwOTA1
5,242
[Benchmark] fix print in benchmark
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=h1) Report\n> Merging [#5242](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5242/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5242 +/- ##\n==========================================\n- Coverage 77.93% 77.91% -0.03% \n==========================================\n Files 138 138 \n Lines 23860 23860 \n==========================================\n- Hits 18595 18590 -5 \n- Misses 5265 5270 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=footer). Last update [9fe09ce...911baec](https://codecov.io/gh/huggingface/transformers/pull/5242?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
Tiny change to pretty-print the results.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5242", "html_url": "https://github.com/huggingface/transformers/pull/5242", "diff_url": "https://github.com/huggingface/transformers/pull/5242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5242.patch", "merged_at": 1593007130000 }
https://api.github.com/repos/huggingface/transformers/issues/5241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5241/comments
https://api.github.com/repos/huggingface/transformers/issues/5241/events
https://github.com/huggingface/transformers/pull/5241
644,588,524
MDExOlB1bGxSZXF1ZXN0NDM5MTg1NjQ0
5,241
[Benchmark] Extend Benchmark to all model type extensions
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=h1) Report\n> Merging [#5241](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **increase** coverage by `0.44%`.\n> The diff coverage is `40.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5241/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5241 +/- ##\n==========================================\n+ Coverage 77.49% 77.93% +0.44% \n==========================================\n Files 138 138 \n Lines 23787 23806 +19 \n==========================================\n+ Hits 18433 18554 +121 \n+ Misses 5354 5252 -102 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <33.33%> (-5.12%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `79.81% <37.50%> (-2.88%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.36% <100.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <100.00%> (+0.07%)` | 
:arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5241/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=footer). Last update [1ae132a...a007369](https://codecov.io/gh/huggingface/transformers/pull/5241?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,593
1,593
1,593
MEMBER
null
This PR does the following changes: 1) - The default model class to benchmark is the one that can be found under config.architectures 2) - Improve plotting file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5241/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5241", "html_url": "https://github.com/huggingface/transformers/pull/5241", "diff_url": "https://github.com/huggingface/transformers/pull/5241.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5241.patch", "merged_at": 1593004303000 }
https://api.github.com/repos/huggingface/transformers/issues/5240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5240/comments
https://api.github.com/repos/huggingface/transformers/issues/5240/events
https://github.com/huggingface/transformers/pull/5240
644,587,971
MDExOlB1bGxSZXF1ZXN0NDM5MTg1MTc4
5,240
[WIP] Add 🤗nlp in examples using the updated tokenizer API
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=h1) Report\n> Merging [#5240](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7c41057d5090f5e665f2404878369ecb13939def&el=desc) will **decrease** coverage by `1.27%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5240/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5240 +/- ##\n==========================================\n- Coverage 78.34% 77.07% -1.28% \n==========================================\n Files 138 138 \n Lines 23841 23841 \n==========================================\n- Hits 18679 18376 -303 \n- Misses 5162 5465 +303 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: |\n| 
[src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=footer). Last update [7c41057...9df4881](https://codecov.io/gh/huggingface/transformers/pull/5240?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,651
1,599
MEMBER
null
This PR superseded #4864 This PR examines how to best make use of all the features of 🤗nlp in the examples. First example studied is GLUE. The main goal is to have explicit data processing (target: no data processing happening inside transformers) as well as add some efficiency features like dynamic/optimized batching. The second goal is to make this a lot more efficient, fast, and reproducible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5240", "html_url": "https://github.com/huggingface/transformers/pull/5240", "diff_url": "https://github.com/huggingface/transformers/pull/5240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5240.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5239/comments
https://api.github.com/repos/huggingface/transformers/issues/5239/events
https://github.com/huggingface/transformers/issues/5239
644,566,742
MDU6SXNzdWU2NDQ1NjY3NDI=
5,239
Multilingual MNLI model
{ "login": "iuria21", "id": 26438571, "node_id": "MDQ6VXNlcjI2NDM4NTcx", "avatar_url": "https://avatars.githubusercontent.com/u/26438571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iuria21", "html_url": "https://github.com/iuria21", "followers_url": "https://api.github.com/users/iuria21/followers", "following_url": "https://api.github.com/users/iuria21/following{/other_user}", "gists_url": "https://api.github.com/users/iuria21/gists{/gist_id}", "starred_url": "https://api.github.com/users/iuria21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iuria21/subscriptions", "organizations_url": "https://api.github.com/users/iuria21/orgs", "repos_url": "https://api.github.com/users/iuria21/repos", "events_url": "https://api.github.com/users/iuria21/events{/privacy}", "received_events_url": "https://api.github.com/users/iuria21/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,593
1,598
1,598
NONE
null
Hi, I'm trying Zero-Shot Learning with 'facebook/bart-large-mnli' and it's pretty well, but I want to do it in Spanish. Is there any multilingual mnli model? Thanks for your attention!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5239/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5238/comments
https://api.github.com/repos/huggingface/transformers/issues/5238/events
https://github.com/huggingface/transformers/issues/5238
644,565,283
MDU6SXNzdWU2NDQ1NjUyODM=
5,238
Not Implemented Error
{ "login": "4rshdeep", "id": 23432952, "node_id": "MDQ6VXNlcjIzNDMyOTUy", "avatar_url": "https://avatars.githubusercontent.com/u/23432952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/4rshdeep", "html_url": "https://github.com/4rshdeep", "followers_url": "https://api.github.com/users/4rshdeep/followers", "following_url": "https://api.github.com/users/4rshdeep/following{/other_user}", "gists_url": "https://api.github.com/users/4rshdeep/gists{/gist_id}", "starred_url": "https://api.github.com/users/4rshdeep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/4rshdeep/subscriptions", "organizations_url": "https://api.github.com/users/4rshdeep/orgs", "repos_url": "https://api.github.com/users/4rshdeep/repos", "events_url": "https://api.github.com/users/4rshdeep/events{/privacy}", "received_events_url": "https://api.github.com/users/4rshdeep/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" } ]
closed
false
null
[]
[ "This is likely due because you are using a wrong version of Tensorflow. Can you run `transformers-cli env`, as the template suggests, and place the result here?", "Can you post the full stack trace? It's hard to debug this way.", "@BramVanroy for your reference\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-2-db5a2030f0d4> in <module>\r\n 3 inputs = tf.keras.Input(shape=(50, 64), dtype='int32')\r\n 4 model = TFBertModel.from_pretrained('bert-base-uncased')\r\n----> 5 outputs = tf.keras.layers.TimeDistributed(model)(inputs)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)\r\n 840 not base_layer_utils.is_in_eager_or_tf_function()):\r\n 841 with auto_control_deps.AutomaticControlDependencies() as acd:\r\n--> 842 outputs = call_fn(cast_inputs, *args, **kwargs)\r\n 843 # Wrap Tensors in `outputs` in `tf.identity` to avoid\r\n 844 # circular dependencies.\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py in call(self, inputs, training, mask)\r\n 254 y = self.layer(inputs, **kwargs)\r\n 255 # Shape: (num_samples, timesteps, ...)\r\n--> 256 output_shape = self.compute_output_shape(input_shape).as_list()\r\n 257 output_shape = self._get_shape_tuple(\r\n 258 (-1, input_length), y, 1, output_shape[2:])\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py in compute_output_shape(self, input_shape)\r\n 208 child_input_shape = tensor_shape.TensorShape([input_shape[0]] +\r\n 209 input_shape[2:])\r\n--> 210 child_output_shape = self.layer.compute_output_shape(child_input_shape)\r\n 211 if not isinstance(child_output_shape, tensor_shape.TensorShape):\r\n 212 child_output_shape = 
tensor_shape.TensorShape(child_output_shape)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in compute_output_shape(self, input_shape)\r\n 710 def compute_output_shape(self, input_shape):\r\n 711 if not self._is_graph_network:\r\n--> 712 return super(Network, self).compute_output_shape(input_shape)\r\n 713 \r\n 714 # Convert any shapes in tuple format to TensorShapes.\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in compute_output_shape(self, input_shape)\r\n 637 'layer (%s).' % self.__class__.__name__)\r\n 638 return nest.map_structure(lambda t: t.shape, outputs)\r\n--> 639 raise NotImplementedError\r\n 640 \r\n 641 @doc_controls.for_subclass_implementers\r\n\r\nNotImplementedError: \r\n```", "Could this be caused by the fact that TFBertModel returns a tuple (hidden states, pooled output)? In @4rshdeep's case, would we expect TimeDistributed to return a shape of (batch_size, 50, 768), or (batch_size, 50, 64, 768)?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I'm having the same error with using the data Streaming app on HuggingFace." ]
1,593
1,676
1,600
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TFBert Language I am using the model on (English, Chinese ...): Any The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce I am trying to run the Bert model on a sequence of sentences. Steps to reproduce the behavior: ``` import tensorflow as tf from transformers import BertTokenizer, TFBertModel inputs = tf.keras.Input(shape=(50, 64), dtype='int32') model = TFBertModel.from_pretrained('bert-base-uncased') outputs = tf.keras.layers.TimeDistributed(model)(inputs) ``` I get a not implemented error, NotImplementedError Traceback (most recent call last) <ipython-input-5-631f3cd2e8b2> in <module> ----> 1 outputs = tf.keras.layers.TimeDistributed(model)(inputs) The same code works fine for ``` inputs = tf.keras.Input(shape=(10, 128, 128, 3)) conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3)) outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs) outputs.shape ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I should be able to run the bert model on the sequence of sentences. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.0.0 - Platform: Linux-5.0.0-1028-azure-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.0.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5238/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5237/comments
https://api.github.com/repos/huggingface/transformers/issues/5237/events
https://github.com/huggingface/transformers/issues/5237
644,476,456
MDU6SXNzdWU2NDQ0NzY0NTY=
5,237
BART(base) - Finetune Is this a bug ? Or I am doing something wrong?
{ "login": "ShoubhikBanerjee", "id": 44529417, "node_id": "MDQ6VXNlcjQ0NTI5NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/44529417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShoubhikBanerjee", "html_url": "https://github.com/ShoubhikBanerjee", "followers_url": "https://api.github.com/users/ShoubhikBanerjee/followers", "following_url": "https://api.github.com/users/ShoubhikBanerjee/following{/other_user}", "gists_url": "https://api.github.com/users/ShoubhikBanerjee/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShoubhikBanerjee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShoubhikBanerjee/subscriptions", "organizations_url": "https://api.github.com/users/ShoubhikBanerjee/orgs", "repos_url": "https://api.github.com/users/ShoubhikBanerjee/repos", "events_url": "https://api.github.com/users/ShoubhikBanerjee/events{/privacy}", "received_events_url": "https://api.github.com/users/ShoubhikBanerjee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you post the full error message?", "Are you asking for this or more??\r\n```bash\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-13-31ed8b67b601> in <module>()\r\n 2 # see ``examples/summarization/bart/run_eval.py`` for a longer example\r\n 3 model = BartForConditionalGeneration.from_pretrained('FinetuneOutput/best_tfmr')\r\n----> 4 tokenizer = BartTokenizer.from_pretrained('FinetuneOutput/best_tfmr/')\r\n 5 model.eval()\r\n\r\n5 frames\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in __init__(self, **kwargs)\r\n 511 else:\r\n 512 raise TypeError(\r\n--> 513 \"special token {} has to be either str or AddedTokenFast but got: {}\".format(key, type(value))\r\n 514 )\r\n 515 \r\n\r\nTypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'>\r\n```", "I can't reproduce this on master.\r\n\r\nCan you work with a stable release version or do you need features that are on master only?", "I am working from your master branch. \r\n\r\nAfter commit no #5227 , its not working, previously it was working fine.\r\n\r\nThere was also an example to load from \"best_tfmr\". But now its missing from [this](https://github.com/huggingface/transformers/tree/master/examples/summarization) readme", "Ok, we are fixing some issues related to tokenizer serialization here: https://github.com/huggingface/transformers/pull/5056\r\n\r\nMaybe it will solve your problem as well. Should be merged pretty soon.", "Thanx a lot Sir.\r\n\r\nAfter \"factory resetting\" colab, and build the project from the source again, solved the issue.\r\n\r\nI am really sorry for the inconvenience and appreciate the time you gave me. \r\n\r\nThank a lot. From next time on wards I will keep in mind to \"factory reset\" colab. :) " ]
1,592
1,592
1,592
NONE
null
## I have finetuned my dataset using latest piece of code from master, by cloning and the building. Now when I try to load my finetuned model as shown in the examples of "summarization" (previous commit). `from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('FinetuneOutput/best_tfmr') tokenizer = BartTokenizer.from_pretrained('FinetuneOutput/best_tfmr')` I got the following error : `TypeError: special token mask_token has to be either str or AddedTokenFast but got: <class 'dict'>` Am I missing something, or it is a BUG again ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5237/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5236/comments
https://api.github.com/repos/huggingface/transformers/issues/5236/events
https://github.com/huggingface/transformers/pull/5236
644,467,159
MDExOlB1bGxSZXF1ZXN0NDM5MDgzNDQ3
5,236
Model cards for Hate-speech-CNERG models
{ "login": "SaiSakethAluru", "id": 21140068, "node_id": "MDQ6VXNlcjIxMTQwMDY4", "avatar_url": "https://avatars.githubusercontent.com/u/21140068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaiSakethAluru", "html_url": "https://github.com/SaiSakethAluru", "followers_url": "https://api.github.com/users/SaiSakethAluru/followers", "following_url": "https://api.github.com/users/SaiSakethAluru/following{/other_user}", "gists_url": "https://api.github.com/users/SaiSakethAluru/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaiSakethAluru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaiSakethAluru/subscriptions", "organizations_url": "https://api.github.com/users/SaiSakethAluru/orgs", "repos_url": "https://api.github.com/users/SaiSakethAluru/repos", "events_url": "https://api.github.com/users/SaiSakethAluru/events{/privacy}", "received_events_url": "https://api.github.com/users/SaiSakethAluru/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=h1) Report\n> Merging [#5236](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e31a98ab70607c820cc2ad358d81916adad0313&el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5236/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5236 +/- ##\n==========================================\n- Coverage 78.34% 77.97% -0.37% \n==========================================\n Files 138 138 \n Lines 23841 23841 \n==========================================\n- Hits 18679 18591 -88 \n- Misses 5162 5250 +88 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-0.92%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.50% <0.00%> (-0.32%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=footer). Last update [5e31a98...fb34f21](https://codecov.io/gh/huggingface/transformers/pull/5236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,593
1,593
CONTRIBUTOR
null
Made minor updates in previous model cards and added new cards for the newly updated models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5236", "html_url": "https://github.com/huggingface/transformers/pull/5236", "diff_url": "https://github.com/huggingface/transformers/pull/5236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5236.patch", "merged_at": 1593013269000 }
https://api.github.com/repos/huggingface/transformers/issues/5235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5235/comments
https://api.github.com/repos/huggingface/transformers/issues/5235/events
https://github.com/huggingface/transformers/issues/5235
644,440,362
MDU6SXNzdWU2NDQ0NDAzNjI=
5,235
Does to T5 Transformer training scale to multiple GPUs?
{ "login": "abhisheknovoic", "id": 62595485, "node_id": "MDQ6VXNlcjYyNTk1NDg1", "avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhisheknovoic", "html_url": "https://github.com/abhisheknovoic", "followers_url": "https://api.github.com/users/abhisheknovoic/followers", "following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}", "gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions", "organizations_url": "https://api.github.com/users/abhisheknovoic/orgs", "repos_url": "https://api.github.com/users/abhisheknovoic/repos", "events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}", "received_events_url": "https://api.github.com/users/abhisheknovoic/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @abhisheknovoic , you can use T5 on multiple GPU's. Have a look at this community notebook https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb.\r\n\r\nIt uses pytorch-lightning, so its very easy to setup multi-gpu training.See this guide https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html", "@patil-suraj , thanks for the reference. I will take a look at it. Just to confirm, if I use the code as is in the notebook, will it run on multiple GPUs, or do I need to learn a bit about pytorch-lightning and then make some more changes for multi GPU support? Thanks Suraj !", "You won't need to make any changes to the code, you'll just to specify number of gpus when initialising lightning trainer. You can check their multi-gpu docs for more info ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> Hi @abhisheknovoic , you can use T5 on multiple GPU's. Have a look at this community notebook https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb.\r\n> \r\n> It uses pytorch-lightning, so its very easy to setup multi-gpu training.See this guide https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html\r\n\r\nHello, Suraj. Is there any way to to achieve model parallelism using your solution? Or maybe this tool https://towardsdatascience.com/model-parallelism-in-one-line-of-code-352b7de5645a can be applied to your code?", "Hi @patil-suraj I am trying to use your notebook with pytorch lightening for multiple tasks, in this case one have multiple eval dataloaders, and metrics are also multiple, do you have an idea how I can extend your notebook for multiple tasks to handle this? thanks a lot" ]
1,592
1,604
1,598
NONE
null
Hello team, I have a large set of sequence to sequence dataset. Basically, a huge bunch of input text sequences to output text sequences. I want to train a T5 network on this. I have the following specific questions. a. Can I use the sample code here (along with my own code) to train T5 on my data? - https://huggingface.co/transformers/model_doc/t5.html b. Will that automatically scale to multiple GPUs? What if I want to further scale to tens of GPUs across different machines. Does HuggingFace support that? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5235/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5234/comments
https://api.github.com/repos/huggingface/transformers/issues/5234/events
https://github.com/huggingface/transformers/pull/5234
644,379,983
MDExOlB1bGxSZXF1ZXN0NDM5MDExOTg5
5,234
Fix model path
{ "login": "artemg", "id": 134111, "node_id": "MDQ6VXNlcjEzNDExMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/134111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemg", "html_url": "https://github.com/artemg", "followers_url": "https://api.github.com/users/artemg/followers", "following_url": "https://api.github.com/users/artemg/following{/other_user}", "gists_url": "https://api.github.com/users/artemg/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemg/subscriptions", "organizations_url": "https://api.github.com/users/artemg/orgs", "repos_url": "https://api.github.com/users/artemg/repos", "events_url": "https://api.github.com/users/artemg/events{/privacy}", "received_events_url": "https://api.github.com/users/artemg/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5234", "html_url": "https://github.com/huggingface/transformers/pull/5234", "diff_url": "https://github.com/huggingface/transformers/pull/5234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5234.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5233/comments
https://api.github.com/repos/huggingface/transformers/issues/5233/events
https://github.com/huggingface/transformers/pull/5233
644,357,788
MDExOlB1bGxSZXF1ZXN0NDM4OTkzODMz
5,233
Fix PABEE division by zero error
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=h1) Report\n> Merging [#5233](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `1.27%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5233/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5233 +/- ##\n==========================================\n+ Coverage 77.08% 78.36% +1.27% \n==========================================\n Files 138 138 \n Lines 23841 23841 \n==========================================\n+ Hits 18379 18683 +304 \n+ Misses 5462 5158 -304 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| 
[src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5233/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=footer). Last update [9022ef0...3d20fa2](https://codecov.io/gh/huggingface/transformers/pull/5233?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
Fix the `division_by_zero` error when `patience` is set to `0` during inference. https://github.com/JetRunner/PABEE/issues/2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5233", "html_url": "https://github.com/huggingface/transformers/pull/5233", "diff_url": "https://github.com/huggingface/transformers/pull/5233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5233.patch", "merged_at": 1592986237000 }
https://api.github.com/repos/huggingface/transformers/issues/5232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5232/comments
https://api.github.com/repos/huggingface/transformers/issues/5232/events
https://github.com/huggingface/transformers/issues/5232
644,316,548
MDU6SXNzdWU2NDQzMTY1NDg=
5,232
BertTokenizerFast.convert_tokens_to_string converts ids to string, not tokens to string
{ "login": "HHousen", "id": 11785397, "node_id": "MDQ6VXNlcjExNzg1Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/11785397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HHousen", "html_url": "https://github.com/HHousen", "followers_url": "https://api.github.com/users/HHousen/followers", "following_url": "https://api.github.com/users/HHousen/following{/other_user}", "gists_url": "https://api.github.com/users/HHousen/gists{/gist_id}", "starred_url": "https://api.github.com/users/HHousen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HHousen/subscriptions", "organizations_url": "https://api.github.com/users/HHousen/orgs", "repos_url": "https://api.github.com/users/HHousen/repos", "events_url": "https://api.github.com/users/HHousen/events{/privacy}", "received_events_url": "https://api.github.com/users/HHousen/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
null
[]
[ "You're right, this method is actually not provided on the Fast tokenizers and wrongly linked to the `decode()` method.\r\nWe should remove it in the short-term.\r\n\r\nDo you need it for a specific workflow?", "I need to decode a sequence of input ids to a string. However, I cannot use `tokenizer.batch_decode` because I would like to remove all special tokens except for the [SEP] token, which I want to replace with a token that is not in the tokenizer's vocabulary (so I cannot change the input ids before decoding). To do this I modify the functionality of `tokenizer.convert_ids_to_tokens` to create my modified list of tokens, then I run `tokenizer.convert_tokens_to_string` and `tokenizer.clean_up_tokenization` to create my final sequence.", "I see.\r\n\r\nCan you add your special token at the end of the vocabulary without updating the model inputs and then just replace the SEP token by your new token id prior to decoding?\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\r\ntoken.add_tokens('[MY_NEW_TOKEN]')\r\nnew_token_id = tokenizer.convert_tokens_to_ids('[MY_NEW_TOKEN]')\r\n\r\ninputs = tokenizer.encode(\"hello how are you\")\r\ninputs = [new_token_id if tok == tokenizer.sep_token_id else tok for tok in inputs]\r\ndecoded_outputs = tokenizer.decode(inputs)\r\n```", "> I see.\r\n> \r\n> Can you add your special token at the end of the vocabulary without updating the model inputs and then just replace the SEP token by your new token id prior to decoding?\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\r\n> token.add_tokens('[MY_NEW_TOKEN]')\r\n> new_token_id = tokenizer.convert_tokens_to_ids('[MY_NEW_TOKEN]')\r\n> \r\n> inputs = tokenizer.encode(\"hello how are you\")\r\n> inputs = [new_token_id if tok == tokenizer.sep_token_id else tok for tok in inputs]\r\n> decoded_outputs = 
tokenizer.decode(inputs)\r\n> ```\r\n\r\nUsing this example works around this problem and simplifies my code. Thanks." ]
1,592
1,593
1,593
CONTRIBUTOR
null
# 🐛 Bug The `BertTokenizerFast.convert_tokens_to_string` function expects a list of integers instead of a list of strings as the function implies. This does not happen for the normal `BertTokenizer`. The [BertTokenizerFast](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_bert.py#L550) does not override `convert_tokens_to_string` as it is defined in [tokenization_utils_fast.py](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_fast.py#L206), which causes this issue. Within `tokenization_utils_fast.py`, the `convert_tokens_to_string` function calls `self._tokenizer.decode` which expects ids (integers not strings). This issue does not arise when using the normal BertTokenizer because that class overrides `convert_tokens_to_string` as can be seen [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_bert.py#L230). However, the implementation in [tokenization_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L839) is incorrect according to the docstring. The function should return `" ".join(tokens)` by default and the call to `convert_ids_to_tokens` should be removed because that function accepts ids not tokens. 
## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ``` from transformers import BertTokenizerFast, BertTokenizer # Error tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") tokens = tokenizer.tokenize("This is a sentence.") print(tokens) output = tokenizer.convert_tokens_to_string(tokens) # No Error because `convert_tokens_to_string` overridden tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") tokens = tokenizer.tokenize("This is a sentence.") print(tokens) output = tokenizer.convert_tokens_to_string(tokens) ``` Output: ``` ['this', 'is', 'a', 'sentence', '.'] Traceback (most recent call last): File "test.py", line 7, in <module> output = tokenizer.convert_tokens_to_string(tokens) File "/home/user/anaconda3/envs/testing/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 209, in convert_tokens_to_string return self._tokenizer.decode(tokens, skip_special_tokens=skip_special_tokens) File "/home/user/anaconda3/envs/testing/lib/python3.8/site-packages/tokenizers/implementations/base_tokenizer.py", line 267, in decode return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) TypeError: 'str' object cannot be interpreted as an integer ``` ## Expected behavior The `BertTokenizerFast.convert_tokens_to_string` function converts a list of tokens (which are strings) to a single string. 
## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5232/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5231/comments
https://api.github.com/repos/huggingface/transformers/issues/5231/events
https://github.com/huggingface/transformers/issues/5231
644,305,954
MDU6SXNzdWU2NDQzMDU5NTQ=
5,231
BertAbs run_summarization.py example fails with errors
{ "login": "nik-suri", "id": 21992945, "node_id": "MDQ6VXNlcjIxOTkyOTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/21992945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nik-suri", "html_url": "https://github.com/nik-suri", "followers_url": "https://api.github.com/users/nik-suri/followers", "following_url": "https://api.github.com/users/nik-suri/following{/other_user}", "gists_url": "https://api.github.com/users/nik-suri/gists{/gist_id}", "starred_url": "https://api.github.com/users/nik-suri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nik-suri/subscriptions", "organizations_url": "https://api.github.com/users/nik-suri/orgs", "repos_url": "https://api.github.com/users/nik-suri/repos", "events_url": "https://api.github.com/users/nik-suri/events{/privacy}", "received_events_url": "https://api.github.com/users/nik-suri/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
# 🐛 Bug ## Information Attempting to use BertAbs with the official example script for summarization: https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs#summarize-any-text The language I am attempting to summarize for is English. Simply attempting to run a command like ```python run_summarization.py --documents_dir ../../../../test-summaries/ --no_cuda true --min_length 50 --max_length 200 --alpha 0.95``` fails with the following error: ``` Traceback (most recent call last): File "run_summarization.py", line 15, in <module> from .utils_summarization import ( ModuleNotFoundError: No module named '__main__.utils_summarization'; '__main__' is not a package ``` I thought the import line was strange and changed it to `from utils_summarization import (` (note that I removed the `.` which preceded `utils_summarization`. This seemed to fix the error, although I am unsure if it is the correct fix. Nevertheless, even with this temporary fix that I made, the `run_summarization.py` script fails with the following error: ``` INFO:filelock:Lock 140401652398456 acquired on /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock INFO:transformers.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to /home/nikhil/.cache/torch/transformers/tmpjgcj6x3w Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 919kB/s] INFO:transformers.file_utils:storing https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt in cache at 
/home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 INFO:transformers.file_utils:creating metadata file for /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 INFO:filelock:Lock 140401652398456 released on /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084.lock INFO:transformers.tokenization_utils_base:loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/nikhil/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 Traceback (most recent call last): File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 243, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run_summarization.py", line 324, in <module> main() File "run_summarization.py", line 309, in main evaluate(args) File "run_summarization.py", line 33, in evaluate model = BertAbs.from_pretrained("bertabs-finetuned-cnndm") File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/modeling_utils.py", line 602, in from_pretrained **kwargs, File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 201, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/nikhil/.pyenv/versions/huggingface/lib/python3.6/site-packages/transformers/configuration_utils.py", line 252, in get_config_dict raise 
EnvironmentError(msg) OSError: Can't load config for 'bertabs-finetuned-cnndm'. Make sure that: - 'bertabs-finetuned-cnndm' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bertabs-finetuned-cnndm' is the correct path to a directory containing a config.json file ``` Based on the error message, I looked up `bertabs-finetuned-cnndm` on https://huggingface.co/models to find that there is no exact match for this model name. The closest match is called `remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization`. Should the script be updated to include this model name instead? ## Environment info Output of `transformers-cli env`: ``` - `transformers` version: 2.11.0 - Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-debian-bullseye-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5231/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5230/comments
https://api.github.com/repos/huggingface/transformers/issues/5230/events
https://github.com/huggingface/transformers/pull/5230
644,232,095
MDExOlB1bGxSZXF1ZXN0NDM4ODkxODc0
5,230
Fix convert_graph_to_onnx script
{ "login": "n1t0", "id": 1217986, "node_id": "MDQ6VXNlcjEyMTc5ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/n1t0", "html_url": "https://github.com/n1t0", "followers_url": "https://api.github.com/users/n1t0/followers", "following_url": "https://api.github.com/users/n1t0/following{/other_user}", "gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}", "starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/n1t0/subscriptions", "organizations_url": "https://api.github.com/users/n1t0/orgs", "repos_url": "https://api.github.com/users/n1t0/repos", "events_url": "https://api.github.com/users/n1t0/events{/privacy}", "received_events_url": "https://api.github.com/users/n1t0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=h1) Report\n> Merging [#5230](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.89%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5230/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5230 +/- ##\n==========================================\n+ Coverage 77.08% 77.98% +0.89% \n==========================================\n Files 138 138 \n Lines 23841 23841 \n==========================================\n+ Hits 18379 18592 +213 \n+ Misses 5462 5249 -213 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=footer). Last update [9022ef0...833fb6d](https://codecov.io/gh/huggingface/transformers/pull/5230?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,593
1,593
MEMBER
null
- Remove all references to `args` in methods, using arguments instead. This lets us use the `convert` method directly by importing it in another script. - Check that the wanted framework is installed before creating the pipeline, otherwise, it might fail to instantiate it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5230/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5230", "html_url": "https://github.com/huggingface/transformers/pull/5230", "diff_url": "https://github.com/huggingface/transformers/pull/5230.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5230.patch", "merged_at": 1593065823000 }
https://api.github.com/repos/huggingface/transformers/issues/5229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5229/comments
https://api.github.com/repos/huggingface/transformers/issues/5229/events
https://github.com/huggingface/transformers/pull/5229
644,230,145
MDExOlB1bGxSZXF1ZXN0NDM4ODkwMzA1
5,229
Cleaning TensorFlow models
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @LysandreJik !\r\n\r\n> The loss is computed differently than it is with the PyTorch models. Here it is returned example-wise (therefore with a shape of (batch_size,), whereas the PyTorch models return the loss as a scalar. @jplu is there a reason for this implementation?\r\n\r\nBecause it is the generic approach to use as some of the other reductions are not compliant with custom training loop. For example I see that you have used `SUM_OVER_BATCH_SIZE` instead of `None` but this removes the compatibility with custom training loops like we have in the trainer, see the [doc](https://www.tensorflow.org/api_docs/python/tf/keras/losses/Reduction). Then can you undo this part please.\r\n\r\nI do the reduction then directly in the trainer and not in the model, but we can do the reduction manually inside either the loss functions, or the `call` methods as you wish :)\r\n\r\n> The TensorFlow models should be able to handle three types of scenarios: keyword arguments, dictionary, and tuple/list. Right now the labels can only be passed through the keyword argument. This PR changes that, and adds a test.\r\n\r\nGood catch, thanks for having fixed this!\r\n\r\n> Most of the QA models had is_impossible, cls_index and p_mask in their signature while not making use of them. These have been removed. Users relying on the order of arguments in the signature will be affected by this\r\n\r\nOk, I didn't know, when I reworked the TF models, I mostly took examples on the list of parameters from the PT part at a time T, I should have been more carefull on later changes. Sorry.\r\n\r\n> The labels were generally placed before the output_attentions and output_hidden_states that have recently been added to the models. This resulted in an error in the documentation as the labels (part of the head model) were added after the output_attentions and output_hidden_states (part of the base model). 
The arguments have been re-ordered to once again respect the order of **base_arguments, **head_arguments\r\n\r\nThanks!! Like previously I should have been more careful on recent changes. My bad.", "Okay, thanks for the review @jplu, I'll revert that part.", "I'll update the documentation in the next PR", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=h1) Report\n> Merging [#5229](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `89.06%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5229/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5229 +/- ##\n========================================\n Coverage 77.98% 77.99% \n========================================\n Files 138 138 \n Lines 23839 24014 +175 \n========================================\n+ Hits 18592 18729 +137 \n- Misses 5247 5285 +38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.63% <51.85%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.29% <71.42%> (+4.83%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `92.82% 
<86.66%> (+18.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.76% <100.00%> (+3.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.74% <100.00%> (+16.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <100.00%> (+3.52%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `77.90% <100.00%> (+2.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.77% <100.00%> (+11.92%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `25.00% <0.00%> (-73.34%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/5229/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=footer). Last update [c01480b...15321a4](https://codecov.io/gh/huggingface/transformers/pull/5229?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "That's awesome, thanks @LysandreJik !" ]
1,592
1,593
1,593
MEMBER
null
While writing docstrings for #5036, I stumbled upon a few bugs in TensorFlow, especially related to the loss computation. I'm patching them in this PR. Here's the list of the bugs solved: ### Loss computation - The loss is computed differently than it is with the PyTorch models. Here it is returned example-wise (therefore with a shape of `(batch_size,)`, whereas the PyTorch models return the loss as a scalar. @jplu is there a reason for this implementation? - The TensorFlow models should be able to handle three types of scenarios: keyword arguments, dictionary, and tuple/list. Right now the `labels` can only be passed through the keyword argument. This PR changes that, and adds a test. ### Missing models in the test files A few models were implemented but were not tested. Some of these models were not working as expected, therefore they've been updated. - TF DistilBERT for multiple choice (added test and patched) - TF DistilBERT for token classification - TF Electra for QA - TF RoBERTa for multiple choice - TF XLNet for multiple choice (added test and patched) ### Misc - Most of the QA models had `is_impossible`, `cls_index` and `p_mask` in their signature while not making use of them. These have been removed. **Users relying on the order of arguments in the signature will be affected by this** - The `labels` were generally placed before the `output_attentions` and `output_hidden_states` that have recently been added to the models. This resulted in an error in the documentation as the `labels` (part of the head model) were added after the `output_attentions` and `output_hidden_states` (part of the base model). The arguments have been re-ordered to once again respect the order of `**base_arguments, **head_arguments`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5229/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5229", "html_url": "https://github.com/huggingface/transformers/pull/5229", "diff_url": "https://github.com/huggingface/transformers/pull/5229.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5229.patch", "merged_at": 1593013040000 }
https://api.github.com/repos/huggingface/transformers/issues/5228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5228/comments
https://api.github.com/repos/huggingface/transformers/issues/5228/events
https://github.com/huggingface/transformers/issues/5228
644,208,415
MDU6SXNzdWU2NDQyMDg0MTU=
5,228
Embedding index out of range in self
{ "login": "zht1130", "id": 23211139, "node_id": "MDQ6VXNlcjIzMjExMTM5", "avatar_url": "https://avatars.githubusercontent.com/u/23211139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zht1130", "html_url": "https://github.com/zht1130", "followers_url": "https://api.github.com/users/zht1130/followers", "following_url": "https://api.github.com/users/zht1130/following{/other_user}", "gists_url": "https://api.github.com/users/zht1130/gists{/gist_id}", "starred_url": "https://api.github.com/users/zht1130/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zht1130/subscriptions", "organizations_url": "https://api.github.com/users/zht1130/orgs", "repos_url": "https://api.github.com/users/zht1130/repos", "events_url": "https://api.github.com/users/zht1130/events{/privacy}", "received_events_url": "https://api.github.com/users/zht1130/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\nCan you share a self-contained code example reproducing the bug?", "> Hi,\r\n> Can you share a self-contained code example reproducing the bug?\r\n\r\nSorry, I think it is due to a bug in my code. Please close it.", "@zht1130 Were you able to identify the bug? I'm seeing a similar error.", "> @zht1130 Were you able to identify the bug? I'm seeing a similar error.\r\n\r\n1. Running the model on the CPU instead of the GPU will give you more detailed information.\r\n2. The BERT layer only receives token ids whose length is smaller than 512." ]
1,592
1,593
1,592
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert cased (size=768) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) When I feed the ids converted by BERTtokenizer to BERT embedding layer, it shows that the dimension does not match. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) task: text binary classification. dataset: ICLR2020 peer reviews ## To reproduce Steps to reproduce the behavior: 1. use BERT model and BERT tokenizer 2. convert the text of any datasets to ids 3. feed to the BERT model <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The dimension should match. Actually, the code works two days ago. I did not change anything and today it does not work. The error is: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) and index out of range in self. ## Environment info 2020-06-23 23:18:22.738569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/transformers/commands/env.py:36: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 
2020-06-23 23:18:24.794569: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F 2020-06-23 23:18:24.849719: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2000160000 Hz 2020-06-23 23:18:24.850210: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x42a8bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-06-23 23:18:24.850259: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-06-23 23:18:24.857345: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-06-23 23:18:24.860619: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected 2020-06-23 23:18:24.860659: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (68cbbf79e491): /proc/driver/nvidia/version does not exist Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cu101 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5228/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5227/comments
https://api.github.com/repos/huggingface/transformers/issues/5227/events
https://github.com/huggingface/transformers/pull/5227
644,118,897
MDExOlB1bGxSZXF1ZXN0NDM4Nzk4MTU4
5,227
[pl_examples] revert deletion of optimizer_step
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=h1) Report\n> Merging [#5227](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5227/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5227 +/- ##\n==========================================\n+ Coverage 77.98% 78.05% +0.06% \n==========================================\n Files 138 138 \n Lines 23839 23839 \n==========================================\n+ Hits 18592 18608 +16 \n+ Misses 5247 5231 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.33% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5227/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=footer). Last update [c01480b...51bb9b3](https://codecov.io/gh/huggingface/transformers/pull/5227?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
using default `optimizer_step` has at least 2 issues: The default version... 1) doesn't call `lr_scheduler.step()` 2) does call `self.trainer.scaler.step(optimizer)` I haven't diagnosed which of these is the main culprit of the issue I was seeing (very high loss, not going down). This fixes that issue. I suspect it is mostly the latter: @williamFalcon
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5227/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5227", "html_url": "https://github.com/huggingface/transformers/pull/5227", "diff_url": "https://github.com/huggingface/transformers/pull/5227.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5227.patch", "merged_at": 1592944846000 }
https://api.github.com/repos/huggingface/transformers/issues/5226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5226/comments
https://api.github.com/repos/huggingface/transformers/issues/5226/events
https://github.com/huggingface/transformers/issues/5226
644,115,360
MDU6SXNzdWU2NDQxMTUzNjA=
5,226
Self documenting Payload instead of Tuples as output of Transformer
{ "login": "bhoov", "id": 24350185, "node_id": "MDQ6VXNlcjI0MzUwMTg1", "avatar_url": "https://avatars.githubusercontent.com/u/24350185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhoov", "html_url": "https://github.com/bhoov", "followers_url": "https://api.github.com/users/bhoov/followers", "following_url": "https://api.github.com/users/bhoov/following{/other_user}", "gists_url": "https://api.github.com/users/bhoov/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhoov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhoov/subscriptions", "organizations_url": "https://api.github.com/users/bhoov/orgs", "repos_url": "https://api.github.com/users/bhoov/repos", "events_url": "https://api.github.com/users/bhoov/events{/privacy}", "received_events_url": "https://api.github.com/users/bhoov/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "I agree with this change but what about backward compat? I think, as you said, `namedtuple` could be used.", "I think the `namedtuple` would be best for both getting the desired features and also backward compatibility. But to make it backward compatible, wouldn't we just need any object that supports indexing (edit: and unpacking)? What other special features of a tuple are important to maintain for backwards compatibility?", "Unfortunately, `torch.jit` does not support dict, namedtuples or other kinds of fancy outputs... just plain old tuples. See [this issue](https://github.com/pytorch/pytorch/issues/373440) for instance.\r\n\r\nDict support has been added recently, so we could consider switching to that with a breaking change once it lands in a stable release of PyTorch, but this would pin us on PyTorch 1.6.0 minimum... Not sure the benefit of this for documentation would be worth it when every output of every model is cleanly explained in its documentation.", "I would argue that, despite clean documentation, there is a huge advantage to having self documenting payloads. Playing around with the outputs of the results in a jupyter notebook will give you auto completion of the fields, remove any ambiguity, and (if you are trying to develop an application that can interpret the output of as many different transformer models as possible) remove the need to look up the output format for every model and version of a model (e.g., `LMHead` or `NextSentencePrediction`).\r\n\r\nI suppose if we wanted to avoid breaking changes with JIT, we can allow each model to have an optional parameter that enables annotation of the output or not. I believe we would find that it would ideally be enabled by default for a smoother user experience, and then any decorator that JITs a code could disable the annotation in favor of a regular tuple. 
But even if it is a flag that we have to manually set it would still enable better applications needing to support different models.\r\n\r\nThoughts then on making it an optional flag?", "Yes @julien-c also gave me the idea of the flag, hadn't thought of it. You can check the PR linked above for a prototype of doing this while not breaking any backward compatibility.", "Skimmed through the changes and it looks very nice! Would love to see something like this for all the models. Thanks so much for doing this!", "I think this is now closed by #5438." ]
1,592
1,594
1,594
NONE
null
I propose replacing the default Tuple outputs with payloads where every field in the tuple is accessible by a name. This has the following benefits: 1. Being more accessible to newcomers of the library 2. Eliminating suspicious comments in the source code describing the output (see below) 3. Making it easier to extract particular values from different models for inspection -- adding a named field is easier than mangling the order of an existing Tuple 4. Like 3, where new models can output their own metadata without reordering expected structure 5. A consistent output structure makes it easier to compare different models in applications that involve multiple architectures. ## Motivation This will be my first formal feature request, as I am getting weary of forgetting what the Tuple output in the forward pass of a Transformer means. I find myself constantly returning to the source code for each model and playing with the shapes and values of the different fields to check whether something is what I expect it to be. ### The Problem Currently, the output of a Transformer (I use Bert as an example here, potentially not the most recent version) is structured and documented in the source code as follows: ``` python ... 
for i, layer_module in enumerate(self.layer): if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_outputs = layer_module( hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask ) hidden_states = layer_outputs[0] if self.output_attentions: all_attentions = all_attentions + (layer_outputs[1],) # Add last layer if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs = (hidden_states,) if self.output_hidden_states: outputs = outputs + (all_hidden_states,) if self.output_attentions: outputs = outputs + (all_attentions,) if self.output_additional_info: outputs = outputs + (all_additional_info,) return outputs # last-layer hidden state, (all hidden states), (all attentions) ``` Of course, that last comment is highly dependent on what you pass to the configuration. What if you desire all the attentions but not the hidden_states? Now, attention is at `outputs[1]` instead of `outputs[2]`. Utterly confusing. And what if you have an architecture that has different outputs? Here is an example from the output of a T5 model. ``` python ... # Add last layer if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs = (hidden_states,) if use_cache is True: assert self.is_decoder, "`use_cache` can only be set to `True` if {} is used as a decoder".format(self) outputs = outputs + (present_key_value_states,) if self.output_hidden_states: outputs = outputs + (all_hidden_states,) if self.output_attentions: outputs = outputs + (all_attentions,) return outputs # last-layer hidden state, (presents,) (all hidden states), (all attentions) ``` There's this new field `presents` that again confuses the order. It starts to get a bit confusing. In addition, one of my projects the past many months has been to visually compare and interpret different Transformer models [exbert](http://exbert.net/). 
This means that I often want to edit the source code and extract, for example, the keys / values / head embeddings prior to the final projection into the embeddings that are passed to the next layer. Extracting these and passing them through the model is more complicated than it should be -- there are no hooks that I can use to catch arbitrary information within a module's forward pass (potentially a separate feature request, but I feel this would slow the prototyping speed of this library quite a bit), and I worry about messing with an expected order to the Tuple output. It is also really easy to forget which field in a list of 8 items is the one I want. Architectures are also incorporated quickly into Transformers (kudos!), and it would be great to know what inference information I have available for a model simply by looking at the object outputted by the forward pass. ### Possible Solutions I would like the return object of every Transformer's forward pass to be a Payload where the information outputted is easily identified by the fields. E.g., for a LMHead Transformer: ``` python { logits: __, past: __, hidden_states: __, attentions: __, } ``` Where a non-LMHead Transformer would not include logits and offset indexing into the output. This also allows models like T5 to unambiguously add additional fields without compromising the structure of the tuple. ``` python { last_layer_hidden_state: __, presents: __, } ``` It would be trivial to add fields to this at the output: ``` python output = {"last_hidden_state": hidden_states} if self.output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) outputs['all_hidden_states'] = all_hidden_states if self.output_attentions: outputs['attentions'] = all_attentions return outputs # Self documenting fields, no comment needed! 
:D ``` Payloads like this would also work for intermediate modules, though some naming convention to indicate the main output intended to be used by the next module/layer would be necessary. Python's `namedtuples` could also be an option, though easily adding fields to this immutable structure is a bit more challenging for interpretability work. Additional alternatives could be [`namedlist`](https://pypi.org/project/namedlist/) or [`recordclass`](https://pypi.org/project/recordclass/). ## Contribution I am willing to continue thinking of solutions and work towards this goal, but making a single PR would be both a sweeping change across library and the way each module is coded. However, I believe this change to be important for increasing the accessibility of the library for both newcomers and those who want to build applications around the different models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5226/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5226/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5225/comments
https://api.github.com/repos/huggingface/transformers/issues/5225/events
https://github.com/huggingface/transformers/pull/5225
644,085,865
MDExOlB1bGxSZXF1ZXN0NDM4NzcwODg2
5,225
Add hugs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=h1) Report\n> Merging [#5225](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **increase** coverage by `0.35%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5225/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5225 +/- ##\n==========================================\n+ Coverage 77.98% 78.34% +0.35% \n==========================================\n Files 138 138 \n Lines 23839 23839 \n==========================================\n+ Hits 18592 18676 +84 \n+ Misses 5247 5163 -84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.27% <0.00%> (-0.89%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: |\n| 
[src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=footer). Last update [c01480b...1aa097c](https://codecov.io/gh/huggingface/transformers/pull/5225?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
COLLABORATOR
null
Enforce that there are not transformers, Transformers, `transformers` but only 🤗 Transformers in the documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5225/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5225/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5225", "html_url": "https://github.com/huggingface/transformers/pull/5225", "diff_url": "https://github.com/huggingface/transformers/pull/5225.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5225.patch", "merged_at": 1592999775000 }
https://api.github.com/repos/huggingface/transformers/issues/5224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5224/comments
https://api.github.com/repos/huggingface/transformers/issues/5224/events
https://github.com/huggingface/transformers/pull/5224
644,061,192
MDExOlB1bGxSZXF1ZXN0NDM4NzUwMzk3
5,224
Use the script in utils
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=h1) Report\n> Merging [#5224](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5224/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5224 +/- ##\n==========================================\n- Coverage 77.98% 77.98% -0.01% \n==========================================\n Files 138 138 \n Lines 23839 23839 \n==========================================\n- Hits 18592 18590 -2 \n- Misses 5247 5249 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=footer). Last update [c01480b...45e8866](https://codecov.io/gh/huggingface/transformers/pull/5224?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
COLLABORATOR
null
Since we have the script `download_glue_data` in the utils folder, changing the instructions in the README for the GLUE example to use it for now (of course nlp will ultimately make this even easier) since it's easier than copying the gist in a local file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5224/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5224", "html_url": "https://github.com/huggingface/transformers/pull/5224", "diff_url": "https://github.com/huggingface/transformers/pull/5224.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5224.patch", "merged_at": 1592999759000 }
https://api.github.com/repos/huggingface/transformers/issues/5223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5223/comments
https://api.github.com/repos/huggingface/transformers/issues/5223/events
https://github.com/huggingface/transformers/pull/5223
644,054,568
MDExOlB1bGxSZXF1ZXN0NDM4NzQ0ODc1
5,223
Only put tensors on a device
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=h1) Report\n> Merging [#5223](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c01480bba3b2f0bd8516679476235f4701c21b3b&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `60.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5223/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5223 +/- ##\n==========================================\n- Coverage 77.98% 77.95% -0.04% \n==========================================\n Files 138 138 \n Lines 23839 23841 +2 \n==========================================\n- Hits 18592 18586 -6 \n- Misses 5247 5255 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <60.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5223/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=footer). Last update [c01480b...10ff478](https://codecov.io/gh/huggingface/transformers/pull/5223?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
COLLABORATOR
null
Fix Trainer when users have inputs containing non-tensor values.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5223", "html_url": "https://github.com/huggingface/transformers/pull/5223", "diff_url": "https://github.com/huggingface/transformers/pull/5223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5223.patch", "merged_at": 1592947818000 }
https://api.github.com/repos/huggingface/transformers/issues/5222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5222/comments
https://api.github.com/repos/huggingface/transformers/issues/5222/events
https://github.com/huggingface/transformers/pull/5222
644,045,339
MDExOlB1bGxSZXF1ZXN0NDM4NzM3Mzc5
5,222
Add version control menu
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=h1) Report\n> Merging [#5222](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c439752482759c94784e11a87dcbf08ce69dccf3&el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5222/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5222 +/- ##\n==========================================\n- Coverage 78.07% 77.97% -0.11% \n==========================================\n Files 138 138 \n Lines 23786 23786 \n==========================================\n- Hits 18572 18547 -25 \n- Misses 5214 5239 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.61% <0.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5222/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=footer). Last update [c439752...12b85f4](https://codecov.io/gh/huggingface/transformers/pull/5222?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "nice. In terms of UI you could also just have used a `<select>` element (maybe slightly more explicit UI) but I guess this works too" ]
1,592
1,592
1,592
COLLABORATOR
null
This PR adds at the top of the navigation bar a menu to pick a version of the docs. A few comments: When switching version, the reader is sent on the same page of the docs in the older version (so it gives an error if the same page did not exist in this version of the docs). I don't know if this is preferable to the alternative (sending back to the index of the other version in the docs). Let me know what you think. The menu will disappear once the reader goes to an older version of the docs (since it did not exist back then) unless we find a way to cherry pick in each release (but I doubt it's worth it). Preview is [here](https://51918-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html). Side note, on a local build (like this one), the version appears as 'html', but it will be the right one once merged. Also, the preview/local build only has one version, so the links don't work there.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5222/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5222", "html_url": "https://github.com/huggingface/transformers/pull/5222", "diff_url": "https://github.com/huggingface/transformers/pull/5222.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5222.patch", "merged_at": 1592946312000 }
https://api.github.com/repos/huggingface/transformers/issues/5221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5221/comments
https://api.github.com/repos/huggingface/transformers/issues/5221/events
https://github.com/huggingface/transformers/issues/5221
644,024,605
MDU6SXNzdWU2NDQwMjQ2MDU=
5,221
gpt2.generate breaks on FP16 Apex training.
{ "login": "Laksh1997", "id": 59830552, "node_id": "MDQ6VXNlcjU5ODMwNTUy", "avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laksh1997", "html_url": "https://github.com/Laksh1997", "followers_url": "https://api.github.com/users/Laksh1997/followers", "following_url": "https://api.github.com/users/Laksh1997/following{/other_user}", "gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions", "organizations_url": "https://api.github.com/users/Laksh1997/orgs", "repos_url": "https://api.github.com/users/Laksh1997/repos", "events_url": "https://api.github.com/users/Laksh1997/events{/privacy}", "received_events_url": "https://api.github.com/users/Laksh1997/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "This may be an error on my part, apologies. Just confirming.", "This is supposed to work, misaligned address is usually a flavor of OOM. I would try to cut your batch size.", "@sshleifer Batch size is 16, am on a V100 on FP16. Also I'm training with a batch size of 512, so surely batch size is fine in this case?", "Okay, turns out the problem is still there.\r\n\r\nHere is my code:\r\n\r\n```\r\n @torch.no_grad()\r\n def generate_no_grad(self, num_samples, batch_size, **kwargs):\r\n device = next(self.parameters()).device\r\n num_iters = num_samples // batch_size\r\n all_smiles = []\r\n for idx in range(num_iters):\r\n input_ids = torch.empty(batch_size, 1).fill_(self.tokenizer.bos_token_id)\r\n input_ids = input_ids.to(device).long()\r\n generated_ids = self.encoder.generate(input_ids, **kwargs)\r\n smiles = self.tokenizer.decode(generated_ids)\r\n all_smiles.extend(smiles)\r\n return all_smiles\r\n\r\n def generate(self):\r\n smiles = self.generate_no_grad(\r\n num_samples=50,\r\n batch_size=16,\r\n max_length=self.collater.max_length,\r\n do_sample=True,\r\n num_beams=1,\r\n temperature=1.0,\r\n top_k=500,\r\n top_p=1.0,\r\n repetition_penalty=1.0,\r\n pad_token_id=self.tokenizer.pad_token_id,\r\n bos_token_id=self.tokenizer.bos_token_id,\r\n eos_token_id=self.tokenizer.eos_token_id,\r\n length_penalty=1,\r\n no_repeat_ngram_size=0,\r\n num_return_sequences=1,\r\n use_cache=True,\r\n )\r\n ....\r\n```\r\n\r\nBugs out on batch size 16, num_samples=10_000, max_length=100", "@patrickvonplaten Any idea with this?\r\n\r\nAlso, I'm on torch 1.4 and cu101, could upgrading to 1.5.1 and cu102 fix this?", "I cannot reproduce the error in the notebook. Looking into it more. ", "Hmmmm, getting the same error on batch size 8 and num samples = 50.\r\n\r\nhere's my colab where I'm trying to reproduce: https://colab.research.google.com/drive/13uvd_Y_VHoZqQxyZ0OdyyNAJEurrW2LX?usp=sharing\r\n\r\n@sshleifer I'm guessing the CICD checks for torch 1.4 cu101?", "Okay. I can only think that this is a pytorch 1.4 cu101 error. Will update image to 1.5.1, try to tomorrow and update this accordingly. Cheers!", "Is there anything from the code above that would warrant this error?", "Well, well, well @sshleifer I moved to the pytorch nightly conda builds (1.6-dev, which has native AMP) and now the issue is no longer there.\r\n\r\nWeird ...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): Molecule The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Run Pytorch lightning with Apex (or just default apex training), then during validation try and generate samples with the model (which is on fp16). ## Trace ``` 2020-06-23T18:30:00.337+01:00 | THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCReduceAll.cuh line=327 error=716 : misaligned address -- | --   | 2020-06-23T18:30:00.338+01:00 | Validation sanity check: 0it [00:00, ?it/s] Validation sanity check: 50% 1/2 [00:01<00:01, 1.08s/it]generating smiles   | 2020-06-23T18:30:00.341+01:00 | Traceback (most recent call last):   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/bin/transformervae", line 11, in <module>   | 2020-06-23T18:30:00.341+01:00 | load_entry_point('exs-transformervae', 'console_scripts', 'transformervae')()   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 829, in __call__   | 2020-06-23T18:30:00.341+01:00 | return self.main(*args, **kwargs)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 782, in main   | 2020-06-23T18:30:00.341+01:00 | rv = self.invoke(ctx)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1259, in invoke   | 2020-06-23T18:30:00.341+01:00 | return _process_result(sub_ctx.command.invoke(sub_ctx))   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1066, in invoke   | 2020-06-23T18:30:00.341+01:00 | return ctx.invoke(self.callback, **ctx.params)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/click/core.py", line 610, in invoke   | 2020-06-23T18:30:00.341+01:00 | return callback(*args, **kwargs)   | 2020-06-23T18:30:00.341+01:00 | File "/app/transformervae/cli.py", line 404, in pretrain   | 2020-06-23T18:30:00.341+01:00 | trainer.fit(model)   | 2020-06-23T18:30:00.341+01:00 | File "/home/uwandb: Waiting for W&B process to finish, PID 28   | 2020-06-23T18:30:00.341+01:00 | ser/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit   | 2020-06-23T18:30:00.341+01:00 | self.single_gpu_train(model)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train   | 2020-06-23T18:30:00.341+01:00 | self.run_pretrain_routine(model)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1076, in run_pretrain_routine   | 2020-06-23T18:30:00.341+01:00 | False)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 330, in _evaluate   | 2020-06-23T18:30:00.341+01:00 | eval_results = model.validation_epoch_end(outputs)   | 2020-06-23T18:30:00.341+01:00 | File "/app/transformervae/models/base.py", line 95, in validation_epoch_end   | 2020-06-23T18:30:00.341+01:00 | return self._shared_eval_end(output, "val")   | 2020-06-23T18:30:00.341+01:00 | File "/app/transformervae/models/lm.py", line 152, in _shared_eval_end   | 2020-06-23T18:30:00.341+01:00 | use_cache=True,   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad   | 2020-06-23T18:30:00.341+01:00 | return func(*args, **kwargs)   | 2020-06-23T18:30:00.341+01:00 | File "/app/transformervae/models/lm.py", line 173, in generate_no_grad   | 2020-06-23T18:30:00.341+01:00 | generated_ids = self.encoder.generate(input_ids, **kwargs)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad   | 2020-06-23T18:30:00.341+01:00 | return func(*args, **kwargs)   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1181, in generate   | 2020-06-23T18:30:00.341+01:00 | model_specific_kwargs=model_specific_kwargs,   | 2020-06-23T18:30:00.341+01:00 | File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1285, in _generate_no_beam_search   | 2020-06-23T18:30:00.341+01:00 | if unfinished_sents.max() == 0:   | 2020-06-23T18:30:00.341+01:00 | RuntimeError: cuda runtime error (716) : misaligned address at /opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCReduceAll.cuh:327 ``` ## Expected behavior Should generate samples. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Ubuntu - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 cu101 - Tensorflow version (GPU?): n/a - Using GPU in script?: V100 - Using distributed or parallel set-up in script?: no but FP16 training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5221/timeline
completed
null
null